June 2023
The European Parliament has voted to move forward with the EU AI Act, which seeks to lead the world in AI regulation and create an ‘ecosystem of trust’ that manages AI risk and prioritises human rights. The act will have implications for providers, deployers and distributors of AI systems used in the EU. It takes a risk-based approach, with obligations proportional to the risk a system poses under four risk categories. Because the act aims to set the global standard for AI regulation, it will affect entities around the world that operate in the EU or interact with the EU market. Businesses should use the preparatory period to build their readiness: establishing robust governance structures, developing internal competencies, and implementing the requisite technologies. Holistic AI can assist organisations in achieving compliance with the EU AI Act through its comprehensive suite of solutions.
California has proposed amendments to its employment regulations to extend non-discrimination protections to automated-decision systems (ADS), addressing bias and discrimination in hiring. Employers with five or more employees are subject to the regulation, and vendors acting on behalf of an employer are treated as employers under it. An ADS is defined as a computational process that screens, evaluates, categorizes, recommends, or otherwise makes or facilitates employment-related decisions, and the amendments restrict the use of an ADS to screen out applicants based on protected characteristics. They also prohibit the use of medical or psychological examinations, including via an ADS, before an offer is extended to an applicant. Characteristics protected under the proposed amendments include race, national origin, gender, and age, unless shown to be job-related for the position in question. The most recent updates extend record-retention requirements, and companies are encouraged to audit their automated-decision systems to identify bias and reduce harm and legal risk.
Generative AI, which can create new outputs from raw data, is seeing widespread use across applications, but there are concerns about its misuse and the harvesting of personal data without informed consent. Governments worldwide are accelerating efforts to understand and govern these models: the European Union is seeking to establish comprehensive regulatory governance through the AI Act; the United States is exploring "earned trust" in AI systems; regulation of generative AI remains light-touch in India and the UK; and China has issued draft rules requiring compliance with measures on data governance, bias mitigation, transparency, and content moderation. The key takeaway is that regulation is coming, making it crucial to prioritize the development of ethical AI systems centred on fairness and harm mitigation.
The European Parliament has passed the latest version of the EU AI Act, which will now proceed to the final Trilogue stage. The Act is a landmark piece of legislation proposed by the European Commission to regulate AI systems placed on the EU market. It takes a risk-based approach, classifying systems as posing minimal, limited, high, or unacceptable risk. The latest version aligns more closely with the OECD definition of AI and covers eight high-risk categories, including biometric and biometrics-based systems, management of critical infrastructure, and AI systems intended to influence elections. The Act also prohibits real-time remote biometric identification and places a focus on protecting EU citizens' rights and on education.
The Artificial Intelligence Video Interview Act requires employers in Illinois to inform job applicants when AI will be used to evaluate their video interviews and to disclose which characteristics the AI will consider. Candidates must also consent to the use of AI before the interview. Video interviews may be shared only with relevant parties, and applicants have the right to request that their interview be deleted. Employers must report the race and ethnicity of applicants who are not selected for in-person interviews or hired following AI analysis. The law applies only to Illinois employers, and this overview is not intended as legal advice.