June 2023

Generative AI: A Regulatory Overview

Generative AI, which can create new outputs from raw data, is seeing widespread use across applications, but there are concerns about its misuse and the harvesting of personal data without informed consent. Governments worldwide are accelerating efforts to understand and govern these models. The European Union is seeking to establish comprehensive regulatory governance through the AI Act; the United States is exploring "earned trust" in AI systems; and regulation of generative AI remains light-touch in India and the UK. China has issued draft rules to regulate generative AI, requiring providers to comply with measures on data governance, bias mitigation, transparency, and content moderation. The key takeaway is that regulation is coming, and it is crucial to prioritize the development of ethical AI systems centered on fairness and harm mitigation.

EU AI Act Text Passed by Majority Vote ahead of Trilogues

The European Parliament has passed the latest version of the EU AI Act, which will now proceed to the final Trilogue stage. The Act is a landmark piece of legislation proposed by the European Commission to regulate AI systems made available in the EU market. It takes a risk-based approach to regulation, classifying systems as posing minimal, limited, high, or unacceptable levels of risk. The latest version aligns more closely with the OECD definition of AI and covers eight high-risk categories, including biometrics and biometric-based systems, management of critical infrastructure, and AI systems intended to influence elections. The Act also prohibits real-time remote biometric identification and places particular emphasis on protecting EU citizens' rights and on education.

Illinois Artificial Intelligence Video Interview Act - 5 Things You Need to Know

The Artificial Intelligence Video Interview Act requires employers in Illinois to inform job applicants if they will use AI to evaluate video interviews and to disclose which characteristics will be used in the evaluation. The law also requires candidates to consent to the use of AI before the interview. Video interviews may only be shared with relevant parties, and applicants have the right to request that their interview be deleted. Employers must report the race and ethnicity of applicants who are not selected for in-person interviews or hired after AI analysis. The law applies only to Illinois employers; this summary is for informational purposes and does not constitute legal advice.

AI Disclosure Act of 2023: A Step Towards Algorithmic Transparency

The AI Disclosure Act of 2023 is a federal bill introduced by U.S. Representative Ritchie Torres of New York's 15th Congressional District that seeks to create greater transparency around the use of generative AI. The bill requires any output generated by artificial intelligence to be accompanied by a disclaimer indicating that it was generated by AI. Violations would be treated as violations of the Federal Trade Commission Act and subject to the same penalties, privileges, and immunities. The AI Disclosure Act is an important step towards algorithmic transparency, but it is not the first initiative of its kind; others include the Illinois Artificial Intelligence Video Interview Act, New York City Local Law 144, Maryland's HB1202, and the EU AI Act. Organizations using AI should prepare for these transparency requirements in advance to ensure compliance.

Requirements for 'High-Risk' AI Applications: Comparing AI Regulations in the EU, US, and Canada

Artificial intelligence (AI) has transformed numerous industries, but it also poses risks that require strong regulation to mitigate. Governments across the world are ramping up efforts to ensure AI's responsible development and deployment. This blog provides an in-depth overview of AI regulations in Europe, the United States, and Canada, focusing primarily on high-risk AI applications. The EU AI Act establishes a risk-based framework that prohibits AI systems posing unacceptable risks and imposes stringent obligations on high-risk AI systems. In the US, the Algorithmic Accountability Act would require companies to identify and resolve AI biases, focusing on Automated Decision Systems (ADS) used in Augmented Critical Decision Processes (ACDPs). The Stop Discrimination by Algorithms Act prohibits the use of algorithms that make decisions based on protected characteristics in Washington DC, and Assembly Bill 331 in California seeks to regulate automated decision tools that contribute to algorithmic discrimination. The Artificial Intelligence and Data Act in Canada aims to establish a risk-based regulatory approach for AI systems that may adversely affect human rights or pose risks of harm. The article concludes by noting the importance of early compliance for businesses and the role of Holistic AI in ensuring compliance with upcoming AI regulations.