August 2023

Council of the European Union Publishes Key Priorities for the Second EU AI Act Trilogue

The Council of the European Union has released a summary of the provisional agreements, pending items, and future priorities discussed during the second trilogue on the EU AI Act. The act aims to set the global standard for AI regulation through a risk-based approach and imposes obligations on providers, users, and deployers of high-risk AI systems. The legislative process is expected to finish by the end of 2023, and the final text is likely to be published shortly after. The EU AI Act will affect EU and non-EU companies operating AI systems within EU borders, and organizations can seek help from Holistic AI to manage their AI risks. This article is for informational purposes only and does not provide legal advice.

July 2023

Senators to Introduce Federal Bills to Regulate HR Tech in the United States

States in the US are introducing legislation to regulate HR Tech, with New York proposing legislation targeting automated employment decision tools (AEDTs) and California proposing multiple pieces of legislation. Federal efforts have now emerged, with Senators introducing the No Robot Bosses Act and the Exploitative Workplace Surveillance and Technologies Task Force Act. The No Robot Bosses Act seeks to protect job applicants and employees from the undisclosed use of automated decision systems, requiring employers to provide notice of when and how such systems are used. The Exploitative Workplace Surveillance and Technologies Task Force Act seeks to create an interagency task force to lead a whole-of-government study and report to Congress on workplace surveillance. The Biden-Harris administration has also secured voluntary commitments from various AI companies to ensure that their products are safe and that public trust is built.

June 2023

EU AI Act Text Passed by Majority Vote ahead of Trilogues

The European Parliament has passed the latest version of the EU AI Act, which will now proceed to the final Trilogue stage. The Act is a landmark piece of legislation proposed by the European Commission to regulate AI systems available in the EU market. It takes a risk-based approach to regulation, with systems classified as posing minimal, limited, high, or unacceptable levels of risk. The latest version aligns more closely with the OECD definition of AI and covers eight high-risk categories, including biometric and biometrics-based systems, management of critical infrastructure, and AI systems intended to be used for influencing elections. The Act also prohibits real-time remote biometric identification and focuses on protecting EU citizens' rights and education.

AI Disclosure Act of 2023: A Step Towards Algorithmic Transparency

The AI Disclosure Act of 2023 is a federal bill introduced by U.S. Representative Ritchie Torres of New York's 15th Congressional District that seeks to create greater transparency around the use of generative AI. The bill requires any output generated by artificial intelligence to be accompanied by a disclaimer indicating that it was generated by AI. A violation of this requirement would be treated as a violation of the Federal Trade Commission Act, subject to its penalties, privileges, and immunities. The AI Disclosure Act is an important step towards algorithmic transparency, but it is not the first initiative of its kind. Others include the Illinois Artificial Intelligence Video Interview Act, New York City Local Law 144, Maryland's HB1202, and the EU AI Act. Organizations using AI should prepare for transparency requirements in advance to ensure compliance.

May 2023

European Parliamentary Committees Adopt AI Act Text and Set Date for Plenary Adoption

The EU AI Act is a piece of legislation proposed by the European Commission to regulate the AI systems available in the EU market. The Act takes a risk-based approach to regulation, classifying systems as posing minimal, limited, high, or unacceptable levels of risk. The December 2022 text defined AI as a machine-based system designed to operate with autonomy that can generate outputs such as predictions, recommendations, or decisions that influence physical or virtual environments. The adopted text specifies eight broad high-risk applications of AI, with important updates to biometric and biometrics-based systems. A new Fundamental Rights Impact Assessment obligation has been introduced for users of high-risk AI systems, and prohibited practices now include AI models used for biometric categorization, predictive policing, and the collection of facial images for database construction. The EU AI Act will have important implications for the fairness and safety of AI systems available in the EU market, with deployers and users of AI systems facing a number of obligations.