February 2024

The European Union's upcoming Artificial Intelligence Act (EU AI Act) will shape how AI systems are integrated into autonomous vehicles (AVs), particularly with respect to safety, by classifying those systems according to their risk level. AI serves as the technological backbone for the automation capabilities in AVs, and the EU AI Act recognizes the risks these systems pose if they fail. AV-related AI systems may be classified as high-risk AI systems (HRAIS) under the EU AI Act, but they will primarily be governed by their sectoral legislation rather than the standard regime of requirements and obligations for high-risk AI systems. The EU AI Act also amends the sectoral legislation applicable to AVs, requiring that the implementing acts adopted under that legislation take the HRAIS requirements into account. Two primary regulations apply to AVs in the EU: the Type-Approval Framework Regulation (TAFR) and the General Safety Regulation (GSR). Because the EU AI Act cross-references this Union harmonization legislation, many AV-related AI systems will be classified as HRAIS, yet the ordinary HRAIS regime will not apply to them.

The Federal Artificial Intelligence Environmental Impacts Act of 2024, introduced by Senator Markey and co-sponsored by several senators, addresses concerns about the sustainability and environmental impact of AI technologies. The bill defines AI as a machine-based system that can make decisions and recommendations influencing real or virtual environments, and distinguishes an AI system from an AI model. It contains four key provisions: a study on the environmental impacts of AI, the establishment of a consortium of stakeholders, a system for voluntary reporting, and a report to Congress on findings about the environmental impact of AI. The bill emphasizes transparency and accountability throughout the process and aims to promote responsible innovation in the AI sector in pursuit of a greener and more resilient future.

Derek Mobley has filed an amended complaint in his ongoing class action lawsuit against Workday, alleging that the firm's algorithm-based applicant screening tools discriminate against individuals on the basis of race, age, and disability. Mobley seeks certification of the case as a class action, with himself as representative of the parties subjected to the alleged discriminatory screening practices, and requests preliminary and permanent injunctions restraining Workday from the alleged unlawful conduct. He also asks the court to order Workday to institute and carry out policies providing equal employment opportunities for all minorities, and to award monetary relief in the form of back pay, front pay, and other financial compensation.

The Association of Southeast Asian Nations (ASEAN) has released a guide on AI governance and ethics to help organizations and governments in Southeast Asia navigate the artificial intelligence landscape responsibly. The guide sets out three fundamental objectives for responsible AI development: providing practical guidance on designing, developing, and deploying AI systems; promoting consistency and collaboration in AI governance efforts across member states; and empowering policymakers to shape regulatory frameworks that encourage responsible AI practices. It articulates seven guiding principles that serve as foundational pillars for responsible AI governance, including transparency and explainability, fairness and equity, and privacy and data governance. The guide's recommendations include establishing an ASEAN Working Group on AI Governance, nurturing AI talent, promoting investment in AI startups, investing in AI research and development, promoting the adoption of AI governance tools, and raising awareness among citizens. It emphasizes the need for collaboration among governments, the private sector, and educational institutions to foster ethical AI development and adoption within the region.

The EU has proposed the Harmonised Rules on Artificial Intelligence (EU AI Act) to lead the world in AI regulation and to build trust in AI systems. The Act sets out a risk-based approach for AI systems, defines three levels of risk, and subjects certain AI systems to transparency obligations. It also introduces a three-tiered model of penalties for violators, with the heftiest fines reserved for those who breach the prohibition on specific AI practices. The EU AI Act has gone through an extended consultation process and has been amended throughout. Penalties can be issued to providers, deployers, importers, distributors, and notified bodies. The Act emphasizes proportionality and provides for lower penalties for SMEs and startups. Because there is no Union-wide central authority for imposing fines on AI operators, penalties depend on the national legal systems of Member States. Fines for providers of general-purpose AI (GPAI) models and for Union bodies are imposed by the Commission and the European Data Protection Supervisor, respectively.