February 2024

FCC Declares AI-Generated Robocalls Illegal: A Step Towards Consumer Protection

The Federal Communications Commission (FCC) has issued a Declaratory Ruling clarifying how the Telephone Consumer Protection Act of 1991 (TCPA) applies to artificial intelligence (AI) technologies, particularly those capable of generating human-like voices. The ruling confirms that the TCPA's restrictions on the use of an "artificial or prerecorded voice" encompass current AI technologies that simulate human voices or generate call content using prerecorded voices. The ruling gives the FCC authority to fine companies that use AI voices in unlawful robocalls, with fines potentially exceeding $23,000 per call, and to block the service providers that carry those calls. It also allows call recipients to pursue legal action and potentially recover up to $1,500 for each unwanted call. The FCC's decision follows a Notice of Inquiry recognizing the need to address AI's ability to emulate human speech and the new challenges this poses to existing regulatory frameworks.

Driving Innovation: Navigating the EU AI Act's Impact on Autonomous Vehicles

The European Union's upcoming Artificial Intelligence Act (EU AI Act) will affect how AI systems are integrated into autonomous vehicles (AVs), particularly with respect to safety, by classifying those systems according to their risk level. AI serves as the technological backbone of AV automation, and the EU AI Act recognizes the risks these systems pose if they fail. Many AV-related AI systems will qualify as high-risk AI systems (HRAIS) under the EU AI Act because the Act references the relevant Union harmonisation legislation, but they will be governed primarily by that sectoral legislation rather than by the Act's standard regime of requirements and obligations for high-risk AI systems. Two regulations are primarily applicable to AVs in the EU: the Type-Approval Framework Regulation (TAFR) and the General Safety Regulation (GSR). The EU AI Act amends this sectoral legislation and requires that the implementing acts adopted under it take the high-risk AI system requirements into account.

Artificial Intelligence Environmental Impacts Act of 2024

The federal Artificial Intelligence Environmental Impacts Act of 2024, introduced by Senator Markey and co-sponsored by several other senators, addresses concerns about the sustainability and environmental impact of AI technologies. The bill defines AI as a machine-based system that can make decisions and recommendations influencing real or virtual environments, and distinguishes it from an AI model. It contains four key provisions: a study of the environmental impacts of AI, the establishment of a consortium of stakeholders, a system for voluntary reporting, and a report to Congress on the findings about AI's environmental impact. The bill emphasizes transparency and accountability throughout the process and aims to promote responsible innovation in the AI sector toward a greener and more resilient future.

Mobley v. Workday: Amended Complaint Highlights Discrimination and AI Bias in Employment Screening

Derek Mobley has filed an amended complaint in his ongoing class action lawsuit against Workday, alleging that the company's algorithm-based applicant screening tools discriminate against individuals on the basis of race, age, and disability. Mobley seeks certification of the case as a class action, with himself designated as representative of the applicants subjected to the alleged discriminatory screening practices, and requests preliminary and permanent injunctions restraining Workday from engaging in the alleged unlawful conduct. He also asks the court to order Workday to institute and carry out policies providing equal employment opportunities for all minorities, and seeks monetary relief in the form of back pay, front pay, and other financial compensation.

The ASEAN Guide on AI Governance and Ethics: What you need to know

The Association of Southeast Asian Nations (ASEAN) has released a guide on AI governance and ethics to help organizations and governments in Southeast Asia navigate artificial intelligence responsibly. The guide sets out three fundamental objectives for responsible AI development: providing practical guidance on designing, developing, and deploying AI systems; promoting consistency and collaboration in AI governance efforts among member states; and empowering policymakers to shape regulatory frameworks that encourage responsible AI practices. It articulates seven guiding principles as foundational pillars for responsible AI governance, including transparency and explainability, fairness and equity, and privacy and data governance. Its recommendations include establishing an ASEAN Working Group on AI Governance, nurturing AI talent, promoting investment in AI startups, investing in AI research and development, encouraging adoption of AI governance tools, and raising awareness among citizens. The guide emphasizes the need for collaboration between governments, the private sector, and educational institutions to foster ethical AI development and adoption within the region.