September 2023

EU AI Act: Leading Academics Call for Strengthened Fundamental Rights Impact Assessment and Audit Mechanisms

Over 110 academics specialising in AI, data governance, and civil rights have signed an appeal calling for all AI systems covered by the EU AI Act to undergo a Fundamental Rights Impact Assessment (FRIA). The appeal, coordinated by the Brussels Privacy Hub, aims to mitigate the risks associated with AI deployment through comprehensive ex-ante safeguards. The European Parliament adopted its negotiating position on the Act on 14 June 2023, and the final text is expected to be agreed by the end of the year, with penalties of up to €40 million or 7% of global annual turnover for non-compliance. Among other provisions, the appeal calls for the development of criteria for evaluating the impact of AI on fundamental rights, transparency about FRIA results, and the participation of end-users and independent public authorities in the impact assessment process.

How is Switzerland planning to regulate AI?

Switzerland is closely monitoring AI regulatory developments in the EU and has issued its own guidelines for AI adoption within the federal administration, but it has yet to propose binding rules for AI use. The Federal Council has called for increased efforts to converge with global initiatives and to close the gap between Switzerland's approach and other international approaches to AI regulation. The country is also investing in AI research and development to position itself as a leader in the field.

What is the FCA’s Approach to AI Regulation?

The Financial Conduct Authority (FCA) regulates over 50,000 financial services firms and markets in the UK, with a focus on promoting competition between providers and protecting consumer interests. With the growing adoption of AI and machine learning in financial services, the FCA has collaborated with the Bank of England on initiatives to understand how AI is being used and how regulation can promote its safe and responsible adoption. Key actions taken by the FCA include launching the Artificial Intelligence Public-Private Forum, publishing a discussion paper on safe and responsible AI adoption, and delivering speeches on AI regulation and risk management. For now, the FCA is taking a light-touch approach, but it emphasises the importance of algorithm auditing, governance frameworks, and risk management in promoting the safe adoption of AI.

Senate Subcommittee Leaders Hold Hearing on Proposed Bipartisan Framework for U.S. AI Act

The U.S. Senate Subcommittee on Privacy, Technology, and the Law held a hearing titled "Oversight of AI: Legislating on Artificial Intelligence" to discuss the need for AI regulation. Senators Blumenthal and Hawley announced a bipartisan legislative framework addressing five key areas: establishing a licensing regime, creating legal accountability for harms caused by AI, defending national security and international competition, promoting transparency, and protecting consumers and children. The hearing also addressed the need for effective enforcement, international coordination, and protection against election interference, surveillance, and job displacement. Compliance requirements for companies using AI are expected to evolve as new AI regulations take shape.

California Governor Gavin Newsom Issues an Executive Order on AI

The Governor of California, Gavin Newsom, has issued an executive order on artificial intelligence (AI), outlining a strategic plan for the responsible design, development, integration, and management of emerging AI technologies. The order acknowledges both the potential benefits and the risks of generative AI tools and calls for a unified governance approach to address these challenges. Among the requirements for state agencies are the submission of a report within 60 days of the order's issuance, detailing the "most significant, potentially beneficial use cases" for the implementation and integration of generative AI tools, and a risk analysis of potential threats and vulnerabilities to California's critical energy infrastructure related to generative AI, due by March 2024. The order also establishes guidelines for public sector procurement, sets up a pilot programme, and mandates training for state government workers on the use of generative AI tools to achieve equitable outcomes, no later than July 2024.