September 2023
Over 110 AI, data governance, and civil rights academics have signed an appeal calling for all AI systems under the EU AI Act to undergo a Fundamental Rights Impact Assessment (FRIA). The appeal, from the Brussels Privacy Hub, aims to mitigate the risks associated with AI deployment through comprehensive ex-ante safeguards. The European Parliament passed its version of the EU AI Act on 14 June 2023, and the final text is expected to be agreed by the end of the year, with penalties of up to €40 million or 7% of global turnover for non-compliance. The appeal also calls for the development of criteria for evaluating the impact of AI on fundamental rights, transparency on FRIA results, and the participation of end-users and independent public authorities in the impact assessment process, among other provisions.
The U.S. Senate Subcommittee on Privacy, Technology, and the Law held a hearing titled "Oversight of AI: Legislating on Artificial Intelligence" to discuss the need for regulation of AI. Senators Blumenthal and Hawley announced a bipartisan legislative framework addressing five key areas: establishing a licensing regime, creating legal accountability for harms caused by AI, defending national security and international competition, promoting transparency, and protecting consumers and children. The hearing also addressed the need for effective enforcement, international coordination, and protection against election interference, surveillance, and job displacement. Compliance requirements for companies using AI are expected to evolve alongside the new AI regulations.
The Governor of California, Gavin Newsom, has issued an executive order on artificial intelligence (AI), outlining a strategic plan for the responsible design, development, integration, and management of emerging AI technologies. The order acknowledges the potential benefits and risks associated with generative AI tools and calls for a unified governance approach to address these challenges. Among the requirements for state agencies are the submission of a report within 60 days of the order's issuance detailing the "most significant, potentially beneficial use cases" for the implementation and integration of generative AI tools, and a risk analysis, due by March 2024, of potential threats and vulnerabilities to California's critical energy infrastructure related to generative AI. The order also establishes guidelines for public sector procurement, sets up a pilot programme, and mandates training for state government workers on the use of generative AI tools to achieve equitable outcomes, to be in place no later than July 2024.
The European Commission has designated Alphabet, Amazon, Apple, ByteDance, Meta and Microsoft as "gatekeepers" under the Digital Markets Act (DMA). These companies operate 22 core platform services now subject to new rules promoting fair competition and consumer choice. Gatekeepers must conduct independent annual audits of their consumer profiling techniques and comply with rules governing their interactions with other businesses, consumers, advertisers, and publishers on their platforms. Failure to comply could result in fines and periodic penalty payments. The DMA will operate in tandem with the AI Act and the Digital Services Act.
The UK House of Commons Committee on Science, Innovation and Technology has published an interim report on the governance of artificial intelligence (AI), highlighting 12 key challenges that policymakers should keep in mind when developing AI governance frameworks. The report recommends that an AI bill be introduced into Parliament in the coming months to support the UK's aspirations of becoming an AI governance leader. The Committee also warned that if an AI bill is not introduced before the general election, the UK could be left behind by the EU and US, which have already made significant legislative progress towards regulating AI.