September 2023
The Financial Conduct Authority (FCA) is responsible for regulating over 50,000 financial services firms and markets in the UK, with a focus on promoting competition between providers and protecting the interests of consumers. With the increasing adoption of AI and machine learning within the financial services sector, the FCA has collaborated with the Bank of England on initiatives to understand how AI is being used and how regulation can promote safe and responsible adoption. Key actions taken by the FCA include launching the Artificial Intelligence Public-Private Forum, publishing a discussion paper on safe and responsible AI adoption, and delivering speeches on AI regulation and risk management. The FCA is currently taking a light-touch approach but emphasises the importance of algorithm auditing, governance frameworks, and risk management in promoting the safe adoption of AI.
The U.S. Senate Subcommittee on Privacy, Technology, and the Law held a hearing titled "Oversight of AI: Legislating on Artificial Intelligence" to discuss the need for AI regulation. Senators Blumenthal and Hawley announced a bipartisan legislative framework addressing five key areas: establishing a licensing regime, ensuring legal accountability for harms caused by AI, defending national security and international competition, promoting transparency, and protecting consumers and kids. The hearing also addressed the need for effective enforcement, international coordination, and protection against election interference, surveillance, and job displacement. Compliance requirements for companies using AI are expected to evolve as the new AI regulations take shape.
The Governor of California, Gavin Newsom, has issued an executive order on artificial intelligence (AI), outlining a strategic plan for the responsible design, development, integration, and management of emerging AI technologies. The order acknowledges the potential benefits and risks of generative AI tools and calls for a unified governance approach to address these challenges. Among the requirements for state agencies are a report, due within 60 days of the order's issuance, detailing the “most significant, potentially beneficial use cases” for the implementation and integration of generative AI tools, and a risk analysis of potential threats to, and vulnerabilities of, California’s critical energy infrastructure related to generative AI, due by March 2024. The order also establishes guidelines for public sector procurement, sets up a pilot programme, and mandates training for state government workers on the use of generative AI tools to achieve equitable outcomes, no later than July 2024.
The European Commission has designated Alphabet, Amazon, Apple, ByteDance, Meta, and Microsoft as "gatekeepers" under the Digital Markets Act (DMA). Between them, these companies operate 22 core platform services now subject to new rules promoting fair competition and consumer choice. Gatekeepers must conduct independent annual audits of their customer profiling methods and comply with rules governing their interactions with other businesses, consumers, advertisers, and publishers on their platforms. Failure to comply could result in fines of up to 10% of a company's total worldwide turnover (rising to 20% for repeated infringements), as well as periodic penalty payments. The DMA will work in tandem with the AI Act and the Digital Services Act.
The UK House of Commons Science, Innovation and Technology Committee has published an interim report on the governance of artificial intelligence (AI), highlighting 12 key challenges that policymakers should keep in mind when developing AI governance frameworks. The report recommends that an AI bill be introduced into Parliament in the coming months to support the UK’s aspiration of becoming a leader in AI governance. The Committee also warned that if an AI bill is not introduced before the general election, the UK could be left behind by the EU and US, which have already made significant legislative progress towards regulating AI.