July 2024

International competition authorities publish a joint statement on competition in generative AI

Competition authorities from the UK, US, and EU have published a joint statement outlining the risks to fair competition that generative AI can pose, along with the principles needed to support competition and innovation while protecting consumers. These principles include fair dealing, interoperability, and choice, with a focus on informing consumers about when and how AI is used in products and services. Agencies in the US and UK are stepping up enforcement around AI risks and becoming increasingly vocal about the need to ensure AI complies with existing laws and does not harm consumers.

June 2024

The AI Landscape in 2024: A 6-Month Update

Recent legal developments have raised critical questions about fairness, accountability, and the protection of rights in the digital age, particularly in relation to AI. This has led to a wave of laws and regulations being introduced globally to tackle the challenges AI poses. The blog highlights key developments in the US, including legal actions involving AI, navigating the era of deepfakes, and recent US AI legislation and regulation across sectors. It also covers advancements in AI governance in Europe and the Asia-Pacific region, emphasizing a shared commitment to harnessing AI's potential while managing the associated risks.

Advancing Global Collaboration in AI Governance: Insights from the AI Seoul Summit 2024

The Republic of Korea and the United Kingdom co-hosted the AI Seoul Summit in May 2024, building on the momentum of the UK's AI Safety Summit in 2023. The summit brought together global leaders from government, industry, academia, and civil society to explore cooperation on AI safety, innovation, and inclusivity. It produced the Seoul Declaration, the Seoul Statement of Intent, and the Seoul Ministerial Statement, each emphasizing the importance of international collaboration on AI safety, and it led to the establishment of new AI safety institutes and the Frontier AI Safety Commitments signed by 16 global AI technology companies. A third AI summit is scheduled for Paris in early 2025.

May 2024

Towards International Cooperation on Responsible AI

Governments around the world are increasingly coming together to address the challenges posed by the development of AI systems, with safety, security, trustworthiness, and responsible development at the forefront of these efforts. Notable international developments include the US-UK partnership to develop tests for advanced AI models; the United Nations General Assembly's adoption of its first landmark resolution on AI; the upcoming AI Seoul Summit, co-hosted by the UK and the Republic of Korea; joint international guidance on the secure deployment of AI systems; updates to the EU-US Terminology and Taxonomy for AI; and the US-China agreement to develop a framework for the responsible development of AI. The growing importance of compliance in AI governance is also emphasized.

April 2024

International Joint Guidance on Deploying AI Systems Securely

The US National Security Agency’s Artificial Intelligence Security Center (NSA AISC) collaborated with international agencies to release joint guidance on Deploying AI Systems Securely. The guidance advises organizations to implement robust security measures to prevent misuse and data theft, and provides best practices for deploying and using externally developed AI systems. It recommends three overarching best practices: secure the deployment environment, continuously protect the AI system, and secure AI operation and maintenance. The guidelines are voluntary, but all institutions that deploy or use externally developed AI systems are encouraged to adopt them. Compliance is vital to maintaining trust and innovating with AI safely.