February 2024

Making Sense of China’s AI Regulations

China's artificial intelligence (AI) market is expected to triple to $61.855 billion by 2025, and the Chinese government expects AI to generate $154.638 billion in annual revenue by 2030. The country has been leading the way in AI regulation, enforcing three distinct regulatory measures at the national, regional, and local levels. These include provisions to strengthen supervision of "deep fake" technologies, regulations for recommendation algorithms, such as the personalized recommendations served on mobile apps, and measures for the safe development and application of generative AI. While there are concerns about the potential suppression of free speech, these laws set an important precedent and establish best practices. China's AI regulations, focused on balancing power and mitigating harms, could influence AI regulation globally.

Approaching the tape: The EU’s final strides in the AI legislation marathon

The EU AI Act reached provisional agreement on 9 December 2023 and was unanimously endorsed by Coreper I on 2 February 2024, making it likely to become official once the European Parliament votes on it in April 2024. After adoption, there will be a two-year grace period before implementation and enforcement, during which the Commission will run the AI Pact to encourage early commitment to the Act's rules and principles. Companies should begin preparing now to maximize their alignment with the Act. Holistic AI offers a governance, risk, and compliance platform and innovative solutions to help companies navigate the Act's rules and requirements.

EU Data Act: Empowering Users Through Informed Data Governance

The EU has set the gold standard for data protection with the GDPR and is on its way to doing the same in the AI space with the AI Act. The Data Act, part of the European Data Strategy, governs how connected products and related services, including IoT devices, handle data, and requires companies to fully disclose how they collect, store, and share users' data. Data holders must provide free, secure, and fair data access while safeguarding trade secrets and user confidentiality, which affects the deployment and functionality of AI systems. Although the Data Act contains no AI-specific provisions, it applies to AI systems deployed in connection with connected products or related services. Compliance with the Data Act does not automatically ensure compliance with the EU AI Act, or vice versa, but the requirements of each may affect the other. A holistic approach, using technical and regulatory tools concurrently, is needed to comply with both regulations.

Lost in Transl(A)t(I)on: Differing Definitions of AI

Regulating artificial intelligence (AI) has become urgent, with countries proposing legislation to ensure the responsible and safe application of AI and minimize potential harm. However, the lack of consensus on how to define AI poses a challenge for regulatory efforts. This article surveys the definitions of AI across multiple regulatory initiatives, including the ICO's guidance, the EU AI Act, the OECD, Canada's Artificial Intelligence and Data Act, California's proposed amendments, and more. While the definitions vary, they generally agree that AI systems operate with varying levels of autonomy, can produce a variety of outputs, and require human involvement in defining objectives and providing input data.

January 2024

Safe and responsible AI in Australia: An overview of the Government's Interim Response

The Australian government has published an interim response outlining its plans to regulate high-risk AI systems in the country. The response is guided by key principles, including a risk-based approach, collaboration and transparency, and a community-centric approach. Proposed measures include mandatory guardrails, testing and transparency initiatives, an AI safety standard, and funding for AI initiatives to support adoption and development. The government aims to strike a balance between fostering innovation and protecting community interests, particularly privacy and security, while addressing the potential harms of high-risk AI systems. The response reflects Australia's commitment to responsible AI practices and international cooperation.