February 2024

The Federal Trade Commission (FTC) has issued orders to Alphabet, Amazon, Anthropic, Microsoft, and OpenAI requiring them to provide information about their investments and partnerships in generative AI companies, citing concerns that these investments may distort innovation and undermine fair competition. The orders compel the companies to produce information covering agreements and related documents; interaction and influence; analyses and reports; documents related to exclusivity and access; materials provided to government entities; specifications for document production; use of technology; and contact and communication information. The FTC aims to better understand the competitive landscape and the implications of these AI collaborations in order to ensure fair competition and prevent practices that could stifle innovation.

09 Feb 2024
China's AI market was worth $23.196 billion in 2021 and is expected to nearly triple to $61.855 billion by 2025, and the government expects AI to generate $154.638 billion in annual revenue by 2030. China has been introducing AI regulations since 2021, with three distinct regulatory measures enforced at the national, regional, and local levels. These regulations aim to govern the proliferation of AI and its innovative use cases, covering a wide scope that includes deepfake technology, algorithmic recommendation management for internet information services, and generative AI. They seek to mitigate the potential harms associated with AI and set a crucial precedent for other jurisdictions to follow.

The EU AI Act reached provisional agreement on 9 December 2023 and was unanimously endorsed by Coreper I on 2 February 2024, making formal adoption likely once the European Parliament votes in April 2024. After adoption, there will be a two-year grace period before full implementation and enforcement, during which the Commission will run the AI Pact to encourage early commitment to the Act's rules and principles. Companies should begin preparing now to maximize their alignment with the Act. Holistic AI offers governance, risk, and compliance platforms and innovative solutions to help companies navigate the Act's rules and requirements.

The EU set the gold standard for data protection regulation with the GDPR and is on its way to doing the same in the AI space with the AI Act. The Data Act, part of the European Data Strategy, governs how connected products and related services (including IoT devices) handle data, and requires full disclosure from companies on how they collect, store, and share users' data. Data holders are obligated to provide free, secure, and fair data access while safeguarding trade secrets and user confidentiality, which affects the deployment and functionality of AI systems. Although the Data Act contains no AI-specific provisions, it applies to AI systems deployed in connection with connected products or related services. Compliance with the Data Act does not automatically ensure compliance with the EU AI Act, or vice versa, but the two sets of requirements interact. A holistic approach, using technical and regulatory tools concurrently, is needed to comply with both regulations.

Regulating artificial intelligence (AI) has become urgent, with countries proposing legislation to ensure its responsible and safe application and to minimize potential harm. However, there is no consensus on how to define AI, which poses a challenge for regulatory efforts. This article surveys the definitions of AI used across multiple regulatory initiatives, including those of the ICO, the EU AI Act, the OECD, Canada's Artificial Intelligence and Data Act, California's proposed amendments, and more. While the definitions vary, they generally agree that AI systems operate with varying levels of autonomy, can produce a variety of outputs, and require human involvement in defining objectives and providing input data.