August 2024

How the EU AI Act interacts with EU product safety legislation

The evolving technology landscape, particularly the emergence of AI-driven products, poses new challenges for ensuring product safety. The EU already has a comprehensive product safety framework consisting of general cross-sectoral legislation and various product-specific Union harmonization legislation (UHL). The AI Act, which entered into force on August 1, 2024, introduces specific requirements for AI systems and models, including cybersecurity and human oversight obligations, to ensure their safety. It complements the EU's existing product safety laws, including the General Product Safety Regulation (GPSR) and the UHL. AI intersects with product safety because AI-enabled products can introduce new risks and safety concerns, requiring rigorous testing, robust security measures, and ongoing precautions to mitigate risks and build a safer technological ecosystem. The blog post details the scope and key safety aspects of the GPSR, explains the New Legislative Framework, outlines the mandatory requirements of the AI Act relevant to product safety, and examines how the safety of high-risk AI (HRAI) systems will be ensured and monitored.

NIST releases finalized documents and draft guidance on AI

The US Department of Commerce has announced progress on AI safety, security, and trustworthiness, 270 days after President Biden's executive order on the Safe, Secure, and Trustworthy Development of AI. The National Institute of Standards and Technology (NIST) has released multiple documents supporting the executive order's objectives, including the Generative AI Profile and Secure Software Development Practices for Generative AI and Dual-Use Foundation Models. The department has also released Dioptra, an open-source software tool for evaluating the resilience of AI models against adversarial attacks. Furthermore, NIST's Plan for Global Engagement on AI Standards aims to foster international cooperation and the development of AI-related standards.
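For illustration, evaluating a model's resilience against adversarial attacks typically means measuring how much its accuracy degrades under deliberately crafted input perturbations. The sketch below shows one common approach, the fast gradient sign method (FGSM), in PyTorch; it is not Dioptra's actual API, and the function names (`fgsm_perturb`, `adversarial_accuracy`) are hypothetical.

```python
# Illustrative sketch only -- NOT Dioptra's API. Shows the kind of
# adversarial-robustness measurement a tool like Dioptra automates.
import torch
import torch.nn as nn

def fgsm_perturb(model, x, y, epsilon):
    """Perturb inputs x with the fast gradient sign method (FGSM)."""
    x = x.clone().detach().requires_grad_(True)
    loss = nn.functional.cross_entropy(model(x), y)
    loss.backward()
    # Step in the direction that maximally increases the loss.
    x_adv = x + epsilon * x.grad.sign()
    return x_adv.clamp(0.0, 1.0).detach()  # keep inputs in valid range

def adversarial_accuracy(model, loader, epsilon):
    """Fraction of examples still classified correctly after FGSM."""
    model.eval()
    correct, total = 0, 0
    for x, y in loader:
        x_adv = fgsm_perturb(model, x, y, epsilon)
        with torch.no_grad():
            preds = model(x_adv).argmax(dim=1)
        correct += (preds == y).sum().item()
        total += y.numel()
    return correct / total

# Hypothetical usage: compare clean vs. adversarial accuracy.
# acc_clean = adversarial_accuracy(model, test_loader, epsilon=0.0)
# acc_adv   = adversarial_accuracy(model, test_loader, epsilon=0.03)
```

A large gap between clean and adversarial accuracy indicates the model is fragile under attack; robustness evaluation frameworks run many such attacks and perturbation budgets systematically.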

July 2024

AI and Competition: How the EU AI Act will shape dynamics and enforcement

The EU's AI Act and Digital Markets Act have the potential to influence competition dynamics positively by promoting transparency, accountability, and fair competition. AI technologies can benefit market competition by enhancing efficiency and innovation, providing deeper insights and personalization, disrupting established markets, and enabling optimized pricing strategies. However, AI can also harm competition by increasing market concentration, enabling algorithmic collusion, facilitating abuse of dominance, and erecting barriers to entry for smaller firms. The AI Act's transparency and risk assessment requirements for AI systems could help reduce concerns about market concentration, while its provisions on information sharing with competition authorities could bolster competition law enforcement. The Digital Markets Act's rules on self-preferencing, data usage, and access rights for business users and third parties may prevent technology giants from gaining unfair competitive advantages from AI technologies. The EU is also investigating competition in virtual worlds and generative AI systems, which may supplement efforts to apply EU competition rules in AI-related contexts. Together, these regulatory frameworks underscore the importance of prioritizing AI Act readiness to mitigate AI's potential negative effects on competition dynamics.

Senators Hickenlooper and Capito introduce VET AI Act to create guidelines for AI assurance

US Senators John Hickenlooper and Shelley Moore Capito have introduced the Validation and Evaluation for Trustworthy Artificial Intelligence (VET AI) Act to establish guidelines for third-party AI audits. The bill distinguishes two kinds of AI assurance, internal and external, and requires the Director of the National Institute of Standards and Technology (NIST) to develop voluntary guidelines for both, addressing best practices, methodologies, procedures, and processes for assurance concerning consumer privacy, harm assessment and mitigation, dataset quality, documentation and communication, and governance and process controls.

International competition authorities publish a joint statement on competition in generative AI

Competition authorities from the UK, US, and EU have published a joint statement outlining risks to fair competition that can emerge from generative AI and the principles needed to support competition and innovation while protecting consumers. These principles include fair dealing, interoperability, and choice, with an emphasis on informing consumers about when and how AI is used in products and services. Agencies in the US and UK are stepping up scrutiny of AI-related risks and becoming increasingly vocal about the need to ensure AI complies with existing laws and does not harm consumers.