August 2024

10 things you need to know about the Safe and Secure Innovation for Frontier Artificial Intelligence Models Act

California's Safe and Secure Innovation for Frontier Artificial Intelligence Models Act, also known as SB 1047, aims to regulate the development and deployment of large-scale AI models. The bill establishes safety, security, and shutdown-protocol requirements for developers of covered AI models, as well as for operators of computing clusters capable of training such models. It defines "critical harm" as harm caused by a covered model or covered model derivative that leads to mass casualties or at least $500 million in damage, and it sets forth several prohibitions, including developing or using covered models in ways that pose an unreasonable risk of causing or enabling critical harm and using contracts to escape liability for harm caused by covered models. SB 1047 also establishes the Board of Frontier Models, a five-person panel responsible for overseeing the Frontier Model Division, issuing guidance on AI safety practices, and setting best practices for AI development. Finally, the bill establishes CalCompute, a public cloud computing cluster intended to democratize access to advanced computing power by providing computational resources to startups, researchers, and community groups.

How the EU AI Act interacts with EU product safety legislation

The evolving technology landscape, particularly the emergence of AI-driven products, poses new challenges for product safety. The EU already has a comprehensive product safety framework consisting of general cross-sectoral legislation and various product-specific Union harmonisation legislation (UHL). The AI Act, which entered into force on August 1, 2024, introduces specific requirements for AI systems and models, including requirements related to cybersecurity and human oversight, to ensure their safety. The AI Act complements the EU's existing product safety laws, including the General Product Safety Regulation (GPSR) and the UHL. AI and product safety intersect in the risks that AI-enabled products may pose, which call for rigorous testing, robust security measures, and ongoing precautions to mitigate risks and build a safer technological ecosystem. The blog post details the scope and key safety aspects of the GPSR, explains the New Legislative Framework, outlines the mandatory requirements of the AI Act relevant to product safety, and examines how the product safety of high-risk AI (HRAI) systems will be ensured and surveilled.

NIST releases finalized documents and draft guidance on AI

The US Department of Commerce has announced progress on AI safety, security, and trustworthiness 270 days after President Biden's executive order on the Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence. The National Institute of Standards and Technology (NIST) has introduced multiple documents in support of the executive order's objectives, including the Generative AI Profile and Secure Software Development Practices for Generative AI and Dual-Use Foundation Models. The department has also released Dioptra, an open-source software platform for evaluating the resilience of AI models against adversarial attacks. Finally, A Plan for Global Engagement on AI Standards aims to foster international cooperation on the development of AI-related standards.
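To make the adversarial-robustness evaluation concrete, the sketch below measures a classifier's accuracy under the classic fast gradient sign method (FGSM) attack. This is a generic PyTorch illustration of the kind of test such tooling automates, not Dioptra's actual API; the model, data loader, and epsilon perturbation budget are all assumptions for the example.

```python
import torch
import torch.nn.functional as F

def fgsm_accuracy(model: torch.nn.Module,
                  loader: torch.utils.data.DataLoader,
                  epsilon: float) -> float:
    """Accuracy of `model` on inputs perturbed by a one-step FGSM attack.

    Hypothetical helper for illustration only; it is not part of Dioptra.
    Assumes inputs are scaled to [0, 1] and labels are class indices.
    """
    model.eval()
    correct, total = 0, 0
    for x, y in loader:
        x = x.clone().requires_grad_(True)
        # The gradient of the loss w.r.t. the input gives the direction
        # of perturbation that most increases the classification error.
        loss = F.cross_entropy(model(x), y)
        loss.backward()
        x_adv = (x + epsilon * x.grad.sign()).clamp(0.0, 1.0).detach()
        with torch.no_grad():
            correct += (model(x_adv).argmax(dim=1) == y).sum().item()
        total += y.size(0)
    return correct / total

# Example: compare clean vs. adversarial accuracy for a trained classifier.
# clean_acc = fgsm_accuracy(model, test_loader, epsilon=0.0)
# adv_acc   = fgsm_accuracy(model, test_loader, epsilon=0.03)
```

A large gap between clean and adversarial accuracy is the signal such evaluations look for: it indicates the model's predictions are brittle under small, targeted input perturbations.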

July 2024

AI and Competition: How the EU AI Act will shape dynamics and enforcement

The EU's AI Act and Digital Markets Act have the potential to positively influence competition dynamics by promoting transparency, accountability, and fair competition. AI technologies can benefit market competition by enhancing efficiency and innovation, providing deeper insights and personalization, disrupting incumbent markets, and enabling optimized pricing strategies. However, AI can also harm competition by increasing market concentration, enabling algorithmic collusion, facilitating abuse of dominance, and erecting barriers to entry for smaller firms. The AI Act's transparency and risk-assessment requirements for AI systems could help reduce concerns about market concentration, while its provisions on information sharing with competition authorities could bolster competition law enforcement. The Digital Markets Act's rules on self-preferencing, data usage, and access rights for business users and third parties may prevent technology giants from gaining unfair competitive advantages from AI technologies. The EU is also investigating competition in virtual worlds and generative AI, which may supplement efforts to apply EU competition rules in AI-related contexts. Together, these regulatory frameworks underscore the importance of prioritizing AI Act readiness to mitigate the potential negative effects of AI on competition dynamics.

Senators Hickenlooper and Capito introduce VET AI Act to create guidelines for AI assurance

US Senators John Hickenlooper and Shelley Moore Capito have introduced the Validation and Evaluation for Trustworthy Artificial Intelligence (VET AI) Act to establish guidelines for third-party AI audits. The bill directs the Director of the National Institute of Standards and Technology (NIST) to develop voluntary guidelines for two kinds of AI assurance: internal assurance, conducted by the developer or deployer itself, and external assurance, conducted by an independent third party. For both kinds, the guidelines must address best practices, methodologies, procedures, and processes for assurance concerning consumer privacy, harm assessment and mitigation, dataset quality, documentation and communication, and governance and process controls.