October 2024

Risk Management Profile for Artificial Intelligence and Human Rights

The US Department of State has published a Risk Management Profile for Artificial Intelligence and Human Rights, voluntary guidance for governmental, private-sector, and civil society entities on using AI technologies in a manner consistent with international human rights. The Profile builds on the National Institute of Standards and Technology's AI Risk Management Framework 1.0, which provides a lifecycle approach to AI risk management and examples of common organizational functions. While not exhaustive, the Profile describes situations with human rights implications that organizations may encounter when using AI systems. It also offers a normative rationale for adopting the US approach to AI governance and risk mitigation, which is likely to shape long-term considerations in this arena.

September 2024

California Governor Newsom vetoes Safe & Secure AI Innovation Bill (SB1047)

California Governor Gavin Newsom has vetoed the Safe and Secure Innovation for Frontier Artificial Intelligence Models Act, or SB1047, which aimed to regulate the development and deployment of large-scale AI models in the state. The bill, which set strict safety standards for developers of AI models costing over $100m to train and empowered the California Attorney General to hold them accountable for negligence that caused harm, was opposed by Big Tech. Newsom acknowledged the need for regulatory measures on AI development but criticised the bill for not being informed by "an empirical trajectory analysis of AI systems".

California 2024 AI Legislation Roundup

California, home to 32 of the top 50 AI companies globally, has enacted several new laws and regulations related to AI, focusing on transparency, ethical use, and protecting individuals from harmful AI-generated content. Key legislative activity includes laws prohibiting non-consensual deepfake content, protecting against unauthorized sharing of intimate digital images, mandating clear disclosures for digitally manipulated political advertising, and imposing stricter requirements on collecting and sharing personal information. Governor Newsom is set to decide on pending bills mandating AI developer disclosures and setting safety standards for powerful AI systems. Controversial bills face significant opposition from industry leaders, who argue they could stifle innovation, while supporters believe they could serve as a model for future regulations. Other proposed laws regulate AI in healthcare, transportation, and communications, with specific provisions for autonomous vehicles, healthcare decision-making, and consumer privacy. Stakeholders across various industries are closely monitoring these developments as California continues to lead efforts in AI regulation.

August 2024

FTC Final Rule Targets Fake Reviews and AI-Generated Testimonials

The Federal Trade Commission (FTC) has introduced a final rule aimed at combating deceptive practices involving consumer reviews and testimonials, particularly the misuse of AI to generate fake reviews. Key provisions include a ban on creating and disseminating fake reviews and testimonials, restrictions on compensated reviews, and required disclosure of insider reviews. The rule also prohibits companies from suppressing negative reviews, misrepresenting review websites, and trading in fake social media indicators such as followers or views. The aim is to restore trust in online feedback by addressing review manipulation and social media deception through a comprehensive approach that enhances transparency and accountability in digital commerce.

10 things you need to know about the Safe and Secure Innovation for Frontier Artificial Intelligence Models Act

California's Safe and Secure Innovation for Frontier Artificial Intelligence Models Act, also known as SB1047, aims to regulate large-scale AI model development and deployment. The bill establishes safety, security, and shutdown protocol requirements for developers of covered AI models, as well as operators of computing clusters capable of training such models. It defines "critical harm" as harm caused by a covered model or derivative that leads to mass casualties or $500 million or more in damage, and sets forth several prohibitions, including against creating harmful models and using contracts to escape liability for harm caused by covered models. SB1047 also establishes the Board of Frontier Models, a five-person panel responsible for overseeing the Frontier Model Division, issuing guidance for AI safety practices, and setting best practices for AI development. Finally, the bill establishes CalCompute, a public cloud computing cluster that provides accessible computational resources for startups, researchers, and community groups to democratize access to advanced computing power.