August 2024

FTC Final Rule Targets Fake Reviews and AI-Generated Testimonials

The Federal Trade Commission (FTC) has issued a final rule aimed at combating deceptive practices involving consumer reviews and testimonials, particularly the misuse of AI to generate fake reviews. Key provisions include a ban on creating or disseminating fake reviews and testimonials, restrictions on buying reviews that express a particular sentiment, and disclosure requirements for insider reviews. The rule also prohibits companies from suppressing negative reviews, misrepresenting that a review website they control is independent, and buying or selling fake social media indicators such as followers or views. The aim is to restore trust in online feedback by addressing review manipulation and social media deception through a comprehensive approach that enhances transparency and accountability in digital commerce.

10 things you need to know about the Safe and Secure Innovation for Frontier Artificial Intelligence Models Act

California's Safe and Secure Innovation for Frontier Artificial Intelligence Models Act, also known as SB1047, aims to regulate the development and deployment of large-scale AI models. The bill establishes safety, security, and shutdown-protocol requirements for developers of covered AI models, as well as for operators of computing clusters capable of training such models. It defines "critical harm" as harm caused by a covered model or a derivative that leads to mass casualties or at least $500 million in damage, and it sets out several prohibitions, including creating models that pose an unreasonable risk of critical harm and using contractual terms to escape liability for harm caused by covered models. SB1047 also establishes the Board of Frontier Models, a five-person panel responsible for overseeing the Frontier Model Division, issuing guidance on AI safety practices, and setting best practices for AI development. Finally, the bill establishes CalCompute, a public cloud computing cluster intended to provide accessible computational resources to startups, researchers, and community groups, democratizing access to advanced computing power.

NIST releases finalized documents and draft guidance on AI

The US Department of Commerce has announced progress on AI safety, security, and trustworthiness 270 days after President Biden's executive order on the Safe, Secure, and Trustworthy Development of AI. The National Institute of Standards and Technology (NIST) has introduced multiple releases supporting the executive order's objectives, including the Generative AI Profile and Secure Software Development Practices for Generative AI and Dual-Use Foundation Models. The department has also released Dioptra, an open-source software package for evaluating the resilience of AI models against adversarial attacks. In addition, the Plan for Global Engagement on AI Standards aims to foster international cooperation and the development of AI-related standards.
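To make the Dioptra item concrete, the sketch below illustrates the kind of measurement an adversarial-attack evaluation produces: accuracy on clean inputs versus inputs perturbed by a fast gradient sign method (FGSM) attack. This is a minimal illustration only, not Dioptra's actual API; the toy logistic-regression model and all names and parameters here are assumptions made for the example.

```python
# Illustrative only: NOT Dioptra's API. A hand-rolled FGSM attack against a
# toy logistic-regression model, showing the clean-vs-adversarial accuracy
# gap that adversarial-resilience evaluations report.
import numpy as np

rng = np.random.default_rng(0)

# Toy "trained" model: a fixed logistic-regression classifier.
w = rng.normal(size=20)
b = 0.1

def predict(x):
    """Sigmoid probability that x belongs to class 1."""
    return 1.0 / (1.0 + np.exp(-(x @ w + b)))

def fgsm_perturb(x, y, eps):
    """Shift x by eps in the sign of the input gradient of the logistic loss.

    For logistic loss, the gradient with respect to the input is (p - y) * w,
    so the FGSM step is eps * sign((p - y) * w).
    """
    p = predict(x)
    return x + eps * np.sign((p - y) * w)

# Compare accuracy on clean inputs versus adversarially perturbed ones.
X = rng.normal(size=(1000, 20))
y = (X @ w + b > 0).astype(float)  # labels consistent with the model

clean_acc = np.mean((predict(X) > 0.5) == y)
X_adv = np.array([fgsm_perturb(xi, yi, eps=0.25) for xi, yi in zip(X, y)])
adv_acc = np.mean((predict(X_adv) > 0.5) == y)

print(f"clean accuracy: {clean_acc:.3f}  adversarial accuracy: {adv_acc:.3f}")
```

On this toy model the adversarial accuracy falls well below the clean accuracy; that gap is the robustness signal a test platform in this space is designed to surface at scale across models and attack types.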

July 2024

Senators Hickenlooper and Capito introduce VET AI Act to create guidelines for AI Assurance

US Senators John Hickenlooper and Shelley Moore Capito have introduced the Validation and Evaluation for Trustworthy Artificial Intelligence (VET AI) Act to establish guidelines for third-party AI audits. The bill requires the Director of the National Institute of Standards and Technology (NIST) to develop voluntary guidelines for two kinds of AI assurance: internal assurance, performed by the entity developing or deploying the system, and external assurance, performed by an independent third party. The guidelines must address best practices, methodologies, procedures, and processes for assurance concerning consumer privacy, harm assessment and mitigation, dataset quality, documentation and communication, and governance and process controls.

June 2024

The key AI laws you need to know in Colorado

Colorado has enacted several laws regulating the use of AI, including SB169, which prohibits discrimination in insurance practices; SB205, which provides consumer protections against algorithmic discrimination resulting from AI; HB1147, which addresses the use of deepfake content in electoral communications; and SB22-113, which mandates governance of AI-driven facial recognition technologies by state entities. These laws impose transparency, governance, and risk management obligations on insurers, developers, and deployers, and introduce penalties for non-compliance. As AI legislative proposals continue to accelerate, mapping exposure risks and staying on top of regulatory developments are essential for ensuring compliance.