August 2024

How effective is watermarking for AI-generated content?

Regulators and policymakers are facing challenges posed by AI-generated content, such as deepfakes creating non-consensual imagery and bots spreading disinformation. To differentiate between synthetic and human-generated content, various approaches are being developed, including AI watermarking, content provenance, retrieval-based detectors, and post-hoc detectors. AI watermarking in particular has gained attention, but it lacks standardization and raises privacy concerns. Jurisdictions are taking different approaches, with the USA mandating watermarks on AI-generated material, the EU imposing mandatory disclosures, and China and Singapore requiring prominent marking and technical measures such as watermarking. Holistic AI offers technical assessments that can help organizations stay ahead of regulatory changes.
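As a rough illustration of what AI watermarking can involve, the sketch below shows a toy "green list" scheme for text: each token choice is biased toward a pseudo-random subset of the vocabulary seeded on the previous token, and a detector scores how often that bias appears. The vocabulary, partition rule, and function names are illustrative assumptions, not taken from the article or from any standard.

```python
import hashlib
import random

# Illustrative sketch only: a toy "green list" text watermark. The vocabulary,
# partition rule, and thresholds are assumptions for demonstration, not a
# standard or the specific scheme discussed in the article.

VOCAB = ["the", "a", "model", "data", "safe", "output", "risk", "policy", "test", "report"]

def green_list(prev_token: str, fraction: float = 0.5) -> set:
    """Pseudo-randomly pick a 'green' subset of the vocabulary, seeded on the previous token."""
    seed = int(hashlib.sha256(prev_token.encode()).hexdigest(), 16) % (2 ** 32)
    rng = random.Random(seed)
    shuffled = sorted(VOCAB)
    rng.shuffle(shuffled)
    return set(shuffled[: int(len(shuffled) * fraction)])

def detect(tokens: list[str], fraction: float = 0.5) -> float:
    """Score the share of tokens that fall in their green list; watermarked text scores well above chance."""
    pairs = list(zip(tokens, tokens[1:]))
    hits = sum(1 for prev, tok in pairs if tok in green_list(prev, fraction))
    return hits / max(len(pairs), 1)

if __name__ == "__main__":
    sample = "the model output the report".split()
    print(f"green-list rate: {detect(sample):.2f}")  # unwatermarked text should sit near 0.5
```

In a real generator the green list biases token sampling at generation time, so watermarked text scores far above the chance rate while ordinary human text does not.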

FTC Final Rule Targets Fake Reviews and AI-Generated Testimonials

The Federal Trade Commission (FTC) has introduced a new final rule aimed at combating deceptive practices involving consumer reviews and testimonials, particularly the misuse of AI in generating fake reviews. Key provisions include a ban on creating and disseminating fake reviews and testimonials, restrictions on compensated reviews, and disclosure requirements for insider reviews. The rule also prohibits companies from suppressing negative reviews and misrepresenting review websites, and it targets fake social media indicators such as purchased followers or views. The aim is to restore trust in online feedback by addressing review manipulation and social media deception, enhancing transparency and accountability in digital commerce.

Experimenting before marketing: Regulatory sandboxes under the EU AI Act

The EU AI Act seeks to strike a balance between regulating the risks associated with artificial intelligence (AI) technologies and promoting innovation. The Act introduces regulatory sandboxes that allow providers to experiment with AI systems before placing them on the market. The sandboxes will be established by national authorities in physical, digital, or hybrid form and may involve testing AI systems in real-world conditions. SMEs and startups are given priority access to the sandboxes. Providers must observe the conditions and requirements of the sandbox plan and remain liable for harm inflicted on third parties as a result of sandbox activities. National sandboxes must be operational within 24 months of the Act entering into force. Compliance with the Act is required for all relevant AI systems once they are placed on the market, so early preparation for compliance is vital.

10 things you need to know about the Safe and Secure Innovation for Frontier Artificial Intelligence Models Act

California's Safe and Secure Innovation for Frontier Artificial Intelligence Models Act, also known as SB 1047, aims to regulate the development and deployment of large-scale AI models. The bill establishes safety, security, and shutdown protocol requirements for developers of covered AI models, as well as for operators of computing clusters capable of training such models. It defines "critical harm" as harm caused by a covered model or derivative that results in mass casualties or at least $500 million in damage, and sets out several prohibitions, including creating harmful models and using contracts to escape liability for harm caused by covered models. SB 1047 also establishes the Board of Frontier Models, a five-person panel responsible for overseeing the Frontier Model Division, issuing guidance on AI safety practices, and setting best practices for AI development. Finally, the bill establishes CalCompute, a public cloud computing cluster that provides accessible computational resources for startups, researchers, and community groups to democratize access to advanced computing power.

How the EU AI Act interacts with EU product safety legislation

The evolving technology landscape, particularly the emergence of AI-driven products, poses new challenges to ensuring product safety. The EU already has a comprehensive product safety framework consisting of general cross-sectoral legislation and various product-specific Union harmonisation legislation (UHL). The AI Act, which entered into force on August 1, 2024, introduces specific requirements for AI systems and models, including those related to cybersecurity and human oversight, to ensure their safety. The AI Act complements the EU's existing product safety laws, including the General Product Safety Regulation (GPSR) and the UHL. AI-enabled products can introduce new risks and safety concerns, requiring thorough testing, robust security measures, and ongoing precautions to mitigate risks and build a safer technological ecosystem. The blog post details the scope and key safety aspects of the GPSR, explains the New Legislative Framework, outlines the mandatory requirements of the AI Act relevant to product safety, and examines how the product safety of high-risk AI (HRAI) systems will be ensured and surveilled.