August 2024
The new UK Labour government, led by Prime Minister Keir Starmer, plans to regulate powerful AI models, although it has yet to publish a specific bill. Existing UK laws such as the UK GDPR and the Equality Act 2010 already affect AI use, and Labour's approach, which aims for binding regulations and transparency, contrasts with the previous government's pro-innovation stance. The new AI Opportunities Action Plan and DSIT's expanded role will support AI-driven growth and improvements to public services, although the UK public remains concerned about AI's impact on the labor market. The Digital Information and Smart Data Bill, the AI Opportunities Action Plan, and the DSIT restructure all represent potential pathways for future regulation. Compliance with Holistic AI can act as an effective guardrail for organizations amid uncertainty in a rapidly evolving AI regulatory ecosystem.
The Digital Services Act (DSA) is a set of rules designed to create a secure and trustworthy online environment in the European Union (EU). It imposes specific obligations on Very Large Online Platforms (VLOPs) and Very Large Online Search Engines (VLOSEs), those with over 45 million monthly active users in the EU, including transparency disclosures, complaint-handling mechanisms, and annual independent audits. The first audit period closed on 25 August 2024; within three months of receiving an audit report, VLOPs and VLOSEs must submit it to regulators, describe how they will address any operational recommendations, and make the report publicly available. Resources are available for those wanting to learn more about the DSA.
Regulators and policymakers are grappling with challenges posed by AI-generated content, such as deepfakes used to create non-consensual imagery and bots spreading disinformation. To differentiate synthetic from human-generated content, several approaches are being developed, including AI watermarking, content provenance, retrieval-based detectors, and post-hoc detectors. AI watermarking in particular has gained attention, but it lacks standardization and raises privacy concerns. Jurisdictions are taking different paths: the USA is mandating watermarks on AI-generated material, the EU is imposing mandatory disclosures, and China and Singapore require prominent marking alongside technical solutions such as watermarking. Holistic AI offers technical assessments that can help organizations stay ahead of regulatory changes.
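To make the watermarking idea concrete, below is a minimal, hypothetical sketch of the statistical "green-list" scheme common in the research literature (not any jurisdiction's mandated method, and not a production detector): a watermarking generator biases each token toward a pseudorandom subset of the vocabulary seeded by the preceding token, and a detector then counts how often that subset appears and computes a z-score against the baseline expected in unwatermarked text. All names and parameters here are illustrative assumptions.

```python
import hashlib

GREEN_FRACTION = 0.5  # assumed fraction of the vocabulary marked "green" at each step


def is_green(prev_token: str, token: str) -> bool:
    """Pseudorandomly assign `token` to the green list, seeded by `prev_token`.

    A real scheme would use a keyed hash over model token IDs; a plain
    SHA-256 over the word pair is enough to illustrate the statistics.
    """
    digest = hashlib.sha256(f"{prev_token}|{token}".encode()).digest()
    return digest[0] < 256 * GREEN_FRACTION


def green_z_score(tokens: list[str]) -> float:
    """z-score of the observed green-token count vs. the unwatermarked baseline.

    Under no watermark, each of the n = len(tokens) - 1 adjacent pairs is
    green with probability GREEN_FRACTION, so the count is approximately
    normal with mean n*p and variance n*p*(1-p).
    """
    n = len(tokens) - 1
    greens = sum(is_green(p, t) for p, t in zip(tokens, tokens[1:]))
    mean = n * GREEN_FRACTION
    var = n * GREEN_FRACTION * (1 - GREEN_FRACTION)
    return (greens - mean) / var ** 0.5


if __name__ == "__main__":
    # Ordinary human-written text should score near zero.
    sample = "the quick brown fox jumps over the lazy dog".split()
    print(f"z-score: {green_z_score(sample):.2f}")
```

A high z-score (e.g. above 4) is strong evidence the text was produced by a generator biased toward the green list, which is why such detectors can flag watermarked output without access to the model itself; it also shows the fragility regulators must weigh, since paraphrasing the text reshuffles the token pairs and erodes the signal.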
The Federal Trade Commission (FTC) has introduced a new final rule aimed at combatting deceptive practices involving consumer reviews and testimonials, particularly the misuse of AI to generate fake reviews. Key provisions include a ban on creating and disseminating fake reviews and testimonials, restrictions on compensated reviews, and required disclosure of insider reviews. The rule also prohibits companies from suppressing negative reviews, misrepresenting company-controlled review websites as independent, and buying or selling fake social media indicators such as followers or views. The aim is to restore trust in online feedback, addressing review manipulation and social media deception through a comprehensive approach that enhances transparency and accountability in digital commerce.
The EU AI Act seeks to strike a balance between regulating the risks associated with artificial intelligence (AI) technologies and promoting innovation. To that end, the Act introduces regulatory sandboxes that allow providers to experiment with AI systems before placing them on the market. The sandboxes will be established by national authorities in physical, digital, or hybrid form and involve testing AI systems under real-world conditions, with SMEs and startups prioritized for participation. Providers must observe the conditions and requirements of the sandbox plan and remain liable for harm inflicted on third parties as a result of sandbox activities. The sandboxes are to become fully operational within 24 months of the Act's entry into force. Compliance with the Act is required for all relevant AI systems once they are placed on the market, so early preparation for compliance is vital.