September 2024

UK Introduces New Bill to Regulate Automated Decision-Making in the Public Sector

The Public Authority Algorithmic and Automated Decision-Making Systems Bill (HL Bill 27) has been introduced in the UK House of Lords to regulate the use of automated and algorithmic tools in decision-making processes within public authorities. The proposed legislation applies to all algorithmic and automated decision-making systems developed or procured by public authorities, excluding those used for national security purposes and routine calculations. The bill introduces requirements for algorithmic impact assessments, algorithmic transparency records, monitoring and auditing, employee training, and adherence to human rights and democratic values. The legislation also aims to align with the Council of Europe's Framework Convention on AI.

The first internationally binding AI convention opens for signature

The Council of Europe's Framework Convention on Artificial Intelligence and Human Rights, Democracy, and the Rule of Law has opened for signature and has already been signed by several parties, including the United States, the European Union, and the United Kingdom. The Convention applies primarily to States and certain international organizations. It establishes a comprehensive set of obligations designed to ensure that activities throughout the AI system lifecycle are consistent with human rights, democracy, and the rule of law, and it sets out guiding principles and a risk management framework for AI systems. The Convention is a pivotal step in the global governance of AI, expanding the geographic influence of such initiatives through the Council of Europe's broad reach.

August 2024

The conclusion of the first independent DSA audit period for VLOPs and VLOSEs

The Digital Services Act (DSA) is a set of rules designed to create a secure and trustworthy online environment in the European Union (EU). It imposes specific obligations on Very Large Online Platforms (VLOPs) and Very Large Online Search Engines (VLOSEs), those with more than 45 million average monthly active users in the EU, including disclosing information, implementing complaint mechanisms, and undergoing annual independent audits. The first audit period closed on 25 August 2024; once they receive their audit reports, VLOPs and VLOSEs must set out how they will address any operational recommendations in an audit implementation report and make the audit report publicly available within three months of receiving it.

FTC Final Rule Targets Fake Reviews and AI-Generated Testimonials

The Federal Trade Commission (FTC) has issued a final rule aimed at combating deceptive practices involving consumer reviews and testimonials, particularly the misuse of AI to generate fake reviews. Key provisions include a ban on the creation and dissemination of fake reviews and testimonials, restrictions on compensated reviews, and disclosure requirements for insider reviews. The rule also prohibits companies from suppressing negative reviews, misrepresenting company-controlled review websites as independent, and buying or selling fake social media indicators such as followers or views. The aim is to restore trust in online feedback by addressing review manipulation and social media deception through a comprehensive approach to transparency and accountability in digital commerce.

NIST releases finalized documents and draft guidance on AI

The US Department of Commerce has announced progress on AI safety, security, and trustworthiness 270 days after President Biden's Executive Order on the Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence. The National Institute of Standards and Technology (NIST) has released several documents supporting the executive order's objectives, including the Generative AI Profile for its AI Risk Management Framework and Secure Software Development Practices for Generative AI and Dual-Use Foundation Models. NIST has also released Dioptra, an open-source software platform for evaluating the resilience of AI models against adversarial attacks. Furthermore, its Plan for Global Engagement on AI Standards aims to foster international cooperation on the development of AI-related standards.