August 2024
The Digital Services Act (DSA) is a set of rules designed to create a secure and trustworthy online environment in the European Union (EU). It imposes specific obligations on Very Large Online Platforms (VLOPs) and Very Large Online Search Engines (VLOSEs), those with over 45 million monthly active users in the EU, including transparency disclosures, complaint-handling mechanisms, and annual independent audits. The first audit period closed on 25 August 2024; within three months of receiving an audit report, VLOPs and VLOSEs must make it publicly available together with a description of how they will address any operational recommendations.
The Federal Trade Commission (FTC) has introduced a new final rule aimed at combating deceptive practices involving consumer reviews and testimonials, particularly the misuse of AI to generate fake reviews. Key provisions include a ban on creating and disseminating fake reviews and testimonials, restrictions on compensated reviews, and required disclosure of insider reviews. The rule also prohibits companies from suppressing negative reviews and from misrepresenting company-controlled review websites as independent, and it targets fake social media indicators such as purchased followers and views. The aim is to restore trust in online feedback, addressing review manipulation and social media deception through a comprehensive approach to transparency and accountability in digital commerce.
The US Department of Commerce has announced progress on AI safety, security, and trustworthiness 270 days after President Biden's executive order on the Safe, Secure, and Trustworthy Development of AI. The National Institute of Standards and Technology has released multiple documents supporting the executive order's objectives, including the Generative AI Profile and Secure Software Development Practices for Generative AI and Dual-Use Foundation Models. The department has also released Dioptra, an open-source software testbed for evaluating the resilience of AI models against adversarial attacks. In addition, the Plan for Global Engagement on AI Standards aims to foster international cooperation on the development of AI-related standards.
July 2024
US Senators John Hickenlooper and Shelley Moore Capito have introduced the Validation and Evaluation for Trustworthy Artificial Intelligence (VET AI) Act to establish guidelines for third-party AI audits. The bill requires the Director of the National Institute of Standards and Technology (NIST) to develop voluntary guidelines for AI assurance, which the bill divides into two kinds: internal assurance and external assurance. For both, the guidelines must address best practices, methodologies, procedures, and processes concerning consumer privacy, harm assessment and mitigation, dataset quality, documentation and communication, and governance and process controls.
Competition authorities from the UK, US, and EU have published a joint statement outlining potential risks to fair competition posed by generative AI and the principles needed to support competition and innovation while protecting consumers. These principles include fair dealing, interoperability, and choice, with an emphasis on informing consumers when and how AI is used in products and services. Agencies in the US and UK are stepping up scrutiny of AI risks and becoming increasingly vocal about the need to ensure that AI complies with existing laws and does not harm consumers.