September 2024

The Council of Europe's (CoE) Framework Convention on Artificial Intelligence and Human Rights, Democracy, and the Rule of Law has opened for signature and has already been signed by several parties, including the United States, the European Union, and the United Kingdom. The Convention applies primarily to States and certain international organizations. It establishes a comprehensive set of obligations designed to ensure that activities throughout the AI system lifecycle are consistent with human rights, democracy, and the rule of law. The Convention also sets out several guiding principles and a risk management framework for AI systems. It marks a pivotal step in the global governance of AI, extending the geographic reach of such initiatives through the Council of Europe's broad membership.
August 2024

The new UK Labour government, led by Prime Minister Keir Starmer, plans to regulate powerful AI models, although it has not yet introduced a specific bill. In the meantime, existing UK laws such as the UK GDPR and the Equality Act 2010 continue to apply to AI use. Labour's approach contrasts with the previous government's pro-innovation stance, favoring binding regulation and transparency. Potential regulatory pathways include the Digital Information and Smart Data Bill, the AI Opportunities Action Plan, and the restructuring of the Department for Science, Innovation and Technology (DSIT), which together are intended to support AI-driven growth and improvements to public services. Meanwhile, the UK public remains concerned about AI's impact on the labor market. Amid this uncertainty and a rapidly evolving AI regulatory ecosystem, Holistic AI's compliance solutions can act as an effective guardrail for organizations.

The Digital Services Act (DSA) is a set of rules designed to create a secure and trustworthy online environment in the European Union (EU). It imposes specific obligations on Very Large Online Platforms (VLOPs) and Very Large Online Search Engines (VLOSEs), those with over 45 million monthly active users in the EU, including disclosing information, implementing complaint mechanisms, and undergoing annual independent audits. The first audit period closed on 25 August 2024; VLOPs and VLOSEs must now submit an audit report, describe how they will address any operational recommendations, and make the audit report publicly available within three months of receiving it. Resources are available for those who want to learn more about the DSA.

The EU AI Act seeks to strike a balance between regulating the risks associated with artificial intelligence (AI) technologies and promoting innovation. To that end, the Act introduces regulatory sandboxes that allow providers to experiment with AI systems before placing them on the market. The sandboxes will be established by national authorities in physical, digital, or hybrid form and may involve testing AI systems in real-world conditions. SMEs and startups are given priority access to the sandboxes. Providers must observe the conditions and requirements of the sandbox plan and remain liable for any harm inflicted on third parties as a result of sandbox activities. The sandboxes are expected to become fully operational within 24 months of the Act's entry into force. Compliance with the Act is required for all relevant AI systems once they are placed on the market, so early preparation for compliance is vital.

The evolving technology landscape, particularly the emergence of AI-driven products, poses new challenges for product safety. The EU already has a comprehensive product safety framework consisting of general cross-sectoral legislation, such as the General Product Safety Regulation (GPSR), and product-specific Union harmonization legislation (UHL). The AI Act, which entered into force on August 1, 2024, introduces specific requirements for AI systems and models, including cybersecurity and human oversight obligations, to ensure their safety. The AI Act complements the existing product safety laws of the EU, including the GPSR and UHL. AI-enabled products can pose novel risks and safety concerns, requiring rigorous testing, robust security measures, and ongoing precautions to mitigate risks and build a safer technological ecosystem. The blog post details the scope and key safety aspects of the GPSR, explains the New Legislative Framework, outlines the mandatory requirements of the AI Act relevant to product safety, and examines how the product safety of high-risk AI (HRAI) systems will be ensured and monitored.