September 2023

The UK House of Commons Committee on Science, Innovation and Technology has published an interim report on the governance of artificial intelligence (AI), highlighting 12 key challenges that policymakers should keep in mind when developing AI governance frameworks. The report recommends that an AI bill be introduced into Parliament in the coming months to support the UK’s aspiration of becoming a leader in AI governance. The Committee also cautioned that if an AI bill is not introduced before the general election, the UK could be left behind by the EU and US, which have already made significant legislative progress towards regulating AI.
August 2023

Spain has established a new regulatory body, the Spanish Agency for the Supervision of Artificial Intelligence (AESIA), which will oversee the country's National Artificial Intelligence Strategy and ensure that AI development aligns with the principles of inclusivity, sustainability, and welfare. AESIA is also expected to enforce the EU's landmark AI Act, under which each EU member state must establish a supervisory authority to support the implementation and application of the legislation. The establishment of AESIA comes as part of Spain's Digital Spain 2025 Agenda, a €600 million initiative aimed at shaping the country's digital future.

The Council of the European Union has released a summary of the provisional agreements, pending items, and future priorities discussed during the second trilogue on the EU AI Act. The Act aims to set the global standard for AI regulation through a risk-based approach and imposes obligations on providers, users, and deployers of high-risk AI systems. The legislative process is expected to conclude by the end of 2023, with the final text likely to be published shortly after. The EU AI Act will affect both EU and non-EU companies operating AI systems within EU borders, and organizations can seek help from Holistic AI to manage their AI risks. This article is for informational purposes only and does not provide legal advice.
July 2023

11 Jul 2023
The EU AI Act is proposed legislation aimed at creating a global standard for protecting users of AI systems from preventable harm. The Act outlines a risk-based approach, establishing obligations for AI systems based on their level of risk. High-risk systems are subject to more stringent requirements, including continuous risk management, data governance practices, technical documentation, and transparency provisions. The Act also prohibits certain practices deemed to pose an unacceptable level of risk, such as the use of subliminal techniques or exploitative practices. Non-compliance with the regulation can result in steep penalties of up to €40 million or 7% of global annual turnover, whichever is higher. The Act will have far-reaching implications, affecting entities that interact with the EU market even if they are based outside the EU. The enforcement date of the EU AI Act depends on the remaining stages of the EU legislative process.

Despite emerging regulation on artificial intelligence (AI) around the world, the UK government has yet to propose any AI-specific legislation. Instead, individual departments have published a series of guidance papers and strategies to provide a framework for those using and developing AI within the UK. These include an AI auditing framework, guidance on explaining decisions made with AI, guidance on AI and data protection, a national data strategy, and a national AI strategy. The latest developments include a white paper setting out a pro-innovation approach to regulating AI, the publication of an AI action plan, and the launch of the Centre for Data Ethics and Innovation’s portfolio of AI assurance techniques. The UK government's proposals aim to cement the UK's position as an AI superpower over the next 10 years by investing in infrastructure and education, and by adopting a dynamic, adaptable approach to regulation.