January 2024

US Federal Artificial Intelligence Risk Management Act of 2024 Introduced

The Federal Artificial Intelligence Risk Management Act of 2024 has been introduced in the US Congress. It would require federal agencies to comply with the Artificial Intelligence Risk Management Framework (AI RMF) developed by the National Institute of Standards and Technology (NIST). The framework, designed to help organizations prevent, detect, mitigate, and manage AI risks, sets out four key functions: mapping, measuring, managing, and governing. The Act also includes guidance for agencies on incorporating the AI RMF, reporting requirements, and regulations on AI acquisition. Compliance with NIST’s AI Risk Management Framework may soon become a legal requirement, as several state and federal laws already draw on it.

December 2023

ISO/IEC 42001:2023 – AI Standard on Establishing, Maintaining and Improving AI Management Systems Published

The International Organization for Standardization (ISO) and the International Electrotechnical Commission (IEC) have published a voluntary standard, ISO/IEC 42001, that establishes a process for implementing AI Management Systems (AIMS) in organisations. ISO/IEC 42001 is the first process standard to outline a comprehensive governance framework for the responsible and trustworthy deployment of AI systems. The standard is scalable and auditable, providing a foundation for external entities to certify and audit AI systems in line with its risk assessment framework. Policymakers behind the EU AI Act and legislative efforts on AI in the US may adopt the standard’s objectives and modalities to guide their regional standardisation efforts. Holistic AI, a regulatory compliance solutions provider, offers services to assist organisations in complying with applicable AI regulations and industry standards.

Ground Zero Unfolds: The Landmark Provisional Agreement on the EU AI Act

European co-legislators have reached a provisional agreement on the EU AI Act, which seeks to harmonise AI regulations across the EU. The agreement balances innovation with fundamental rights and safety and positions the EU as a leader in digital regulation. The EU AI Act includes prohibitions on certain biometric identification practices, updated requirements for high-risk AI systems, and a two-tiered approach to general-purpose AI systems. It is expected to enter into force gradually, with provisions on prohibitions taking effect within six months and provisions on transparency and governance taking effect after twelve months. The EU AI Act will likely set a global benchmark for the ethical and responsible design, development, and deployment of AI systems.

November 2023

California’s Privacy Protection Agency Releases Draft Rules for Automated Decision Technologies

California's Privacy Protection Agency has released draft regulations on the use of Automated Decision-making Technologies (ADTs), defining them as any system, software, or process that processes personal information and uses computation to make or execute decisions or to facilitate human decision-making. Under the proposed rules, consumers have the right to access information on the technologies employed and the methodologies by which decisions were reached, while businesses must disclose their use of personal information in ADTs to consumers and provide opt-out mechanisms. The move is part of California's wider effort to regulate the use of AI within the State.

The UK’s AI Regulation Bill: A New Direction in AI Governance?

Lord Chris Holmes has introduced the Artificial Intelligence (Regulation) Bill in the UK House of Lords. The Bill would establish a dedicated AI Authority responsible for enforcing the regulation, defines AI, and sets out regulatory principles and sandboxes to support innovation. It also proposes the appointment of AI responsible officers and imposes transparency, intellectual property, and labelling obligations. It remains uncertain whether the Bill will be adopted, but AI regulation is gaining momentum worldwide, and it is crucial to prioritise the development of AI systems that promote ethical principles such as fairness and harm mitigation. Holistic AI offers expertise in AI governance, risk, and compliance.