February 2024

Negotiators reached a provisional agreement on the EU AI Act on 9 December 2023, and the text was unanimously endorsed by Coreper I on 2 February 2024, making formal adoption likely once the European Parliament votes on it in April 2024. After adoption, there will be a two-year grace period before implementation and enforcement, during which the Commission will run the AI Pact to encourage early, voluntary commitment to the Act's rules and principles. Companies should begin preparing now so that their AI practices are aligned with the Act's requirements when enforcement begins. Holistic AI offers governance, risk, and compliance platforms and solutions to help companies navigate the Act's rules and requirements.
January 2024

The Australian government has published an interim response outlining its plans to regulate high-risk AI systems in the country. The response is guided by key principles including a risk-based approach, collaboration and transparency, and a community-centric approach. Specific measures proposed include mandatory guardrails, testing and transparency initiatives, an AI safety standard, and funding for AI initiatives to support adoption and development. The government aims to strike a balance between fostering innovation and protecting community interests, particularly privacy and security, while addressing potential harms caused by high-risk AI systems. The response reflects Australia's commitment to responsible AI practices and international cooperation.

The European Commission has announced the creation of the European Artificial Intelligence Office (AI Office), a key part of the forthcoming AI Act. The office will contribute to the implementation and enforcement of the Act, and will sit within the Commission's Directorate-General for Communications Networks, Content and Technology (DG CNECT). The AI Office will be financed by the Digital Europe Programme. The EU is expected to promote early voluntary compliance with the AI Act through the Commission and the AI Office. The Act is likely to come into force in the coming months.

The Federal Artificial Intelligence Risk Management Act of 2024 has been introduced in the US Congress, requiring federal agencies to comply with the Artificial Intelligence Risk Management Framework (AI RMF) developed by the National Institute of Standards and Technology (NIST). The framework, designed to help organizations prevent, detect, mitigate, and manage AI risks, sets out four core functions: govern, map, measure, and manage. The Act also includes guidance for agencies on incorporating the AI RMF, reporting requirements, and regulations on AI acquisition. Compliance with NIST's AI Risk Management Framework may soon become a legal requirement, as several state and federal laws already draw on it.
December 2023

The International Organization for Standardization (ISO) and the International Electrotechnical Commission (IEC) have published a voluntary standard, ISO/IEC 42001, that establishes a process for implementing AI Management Systems (AIMS) in organisations. ISO/IEC 42001 is the first process standard to outline a comprehensive governance framework for the responsible and trustworthy deployment of AI systems. The standard is scalable and offers auditability, providing a foundation for external entities to certify and audit AI systems in line with its risk assessment framework. Policymakers behind the EU AI Act and legislative efforts on AI in the US may adopt the standard's objectives and modalities to guide their regional standardisation efforts. Holistic AI, a regulatory compliance solutions provider, offers services to assist organisations in complying with applicable AI regulations and industry standards.