November 2023

G7 Countries Release International Guiding Principles and a Code of Conduct on Governing Advanced AI Systems

The G7 nations have unveiled International Guiding Principles on Artificial Intelligence (AI) and a voluntary Code of Conduct for AI developers, comprising 11 actionable guidelines for organisations developing advanced foundation models. These include taking appropriate measures to identify and mitigate risks across the AI lifecycle, publicly reporting AI systems’ capabilities and limitations, and prioritising research to mitigate societal, safety, and security risks. The development is particularly timely given the urgency among policymakers worldwide to chart regulatory pathways for governing AI responsibly, as highlighted by several initiatives, including the Biden-Harris Administration’s Executive Order on AI and the establishment of the United Nations’ High-Level Advisory Body on AI.

October 2023

Biden Administration Signs Executive Order on AI

On October 30, 2023, the Biden Administration signed an Executive Order on artificial intelligence aimed at establishing safety and security standards and protecting Americans' privacy and civil rights. The order sets strict standards for AI testing, requires agencies to establish standards for biological synthesis screening and cybersecurity programs, and directs action on AI safety in military and intelligence applications. It also aims to protect the equal opportunity and non-discrimination rights of U.S. citizens, provide resources to harness the transformative potential of AI in education and healthcare, and address job displacement caused by AI. To promote innovation and competition, the order supports small developers and entrepreneurs and expands visas for skilled immigrants, and it directs actions for international cooperation towards safe, secure, and trustworthy AI. In addition, it requires AI companies to disclose their safety testing procedures and results to the U.S. federal government. The Executive Order follows other recent efforts towards responsible AI by the Biden Administration.

Digital Services Act: European Commission Publishes Final Delegated Regulation on Conducting Independent Audits

On October 20, 2023, the European Commission published its final version of the Delegated Regulation on conducting independent audits for Very Large Online Platforms (VLOPs) and Very Large Online Search Engines (VLOSEs) under the Digital Services Act (DSA). The rules seek to provide guidance to audited providers and auditors on the audit process, reporting templates, and procedural details. Holistic AI, a leader in AI Assurance and Algorithm Auditing, offers independent annual audits and other compliance services to covered entities under the DSA. The company provides customized solutions to assist businesses in complying with the regulation and offers a Final Audit Report with operational recommendations and risk analysis.

New York City Publishes AI Action Plan

New York City Mayor Eric Adams and Chief Technology Officer Matthew Fraser have released an AI Action Plan for responsible AI in city government, which defines AI as an umbrella term for a range of technologies that make predictions, inferences, recommendations, rankings, or other decisions using data. The plan sets out seven key initiatives, including designing a governance framework, educating and empowering the public, and supporting AI implementation within city agencies while ensuring due diligence in procurement. The publication of the plan follows previous efforts in the city to manage the risks of AI, such as developing a framework for identifying algorithmic tools and enforcing independent bias audits of automated employment decision tools.

September 2023

ISO and IEC Make Foundational Standard on Artificial Intelligence Publicly Available

The International Organization for Standardization (ISO) and the International Electrotechnical Commission (IEC) have released a new AI standard, ISO/IEC 22989, to the public. The standard aims to provide a shared vocabulary and framework for essential AI concepts, promoting dialogue between stakeholders and paving the way for technical standards focused on responsible AI development and deployment. The standard defines more than 110 key concepts in the field of AI and underscores the importance of multi-stakeholder consultations and the need for standardisation, given the lack of global alignment and consensus on crucial issues. Holistic AI can assist organisations in operationalising technical standards at scale to ensure AI systems are developed and deployed safely, effectively, and in line with compliance obligations.