December 2023

U.S. Department of Homeland Security and UK National Cyber Security Centre Guidelines on Secure AI

The UK's National Cyber Security Centre (NCSC) and the US Cybersecurity and Infrastructure Security Agency (CISA) have jointly published guidelines for secure AI system development, which aim to reduce AI cybersecurity risks across four areas: secure design, secure development, secure deployment, and secure operation and maintenance. The guidelines highlight the risk of adversarial machine learning and focus on systems whose compromise could cause significant physical or reputational damage, disrupt business operations, or leak sensitive or confidential information. They encourage secure-by-design principles and promote the sharing of best practices. The joint publication is a step towards global cooperation, but it will take more than non-binding guidance to achieve real global impact. Policymakers, lawmakers, and regulators are taking the risks of AI seriously, and AI risk management is now a competitive necessity.

November 2023

California Privacy Protection Agency Releases Draft Rules for Automated Decision-making Technologies

The California Privacy Protection Agency has released draft regulations on the use of Automated Decision-making Technologies (ADTs), defining them as any system, software, or process that processes personal information and uses computation to make or execute decisions, or to facilitate human decision-making. Under the proposed rules, consumers have the right to access information on the technologies employed and how decisions about them are made, while businesses must disclose to consumers how personal information is used in ADTs and provide opt-out mechanisms. The move is part of California's wider effort to regulate the use of AI within the state.

Trilogue Tensions: Shaping the Future of AI Regulation in the EU

The EU's proposed regulation laying down harmonised rules for AI systems, known as the EU AI Act, is currently in trilogue negotiations between the European Parliament, the Commission, and the Council. With European elections approaching in June 2024 and major AI powers such as the US and China taking increasing regulatory steps, pressure is mounting to adopt the EU AI Act into law by the end of 2023. However, controversies are emerging, particularly around the regulation of foundation models. If critical issues remain unresolved, the most contested elements of the legislation may be left out of the final text and delegated to the proposed AI Office or to future harmonised standardisation initiatives.

The UK’s AI Regulation Bill: A New Direction in AI Governance?

Lord Chris Holmes has introduced the Artificial Intelligence (Regulation) Bill in the UK House of Lords. The Bill mandates the establishment of a dedicated AI Authority responsible for enforcing the regulation, defines AI, and establishes regulatory principles and sandboxes to support innovation. It also proposes the appointment of AI responsible officers and imposes transparency, intellectual property, and labelling obligations. It remains uncertain whether the Bill will be adopted, but AI regulation is gaining momentum worldwide, making it crucial to prioritise the development of AI systems that uphold ethical principles such as fairness and harm mitigation. Holistic AI offers expertise in AI governance, risk, and compliance.

Ethical AI Down Under: Australia’s AI Framework and Action Plan

Countries have taken markedly different approaches to promoting responsible AI. The EU is leading the way with its trio of laws targeting AI and algorithms, the UK has taken a light-touch approach through white papers, and the US has introduced laws at the state, federal, and local levels, while China has enacted multiple measures regulating AI and Brazil is advancing its own AI bill. Australia's AI Ethics Framework Discussion Paper proposes eight core principles for responsible AI, but the Australian government has yet to translate them into regulatory or legal requirements. Australia's AI Action Plan outlines the government's vision to position the country as a global leader in responsible AI, yet legal action has been taken against the government over failures of its automated debt recovery tool, Robodebt. Responsible AI is becoming increasingly important to avoid harm, minimise liability, and protect reputation.