April 2024

International Joint Guidance on Deploying AI Systems Securely

The U.S. National Security Agency’s Artificial Intelligence Security Center (NSA AISC) collaborated with international agencies to release joint guidance on Deploying AI Systems Securely. The guidance advises organizations to implement robust security measures to prevent misuse and data theft, and provides best practices for deploying and using externally developed AI systems. It recommends three overarching best practices: secure the deployment environment, continuously protect the AI system, and secure AI operation and maintenance. The guidelines are voluntary, but all institutions that deploy or use externally developed AI systems are encouraged to adopt them. Compliance is vital for upholding trust and innovating with AI safely.

March 2024

The high cost of non-compliance: Penalties issued for AI under existing laws

Organizations are increasingly investing in AI tools and systems, but these can cause major harm if appropriate business practices and safeguards are not put in place. Many AI systems fall within the scope of existing laws, and the cost of non-compliance can be very high if an organization is sanctioned. Compliance with both existing and AI-specific laws is therefore essential for those developing and deploying AI. This blog post explores some of the penalties issued against AI systems under existing laws. The majority have been issued in the EU, where authorities have targeted the processing of personal data by AI systems under the GDPR. In the US, multiple regulators have taken enforcement actions against unlawful AI use under existing laws. China, too, has begun to penalize AI misuse following the recent enactment of several AI-related laws. Ensuring compliance with both new and existing laws is essential to avoid legal action and heavy penalties.

February 2024

Lost in Transl(A)t(I)on: Differing Definitions of AI

Regulating artificial intelligence (AI) has become urgent, with countries proposing legislation to ensure the responsible and safe application of AI and to minimize potential harm. However, there is a lack of consensus on how to define AI, which poses a challenge for regulatory efforts. This article surveys definitions of AI across multiple regulatory initiatives, including those from the ICO, the EU AI Act, the OECD, Canada’s Artificial Intelligence and Data Act, California’s proposed amendments, and more. While the definitions vary, they generally agree that AI systems have varying levels of autonomy, can produce a variety of outputs, and require human involvement in defining objectives and providing input data.

December 2023

U.S. Department of Homeland Security and UK National Cyber Security Centre Guidelines on Secure AI

The US and UK have jointly published guidelines for secure AI system development, which aim to reduce AI cybersecurity risks across four areas: secure design, secure development, secure deployment, and secure operation and maintenance. The guidelines highlight the risk of adversarial machine learning and focus specifically on systems whose compromise could result in significant physical or reputational damage, disrupt business operations, or leak sensitive or confidential information. They encourage secure-by-design principles and promote the sharing of best practices. The joint publication is a step towards global cooperation, but it will take more than non-binding guidance to have a real global impact. Policymakers, lawmakers, and regulators are taking the risks of AI seriously, and AI risk management is now a competitive necessity.

November 2023

Do Existing Laws Apply to AI?

More than a third of companies use artificial intelligence (AI) in their business practices, with a further 42% exploring how the technology could be utilised, but there are risks involved if appropriate safeguards are not implemented, according to a blog post by Holistic AI. The potential for AI to breach existing laws has attracted the attention of regulators worldwide, with the EU AI Act aiming to become the global standard for AI regulation. Existing laws, including non-discrimination and data protection laws, are often applied to AI cases, resulting in significant penalties.