April 2024

Navigating the Nexus: The EU's Cybersecurity Framework and AI Act in Concert

The increasing integration of artificial intelligence (AI) systems with digital networks has exposed these systems to a growing number of cyber threats. With cyberattacks projected to cost around EUR 9 trillion in 2024, the European Union's forthcoming Artificial Intelligence Act (EU AI Act) aims to fortify AI systems and models with robust cybersecurity measures. The EU AI Act imposes mandatory cybersecurity requirements on high-risk AI systems and on general-purpose AI (GPAI) models with systemic risk. Certification of high-risk AI systems under the Cybersecurity Act's voluntary certification schemes may provide a presumption of conformity with the cybersecurity requirements of the EU AI Act, reducing duplicative compliance costs. The EU AI Act also interacts with other cybersecurity legislation, such as the Cyber Resilience Act and the Cybersecurity Act, reinforcing the EU's broader cybersecurity framework.

The EU AI Act's cybersecurity provisions mandate that high-risk AI systems demonstrate resilience against unauthorized attempts by third parties to manipulate their usage, outputs, or performance by exploiting vulnerabilities in the system. GPAI models with systemic risk are considered capable of posing additional risks compared to basic GPAI models, and cybersecurity vulnerabilities in these models may amplify those risks or increase the likelihood of harmful consequences. Providers of GPAI models with systemic risk are therefore obligated to ensure an adequate level of cybersecurity protection for both the model and its physical infrastructure.

Elements of NIST’s AI Risk Management Framework: An Overview

The NIST AI RMF is a voluntary risk management framework developed under a mandate in the National Artificial Intelligence Initiative Act of 2020. It is designed to help organizations manage the risks of AI and promote the trustworthy, responsible development and use of AI systems, while remaining rights-preserving and non-sector-specific. The framework is operationalized through five tools or elements: the NIST Core, the AI RMF Playbook, the Roadmap, the Crosswalks, and Use-Case Profiles. The NIST Core provides the foundation for trustworthy AI systems through four key functions (Govern, Map, Measure, and Manage) that guide organizations in development and deployment across various domains. The AI RMF Playbook offers actionable guidance for implementing the AI RMF's functions through detailed sub-actions. The AI RMF Roadmap outlines NIST's strategy for advancing the AI RMF, focusing on collaboration and key activities to maintain its relevance. The AI RMF Crosswalks map the AI RMF to other risk frameworks, showing how adopting one framework can help satisfy the criteria of another, as sketched below. Finally, the AI RMF Use-Case Profiles provide tailored implementations of the AI RMF's functions and actions for particular sectors and use cases.
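To make the crosswalk concept concrete, here is a minimal sketch in Python of what a crosswalk-style mapping looks like as a data structure. The control identifiers and the counterpart framework are hypothetical placeholders, not NIST's actual crosswalk entries.

```python
# Illustrative sketch of a crosswalk: each AI RMF function is paired with
# controls from another framework. All control IDs below are hypothetical
# placeholders, NOT actual NIST crosswalk content.
CROSSWALK = {
    "Govern":  ["OtherFramework-Policy-1", "OtherFramework-Roles-2"],
    "Map":     ["OtherFramework-Context-3"],
    "Measure": ["OtherFramework-Metrics-4"],
    "Manage":  ["OtherFramework-Response-5"],
}

def controls_for(function: str) -> list[str]:
    """Return the counterpart controls mapped to a given AI RMF function."""
    return CROSSWALK.get(function, [])

if __name__ == "__main__":
    for fn in ("Govern", "Map", "Measure", "Manage"):
        print(f"{fn} -> {', '.join(controls_for(fn))}")
```

The value of a crosswalk is exactly this kind of lookup: an organization that has already implemented one framework's controls can see at a glance which AI RMF functions those controls help satisfy, and vice versa.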

International Joint Guidance on Deploying AI Systems Securely

The US National Security Agency's Artificial Intelligence Security Center (NSA AISC) collaborated with international agencies to release joint guidance on Deploying AI Systems Securely. The guidance advises organizations to implement robust security measures to prevent misuse and data theft, and provides best practices for deploying and using externally developed AI systems. It recommends three overarching best practices: secure the deployment environment, continuously protect the AI system, and secure AI operation and maintenance. The joint guidelines are voluntary, but all institutions that deploy or use externally developed AI systems are encouraged to adopt them. Adherence is vital to upholding trust and innovating safely with AI.

AI and Elections: Policy Makers Crack Down

The increasing use of AI technology in elections has raised concerns about misinformation and deepfakes being used to manipulate public opinion. Governments and tech companies have taken measures to curb the spread of deceptive AI-generated content, including passing laws that require disclaimers on AI-generated political advertisements and implementing guidelines for tech platforms to mitigate election-related risks. However, the efficacy of these measures remains uncertain. Tech giants have also joined forces to combat AI-generated election disinformation, but their agreement lacks binding requirements. Clear disclosures and watermarking remain potential safeguards in the ongoing struggle against AI-driven misinformation.
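To illustrate what a machine-readable disclosure could look like in practice, below is a minimal sketch, assuming Python with the Pillow library, that attaches a simple provenance label to a PNG image's metadata. The tag names are hypothetical, and a plain metadata tag is easy to strip, so this illustrates disclosure labeling rather than a tamper-resistant watermark or any specific standard.

```python
# Minimal sketch: attach a machine-readable AI-disclosure label to a PNG.
# Assumes the Pillow library (pip install Pillow). A metadata tag like this
# is easily removed, so it models disclosure, not robust watermarking.
from PIL import Image
from PIL.PngImagePlugin import PngInfo

def add_ai_disclosure(src_path: str, dst_path: str, generator: str) -> None:
    """Copy an image, adding text chunks that disclose AI generation."""
    img = Image.open(src_path)
    meta = PngInfo()
    meta.add_text("ai_generated", "true")   # hypothetical tag names
    meta.add_text("ai_generator", generator)
    img.save(dst_path, pnginfo=meta)

def read_ai_disclosure(path: str) -> dict:
    """Return any text metadata stored in the PNG."""
    return dict(Image.open(path).text)

if __name__ == "__main__":
    add_ai_disclosure("ad.png", "ad_labeled.png", "example-model-v1")
    print(read_ai_disclosure("ad_labeled.png"))
```

Robust watermarking schemes go further by embedding the signal in the image content itself so that it survives cropping and re-encoding, which is part of why binding technical standards remain an open question for policy makers.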

What you need to know about the proposed American Privacy Rights Act for AI

The American Privacy Rights Act (APRA) proposal, released by two bipartisan committee chairs, could lead to the first national data privacy framework in the US. It aims to address the challenges created by the absence of a national standard: the proposal includes several consumer privacy provisions, restricts data collection and use, and creates a national registry of data brokers. The APRA does not specifically address AI, but its broad scope means it also covers AI systems that process personal data. Industry leaders have responded positively, but some lawmakers are disappointed by the lack of protections for minors and may introduce complementary legislation. The bill has not yet been formally introduced, and Holistic AI can help maximize compliance with the new regulations.