April 2024
The EU AI Act introduces a new impact assessment, the Fundamental Rights Impact Assessment ("FRIA"), to further strengthen the protection of EU citizens' fundamental rights, which the Act defines by reference to the Charter of Fundamental Rights of the European Union. Deployers of high-risk AI systems must conduct a FRIA before deployment to identify potential threats to individuals' fundamental rights and to put adequate responsive measures in place. The assessment must include essential details such as the risks of harm to the individuals or groups potentially affected by the system's use, and it must be revised whenever, in the deployer's evaluation, pertinent factors such as the risks of harm or the frequency of use change.
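To make the documentation obligation more concrete, the sketch below shows one hypothetical way a deployer might record a FRIA internally. The field names and the revision check are illustrative assumptions, not a structure prescribed by the EU AI Act.

```python
from dataclasses import dataclass, field
from datetime import date

@dataclass
class FriaRecord:
    """Hypothetical record of one FRIA (field names are illustrative, not from the Act)."""
    system_name: str
    intended_purpose: str
    affected_groups: list[str]      # individuals or groups potentially affected by the system's use
    risks_of_harm: list[str]        # identified risks to fundamental rights
    mitigation_measures: list[str]  # responsive actions planned against those risks
    usage_frequency: str            # e.g. "daily" or "per case"
    assessed_on: date = field(default_factory=date.today)

    def needs_revision(self, current_risks: list[str], current_frequency: str) -> bool:
        """Flag the FRIA for revision when pertinent factors change, per the deployer's evaluation."""
        return set(current_risks) != set(self.risks_of_harm) or current_frequency != self.usage_frequency
```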
The increasing intertwining of artificial intelligence (AI) systems with digital networks has led to a rise in cyber threats against these systems. With cyberattacks projected to cost around EUR 9 trillion in 2024, the European Union's forthcoming Artificial Intelligence Act (EU AI Act) aims to fortify AI systems and models with robust cybersecurity measures. The EU AI Act imposes mandatory cybersecurity requirements on high-risk AI systems and on general-purpose AI (GPAI) models with systemic risk. Certification of high-risk AI systems under the Cybersecurity Act's voluntary certification schemes may provide a presumption of conformity with the cybersecurity requirements of the EU AI Act, reducing duplication of compliance costs. The EU AI Act also relates to other cybersecurity legislation, such as the Cyber Resilience Act and the Cybersecurity Act, reinforcing the EU's cybersecurity framework. The Act's cybersecurity provisions require high-risk AI systems to be resilient against attempts by unauthorized third parties to manipulate their usage, outputs, or performance by exploiting vulnerabilities in the system. GPAI models with systemic risk are considered capable of triggering additional risks compared to other GPAI models, and cybersecurity vulnerabilities in these models may amplify those risks or increase the likelihood of harmful consequences. Providers of GPAI models with systemic risk are therefore obligated to ensure an adequate level of cybersecurity protection for the model and its physical infrastructure.
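The Act does not prescribe specific technical controls, but the sketch below illustrates, under assumptions, two simple measures in the spirit of the resilience requirement: verifying that a model artifact has not been tampered with before it is loaded, and rejecting inputs outside an expected range that could be crafted to skew outputs. The function names, the expected digest, and the feature bounds are hypothetical.

```python
import hashlib
from pathlib import Path

def verify_model_integrity(model_path: Path, expected_sha256: str) -> bool:
    """Detect unauthorized modification of a model artifact before loading it.

    `expected_sha256` would be a digest recorded when the model was approved
    for deployment (a hypothetical process, not one mandated by the Act).
    """
    digest = hashlib.sha256(model_path.read_bytes()).hexdigest()
    return digest == expected_sha256

def validate_input(features: list[float], lower: float = 0.0, upper: float = 1.0) -> bool:
    """Reject out-of-range feature vectors that could be crafted to manipulate outputs."""
    return all(lower <= x <= upper for x in features)
```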
The NIST AI RMF is a voluntary risk management framework that NIST was directed to develop under the National Artificial Intelligence Initiative Act of 2020. It is designed to help organizations manage the risks of AI and to promote the trustworthy and responsible development and use of AI systems; it is rights-preserving and non-sector-specific. The framework is operationalized through five tools or elements: the AI RMF Core, the Playbook, the Roadmap, the Crosswalks, and Use-Case Profiles. The Core provides the foundation for trustworthy AI systems, with four key functions (Govern, Map, Measure, and Manage) that guide organizations through development and deployment across various domains. The Playbook offers actionable guidance for implementing the Core's functions through detailed sub-actions. The Roadmap outlines NIST's strategy for advancing the AI RMF, focusing on collaboration and key activities to keep it relevant. The Crosswalks map the AI RMF to other risk frameworks and standards, showing how adopting one framework can help satisfy the criteria of another. Finally, Use-Case Profiles provide tailored implementations of the AI RMF's functions and actions for particular sectors and use cases.
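As a rough illustration of how a Use-Case Profile ties the Core functions to concrete actions, the snippet below sketches a hypothetical profile for an AI-assisted hiring screen. The listed actions are assumptions made for illustration and are not quoted from the framework.

```python
# A hypothetical, minimal use-case profile: the keys follow the four AI RMF Core
# functions; the actions are illustrative examples, not text from the framework.
hiring_screening_profile = {
    "Govern": ["assign accountability for AI risk decisions", "document risk tolerance"],
    "Map": ["describe intended use and affected groups", "inventory data sources"],
    "Measure": ["evaluate accuracy and disparate impact", "track drift after deployment"],
    "Manage": ["prioritize identified risks", "define response and rollback plans"],
}

def untreated_functions(profile: dict[str, list[str]]) -> list[str]:
    """Return Core functions for which the profile lists no actions yet."""
    return [fn for fn in ("Govern", "Map", "Measure", "Manage") if not profile.get(fn)]

print(untreated_functions(hiring_screening_profile))  # -> [] once every function has actions
```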
The U.S. National Security Agency's Artificial Intelligence Security Center (NSA AISC) collaborated with international partner agencies to release joint guidance on Deploying AI Systems Securely. The guidance advises organizations to implement robust security measures to prevent misuse and data theft, and provides best practices for deploying and using externally developed AI systems. It recommends three overarching best practices: secure the deployment environment, continuously protect the AI system, and secure AI operation and maintenance. The joint guidelines are voluntary, but all institutions that deploy or use externally developed AI systems are encouraged to adopt them; doing so helps uphold trust and allows organizations to innovate with AI safely.
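The guidance is written as organizational practices rather than code, but a minimal sketch of the "secure the deployment environment" idea might look like the checks below. These are purely illustrative and POSIX-only; the guidance does not specify these particular checks.

```python
import os
import stat
from pathlib import Path

def deployment_environment_checks(model_dir: Path) -> dict[str, bool]:
    """Illustrative checks in the spirit of 'secure the deployment environment':
    the serving process should not run as root, and model artifacts should not
    be writable by arbitrary local users.
    """
    results = {"not_running_as_root": os.geteuid() != 0}  # POSIX-only call
    world_writable = [
        p for p in model_dir.rglob("*")
        if p.is_file() and p.stat().st_mode & stat.S_IWOTH
    ]
    results["no_world_writable_artifacts"] = not world_writable
    return results
```

A deployment pipeline could run something like `deployment_environment_checks(Path("/opt/models"))` before starting the serving process and refuse to start if any check fails.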
The growing use of AI technology in election processes has raised concerns about the potential use of AI-generated misinformation and deepfakes to manipulate public opinion. Governments and tech companies have taken measures to curb the spread of deceptive AI-generated content, including laws requiring disclaimers on AI-generated political advertisements and guidelines for tech platforms to mitigate election-related risks. However, the efficacy of these measures remains uncertain. Tech giants have also joined forces to combat AI-generated election disinformation, though their agreement lacks binding requirements. Clear disclosures and watermarking remain potential safeguards in the ongoing struggle against AI-driven misinformation.
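As one small example of the disclosure idea, the sketch below uses the Pillow library to attach a plain-text AI-disclosure label to a PNG image's metadata. This illustrates labeling only, not a robust watermark, since such metadata is easily stripped; the key name and file paths are assumptions.

```python
from PIL import Image
from PIL.PngImagePlugin import PngInfo

# Attach a hypothetical disclosure label to a generated image's PNG metadata.
image = Image.open("generated.png")
label = PngInfo()
label.add_text("ai_disclosure", "This image was created with generative AI.")
image.save("generated_labeled.png", pnginfo=label)

# Read the label back; a downstream platform could surface it as a disclaimer.
labeled = Image.open("generated_labeled.png")
print(labeled.text.get("ai_disclosure"))
```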