April 2024
The US National Security Agency’s Artificial Intelligence Security Center (NSA AISC) collaborated with international agencies to release joint guidance on Deploying AI Systems Securely. The guidance advises organizations to implement robust security measures to prevent misuse and data theft, and provides best practices for deploying and using externally developed AI systems. It recommends three overarching best practices: secure the deployment environment, continuously protect the AI system, and secure AI operation and maintenance. The guidelines are voluntary, but all institutions that deploy or use externally developed AI systems are encouraged to adopt them. Compliance is vital to uphold trust and innovate with AI safely.
The increasing use of AI in elections has raised concerns about the potential for misinformation and deepfakes to manipulate public opinion. Governments and tech companies have taken measures to curb the spread of AI-generated content, including passing laws requiring disclaimers on AI-generated political advertisements and issuing guidelines for tech platforms to mitigate election-related risks. However, the efficacy of these measures remains uncertain. Tech giants have also joined forces to combat AI-generated election disinformation, but their agreement lacks binding requirements. Clear disclosures and watermarking remain potential safeguards in the ongoing struggle against AI-driven misinformation.
The American Privacy Rights Act (APRA) proposal, released by two bipartisan committee chairs, could establish the first national data privacy framework in the US. It aims to address the fragmentation caused by the absence of a national standard, and includes several consumer privacy provisions, restrictions on data collection and use, and a national registry of data brokers. The APRA does not specifically address AI, but its broad scope means it inadvertently covers AI systems that process personal data. Industry leaders have responded positively, but some lawmakers are disappointed by the lack of protections for minors and may introduce complementary legislation. The bill has not yet been formally introduced, and Holistic AI can help maximise compliance with the new regulations.
Several US federal agencies, including the Equal Employment Opportunity Commission (EEOC), the Consumer Financial Protection Bureau, and the Federal Trade Commission, have issued a joint statement emphasizing their commitment to enforcing legal protections against discrimination and bias in automated systems and AI. The agencies stress that existing laws apply to automated systems and encourage responsible innovation. The statement details how each agency has already enforced legal protections in relation to AI and automated systems, highlighting the importance of compliance with both existing and AI-specific laws. It also warns of potential sources of unlawful discrimination in the use of automated systems, including flawed training data, lack of transparency, and faulty assumptions about users and societal impact.
The EU's Artificial Intelligence Act (AI Act) introduces a risk-based framework that categorizes AI systems by the level of risk they pose, with certain practices prohibited outright. The AI Act prohibits AI systems that violate human dignity, freedom, equality, and privacy. Eight key AI practices are prohibited, including subliminal, manipulative, or deceptive AI techniques; exploitative systems that significantly distort behavior; AI systems that classify or score people based on behavior or personality characteristics; predictive policing based solely on AI profiling; real-time remote biometric identification in publicly accessible spaces for law enforcement purposes; and AI systems aimed at inferring individuals' emotional states in workplaces and educational settings. Non-compliance with these prohibitions can result in significant administrative fines of up to €35,000,000 or up to 7% of an offender's global annual turnover, whichever is higher. The rules on prohibited practices will be the first to apply, six months after the Act enters into force, so organizations using a prohibited AI system face hefty penalties early on.
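To illustrate how the penalty ceiling works in practice, the sketch below computes the maximum applicable fine as the greater of the fixed €35,000,000 cap and 7% of worldwide annual turnover. The function name and structure are hypothetical illustrations of the arithmetic only, not legal advice.

```python
def max_prohibited_practice_fine(global_annual_turnover_eur: float) -> float:
    """Illustrative ceiling for fines for prohibited AI practices under the
    EU AI Act: the greater of EUR 35,000,000 or 7% of worldwide annual
    turnover. Hypothetical helper for illustration; not legal advice."""
    FIXED_CAP_EUR = 35_000_000
    TURNOVER_SHARE = 0.07
    return max(FIXED_CAP_EUR, TURNOVER_SHARE * global_annual_turnover_eur)

# For a company with EUR 1 billion in turnover, 7% (EUR 70M) exceeds the
# fixed cap, so the turnover-based ceiling applies.
print(max_prohibited_practice_fine(1_000_000_000))  # → 70000000.0
```

For smaller firms whose 7% share falls below €35,000,000, the fixed cap is the relevant ceiling instead.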