April 2024

Biden Executive Order 14110 – A 6-Month Update

In October 2023, President Biden signed Executive Order 14110 on Safe, Secure, and Trustworthy Artificial Intelligence, which established AI safety and security standards, promoted consumer protection, enabled AI innovation, and advanced American technology leadership. Federal departments and agencies have completed all tasks due within 90, 150, and 180 days of the order. These tasks included establishing disclosure requirements for developers of the most advanced AI systems, evaluating AI risks to critical infrastructure, preventing foreign entities from developing AI for malicious purposes, expanding investment in AI innovation and talent attraction, and creating a task force to expand AI innovation in healthcare. Further actions include establishing AI Governance Boards, submitting a report on authenticating, labeling, and detecting synthetic content, and identifying clinical errors arising from the use of AI in healthcare. The EO lays the foundation for AI risk management, regulation, and innovation in the US.

International Joint Guidance on Deploying AI Systems Securely

The US National Security Agency’s Artificial Intelligence Security Center (NSA AISC), in collaboration with international partner agencies, has released joint guidance on Deploying AI Systems Securely. The guidance advises organizations to implement robust security measures to prevent misuse and data theft, and provides best practices for deploying and using externally developed AI systems. It recommends three overarching best practices: secure the deployment environment, continuously protect the AI system, and secure AI operation and maintenance. The guidelines are voluntary, but all institutions that deploy or use externally developed AI systems are encouraged to adopt them; doing so is vital to upholding trust and innovating with AI safely.

What you need to know about the proposed American Privacy Rights Act for AI

The American Privacy Rights Act (APRA), a draft proposal released by two committee chairs on a bipartisan basis, could become the first national data privacy framework in the US. It aims to resolve the fragmentation caused by the absence of a national standard: it includes several consumer privacy provisions, restricts data collection and use, and creates a national registry of data brokers. The APRA does not specifically address AI, but its broad scope means it also covers AI systems that process personal data. Industry leaders have responded positively, but some lawmakers are disappointed by the lack of protections for minors and may introduce complementary legislation. The bill has not yet been formally introduced, and Holistic AI can help maximize compliance with the new regulations.

US Federal Agencies Release a Joint Statement on Automated Systems

Several US federal agencies, including the Equal Employment Opportunity Commission (EEOC), the Consumer Financial Protection Bureau, and the Federal Trade Commission, have issued a joint statement emphasizing their commitment to enforcing legal protections against discrimination and bias in automated systems and AI. The agencies stress that existing laws apply to automated systems and encourage responsible innovation. The statement details how each agency has already enforced legal protections in relation to AI and automated systems, underscoring the importance of complying with both existing laws and AI-specific laws. It also warns of potential sources of unlawful discrimination in the use of automated systems, including training data, lack of transparency, and flawed assumptions about users and societal impact.

OMB Policy on Governance, Innovation, and Risk Management

The White House Office of Management and Budget (OMB) has issued a policy on advancing governance, innovation, and risk management for agency use of artificial intelligence (AI). The policy directs federal agencies to promote AI innovation and governance while managing the risks associated with the technology, especially those affecting the safety and rights of the American public. Aligned with President Biden’s executive order on AI, it balances responsibility with innovation and holds federal agencies accountable for the development and risk management of AI. The policy outlines several governance, innovation, and risk management actions for federal agencies, including managing risks in the procurement of AI, addressing risks from the use of AI, expanding transparency around AI use, advancing responsible AI innovation, and growing the AI workforce. Alongside the policy, complementary measures have been introduced to promote responsible AI use, including expanded reporting in the 2024 Federal AI Use Case Inventory and a Request for Information (RFI) on the responsible procurement of AI in government.