January 2025

President Trump signs Executive Order on AI leadership

President Trump signed an executive order in January 2025 to remove barriers to American leadership in artificial intelligence (AI), emphasizing the need to solidify the United States' position as a global leader in AI and to develop AI systems free from ideological bias. The order directs the administration to develop its AI policy vision, deliver an AI action plan, and unwind President Biden's AI executive order; the White House's Office of Management and Budget must publish a memorandum implementing the new order by March 25th, 2025. The administration has revoked the Biden order and is reviewing federal agency actions taken under it to ensure consistency with the new policy. Despite the revocation, AI innovation and safety remain stated priorities for the US government.

Biden Issues Executive Order on Advancing U.S. Leadership in Artificial Intelligence Infrastructure

US President Joe Biden issued an Executive Order (EO) to speed up the development of AI infrastructure in the US, prioritizing economic competitiveness, national security, AI safety, clean energy, and low energy prices for consumers. Federal agencies are directed to lease federal sites, accelerate clean energy deployment, prioritize permits for AI infrastructure, develop transmission lines, interconnect AI infrastructure to the electric grid, keep electricity prices low for consumers, and support the development of AI infrastructure by allies and partners. Private developers are required to fully fund the construction, operation, and maintenance of the AI infrastructure, uphold strong labor standards, and procure domestically manufactured semiconductors to power AI.

December 2024

Protecting worker wellbeing in the face of AI: the Department of Labor’s principles and best practices

The US Department of Labor has published a best practices roadmap for worker wellbeing, which outlines principles for workplaces that use AI. The eight principles focus on protecting worker rights and empowering employees, such as ensuring transparency in AI use, establishing AI governance and human oversight, and supporting workers impacted by AI. Although the principles and suggested actions are voluntary and not legally binding, they reflect enacted and proposed AI laws. Integrating these principles can reduce legal, financial, and reputational risks and increase trust in the use of AI in the workplace.

October 2024

Risk Management Profile for Artificial Intelligence and Human Rights

The US Department of State has published a Risk Management Profile for Artificial Intelligence and Human Rights, voluntary guidance for governmental, private, and civil society entities on using AI technologies in a manner consistent with international human rights. The Profile draws on the National Institute of Standards and Technology's AI Risk Management Framework 1.0, which provides a lifecycle-based approach to AI risk management and examples of common organizational functions. The Profile is not exhaustive, but it illustrates human rights-implicating situations that organizations may encounter when using AI systems, and it offers a normative rationale for adopting the US approach to AI governance and risk mitigation strategies that will shape long-term considerations in this arena.

September 2024

California Governor Newsom vetoes Safe & Secure AI Innovation Bill (SB1047)

California Governor Gavin Newsom has vetoed the Safe and Secure Innovation for Frontier Artificial Intelligence Models Act (SB1047), which aimed to regulate the development and deployment of large-scale AI models in the state. The bill, which set strict safety standards for developers of AI models costing over $100 million to train and empowered the California Attorney General to hold them accountable for negligence that caused harm, was opposed by Big Tech. Newsom acknowledged the need for regulatory measures on AI development but criticized the bill for not being informed by "an empirical trajectory analysis of AI systems".