November 2023

California's Privacy Protection Agency has released draft regulations on the use of automated decision-making technology (ADMT), defined as any system, software, or process that processes personal information and uses computation to make or execute a decision or to facilitate human decision-making. Under the proposed rules, consumers have the right to access information about the technologies employed and the logic by which decisions were reached, while businesses must disclose their use of personal information in ADMT and provide opt-out mechanisms. The move is part of California's wider effort to regulate the use of AI within the state.

US President Joe Biden signed Executive Order 14110 on the Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence on 30 October 2023, in a bid to promote responsible AI use and encourage innovation while avoiding bias, discrimination, and harm. The order defines AI as "a machine-based system that can, for a given set of human-defined objectives, make predictions, recommendations, or decisions influencing real or virtual environments" and requires the National Institute of Standards and Technology to establish guidelines for trustworthy AI systems. The order also covers dual-use foundation models, Infrastructure as a Service products, synthetic content, equity and civil rights, and healthcare.
October 2023

On 30 October 2023, US President Joe Biden signed an Executive Order on artificial intelligence aimed at establishing safety and security standards to protect Americans' privacy and civil rights. The order sets strict standards for AI safety testing, requires agencies to establish standards for biological synthesis screening and cybersecurity programs, and directs the development of safeguards for the use of AI by the military and intelligence community. It also aims to protect the equal opportunity and non-discrimination rights of US citizens, provide resources to shape the transformative potential of AI in education and healthcare, and address job displacement caused by AI. The order promotes innovation and competition by supporting small developers and entrepreneurs and by expanding visas for skilled immigrants, and it directs actions to foster international cooperation on safe, secure, and trustworthy AI. In addition, it requires AI companies to disclose their safety testing procedures and results to the US federal government. The Executive Order follows other recent efforts by the Biden Administration to promote responsible AI.

Legislation and regulation are increasingly recognized as important tools for promoting safety, fairness, and ethics in the use of AI. While the US has made progress on vertical legislation targeting specific use cases, Europe has made strides in horizontal legislation covering multiple use cases at once, with the EU AI Act seeking to become the global gold standard for AI regulation. The Algorithmic Accountability Act, introduced for the third time in the US Congress, targets automated decision systems used in augmented critical decision processes and applies to entities within the Federal Trade Commission's jurisdiction. Covered entities must conduct ongoing algorithmic impact assessments and submit annual summary reports to the FTC. With US lawmakers determined to impose more conditions on the use of algorithms and AI, enterprises will need to navigate a growing body of rules.

New York City Mayor Eric Adams and Chief Technology Officer Matthew Fraser have released an AI Action Plan for responsible AI in city government, which defines AI as an umbrella term for a range of technologies that use data to make predictions, inferences, recommendations, rankings, or other decisions. The plan sets out seven key initiatives, including designing a governance framework, educating and empowering the public, and supporting AI implementation within city agencies while ensuring due diligence in procurement. The plan's publication follows earlier efforts in the city to manage the risks of AI, such as developing a framework for identifying algorithmic tools and mandating independent bias audits of automated employment decision tools.