October 2023
Legislation and regulation are increasingly recognized as important tools for promoting safety, fairness, and ethics in the use of AI. While the US has made progress on vertical legislation targeting specific use cases, Europe has made strides in horizontal legislation covering multiple use cases at once, with the EU AI Act seeking to become the global gold standard for AI regulation. The Algorithmic Accountability Act, introduced for the third time in the US, targets automated decision systems used in augmented critical decision processes and applies to entities over which the Federal Trade Commission has jurisdiction. Covered entities would be required to conduct ongoing algorithmic impact assessments and submit annual summary reports to the FTC. The US appears determined to impose more conditions on the use of algorithms and AI, and enterprises will need to navigate an influx of new rules.
New York City Mayor Eric Adams and Chief Technology Officer Matthew Fraser have released an AI Action Plan for responsible AI in city government, which defines AI as an umbrella term for a range of technologies that make predictions, inferences, recommendations, rankings, or other decisions with data. The plan sets out seven key initiatives, including designing a governance framework, educating and empowering the public, and supporting AI implementation within city agencies while ensuring due diligence in procurement. The publication of the plan follows previous efforts in the city to manage the risks of AI, such as developing a framework for identifying algorithmic tools and enforcing independent bias audits of automated employment decision tools.
02 Oct 2023
California is among the states leading in proposing laws to regulate AI at the local, state, and federal levels. While California does not yet have a dedicated Artificial Intelligence Act, multiple initiatives have been proposed. California Assembly Bill 302 requires the Department of Technology to establish an inventory of all high-risk automated decision systems used by any state agency and to submit an annual report to the designated committees. Senate Bill 313 seeks to regulate the use of AI by state agencies and establish an Office of Artificial Intelligence within the Department of Technology. Assembly Bill 331 aims to prohibit algorithmic discrimination through automated decision tools and would impose obligations on developers and deployers to conduct impact assessments and establish governance programs to manage risks. Senate Concurrent Resolution 17 affirms the California Legislature's commitment to the White House's Blueprint for an AI Bill of Rights. It remains unclear whether California will enact a dedicated AI Act, but policymakers are recognizing the need for AI legislation and targeting multiple critical applications.
September 2023
22 Sep 2023
The Federal Trade Commission (FTC), which is responsible for protecting consumers and enforcing civil antitrust law in the US, is becoming increasingly vocal about the regulation of AI and the risks it poses to consumers and competition. The FTC has published warnings about biased AI, issued guidance on AI and algorithms, and highlighted the applicability of existing laws to AI, including section 5 of the FTC Act, the Fair Credit Reporting Act, and the Equal Credit Opportunity Act. The FTC has emphasized the need for AI transparency, accountability, and independent standards, as well as the importance of ensuring that training data for algorithms is representative and high-quality. It has also highlighted the role of regulators in preventing a small number of firms from dominating the market and hindering competition. The FTC is investigating OpenAI over consumer protection concerns related to its chatbot, ChatGPT, which has made false claims about individuals and may have leaked data. Ongoing compliance and risk management are essential to protect against preventable harms and ensure that AI is an asset, not a liability.
The Governor of California, Gavin Newsom, has issued an executive order on artificial intelligence (AI), outlining a strategic plan for the responsible design, development, integration, and management of emerging AI technologies. The order acknowledges the potential benefits and risks associated with generative AI tools and calls for a unified governance approach to address these challenges. Among the requirements for state agencies are the submission of a report within 60 days of the order's issuance, detailing the "most significant, potentially beneficial use cases" for the implementation and integration of generative AI tools, and a risk analysis of potential threats and vulnerabilities to California's critical energy infrastructure related to generative AI by March 2024. The order also establishes guidelines for public sector procurement, sets up a pilot program, and mandates training for state government workers on the use of generative AI tools to achieve equitable outcomes by no later than July 2024.