September 2023

How is the FTC Regulating AI?

The Federal Trade Commission (FTC), which is responsible for protecting consumers and enforcing civil antitrust law in the US, is becoming increasingly vocal about the regulation of AI and the risks it poses to consumers and competition. The FTC has published warnings about biased AI, issued guidance on AI and algorithms, and highlighted the applicability of existing laws to AI, including Section 5 of the FTC Act, the Fair Credit Reporting Act, and the Equal Credit Opportunity Act. The agency has emphasized the need for AI transparency, accountability, and independent standards, as well as the importance of ensuring that training data for algorithms is representative and high-quality. It has also highlighted the role of regulators in preventing a small number of firms from dominating the market and hindering competition. The FTC is currently investigating OpenAI over consumer protection concerns related to its chatbot, ChatGPT, which has made false claims about individuals and may have leaked data. Ongoing compliance and risk management are essential to protect against preventable harms and to ensure that AI is an asset, not a liability.

California Governor Gavin Newsom Issues an Executive Order on AI

The Governor of California, Gavin Newsom, has issued an executive order on artificial intelligence (AI), outlining a strategic plan for the responsible design, development, integration, and management of emerging AI technologies. The order acknowledges the potential benefits and risks of generative AI tools and calls for a unified governance approach to address these challenges. Among the requirements for state agencies are a report, due within 60 days of the order’s issuance, detailing the “most significant, potentially beneficial use cases” for the implementation and integration of generative AI tools, and a risk analysis of potential threats to and vulnerabilities of California’s critical energy infrastructure related to generative AI, due by March 2024. The order also establishes guidelines for public-sector procurement, sets up a pilot program, and mandates training for state government workers on the use of generative AI tools to achieve equitable outcomes, no later than July 2024.

August 2023

California’s Progress Towards Regulating AI

California is taking significant steps towards regulating AI, with multiple proposed laws aimed at making AI systems safer and fairer. AB-331 seeks to prohibit the use of automated decision tools that contribute to, or result in, algorithmic discrimination, while the California Workplace Technology Accountability Act sought to regulate worker information systems and electronic monitoring in the workplace. Modifications have also been proposed to California’s existing employment regulations to address the use of AI in employment decisions. Additionally, SB-313 seeks to establish the Office of Artificial Intelligence within the Department of Technology to guide the design, use, and deployment of automated systems by state agencies, with the aim of minimizing bias.

June 2023

AI Disclosure Act of 2023: A Step Towards Algorithmic Transparency

The AI Disclosure Act of 2023 is a federal bill introduced by U.S. Representative Ritchie Torres of New York's 15th Congressional District that seeks to create greater transparency around the use of generative AI. The bill would require any output generated by artificial intelligence to be accompanied by a disclaimer indicating that it was generated by AI. A violation would be treated as an unfair or deceptive act or practice, enforceable by the Federal Trade Commission with the same penalties, privileges, and immunities provided under the Federal Trade Commission Act. The AI Disclosure Act is an important step towards algorithmic transparency, but it is not the first initiative of its kind: others include the Illinois Artificial Intelligence Video Interview Act, New York City Local Law 144, Maryland’s HB1202, and the EU AI Act. Organizations using AI should prepare for these transparency requirements well in advance.

April 2023

The EEOC Releases a Joint Statement on AI and Automated Systems

The Equal Employment Opportunity Commission (EEOC) has joined forces with the Consumer Financial Protection Bureau (CFPB), the Department of Justice's Civil Rights Division (DOJ), and the Federal Trade Commission (FTC) to issue a joint statement on the use of artificial intelligence (AI) and automated systems. The statement emphasizes the need to ensure that the use of AI and automated systems does not violate federal laws related to fairness, equality, and justice. The EEOC has also launched an AI and algorithmic fairness initiative, published guidance on AI-driven assessments, and drafted a strategic enforcement plan for 2023-2027. The statement warns of the risk of discriminatory outcomes when automated systems are trained on biased, imbalanced, or erroneous data, or without considering the social context.