June 2023
The AI Disclosure Act of 2023 is a federal bill introduced by U.S. Representative Ritchie Torres of New York's 15th Congressional District that seeks to create greater transparency around the use of generative AI. The bill requires any output generated by artificial intelligence to be accompanied by a disclaimer indicating that it was generated by AI. Violations would be treated as violations of the Federal Trade Commission Act, with the FTC enforcing the requirement using the same powers, penalties, privileges, and immunities provided under that Act. The AI Disclosure Act is an important step towards algorithmic transparency, but it is not the first initiative of its kind: others include the Illinois Artificial Intelligence Video Interview Act, New York City Local Law 144, Maryland's HB1202, and the EU AI Act. Organizations using AI should prepare for these transparency requirements well in advance to ensure compliance.
May 2023
The EU AI Act is a piece of legislation proposed by the European Commission to regulate AI systems placed on the EU market. The Act takes a risk-based approach, classifying systems as posing minimal, limited, high, or unacceptable risk. The December 2022 text defined AI as a machine-based system designed to operate with autonomy that can generate outputs such as predictions, recommendations, or decisions influencing physical or virtual environments. The adopted text specifies eight broad high-risk applications of AI, with important updates to biometric and biometrics-based systems. A new Fundamental Rights Impact Assessment obligation has been introduced for deployers of high-risk AI systems, and the prohibited practices now include AI used for biometric categorization, predictive policing, and the collection of facial images to build databases. The EU AI Act will have important implications for the fairness and safety of AI systems available in the EU market, and deployers and users of AI systems will face a number of obligations.
The European Commission released a draft delegated regulation for conducting audits under the Digital Services Act (DSA) on May 6, 2023. It applies to the 17 designated Very Large Online Platforms (VLOPs, which include Facebook, LinkedIn, and Twitter) and the 2 Very Large Online Search Engines (VLOSEs: Bing and Google Search). The purpose of the delegated regulation is to promote transparency and public accountability for large platforms through annual independent audits, which will cover algorithmic systems and include disclosures and risk assessments. The draft clarifies the relationship between Audited Providers and Auditing Organisations, and lays down provisions for selecting auditors, data sharing, and cooperation. Auditing Organisations will deliver Final Reports, including Risk Analyses and Audit Conclusions; each audit must be completed within a year from the date the obligations begin to apply to the Audited Provider. The draft is open for public comments until June 2, 2023.
April 2023
The Equal Employment Opportunity Commission (EEOC) has joined forces with the Consumer Financial Protection Bureau (CFPB), the Department of Justice's Civil Rights Division (DOJ), and the Federal Trade Commission (FTC) to issue a joint statement on the use of artificial intelligence (AI) and automated systems. The statement emphasizes the need to ensure that the use of AI and automated systems does not violate federal laws related to fairness, equality, and justice. The EEOC has also launched an AI and algorithmic fairness initiative, published guidance on AI-driven assessments, and drafted a strategic enforcement plan for 2023-2027. The statement warns about the risk of discriminatory outcomes resulting from automated systems trained on biased, imbalanced, or erroneous data, or developed without considering the social context.
The New York City Department of Consumer and Worker Protection will begin enforcing its final rules under the Bias Audit Law (Local Law 144) on July 5, 2023. These rules clarify definitions, modify how scores are calculated, and establish new requirements for independent auditors. The definition of "machine learning, statistical modelling, data analytics, or artificial intelligence" has been expanded, and the requirement for inputs and parameters to be refined through cross-validation or training and testing data has been removed. The adopted rules also require auditors to indicate missing data, and permit them to exclude categories that comprise less than 2% of the data provided the exclusion is justified. The summary of results must also include the number of applicants in each category. Historical data may only be used if the employer provides it to the auditor or if the automated employment decision tool (AEDT) has never been used before, while test data may only be used if no historical data is available.
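For illustration, the per-category selection rates and impact ratios at the heart of such a bias audit can be sketched as follows. This is a simplified, hypothetical calculation (the function name, category labels, and counts are invented for this example), not the official audit methodology:

```python
from typing import Dict, List, Tuple

def impact_ratios(
    selected: Dict[str, int],   # applicants selected, per demographic category
    total: Dict[str, int],      # total applicants, per demographic category
    min_share: float = 0.02,    # categories under 2% of the data may be excluded
) -> Tuple[Dict[str, float], List[str]]:
    """Illustrative sketch of a bias-audit calculation (hypothetical helper)."""
    overall = sum(total.values())
    # Categories below the 2% threshold may be excluded (with justification)
    excluded = [c for c, n in total.items() if n / overall < min_share]
    # Selection rate per remaining category: selected / total
    rates = {c: selected[c] / total[c] for c in total if c not in excluded}
    # Impact ratio: each category's rate relative to the highest selection rate
    best = max(rates.values())
    ratios = {c: r / best for c, r in rates.items()}
    return ratios, excluded

# Hypothetical counts: category C falls below the 2% threshold and is excluded
ratios, excluded = impact_ratios(
    selected={"A": 50, "B": 30, "C": 1},
    total={"A": 100, "B": 100, "C": 3},
)
# ratios → {"A": 1.0, "B": 0.6}, excluded → ["C"]
```

An impact ratio well below 1.0 for a category would flag a potential adverse impact relative to the most-selected group; the summary of results would also report the applicant counts per category, as the rules require.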