October 2023

Biden Administration Signs Executive Order on AI

On October 30, 2023, the Biden Administration signed an Executive Order on artificial intelligence aimed at establishing safety and security standards while protecting Americans' privacy and civil rights. The order sets strict standards for AI testing, requires agencies to establish standards for biological synthesis screening and cybersecurity programs, and directs the development of standards for the safe use of AI by the military and intelligence community. It also aims to protect equal opportunity and non-discrimination rights, provide resources to shape the transformative potential of AI in education and healthcare, and address job displacement caused by AI. To promote innovation and competition, the order provides support for small developers and entrepreneurs and expands visas for skilled immigrants, and it directs actions toward international cooperation on safe, secure, and trustworthy AI. In addition, AI companies must disclose their safety testing procedures and results to the U.S. federal government. The Executive Order follows other recent efforts toward responsible AI by the Biden Administration.

US Algorithmic Accountability Act: Third Time Lucky?

Legislation and regulation are increasingly recognized as important to promote safety, fairness, and ethics in the use of AI tools. While the US has made progress on vertical legislation targeting specific use cases, Europe has made strides in horizontal legislation targeting multiple use cases at once, with the EU AI Act seeking to become the global gold standard for AI regulation. The Algorithmic Accountability Act, introduced for the third time in the US, targets automated decision systems used in augmented critical decision processes and applies to entities over which the Federal Trade Commission has jurisdiction. Covered entities must conduct ongoing algorithmic impact assessments and submit annual summary reports to the FTC. The US is determined to impose more conditions on the use of algorithms and AI, with enterprises needing to navigate an influx of rules.

The EU AI Act and its Potential Impact on Enterprises Harnessing the Power of AI

The European Union's ongoing work on the EU AI Act will affect the entire AI industry, regulating AI systems according to their risk classification. High-risk AI systems will be subject to a comprehensive list of requirements, while providers of low-risk AI systems will be encouraged to adopt voluntary codes of conduct. The EU AI Act is expected to become the gold standard for the AI industry, and enterprises using AI systems are advised to prepare for compliance early to avoid financial burden and penalties for non-compliance. The exact pathway to compliance will depend on the unique circumstances of each enterprise.

Digital Services Act: European Commission Publishes Final Delegated Regulation on Conducting Independent Audits

On October 20, 2023, the European Commission published its final version of the Delegated Regulation on conducting independent audits of Very Large Online Platforms (VLOPs) and Very Large Online Search Engines (VLOSEs) under the Digital Services Act (DSA). The rules seek to provide guidance to audited providers and auditors on the audit process, reporting templates, and procedural details. Holistic AI, a leader in AI Assurance and Algorithm Auditing, offers independent annual audits and other compliance services to covered entities under the DSA. The company provides customized solutions to assist businesses in complying with the regulation and delivers a Final Audit Report with operational recommendations and risk analysis.

What is New York City Local Law 144?

New York City Local Law 144 requires annual independent bias audits of automated employment decision tools (AEDTs) for employers and employment agencies using them to evaluate candidates for employment or employees for promotion in NYC. The law also imposes transparency and notification requirements. AEDTs are defined as computational processes derived from machine learning, statistical modeling, data analytics, or artificial intelligence that issue a simplified output to assist or replace discretionary decision-making. A bias audit assesses whether the AEDT results in disparate impact against individuals based on race/ethnicity and/or sex/gender. Auditors must be independent and impartial, and employers/employment agencies must disclose certain data and provide notice to candidates and employees at least ten business days before using an AEDT. Penalties for non-compliance range from $500 to $1,500 per violation. Holistic AI offers independent AI Bias Audits to help employers comply with the law.
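The disparate-impact assessment at the heart of these audits is commonly expressed as an impact ratio: each group's selection rate divided by the selection rate of the most selected group. A minimal sketch of that calculation is below; the group names and numbers are invented for illustration and are not real audit data.

```python
def impact_ratios(selections: dict[str, tuple[int, int]]) -> dict[str, float]:
    """Compute each group's impact ratio.

    `selections` maps a group name to (number selected, number of applicants).
    The impact ratio is the group's selection rate divided by the selection
    rate of the most selected group, so the top group always scores 1.0.
    """
    rates = {group: selected / total for group, (selected, total) in selections.items()}
    max_rate = max(rates.values())
    return {group: rate / max_rate for group, rate in rates.items()}


# Hypothetical example: two groups of 100 applicants each.
example = {
    "group_a": (48, 100),  # 48% selection rate (highest)
    "group_b": (30, 100),  # 30% selection rate
}
ratios = impact_ratios(example)
# group_a -> 1.0; group_b -> 0.30 / 0.48 = 0.625
```

A ratio well below 1.0 for a group (for instance, under the four-fifths benchmark of 0.8 often used in employment contexts) would flag potential disparate impact for further review.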