November 2022

Department of Consumer and Worker Protection’s Public Hearing on the NYC Bias Audit Law (Local Law 144): Key Takeaways 

The Department of Consumer and Worker Protection (DCWP) in New York City held a public hearing on proposed rules for the NYC Bias Audit Law. Attendees were given the opportunity to testify, and many called for independent third-party audits to ensure impartiality and compliance with the legislation. Concerns were raised about the reliability of impact ratios as a bias metric when sample sizes are small, and attendees sought further clarification on notice requirements and on the role of vendors of automated employment decision tools (AEDTs). While support for the legislation was overwhelming, testimony pointed to the need for additional legislation to hold developers and employers accountable and to ensure the safety of AI systems. Holistic AI can help businesses identify risks, establish a risk management framework, and comply with relevant legislation.
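
To make the small-sample concern concrete, here is a minimal sketch of an impact ratio calculation in the spirit of the proposed rules (each category's selection rate divided by the highest selection rate across categories). The category names and counts are illustrative, not taken from the hearing or the rules.

```python
def selection_rate(selected: int, total: int) -> float:
    """Fraction of applicants in a category who were selected."""
    return selected / total

def impact_ratios(counts: dict[str, tuple[int, int]]) -> dict[str, float]:
    """Divide each category's selection rate by the highest rate
    across all categories."""
    rates = {cat: selection_rate(sel, tot) for cat, (sel, tot) in counts.items()}
    best = max(rates.values())
    return {cat: rate / best for cat, rate in rates.items()}

# Illustrative data: (selected, total applicants) per category.
# With totals this small, a single extra selection or rejection moves the
# ratio substantially, which is the small-sample concern raised at the hearing.
counts = {"group_a": (8, 10), "group_b": (3, 5)}
print(impact_ratios(counts))  # {'group_a': 1.0, 'group_b': 0.75}
```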

October 2022

10 Things You Need to Know About the California Workplace Technology Accountability Act

The proposed California Workplace Technology Accountability Act (AB-1651) aims to increase accountability around the use of technology in the workplace and reduce its potential harms. The Act restricts the data that can be collected about workers to that necessary for proven business activities, gives workers access to their data, and requires data protection and algorithmic impact assessments. It defines automated decision systems, outlines workers' rights concerning their data, and sets notification requirements for data collection and electronic monitoring. It also specifies impact assessment requirements and consultation processes for workers potentially affected by automated decision tools. The Act applies to employers in California that use technology to make employment-related decisions about workers or to collect data about them, as well as to vendors acting on employers' behalf.

How to Manage the Risk of AI Bias in Identity Verification

The increasing use of remote identity verification (IDV) technology has created new risks and ethical implications, including barriers to participation in banking and access to time-critical products such as credit. Machine learning (ML) models enable IDV by extracting relevant data from an identity document and validating that the document is genuine, then performing facial verification between the photo on the identity document and a selfie taken within the IDV app. However, poor-quality datasets used to train the ML models can lead to algorithmic bias and inaccuracies, which can result in individuals being treated unfairly. Managing the potential risks of AI bias in IDV requires technical assessment of the AI system's code and data; independent auditing, testing, and review against bias metrics; and policies and processes to govern the use of AI.
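
As a rough illustration of what "review against bias metrics" can mean for the facial verification step, the sketch below assumes an embedding-based face matcher and compares the false non-match rate (genuine same-person pairs that are wrongly rejected) across demographic groups. The threshold, group names, and similarity scores are all hypothetical.

```python
import numpy as np

def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
    """Similarity between two face embeddings, in [-1, 1]."""
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def faces_match(doc_emb: np.ndarray, selfie_emb: np.ndarray,
                threshold: float = 0.6) -> bool:
    # A pair is accepted when embedding similarity clears the threshold;
    # the threshold value here is illustrative, not from any vendor.
    return cosine_similarity(doc_emb, selfie_emb) >= threshold

def false_non_match_rate(genuine_scores, threshold: float = 0.6) -> float:
    """Share of genuine (same-person) pairs rejected at the threshold."""
    return float(np.mean(np.asarray(genuine_scores) < threshold))

# Comparing the rate across groups surfaces the kind of disparity a bias
# review would flag. Scores below are made up for illustration.
genuine_scores_by_group = {
    "group_a": [0.82, 0.75, 0.91, 0.58],
    "group_b": [0.63, 0.55, 0.70, 0.49],
}
for group, scores in genuine_scores_by_group.items():
    print(group, false_non_match_rate(scores))
```

A large gap in false non-match rates between groups at the same threshold is one signal that the underlying model or its training data treats some populations less accurately than others.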

The White House Publishes its Blueprint for an AI Bill of Rights

AI risk management is becoming a global priority due to high-profile instances of harm resulting from the use of artificial intelligence, and jurisdictions including the US and the EU have proposed legislation and frameworks to regulate AI. Illinois and the New York City Council have enacted laws requiring notification and auditing of discriminatory patterns in AI-based employment decisions, while legislation enacted in Colorado prevents insurance providers from using biased algorithms or data to make decisions. The White House Office of Science and Technology Policy has released a Blueprint for an AI Bill of Rights to protect US citizens from potential AI harms. The Blueprint rests on five key pillars: safe and effective systems; algorithmic discrimination protections; data privacy; notice and explanation; and human alternatives, consideration, and fallback. An accompanying handbook, From Principles to Practice, was also published to help implement the framework. Holistic AI can help businesses take command and control of their AI systems.

What Enterprises Need to Know About the EU’s AI Liability Directive

The EU has proposed the AI Liability Directive to make it easier for victims of AI-induced harm to prove liability and receive compensation for damages. The Directive reinforces the EU AI Act, which aims to prevent harm caused by AI. It empowers courts to order the disclosure of evidence regarding high-risk AI systems and introduces a rebuttable presumption of a causal link between non-compliance with relevant laws and AI-induced harm. Enterprises may therefore be obliged to disclose information about their AI risk management framework, system design specifications, and oversight of the AI system. Claimants can be the injured individual, an insurance company, or the heirs of a deceased person. Enterprises that develop or deploy AI systems should act now to establish robust AI risk management processes, preparing for compliance with the AI Act and ensuring that their AI risks are detected, minimised, monitored and prevented.