October 2022

How to Manage the Risk of AI Bias in Identity Verification

The increasing use of remote identity verification (IDV) technology has created new risks and ethical challenges, including barriers to participation in banking and to time-critical products such as credit. Machine learning (ML) models enable IDV by extracting relevant data from an identity document and validating that the document is genuine, then performing facial verification between the photo on the identity document and a selfie taken within the IDV app. However, shortcomings in the datasets used to train these ML models can lead to algorithmic bias and inaccuracies, which can result in individuals being treated unfairly. Managing the potential risks of AI bias in IDV requires technical assessment of the AI system’s code and data; independent auditing, testing, and review against bias metrics; and policies and processes to govern the use of AI.
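To illustrate what review against a bias metric can look like in practice, here is a minimal sketch in Python. The data, the 0.60 verification threshold, and the choice of false non-match rate (FNMR) as the metric are all assumptions made for illustration, not values prescribed by any standard or used by any particular IDV vendor.

```python
import pandas as pd

# Hypothetical evaluation log: one row per genuine document-selfie pair,
# with the model's similarity score and the subject's demographic group.
results = pd.DataFrame({
    "group": ["A", "A", "A", "B", "B", "B", "B", "C", "C"],
    "score": [0.91, 0.55, 0.88, 0.62, 0.49, 0.95, 0.58, 0.87, 0.44],
})

THRESHOLD = 0.60  # assumed threshold: pairs scoring below it are rejected

# False non-match rate (FNMR) per group: the share of genuine
# document-selfie pairs that the system wrongly rejects.
results["rejected"] = results["score"] < THRESHOLD
fnmr = results.groupby("group")["rejected"].mean()

# Disparity ratio: FNMR of the worst-off group relative to the best-off
# group. A ratio well above 1 means some groups fail verification far
# more often than others, a signal of potential bias.
disparity = fnmr.max() / fnmr.min()

print(fnmr)
print(f"FNMR disparity ratio: {disparity:.2f}")
```

In a real audit, the same comparison would be run on a large, demographically balanced test set, alongside false match rates and document-extraction error rates, since a model can look fair on one metric and biased on another.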

The White House Publishes its Blueprint for an AI Bill of Rights

AI risk management is becoming a global priority due to high-profile instances of harm resulting from the use of artificial intelligence. Several jurisdictions, including the US federal government and Washington, DC, have proposed legislation and frameworks to regulate the use of AI. Illinois and the New York City Council have enacted laws requiring notification and auditing of discriminatory patterns in AI-based employment decisions, while legislation enacted in Colorado prevents insurance providers from using biased algorithms or data to make decisions. The White House Office of Science and Technology Policy has released a Blueprint for an AI Bill of Rights to protect US citizens from potential AI harms. The Blueprint rests on five pillars: safe and effective systems; algorithmic discrimination protections; data privacy; notice and explanation; and human alternatives, consideration, and fallback. A companion handbook, From Principles to Practice, was also published to help put the framework into practice. With AI risk management now a priority, Holistic AI can help businesses take command and control of their AI systems.

What Enterprises Need to Know About the EU’s AI Liability Directive

The EU has proposed the AI Liability Directive to make it easier for victims of AI-induced harm to prove liability and receive compensation for damages. The Directive reinforces the EU AI Act, which aims to prevent harm caused by AI in the first place. It empowers courts to order the disclosure of evidence regarding high-risk AI systems, and introduces a presumption of a causal link between non-compliance with relevant laws and AI-induced harm. Under the Directive, enterprises may be obliged to disclose information about their AI risk management framework, system design specifications, and oversight of the AI system, and claimants can be the injured individual, an insurance company, or the heirs of a deceased person. Enterprises that develop or deploy AI systems should therefore act now to establish robust AI risk management processes so that their AI risks are detected, minimised, monitored and prevented, and to prepare for compliance with the AI Act.

AI in Insurance Needs to be Safe and Fair

The global AI market is set to reach $500 billion by 2023, and the insurance industry is using AI in purchasing, underwriting and claims activities. However, there is a risk of perpetuating existing biases, such as charging minority groups higher car insurance premiums. AI risk management frameworks are therefore needed in the insurance industry, and forthcoming regulations such as the EU AI Act and Colorado state legislation aim to ensure high-quality and unbiased data, transparency, and appropriate accountability. Taking early steps to manage AI risks allows enterprises to embrace AI with greater confidence.

September 2022

New York City Government Proposes Changes to the Mandatory Bias Audit Law: Here’s Everything You Need to Know

The New York City Department of Consumer and Worker Protection has proposed amendments to Local Law 144, which mandates independent bias audits of AI-powered tools used for screening, hiring or promoting candidates or employees residing in NYC. The amendments clarify the definition of an independent auditor (a group or person not involved in developing or using the automated tool) and specify the metrics an audit must examine: the selection rate and impact ratio for different races/ethnicities and sexes, as sketched below. Employers and employment agencies must publish a summary of the results on their website for at least six months after the relevant tool was last used. The law will take effect on 5 July 2023.
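To make the required metrics concrete, here is a minimal sketch in Python of how selection rates and impact ratios can be computed. The candidate counts are invented, and the sketch shows a single demographic dimension, whereas the proposed rules call for these figures across sex and race/ethnicity categories; it illustrates the arithmetic, not the full audit procedure.

```python
import pandas as pd

# Hypothetical screening outcomes: one row per candidate, recording their
# demographic category and whether the automated tool advanced them.
candidates = pd.DataFrame({
    "category": (["Hispanic or Latino"] * 40
                 + ["White"] * 50
                 + ["Black or African American"] * 30),
    "selected": ([True] * 18 + [False] * 22      # Hispanic or Latino
                 + [True] * 30 + [False] * 20    # White
                 + [True] * 12 + [False] * 18),  # Black or African American
})

# Selection rate per category: candidates advanced / candidates assessed.
selection_rates = candidates.groupby("category")["selected"].mean()

# Impact ratio per category: its selection rate divided by the selection
# rate of the most-selected category, so the top category scores 1.0 and
# lower values indicate relatively disadvantaged groups.
impact_ratios = selection_rates / selection_rates.max()

print(pd.DataFrame({
    "selection_rate": selection_rates,
    "impact_ratio": impact_ratios,
}).round(2))
```

An impact ratio well below 1.0 for any category (0.8 is a commonly cited benchmark, drawn from the EEOC's four-fifths rule) would flag the tool for closer scrutiny in the audit summary.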