October 2022

The White House Publishes its Blueprint for an AI Bill of Rights

Several countries and states have proposed legislation and frameworks to ensure that AI is used responsibly and ethically and to prevent further harm. The US has proposed the Algorithmic Accountability Act, and Washington, DC has proposed the Stop Discrimination by Algorithms Act to prevent discrimination in automated decisions. Illinois has enacted the Artificial Intelligence Video Interview Act, which requires employers to notify job applicants that their video interviews are being screened by algorithms. The White House Office of Science and Technology Policy has released a Blueprint for an AI Bill of Rights to protect US citizens from AI harm, based on five key pillars: safe and effective systems, algorithmic discrimination protections, data privacy, notice and explanation, and human alternatives and fallback. Researchers, technologists, advocates, journalists, and policymakers have also published a companion handbook, From Principles to Practice, to support the implementation of the framework. AI risk management is vital for protecting against the harms posed by automated systems, and enterprises should take early steps to gain command and control over their systems.

What Enterprises Need to Know About the EU’s AI Liability Directive

The EU has proposed the AI Liability Directive to make it easier for victims of AI-induced harm to prove liability and receive compensation for damages. The Directive reinforces the EU AI Act, which aims to prevent harm caused by AI. Enterprises that develop or deploy AI systems should establish robust AI risk management processes and prepare for compliance with the AI Act. The Directive empowers courts to order the disclosure of evidence regarding high-risk AI systems, and introduces a presumption of a causal link between non-compliance with relevant laws and AI-induced harm. Enterprises may be obliged to disclose information regarding their AI risk management framework, system design specifications, and oversight of the AI system. Claimants can be the injured individual, an insurance company, or the heirs of a deceased person. Enterprises should act now to establish robust AI risk management systems to ensure that their AI risks are detected, minimised, monitored and prevented.

AI in Insurance Needs to be Safe and Fair

The global AI market is set to reach $500 billion by 2023, and the insurance industry is using AI across purchasing, underwriting and claims activities. However, there is a risk of perpetuating existing biases, such as minority groups being charged higher car insurance premiums. AI risk management frameworks are therefore needed in the insurance industry, and forthcoming regulations such as the EU AI Act and Colorado state legislation aim to ensure high-quality and unbiased data, transparency, and appropriate accountability. Taking early steps to manage AI risks allows enterprises to embrace AI with more confidence.

September 2022

New York City Government Proposes Changes to the Mandatory Bias Audit Law: Here’s Everything You Need to Know

The New York City Department of Consumer and Worker Protection has proposed amendments to Local Law 144, which mandates independent bias audits of AI-powered tools used for screening, hiring or promoting candidates or employees residing in NYC. The amendments clarify the definition of an independent auditor, which must be a group or person not involved in developing or using the automated tool, and further outline the selection rates and impact ratios across race/ethnicity and sex categories that an audit must examine. Employers and employment agencies must publish a summary of the results on their website for at least six months after the relevant tool was last used. The law will take effect on 5 July 2023.
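As an illustration of the metrics such an audit examines, the sketch below (a minimal example assuming pandas, with hypothetical column names sex and selected) computes selection rates and impact ratios for a single protected category; in practice an audit would repeat this for sex, race/ethnicity and their intersections.

```python
import pandas as pd

# Hypothetical applicant-level data: one row per candidate assessed by the tool.
# Column names ("sex", "selected") are illustrative, not prescribed by the law.
df = pd.DataFrame({
    "sex": ["F", "F", "M", "M", "M", "F", "M", "F"],
    "selected": [1, 0, 1, 1, 0, 1, 1, 0],  # 1 = candidate advanced by the tool
})

# Selection rate: the share of each category that the tool selects.
selection_rates = df.groupby("sex")["selected"].mean()

# Impact ratio: each category's selection rate divided by the rate of the
# most-selected category (a ratio well below 1 can indicate adverse impact).
impact_ratios = selection_rates / selection_rates.max()

print(selection_rates)
print(impact_ratios)
```

In this toy data, women are selected at a rate of 0.5 against 0.75 for men, giving an impact ratio of roughly 0.67 for the less-selected group.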

August 2022

What is Bias and How Can it be Mitigated?

Bias refers to unjustified differences in outcomes for different subgroups, and can arise in both human decision-making and algorithmic systems. Sources of bias in algorithms include human biases, unbalanced training data, differential feature use, and proxy variables. Bias mitigation strategies include obtaining additional data, adjusting hyperparameters, and removing or reweighing features. Bias audits will soon be required in New York City and can contribute to the risk management of algorithmic systems. It is important to seek professional legal advice when dealing with bias in decision-making.
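As a concrete sketch of one mitigation strategy, the hypothetical example below applies instance reweighing in the spirit of the classic Kamiran and Calders preprocessing approach: each training example receives a weight so that, in the weighted data, the protected attribute and the outcome are statistically independent. Column names and data are illustrative assumptions, not a prescribed method.

```python
import pandas as pd

# Hypothetical training data: a protected attribute ("group") and a binary label.
df = pd.DataFrame({
    "group": ["A", "A", "A", "A", "B", "B", "B", "B"],
    "label": [1, 1, 1, 0, 1, 0, 0, 0],
})

# Marginal and joint distributions of the protected attribute and the label.
p_group = df["group"].value_counts(normalize=True)
p_label = df["label"].value_counts(normalize=True)
p_joint = df.groupby(["group", "label"]).size() / len(df)

# Weight each row by expected / observed frequency of its (group, label) cell,
# so that group and label become independent under the weights.
df["weight"] = df.apply(
    lambda row: p_group[row["group"]] * p_label[row["label"]]
    / p_joint[(row["group"], row["label"])],
    axis=1,
)

print(df)
```

The resulting weights could then be passed to a model that accepts per-sample weights (for example, via a sample_weight argument where an estimator supports one), which downweights over-represented group-outcome combinations and upweights under-represented ones.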