October 2022

AI in Insurance Needs to be Safe and Fair

The global AI market is projected to reach $500 billion by 2023, and the insurance industry is using AI across purchasing, underwriting, and claims activities. However, these systems risk perpetuating existing biases, such as charging minority groups higher car insurance premiums. As a result, AI risk management frameworks are needed in the insurance industry, and forthcoming regulations such as the EU AI Act and Colorado state legislation aim to ensure high-quality and unbiased data, transparency, and appropriate accountability. Taking early steps to manage AI risk allows enterprises to embrace AI with greater confidence.

September 2022

New York City Government Proposes Changes to the Mandatory Bias Audit Law: Here’s Everything You Need to Know

The New York City Department of Consumer and Worker Protection has proposed amendments to Local Law 144, which mandates independent bias audits of AI-powered tools used to screen, hire, or promote candidates or employees residing in NYC. The amendments clarify that an independent auditor must be a group or person not involved in developing or using the automated tool, and specify the metrics an audit must examine: the selection rate and impact ratio for different races/ethnicities and sexes. Employers and employment agencies must publish a summary of the results on their website for at least six months after the relevant tool was last used. The law will take effect on 5 July 2023.
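As a rough illustration of these metrics, a selection rate is the share of candidates in a category whom the tool selects, and an impact ratio compares each category's selection rate to that of the most-selected category. The sketch below, using pandas and entirely hypothetical data and category labels, shows one way such figures might be computed; it is not the methodology prescribed by the law or its rules.

```python
import pandas as pd

# Hypothetical screening outcomes: one row per candidate, with the
# category tracked by the audit and whether the tool selected them.
data = pd.DataFrame({
    "category": ["A", "A", "A", "B", "B", "B", "B", "C", "C", "C"],
    "selected": [1,   0,   1,   1,   1,   1,   0,   0,   1,   0],
})

# Selection rate: proportion of candidates in each category who were selected.
selection_rate = data.groupby("category")["selected"].mean()

# Impact ratio: each category's selection rate divided by the highest
# selection rate across categories.
impact_ratio = selection_rate / selection_rate.max()

print(pd.DataFrame({"selection_rate": selection_rate,
                    "impact_ratio": impact_ratio}))
```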

August 2022

What is Bias and How Can it be Mitigated?

Bias refers to unjustified differences in outcomes for different subgroups, and it can occur in both human decision-making and algorithmic systems. Sources of bias in algorithms include human biases, unbalanced training data, differential feature use, and proxy variables. Bias mitigation strategies include obtaining additional data, adjusting hyperparameters, and removing or reweighing features. Bias audits, which will soon be required in New York City, can contribute to the risk management of algorithmic systems. It is important to seek professional legal advice when dealing with bias in decision-making.
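To make the reweighing idea concrete, the sketch below assigns each training example a weight so that subgroup membership and the favourable outcome become statistically independent in the weighted data, in the spirit of the reweighing approach popularised by Kamiran and Calders. The data and column names are hypothetical, and the snippet is an illustration rather than a complete mitigation pipeline.

```python
import pandas as pd

# Hypothetical training data: a protected subgroup label and a binary outcome.
df = pd.DataFrame({
    "group":   ["a", "a", "a", "a", "b", "b", "b", "b", "b", "b"],
    "outcome": [1,   1,   0,   0,   1,   0,   0,   0,   1,   1],
})

# Reweighing: weight each (group, outcome) cell by
# P(group) * P(outcome) / P(group, outcome), so that group and outcome
# become independent in the weighted sample.
p_group = df["group"].value_counts(normalize=True)
p_outcome = df["outcome"].value_counts(normalize=True)
p_joint = df.groupby(["group", "outcome"]).size() / len(df)

weights = df.apply(
    lambda row: p_group[row["group"]] * p_outcome[row["outcome"]]
    / p_joint[(row["group"], row["outcome"])],
    axis=1,
)

# These weights can then be passed to most estimators, e.g. via sample_weight.
print(df.assign(weight=weights))
```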

Facial Recognition is a Controversial and High-Risk Technology. Algorithmic Risk Management Can Help

Facial recognition technology is widely used, but it is also controversial and high-risk due to potential biases, privacy concerns, safety issues, and a lack of transparency. Some policymakers have banned facial recognition, while others require compliance with data protection laws. Risk management can help mitigate these concerns, including examining training data for representativeness, auditing for bias, implementing data management strategies and fail-safes, establishing safeguards against malicious use, and ensuring appropriate disclosure of the technology's use.
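One small piece of such a review is checking whether the demographic make-up of the training data roughly matches the population the system will serve. The sketch below uses made-up shares and hypothetical group labels to compare the two distributions; a real audit would go well beyond this kind of check.

```python
import pandas as pd

# Hypothetical demographic shares: the training set vs. the population
# the facial recognition system is expected to serve.
train_share = pd.Series({"group_1": 0.55, "group_2": 0.30, "group_3": 0.15})
population_share = pd.Series({"group_1": 0.40, "group_2": 0.35, "group_3": 0.25})

# Representation ratio: values well below 1 flag under-represented groups.
representation_ratio = train_share / population_share

report = pd.DataFrame({
    "train_share": train_share,
    "population_share": population_share,
    "representation_ratio": representation_ratio,
})
print(report.round(2))
print("Under-represented:", list(report.index[report["representation_ratio"] < 0.8]))
```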

Colorado's Legislation to Prevent Discrimination in Insurance - 10 Things You Need to Know About SB21-169

Colorado has enacted legislation that restricts insurers’ use of external consumer data and information sources and prohibits data, algorithms, or predictive models from unfairly discriminating. Insurers are required to outline the types of external consumer data and information sources used by their algorithms and predictive models, establish a risk management framework, provide an assessment of the results of that framework, and attest that the framework has been implemented. Unfair discrimination occurs when external consumer data and information sources, algorithms, or predictive models correlate with protected characteristics and result in a disproportionately negative outcome for those groups. The law will come into effect on 1 January 2023 at the earliest. Holistic AI can help firms establish a risk management framework for the continuous monitoring of data, algorithms, and predictive models and provide expert evidence of non-discriminatory practices.
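To give a feel for the "correlates with protected characteristics" test, a first-pass check might measure how strongly an external data field tracks a protected characteristic and whether adverse model outcomes fall disproportionately on one group. The sketch below uses hypothetical policyholder data and simple summary statistics; it is not the testing methodology the regulation or its implementing rules prescribe.

```python
import pandas as pd

# Hypothetical policyholder data: a protected characteristic, an external
# data field used by the model, and the model's premium decision.
df = pd.DataFrame({
    "protected_group": ["x", "x", "x", "x", "y", "y", "y", "y", "y", "y"],
    "external_score":  [0.9, 0.8, 0.7, 0.9, 0.3, 0.4, 0.2, 0.5, 0.3, 0.4],
    "higher_premium":  [1,   1,   0,   1,   0,   0,   0,   1,   0,   0],
})

# How strongly the external field tracks the protected characteristic
# (correlation against a 0/1 encoding of group membership).
group_indicator = (df["protected_group"] == "x").astype(int)
proxy_correlation = group_indicator.corr(df["external_score"])

# Whether adverse outcomes fall disproportionately on one group.
adverse_rate = df.groupby("protected_group")["higher_premium"].mean()

print(f"Correlation between external field and protected group: {proxy_correlation:.2f}")
print(adverse_rate)
```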