October 2022
AI risk management is becoming a global priority following high-profile instances of harm resulting from the use of artificial intelligence. In the US, legislation and frameworks to regulate the use of AI have been proposed at the federal, state, and city levels. Illinois and the New York City Council have enacted laws requiring notification about, and auditing of, discriminatory patterns in AI-based employment decisions, while legislation enacted in Colorado prevents insurance providers from using biased algorithms or data to make decisions. The White House Office of Science and Technology Policy has released a Blueprint for an AI Bill of Rights to protect US citizens from potential AI harms. The Blueprint rests on five key pillars: safe and effective systems; algorithmic discrimination protections; data privacy; notice and explanation; and human alternatives, consideration, and fallback. An accompanying handbook, From Principles to Practice, was also published to help implement the framework. AI risk management is therefore increasingly important, and Holistic AI can help businesses take command and control of their AI systems.
The EU has proposed the AI Liability Directive to make it easier for victims of AI-induced harm to prove liability and receive compensation for damages. The Directive reinforces the EU AI Act, which aims to prevent harm caused by AI. It empowers courts to order the disclosure of evidence regarding high-risk AI systems and introduces a rebuttable presumption of a causal link between non-compliance with relevant laws and the harm caused by an AI system. Enterprises may be obliged to disclose information regarding their AI risk management framework, system design specifications, and oversight of the AI system. Claimants can be the injured individual, an insurance company, or the heirs of a deceased person. Enterprises that develop or deploy AI systems should therefore act now to establish robust AI risk management processes, both to prepare for compliance with the AI Act and to ensure that their AI risks are detected, minimised, monitored, and prevented.
The global AI market is forecast to reach $500 billion by 2023, and the insurance industry is using AI in purchasing, underwriting, and claims activities. However, these systems risk perpetuating existing biases, such as minority groups being charged higher car insurance premiums. AI risk management frameworks are therefore needed in the insurance industry, and forthcoming regulations such as the EU AI Act and Colorado state legislation aim to ensure high-quality and unbiased data, transparency, and appropriate accountability. Taking early steps to manage AI risks allows enterprises to embrace AI with greater confidence.
September 2022
The New York City Department of Consumer and Worker Protection has proposed amendments to Local Law 144, which mandates independent bias audits of AI-powered tools used to screen, hire, or promote candidates or employees residing in NYC. The amendments clarify that an independent auditor must be a person or group not involved in developing or using the automated tool, and specify the metrics an audit must examine: the selection rate and impact ratio for different race/ethnicity and sex categories. Employers and employment agencies must publish a summary of the results on their website for at least six months after the relevant tool was last used. The law will take effect on 5 July 2023.
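To make these two metrics concrete, the minimal sketch below computes a selection rate and impact ratio for a single protected category; the data and column names are hypothetical, and this is not the DCWP's prescribed audit methodology:

```python
import pandas as pd

# Hypothetical screening outcomes: one row per candidate, recording a
# protected category and whether the tool selected the candidate (1) or not (0).
data = pd.DataFrame({
    "sex": ["F", "F", "F", "F", "M", "M", "M", "M", "M", "M"],
    "selected": [1, 0, 1, 0, 1, 1, 1, 0, 1, 1],
})

# Selection rate: the proportion of candidates in each category who were selected.
selection_rates = data.groupby("sex")["selected"].mean()

# Impact ratio: each category's selection rate divided by the selection rate
# of the most-selected category (so the reference category has a ratio of 1.0).
impact_ratios = selection_rates / selection_rates.max()

print(selection_rates)  # F: 0.500, M: 0.833
print(impact_ratios)    # F: 0.600, M: 1.000
```

An impact ratio well below 1.0 for a category, as for "F" here, is the kind of disparity a bias audit is designed to surface.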
August 2022
Bias refers to unjustified differences in outcomes for different subgroups, and can occur both in human decision-making and in algorithmic systems. Sources of bias in algorithms include human biases, unbalanced training data, differential feature use, and proxy variables. Bias mitigation strategies include obtaining additional data, adjusting hyperparameters, and removing or reweighing features; one such strategy is sketched below. Bias audits, which will soon be required in New York City, can contribute to the risk management of algorithmic systems. It is important to seek professional legal advice when dealing with bias in decision-making.
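As an illustration of reweighing, a widely used variant operates on training examples rather than features: each example is weighted so that the protected group and the outcome label become statistically independent in the weighted data, in the spirit of Kamiran and Calders' reweighing method. A minimal sketch, with hypothetical data and column names:

```python
import pandas as pd

# Hypothetical training data: a protected attribute and a binary outcome label.
df = pd.DataFrame({
    "group": ["A", "A", "A", "B", "B", "B", "B", "B"],
    "label": [1, 0, 0, 1, 1, 1, 0, 1],
})

n = len(df)
p_group = df["group"].value_counts() / n              # marginal P(group)
p_label = df["label"].value_counts() / n              # marginal P(label)
p_joint = df.groupby(["group", "label"]).size() / n   # joint P(group, label)

# Weight each example by P(group) * P(label) / P(group, label): combinations
# that are under-represented (e.g. group A with a positive label) get weights
# above 1, over-represented combinations get weights below 1, so that group
# and label are independent in the weighted data.
df["weight"] = df.apply(
    lambda row: p_group[row["group"]] * p_label[row["label"]]
    / p_joint[(row["group"], row["label"])],
    axis=1,
)
print(df)
```

The resulting weights can then be passed to any learner that accepts per-example weights, nudging the trained model away from reproducing the group-label imbalance in the raw data.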