August 2022

Facial Recognition Is a Controversial and High-Risk Technology. Algorithmic Risk Management Can Help

Facial recognition technology is widely used, but it is also controversial and high-risk due to potential biases, privacy concerns, safety issues, and lack of transparency. Some policymakers have banned facial recognition outright, while others require compliance with data protection laws. Risk management strategies can help mitigate these risks: examining training data for representativeness and auditing for bias, implementing data management strategies and fail-safes, establishing safeguards against malicious use, and appropriately disclosing the use of the technology.
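As an illustration of the representativeness check mentioned above, the minimal sketch below flags demographic groups whose share of a training set falls below a chosen threshold. The labels, group names, and 15% cutoff are all hypothetical; a real audit would use the dataset's actual metadata and a justified threshold.

```python
from collections import Counter

# Hypothetical demographic labels attached to a face-image training set;
# in practice these would come from dataset metadata or annotation.
training_labels = ["group_a"] * 700 + ["group_b"] * 250 + ["group_c"] * 50

def representativeness_report(labels, min_share=0.15):
    """Flag groups whose share of the training data falls below
    min_share (an illustrative threshold, not a standard)."""
    counts = Counter(labels)
    total = sum(counts.values())
    report = {}
    for group, n in counts.items():
        share = n / total
        report[group] = {"count": n, "share": round(share, 3),
                         "underrepresented": share < min_share}
    return report

for group, stats in representativeness_report(training_labels).items():
    print(group, stats)
```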

Colorado's Legislation to Prevent Discrimination in Insurance - 10 Things You Need to Know About SB21-169

Colorado has enacted legislation restricting insurers' use of external consumer data and information sources and prohibiting data, algorithms, or predictive models from unfairly discriminating. Insurers must outline the types of external consumer data and information sources used by their algorithms and predictive models, establish a risk management framework, assess the results of that framework, and attest that it has been implemented. Under the law, unfair discrimination occurs when external consumer data and information sources, or the algorithms and predictive models that use them, correlate with protected characteristics and result in disproportionately negative outcomes for those groups. The law will come into effect on January 1, 2023 at the earliest. Holistic AI can help firms establish a risk management framework for continuous monitoring of data, algorithms, and predictive models and provide expert evidence of non-discriminatory practices.
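To make the disproportionately-negative-outcome idea concrete, here is a minimal sketch comparing favorable-outcome rates across groups defined by a protected characteristic. The data is invented, and the 0.8 threshold mirrors the four-fifths rule from US employment guidance; it is an illustrative metric, not the test defined by SB21-169.

```python
def favorable_rate(outcomes):
    """Share of favorable outcomes (True = e.g. policy approved)."""
    return sum(outcomes) / len(outcomes)

# Hypothetical model decisions grouped by a protected characteristic.
outcomes_by_group = {
    "group_a": [True] * 80 + [False] * 20,
    "group_b": [True] * 55 + [False] * 45,
}

rates = {g: favorable_rate(o) for g, o in outcomes_by_group.items()}
reference = max(rates.values())

for group, rate in rates.items():
    ratio = rate / reference
    # 0.8 mirrors the four-fifths rule from US employment guidance;
    # it is an illustrative threshold, not the statutory test.
    flag = "REVIEW" if ratio < 0.8 else "ok"
    print(f"{group}: favorable rate {rate:.2f}, impact ratio {ratio:.2f} [{flag}]")
```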

NIST’s AI Risk Management Framework Explained

The US National Institute of Standards and Technology (NIST) has published a second draft of its AI Risk Management Framework (AI RMF), a set of voluntary guidelines for managing the risks that AI systems pose to people, organizations, and systems across development and deployment. The framework has four core functions: govern (cultivating a culture of AI risk management and establishing appropriate structures, policies, and processes), map (understanding the context and business value of the AI system), measure (assessing risks with bespoke metrics and methodologies), and manage (prioritizing and acting on those risks). NIST expects AI risk management to become a core part of doing business by the end of the decade, just as privacy and cybersecurity have.
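As a rough illustration of how the measure and manage functions might be operationalized, the sketch below scores entries in a hypothetical risk register by likelihood and impact and sorts them for mitigation. The AI RMF does not prescribe this scale or formula; it is one common approach, shown here under assumed risk entries.

```python
from dataclasses import dataclass

@dataclass
class Risk:
    """One entry in a hypothetical AI risk register.

    Likelihood and impact are scored 1-5; the RMF does not prescribe
    this scale, it is one common way to operationalize 'measure'.
    """
    description: str
    likelihood: int  # 1 (rare) to 5 (almost certain)
    impact: int      # 1 (negligible) to 5 (severe)

    @property
    def score(self) -> int:
        return self.likelihood * self.impact

register = [
    Risk("Model underperforms for a demographic subgroup", 3, 4),
    Risk("Training data drifts from production data", 4, 3),
    Risk("Unauthorized use of personal data in training", 2, 5),
]

# 'Manage' prioritizes the highest-scoring risks for mitigation.
for risk in sorted(register, key=lambda r: r.score, reverse=True):
    print(f"[{risk.score:>2}] {risk.description}")
```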

Regulating AI: The Horizontal vs Vertical Approach

Regulations are emerging to address concerns about the use of artificial intelligence (AI), and they take two broad approaches: horizontal and vertical. Horizontal regulation applies to all applications of AI across all sectors and is typically controlled by the government, while vertical regulation applies only to a specific application of AI or a specific sector and may be delegated to industry bodies. Each approach involves trade-offs: horizontal rules offer standardization and coordination but less flexibility, while vertical rules can be tailored to a sector's needs but risk fragmentation. Examples of horizontal regulation include the EU AI Act and the proposed US Algorithmic Accountability Act, while examples of vertical regulation include the NYC bias audit mandate and the Illinois Artificial Intelligence Video Interview Act. Note that the article does not offer legal advice.

Why Do We Need AI Auditing and Assurance?

Algorithms and automation bring many benefits, but they also pose risks, as high-profile cases of harm show: the COMPAS recidivism tool, Amazon's scrapped resume screening tool, and the Apple Card credit limit algorithm. Applying AI ethics principles, such as conducting bias assessments and checking for differential accuracy across subgroups, could have helped mitigate these harms. The article stresses the importance of transparency and explainability in automated decision tools, and of algorithm assurance, to reduce the harm that can result from their use.
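A check for differential accuracy of the kind mentioned above is straightforward to express. The minimal sketch below computes per-subgroup accuracy from (group, predicted, actual) records, all invented here, and reports the gap between the best- and worst-served groups.

```python
def subgroup_accuracy(records):
    """Accuracy of a decision tool per subgroup; a large gap between
    groups signals differential accuracy worth auditing."""
    stats = {}
    for group, predicted, actual in records:
        correct, total = stats.get(group, (0, 0))
        stats[group] = (correct + (predicted == actual), total + 1)
    return {g: c / t for g, (c, t) in stats.items()}

# Hypothetical (group, predicted, actual) outcomes from a decision tool.
records = (
    [("group_a", 1, 1)] * 90 + [("group_a", 1, 0)] * 10 +
    [("group_b", 1, 1)] * 70 + [("group_b", 0, 1)] * 30
)

accuracies = subgroup_accuracy(records)
gap = max(accuracies.values()) - min(accuracies.values())
print(accuracies, f"accuracy gap: {gap:.2f}")
```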