October 2022

How to Manage the Risk of AI Bias in Identity Verification

The increasing use of remote identity verification (IDV) technology has created new risks and ethical implications, including barriers to participation in banking and to time-critical products such as credit. Machine learning (ML) models enable IDV by extracting relevant data from the identity document and validating its authenticity, then performing facial verification between the photo on the identity document and a selfie taken within the IDV app. However, poor-quality or unrepresentative training datasets can introduce algorithmic bias and inaccuracies, which can result in individuals being treated unfairly. Managing the potential risks of AI bias in IDV requires technical assessment of the AI system’s code and data; independent auditing, testing, and review against bias metrics; and policies and processes to govern the use of AI.
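
To make the bias-metric review concrete, here is a minimal sketch in Python of how such a check might look for a face verification model. All of the scores, group labels, and the decision threshold below are hypothetical; a real audit would use a large held-out evaluation set with carefully collected demographic annotations.

```python
# A minimal sketch of a bias audit for a face verification model.
# The scores, demographic labels, and threshold are hypothetical.

from collections import defaultdict

# Each record: (similarity_score, is_same_person, demographic_group).
# Scores are assumed to lie in [0, 1]; higher means "more likely a match".
EVAL_RECORDS = [
    (0.91, True, "group_a"), (0.42, False, "group_a"),
    (0.88, True, "group_a"), (0.63, False, "group_a"),
    (0.72, True, "group_b"), (0.58, False, "group_b"),
    (0.55, True, "group_b"), (0.69, False, "group_b"),
]

THRESHOLD = 0.65  # Scores at or above this count as a match.

def error_rates_by_group(records, threshold):
    """Compute false match rate (FMR) and false non-match rate (FNMR)
    for each demographic group at the given decision threshold."""
    counts = defaultdict(lambda: {"fm": 0, "imp": 0, "fnm": 0, "gen": 0})
    for score, same_person, group in records:
        predicted_match = score >= threshold
        c = counts[group]
        if same_person:
            c["gen"] += 1                    # genuine comparison
            c["fnm"] += not predicted_match  # genuine user falsely rejected
        else:
            c["imp"] += 1                    # impostor comparison
            c["fm"] += predicted_match       # impostor falsely accepted
    return {
        g: {
            "FMR": c["fm"] / c["imp"] if c["imp"] else 0.0,
            "FNMR": c["fnm"] / c["gen"] if c["gen"] else 0.0,
        }
        for g, c in counts.items()
    }

rates = error_rates_by_group(EVAL_RECORDS, THRESHOLD)
for group, r in rates.items():
    print(f"{group}: FMR={r['FMR']:.2f}, FNMR={r['FNMR']:.2f}")

# A large FNMR gap between groups means one group is falsely rejected
# (and potentially locked out of the service) far more often than another.
worst = max(r["FNMR"] for r in rates.values())
best = min(r["FNMR"] for r in rates.values())
print(f"FNMR disparity (worst - best): {worst - best:.2f}")
```

In this toy data, group_b is falsely rejected half the time at the chosen threshold while group_a is never rejected, which is exactly the kind of disparity a bias review against metrics like FMR and FNMR is meant to surface.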

August 2022

Facial Recognition is a Controversial and High-Risk Technology. Algorithmic Risk Management Can Help

Facial recognition technology is used in a wide range of applications, but it remains controversial and high-risk. Under the EU AI Act, facial recognition systems are considered high-risk and subject to additional restrictions, and some harms of the technology have already been realized, including racial bias and low accuracy rates. Some policymakers have banned the use of facial recognition outright, and any deployment must comply with relevant data protection laws such as the GDPR. Algorithmic risk management can help manage the remaining risks by addressing bias, privacy, safety, and transparency concerns.
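
As one illustration of what an algorithmic risk management control can look like in practice, the sketch below implements a hypothetical release gate that blocks deployment of a facial recognition model when its per-group error rates breach policy limits. The metric values, limits, and group names are all invented for this example; a real program would pair such automated checks with independent review and documentation.

```python
# A minimal sketch of one algorithmic risk management control: a release
# gate that blocks a facial recognition model version when measured
# error-rate disparity between demographic groups exceeds a policy limit.
# All metric values and policy thresholds here are hypothetical.

from dataclasses import dataclass

@dataclass
class GroupMetrics:
    group: str
    false_non_match_rate: float  # fraction of genuine users falsely rejected

# Hypothetical evaluation results for a candidate model version.
CANDIDATE_METRICS = [
    GroupMetrics("group_a", 0.02),
    GroupMetrics("group_b", 0.09),
    GroupMetrics("group_c", 0.03),
]

MAX_FNMR = 0.05            # Policy: no group's FNMR may exceed 5%.
MAX_DISPARITY_RATIO = 2.0  # Policy: worst FNMR <= 2x the best group's.

def release_gate(metrics, max_fnmr, max_ratio):
    """Return (approved, findings) for a candidate model version."""
    findings = []
    rates = [m.false_non_match_rate for m in metrics]
    for m in metrics:
        if m.false_non_match_rate > max_fnmr:
            findings.append(
                f"{m.group}: FNMR {m.false_non_match_rate:.2%} "
                f"exceeds policy limit {max_fnmr:.2%}"
            )
    ratio = max(rates) / max(min(rates), 1e-9)  # guard against zero rates
    if ratio > max_ratio:
        findings.append(f"disparity ratio {ratio:.1f} exceeds limit {max_ratio}")
    return (not findings, findings)

approved, findings = release_gate(CANDIDATE_METRICS, MAX_FNMR, MAX_DISPARITY_RATIO)
print("APPROVED" if approved else "BLOCKED")
for finding in findings:
    print(" -", finding)
```

The design point is that fairness criteria become explicit, versioned policy parameters rather than ad hoc judgments, so a biased model version is caught and documented before it ever reaches users.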