January 2023

Insightful Resources for Uncovering Bias in English Speech Recognition

Speech recognition technology has many applications, but bias can cause it to perform poorly for certain groups, such as non-native speakers, older adults, and people with disabilities. Mitigating this bias requires diverse training data and continual evaluation and improvement of the system's performance on underrepresented groups. Diagnosing bias requires annotated data, together with metrics such as Character Error Rate (CER), Word Error Rate (WER), and the Dialect Density Measure (DDM). Several datasets are available for analyzing bias in ASR systems, including the Speech Accent Archive, the ACL Anthology, the Santa Barbara Corpus of Spoken American English, Datatang's British English Speech Dataset, and the Artie Bias Corpus.
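For illustration, WER is the word-level edit distance between a reference transcript and the recogniser's hypothesis, divided by the number of reference words (CER is the same calculation at the character level). A minimal sketch in plain Python:

```python
def wer(reference: str, hypothesis: str) -> float:
    """Word Error Rate: word-level edit distance / number of reference words."""
    ref, hyp = reference.split(), hypothesis.split()
    # Levenshtein distance via dynamic programming over word sequences.
    d = [[0] * (len(hyp) + 1) for _ in range(len(ref) + 1)]
    for i in range(len(ref) + 1):
        d[i][0] = i  # deleting all remaining reference words
    for j in range(len(hyp) + 1):
        d[0][j] = j  # inserting all remaining hypothesis words
    for i in range(1, len(ref) + 1):
        for j in range(1, len(hyp) + 1):
            sub = d[i - 1][j - 1] + (ref[i - 1] != hyp[j - 1])
            d[i][j] = min(sub, d[i - 1][j] + 1, d[i][j - 1] + 1)
    return d[len(ref)][len(hyp)] / len(ref)

# One substitution ("sat" -> "sit") and one deletion ("the") over six
# reference words gives a WER of 2/6.
wer("the cat sat on the mat", "the cat sit on mat")
```

Comparing WER across demographic groups on an annotated evaluation set is the basic mechanism for surfacing the performance gaps the article describes.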

SIOP Publishes Guidelines on AI-Based Employee Selection Assessments

The Society for Industrial and Organizational Psychology (SIOP) has released guidelines on the validation and use of AI-based assessments in employee selection. The guidelines rest on five principles: accurate prediction of job performance, consistent scores, fair and unbiased scores, appropriate use, and adequate documentation to support decision-making. Compliance with these principles requires validating tools, treating groups equitably, identifying and mitigating both predictive and measurement bias, and using well-informed approaches. The guidelines also recommend increasing transparency and fairness in AI-driven assessments, documenting decision-making processes, and complying with the bias audit requirements of NYC Local Law 144. This article is informational and not intended to provide legal advice.

NIST Launches AI Risk Management Framework 1.0

The National Institute of Standards and Technology (NIST) has launched the first version of its Artificial Intelligence Risk Management Framework (AI RMF) after 18 months of development. The framework is designed to help organisations prevent, detect, mitigate, and manage AI risks and to promote the adoption of trustworthy AI systems. The AI RMF emphasises flexibility, measurement, and trustworthiness, and calls on organisations to cultivate a risk management culture. NIST anticipates that feedback from organisations using the framework will help establish global gold standards in line with EU regulations.

Key Takeaways from the Department of Consumer and Worker Protection’s Second Public Hearing on NYC Local Law 144

The enforcement date for NYC Local Law 144 has been pushed back to 5 July 2023, and the city has held a second public hearing on the proposed rules. Auditors must be independent third parties, and there is support for widening the scope of audits beyond bias alone. There are concerns that the definitions of AEDTs in the updated rules are too narrow, potentially allowing bad-faith actors to argue that their tools fall outside the scope of the legislation. A third version of the rules may be released before the law takes effect.

Disparate Impact in Bias Audits: Evaluating the DCWP’s Impact Ratio Metrics for Regression Systems

New York City passed Local Law 144 in November 2021 to mandate bias audits of automated employment decision tools (AEDTs) used in candidate screening and promotion. The Department of Consumer and Worker Protection (DCWP) has proposed metrics for calculating impact ratios for regression systems, but these have limitations: they can be fooled by unusual score distributions and gamed by small tweaks to the data. The article suggests alternatives: metrics that consider fairness over the whole score distribution, statistical tests that compare distributions, or metrics that compare candidate rankings rather than raw scores. Holistic AI offers an open-source library of metrics for both binary and regression systems, along with bias mitigation strategies.
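To make the critique concrete, the sketch below follows our reading of the median-based approach in the proposed rules: each group's "scoring rate" is the share of its candidates scoring above the pooled median, and the impact ratio divides that rate by the highest group's rate. The group names and scores are hypothetical, and this is an illustrative reading of the proposal, not an official implementation:

```python
from statistics import median

def impact_ratios(scores: dict[str, list[float]]) -> dict[str, float]:
    """Per-group scoring rate (share of scores above the pooled median),
    normalised by the highest group's scoring rate."""
    pooled = [s for group in scores.values() for s in group]
    cutoff = median(pooled)
    rates = {g: sum(s > cutoff for s in group) / len(group)
             for g, group in scores.items()}
    top = max(rates.values())
    return {g: r / top for g, r in rates.items()}

ratios = impact_ratios({
    "group_a": [0.9, 0.8, 0.7, 0.4],  # hypothetical candidate scores
    "group_b": [0.6, 0.5, 0.3, 0.2],
})
# group_a's scoring rate is 3/4 and group_b's is 1/4,
# so group_b's impact ratio is 1/3.
```

Because everything hinges on position relative to a single median, small shifts in scores near the cutoff can swing the ratio sharply, which is precisely the fragility to unusual distributions and data tweaking that the article highlights.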