January 2023

SIOP Publishes Guidelines on AI-Based Employee Selection Assessments

The Society for Industrial and Organizational Psychology (SIOP) has released guidelines on the validation and use of AI-based assessments in employee selection. The guidelines rest on five principles: accurate prediction of job performance, consistent scores, fair and unbiased scores, appropriate use, and adequate documentation to support decision-making. Complying with these principles requires validating tools, treating groups equitably, identifying and mitigating predictive and measurement bias, and using informed approaches. The guidelines also recommend increasing transparency and fairness in AI-driven assessments, documenting decision-making processes, and complying with the bias audit requirements of NYC Local Law 144. This article is informational and not intended to provide legal advice.

NIST Launches AI Risk Management Framework 1.0

The National Institute of Standards and Technology (NIST) has released version 1.0 of its Artificial Intelligence Risk Management Framework (AI RMF) after 18 months of development. The framework is designed to help organisations prevent, detect, mitigate, and manage AI risks and to promote the adoption of trustworthy AI systems. The AI RMF emphasises flexibility, measurement, and trustworthiness, and calls on organisations to cultivate a risk management culture. NIST anticipates that feedback from organisations using the framework will help it develop into a global gold standard in line with EU regulations.

Key Takeaways from the Department of Consumer and Worker Protection’s Second Public Hearing on NYC Local Law 144

The enforcement date for NYC Local Law 144 has been pushed back to 5 July 2023, and the Department of Consumer and Worker Protection (DCWP) has held a second public hearing on the proposed rules. Auditors must be independent third parties, and there is support for widening the scope of audits beyond the bias risk vertical. Concerns remain that the definitions of AEDTs in the updated rules are too narrow, potentially allowing bad-faith actors to argue that their tools fall outside the scope of the legislation. A third version of the rules may be released before the law takes effect.

Disparate Impact in Bias Audits: Evaluating the DCWP’s Impact Ratio Metrics for Regression Systems

New York City passed Local Law 144 in November 2021 to mandate bias audits of automated employment decision tools (AEDTs) used in candidate screening and promotion. The Department of Consumer and Worker Protection (DCWP) proposed metrics for calculating impact ratios for regression systems, but these metrics have limitations: they can be misled by unexpected score distributions and manipulated by small tweaks to the underlying data. The article suggests alternatives, such as metrics that assess fairness over the whole score distribution, statistical tests that compare distributions, or metrics that compare the ranking of candidates rather than the scores themselves. Holistic AI offers an open-source library of metrics for both binary and regression systems, along with bias mitigation strategies. A sketch of these ideas follows below.
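To make the regression-system metric concrete, the minimal Python sketch below (using numpy and scipy; function and variable names are illustrative and not taken from the DCWP rules or the Holistic AI library) computes impact ratios by binarising scores at the overall median, in line with the DCWP's proposed approach of treating a score above the median as a positive outcome, and contrasts it with a distribution-level comparison using a two-sample Kolmogorov-Smirnov test, one example of the whole-distribution alternatives mentioned above.

```python
import numpy as np
from scipy.stats import ks_2samp


def dcwp_regression_impact_ratios(scores, groups):
    """Impact ratios for a continuous-score (regression) AEDT.

    Treats a score above the overall median as a 'positive' outcome,
    then compares the rate of positive outcomes across groups.
    Names and structure are illustrative only.
    """
    scores = np.asarray(scores, dtype=float)
    groups = np.asarray(groups)
    median = np.median(scores)

    # Scoring rate per group: share of candidates scoring above the median.
    scoring_rates = {
        g: np.mean(scores[groups == g] > median) for g in np.unique(groups)
    }
    best = max(scoring_rates.values())
    # Impact ratio: each group's scoring rate relative to the best-off group.
    return {g: rate / best for g, rate in scoring_rates.items()}


def distribution_comparison(scores, groups, group_a, group_b):
    """Alternative check: compare whole score distributions, not a single cut-off."""
    scores = np.asarray(scores, dtype=float)
    groups = np.asarray(groups)
    a = scores[groups == group_a]
    b = scores[groups == group_b]
    statistic, p_value = ks_2samp(a, b)
    return statistic, p_value


if __name__ == "__main__":
    rng = np.random.default_rng(0)
    scores = np.concatenate([rng.normal(0.6, 0.1, 200), rng.normal(0.5, 0.1, 200)])
    groups = np.array(["group_a"] * 200 + ["group_b"] * 200)
    print(dcwp_regression_impact_ratios(scores, groups))
    print(distribution_comparison(scores, groups, "group_a", "group_b"))
```

Unlike a single above/below-median cut-off, the KS statistic responds to differences anywhere in the score distribution, which is why distribution-level metrics are harder to game through small changes near the threshold.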

Overcoming Small Sample Sizes When Identifying Bias

New York City has passed Local Law 144, which requires employers and employment agencies to commission independent, impartial bias audits of automated employment decision tools (AEDTs) used to evaluate candidates for employment or employees for promotion. The bias audits are based on impact ratios derived from the Equal Employment Opportunity Commission's four-fifths rule, which is used to determine whether a hiring procedure results in adverse or disparate impact. However, the rule can produce false positives when sample sizes are small, and the NYC legislation provides no guidance on this issue. The enforcement date of Local Law 144 has been delayed to 5 July 2023, giving employers, employment agencies, and vendors more time to collect additional data and make the analysis more robust. An illustrative calculation follows below.
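To illustrate how small samples can distort the four-fifths check, the hedged Python sketch below (names are illustrative; this is neither the Holistic AI library nor a procedure prescribed by the DCWP) computes selection rates and impact ratios for a binary selection decision and pairs the four-fifths rule with Fisher's exact test, one common way to gauge whether an apparent disparity could plausibly be chance when samples are small.

```python
import numpy as np
from scipy.stats import fisher_exact


def four_fifths_check(selected, groups, reference=None):
    """Selection rates, impact ratios, and a small-sample sanity check.

    selected:  boolean array, True if the candidate was selected.
    groups:    array of group labels (e.g. sex or race/ethnicity categories).
    reference: group to compare against; defaults to the highest-rate group.
    """
    selected = np.asarray(selected, dtype=bool)
    groups = np.asarray(groups)

    rates = {g: selected[groups == g].mean() for g in np.unique(groups)}
    if reference is None:
        reference = max(rates, key=rates.get)

    results = {}
    for g, rate in rates.items():
        impact_ratio = rate / rates[reference] if rates[reference] > 0 else np.nan
        # 2x2 contingency table: selected / not selected, this group vs reference.
        table = [
            [int(selected[groups == g].sum()), int((~selected[groups == g]).sum())],
            [int(selected[groups == reference].sum()),
             int((~selected[groups == reference]).sum())],
        ]
        _, p_value = fisher_exact(table)
        results[g] = {
            "selection_rate": rate,
            "impact_ratio": impact_ratio,
            "below_four_fifths": bool(impact_ratio < 0.8),
            "fisher_p_value": p_value,  # high p-value: disparity may be chance
        }
    return results


if __name__ == "__main__":
    # Tiny sample: 4 of 5 selected vs 3 of 5 selected already trips the 0.8 threshold.
    selected = np.array([1, 1, 1, 1, 0, 1, 1, 1, 0, 0], dtype=bool)
    groups = np.array(["a"] * 5 + ["b"] * 5)
    for group, result in four_fifths_check(selected, groups).items():
        print(group, result)
```

In this toy example the impact ratio falls below 0.8 even though a difference of one selection out of five could easily be noise, which is exactly the false-positive risk that small sample sizes create and that additional data collection helps to address.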