January 2023

The Society for Industrial and Organizational Psychology (SIOP) has released guidelines on the validation and use of AI-based assessments in employee selection. The guidelines are built on five principles: accurate prediction of job performance, consistent scores, fair and unbiased scores, appropriate use, and adequate documentation to support decision-making. Compliance with these principles requires validating tools, treating groups equitably, identifying and mitigating predictive and measurement bias, and using informed approaches. The guidelines also recommend increasing transparency and fairness in AI-driven assessments, documenting decision-making processes, and complying with the bias audit requirements of NYC Local Law 144. This article is informational and not intended to provide legal advice.

The National Institute of Standards and Technology (NIST) has launched the first version of its Artificial Intelligence Risk Management Framework (AI RMF) after 18 months of development. The framework is designed to help organizations prevent, detect, mitigate, and manage AI risks and to promote the adoption of trustworthy AI systems. The AI RMF emphasizes flexibility, measurement, and trustworthiness, and calls on organizations to cultivate a risk management culture. NIST anticipates that feedback from organizations using the framework will help establish a global gold standard, in line with EU regulation.

The US Equal Employment Opportunity Commission (EEOC) has published a Strategic Enforcement Plan (SEP) for the 2023-2027 fiscal years, which prioritizes oversight of AI and automated employment tools to prevent discrimination against protected groups. The EEOC aims to ensure that these tools do not disproportionately impact protected subgroups and has launched initiatives to examine the impact of AI on employment decisions. The EEOC recently sued iTutorGroup for age discrimination over its use of software that automatically rejected older applicants, highlighting the importance of regulation in preventing AI-related discrimination in employment.

December 2022

New York City's Local Law 144 mandates independent, impartial bias audits for automated employment decision tools (AEDTs) used in hiring or promotion decisions. The enforcement date has been pushed back to 2023 due to concerns about who qualifies as an independent auditor and the suitability of the impact ratio metrics. The updated rules clarify that bias audits must be conducted by a third party and must include impact ratios calculated from either selection rates or average scores. The audit can be based on test data when historical data is not available. Additionally, employers must publish their AEDT data retention policies on their website. Holistic AI offers auditing services for businesses seeking compliance.
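To illustrate the selection-rate version of the impact ratio mentioned above, the sketch below computes each group's selection rate and divides it by the highest group's rate. This is a simplified illustration with hypothetical group names and counts, not the authoritative calculation defined in the Local Law 144 rules.

```python
# Hypothetical applicant and selection counts per group (illustrative only)
applicants = {"group_a": 200, "group_b": 150}  # candidates assessed per group
selected = {"group_a": 80, "group_b": 45}      # candidates selected per group

# Selection rate = number selected / number assessed, per group
rates = {g: selected[g] / applicants[g] for g in applicants}

# Impact ratio = each group's selection rate divided by the
# highest group's selection rate (rounded here for display)
best = max(rates.values())
impact_ratios = {g: round(rates[g] / best, 3) for g in rates}

print(rates)          # {'group_a': 0.4, 'group_b': 0.3}
print(impact_ratios)  # {'group_a': 1.0, 'group_b': 0.75}
```

The average-score variant described in the rules follows the same pattern, substituting each group's mean assessment score for its selection rate.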
April 2022

The enforcement date of the NYC Bias Audit law, which requires companies to have their automated hiring and promotion tools audited for bias, has been postponed twice. The final enforcement date is now 5 July 2023. Holistic AI is an AI risk management company that offers bias audit services to help companies comply with the law. This article is for informational purposes only and does not provide legal advice.