March 2023
The UK Government has published a White Paper setting out a regulatory framework for AI built on five key principles: safety, transparency, fairness, accountability, and contestability. The approach seeks to promote responsible innovation while maintaining public trust. The White Paper also proposes a multi-regulator sandbox and calls for practical guidance to help businesses put these principles into practice.
February 2023
California State Senator Bill Dodd has introduced Senate Bill 313 to regulate the use of AI in California. The Bill would establish an Office of Artificial Intelligence within the Department of Technology to guide the design and deployment of automated systems by state agencies, ensuring compliance with state and federal regulations and minimizing bias. It also prioritizes fairness, transparency, and accountability to prevent discrimination and to protect privacy and civil liberties. The Bill currently lacks specific actions and enforcement mechanisms, although future amendments are likely to address this. Holistic AI offers compliance services for AI regulations.
January 2023
The Society for Industrial and Organizational Psychology (SIOP) has released guidelines on the validation and use of AI-based assessments in employee selection. The guidelines are built around five principles: accurate prediction of job performance, consistent scores, fair and unbiased scores, appropriate use, and adequate documentation to support decision-making. Compliance with these principles requires validating tools, treating groups equitably, identifying and mitigating predictive and measurement bias, and taking an informed approach to their use. The guidelines also recommend increasing transparency and fairness in AI-driven assessments, documenting decision-making processes, and complying with the bias audit requirements of NYC Local Law 144. This article is informational and is not intended to provide legal advice.
The National Institute of Standards and Technology (NIST) has launched the first version of its Artificial Intelligence Risk Management Framework (AI RMF) after 18 months of development. The framework is designed to help organisations prevent, detect, mitigate, and manage AI risks and to promote the adoption of trustworthy AI systems. The AI RMF emphasises flexibility, measurement, and trustworthiness, and requires organisations to cultivate a risk management culture. NIST anticipates that feedback from organisations using the framework will inform future versions and help establish it as a global gold standard alongside EU regulation.
The US Equal Employment Opportunity Commission (EEOC) has published a Strategic Enforcement Plan (SEP) for the 2023-2027 fiscal years, which prioritizes scrutiny of AI and automated employment tools to prevent discrimination against protected groups. The EEOC aims to ensure that these tools do not disproportionately impact protected subgroups and has launched initiatives to examine the impact of AI on employment decisions. The EEOC recently sued iTutorGroup for age discrimination over its use of software that automatically rejected older applicants, highlighting the importance of regulation in preventing AI-related discrimination in employment.