February 2023
The Dutch government is increasing oversight of AI systems following a scandal involving a biased algorithm used by its tax office. The government has committed to a new statutory regime that ensures AI systems are checked for transparency and discrimination, and the data protection regulator will receive extra funding for algorithm oversight. Seeking greater transparency around AI systems deployed in the public sector, the government is proposing a legal requirement to use an assessment framework, a register of high-risk AI systems, and specific measures for human oversight and non-discrimination. The proposals currently apply only to the public sector, but they are likely to affect businesses supplying AI systems to public bodies and to raise public awareness of how AI systems are used.
Large language models (LLMs) such as Galactica, ChatGPT, and Bard are gaining popularity and being integrated into many aspects of daily life. While they offer many benefits, society must understand their limitations, biases, and regulatory issues. The use of LLMs raises questions about who is responsible for generated responses and how bias and discrimination in the models can be measured and mitigated. Regulation could include data representativeness audits and human evaluations to ensure fair and accurate results; a toy illustration of the former follows below. It is essential to understand how LLMs work and to put governance mechanisms in place to address the associated risks.
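As a rough illustration of what a data representativeness audit might involve, the sketch below compares the demographic composition of a training dataset against reference population shares. The categories, counts, and tolerance threshold are all hypothetical assumptions, not drawn from any proposed regulation.

```python
# Toy sketch of a data representativeness check: compare the share of
# each category in a training dataset against a reference population
# share. All categories, counts, and the 5% tolerance are hypothetical.

def representativeness_gaps(sample_counts: dict[str, int],
                            reference_shares: dict[str, float]) -> dict[str, float]:
    """Return each category's sample share minus its reference share."""
    total = sum(sample_counts.values())
    return {c: sample_counts[c] / total - reference_shares[c] for c in reference_shares}

if __name__ == "__main__":
    sample_counts = {"category_a": 700, "category_b": 200, "category_c": 100}
    reference_shares = {"category_a": 0.50, "category_b": 0.30, "category_c": 0.20}
    for category, gap in representativeness_gaps(sample_counts, reference_shares).items():
        if gap > 0.05:
            flag = "over-represented"
        elif gap < -0.05:
            flag = "under-represented"
        else:
            flag = "within tolerance"
        print(f"{category}: gap {gap:+.2f} ({flag})")
```

A real audit would go well beyond simple proportion gaps, but even this minimal check shows how skew in training data can be made visible and reportable.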
Human resources teams have been using artificial intelligence (AI) to innovate their practices, particularly in talent sourcing and management. However, such practices are now being targeted by regulations promoting transparency, such as pay transparency laws. In 2021, Colorado became the first state with a pay transparency law in effect, requiring employers to disclose salary or pay ranges for open positions; other states, including California, Washington, and Rhode Island, have followed suit. New York City has also mandated bias audits for automated employment decision tools used to evaluate employees for promotion or candidates for employment, and covered employers must make a summary of the bias audit results publicly available on their website. Companies are also expected to inform consumers that they use AI and when consumers are interacting with it. Holistic AI, a responsible AI pioneer, can help enterprises adopt and scale AI confidently by identifying and mitigating risks and using proposed regulations to inform its product.
California State Senator Bill Dodd has introduced Senate Bill 313 to regulate the use of AI in California. The Bill would establish an Office of Artificial Intelligence within the Department of Technology to guide state agencies' design and deployment of automated systems, ensuring compliance with state and federal regulations and minimizing bias. It also prioritizes fairness, transparency, and accountability to prevent discrimination and protect privacy and civil liberties. The Bill currently lacks specific actions and enforcement guidelines, but future amendments are likely to address this. Holistic AI offers compliance services for AI regulations.
The Equal Employment Opportunity Commission (EEOC) has published a draft Strategic Enforcement Plan for 2023-2027 that focuses on how algorithms and artificial intelligence (AI) used in hiring may lead to employment discrimination. The EEOC recently held a public hearing exploring the implications of AI in employment decisions for US employees and job candidates. Key takeaways included concerns about the four-fifths rule as a metric for determining adverse impact, the importance of auditing to mitigate potential biases, and the need to update the scope of Title VII liability to keep pace with technological advances. With enforcement actions likely, employers and vendors should understand and manage the risks of using AI for employee recruitment and selection.
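For readers unfamiliar with it, the four-fifths rule from the EEOC's Uniform Guidelines treats a group's selection rate below 80% of the highest group's selection rate as evidence of adverse impact. The Python sketch below illustrates only that calculation; the function name and sample figures are illustrative assumptions, not drawn from any EEOC tooling.

```python
# Minimal sketch of the four-fifths (80%) rule: each group's selection
# rate is compared with the highest group's rate, and a ratio below 0.8
# suggests adverse impact. All names and figures are illustrative.

def adverse_impact_ratios(selected: dict[str, int],
                          applicants: dict[str, int]) -> dict[str, float]:
    """Return each group's selection rate divided by the highest group's rate."""
    rates = {g: selected[g] / applicants[g] for g in applicants if applicants[g] > 0}
    top_rate = max(rates.values())
    return {g: rate / top_rate for g, rate in rates.items()}

if __name__ == "__main__":
    # Hypothetical hiring-funnel counts for two applicant groups.
    selected = {"group_a": 48, "group_b": 24}
    applicants = {"group_a": 80, "group_b": 60}
    for group, ratio in adverse_impact_ratios(selected, applicants).items():
        status = "potential adverse impact" if ratio < 0.8 else "meets four-fifths threshold"
        print(f"{group}: impact ratio {ratio:.2f} ({status})")
```

Note that the four-fifths rule is a rule of thumb rather than a statistical test, which is one reason its adequacy as an adverse-impact metric was questioned at the hearing.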