July 2023
New York is taking the lead in regulating automated employment decision tools (AEDTs), proposing three laws at both the state and local levels to increase transparency and safety around their use in hiring. These laws seek to impose requirements such as bias audits, notification of AEDT use, and disparate impact analysis. AEDTs are defined as computational processes that use machine learning, statistical modelling, data analytics, or artificial intelligence to issue simplified outputs that substantially assist or replace discretionary decision-making in employment decisions. The use of such tools was fuelled by the pandemic; they can reduce the time to fill open positions, improve the candidate experience, and increase diversity. However, there have been instances of misuse and inappropriate development of these tools, resulting in high-profile scandals and lawsuits.
States across the US are introducing legislation to regulate HR Tech, with New York proposing legislation targeting automated employment decision tools (AEDTs) and California proposing multiple pieces of legislation. Federal efforts have now emerged, with Senators introducing the No Robot Bosses Act and the Exploitative Workplace Surveillance and Technologies Task Force Act. The No Robot Bosses Act seeks to protect job applicants and employees from the undisclosed use of automated decision systems, requiring employers to provide notice of when and how such systems are used. The Exploitative Workplace Surveillance and Technologies Task Force Act seeks to create an interagency task force to lead a whole-of-government study and report to Congress on workplace surveillance. The Biden-Harris administration has also secured voluntary commitments from several AI companies to ensure their products are safe and to build public trust.
The development and establishment of artificial intelligence (AI) standards has become a pressing necessity as the AI ecosystem rapidly evolves. Standards act as common guidelines, principles, and technical specifications for the development, deployment, and governance of AI systems. Technical standards in AI governance encompass foundational, process, measurement, and performance standards. Adopting standards enables organisations to benchmark, audit, and assess AI systems, ensuring conformity and performance evaluation, benefiting developers, consumers, and data subjects impacted by AI technologies. Standards bodies, such as the International Organization for Standardization (ISO) and the International Electrotechnical Commission (IEC), facilitate the development of consensus-driven standards through multi-stakeholder deliberations, promoting global and regional harmonisation.
Large language models (LLMs) can produce hallucinations: inaccuracies in the generated output that are neither present in nor correctly inferred from their training data. Hallucinations fall into four primary categories: dialogue history-based hallucinations, hallucinations in abstractive summarisation, generative question answering hallucinations, and general data generation hallucinations. The root cause of hallucinations is the probabilistic nature of LLMs, which lack common sense or factual knowledge and instead rely on pattern recognition and statistics. Holistic AI offers innovative solutions and an AI risk management platform to help organisations navigate the complex terrain of AI and harness the power of LLMs with confidence and efficiency. This blog article is not intended to provide legal advice or opinion and is not a substitute for experienced legal counsel.
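The probabilistic root cause described above can be illustrated with a minimal toy sketch. All tokens and probability figures below are invented for illustration and bear no relation to any real model: the point is simply that a system sampling the next token from a learned frequency distribution has no notion of truth, so a fluent but false continuation can be emitted whenever a wrong token carries non-trivial probability mass.

```python
import random

# Invented next-token distribution for the prompt below. A real LLM scores
# tens of thousands of tokens; here three suffice to show the mechanism.
next_token_probs = {
    "Paris": 0.55,   # statistically dominant, factually correct continuation
    "Lyon": 0.25,    # plausible-looking but factually wrong
    "Berlin": 0.20,  # also wrong, yet just as fluent to sample
}

def sample_next_token(probs, rng):
    """Draw one token in proportion to its probability mass."""
    tokens = list(probs)
    weights = [probs[t] for t in tokens]
    return rng.choices(tokens, weights=weights, k=1)[0]

rng = random.Random(0)  # fixed seed so the sketch is reproducible
prompt = "The capital of France is"
completions = [sample_next_token(next_token_probs, rng) for _ in range(10)]
print(prompt, completions)
```

Because 45% of the probability mass in this toy distribution sits on incorrect tokens, repeated sampling will regularly surface them: the model is rewarded for statistical plausibility, not factual accuracy, which is why hallucinations arise without any grounding in the training data.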
The HR industry is facing challenges in ensuring consistent compliance with an expanding and fluctuating set of regulations, particularly in relation to the use of AI and automated decision-making tools. Two key pieces of legislation are New York City's Local Law 144 and the EU AI Act, which both aim to ensure fairness, transparency, and accountability in their respective jurisdictions. Local Law 144 applies specifically to employers or employment agencies using automated employment decision tools, while the AI Act covers a broad range of AI systems and will apply to providers of AI systems established within the European Union, as well as to systems used within it. Both laws carry penalties for non-compliance, including financial penalties and reputational and operational risks. Holistic AI offers expertise in AI auditing to help organisations navigate these complexities. Readers should note that this blog article does not offer legal advice or opinion.