July 2023

Fact or Fiction: What Are the Different LLM Hallucination Types?

Large language models (LLMs) can produce hallucinations: inaccuracies in the generated output that are neither present in nor correctly inferred from their training data. Hallucinations fall into four primary categories: hallucinations based on dialogue history, hallucinations in abstractive summarisation, hallucinations in generative question answering, and general data generation hallucinations. The root cause of hallucinations is the probabilistic nature of LLMs, which lack common sense or factual knowledge and instead rely on pattern recognition and statistical prediction. Holistic AI offers innovative solutions and an AI risk management platform to help organisations navigate the complex terrain of AI and harness the power of LLMs with confidence and efficiency. This blog article is not intended to provide legal advice or opinion and is not a substitute for experienced legal counsel.
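
To make that last point concrete, here is a minimal sketch of probabilistic next-token sampling. The vocabulary and logits are entirely made up for illustration; the point is that the model selects a continuation from a learned distribution rather than consulting a source of facts, so when no option is strongly supported it can still emit a fluent but unsupported answer.

```python
import numpy as np

# Hypothetical candidate next tokens and model scores (logits), nearly tied.
vocab = ["Paris", "Lyon", "Berlin", "Madrid"]
logits = np.array([2.1, 1.9, 1.7, 1.6])

def sample_next_token(logits, temperature=1.0, rng=None):
    """Softmax sampling: higher temperature flattens the distribution."""
    rng = np.random.default_rng(0) if rng is None else rng
    scaled = logits / temperature
    probs = np.exp(scaled - scaled.max())
    probs /= probs.sum()
    return rng.choice(len(probs), p=probs), probs

idx, probs = sample_next_token(logits)
print(dict(zip(vocab, probs.round(3))))  # no option dominates
print("sampled:", vocab[idx])            # a plausible-sounding, possibly wrong answer
```

The model's "answer" here is just the token that happened to be sampled; nothing in the procedure checks it against a ground truth, which is why hallucinations are a structural risk rather than an occasional glitch.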

AI in HR: Comparing the EU AI Act and NYC Local Law 144

The HR industry faces challenges in ensuring consistent compliance with an expanding and evolving set of regulations, particularly around the use of AI and automated decision-making tools. Two key pieces of legislation are New York City's Local Law 144 and the EU AI Act, both of which aim to ensure fairness, transparency, and accountability in their respective jurisdictions. Local Law 144 applies specifically to employers and employment agencies using automated employment decision tools, while the AI Act covers a broad range of AI systems and will apply to providers of AI systems that are established within the European Union or whose systems are used within it. Both laws carry penalties for non-compliance, including financial penalties as well as reputational and operational risks. Holistic AI offers expertise in AI auditing to help organisations navigate these complexities. Readers should note that this blog article does not offer legal advice or opinion.

What happens if you don't comply with the NYC Bias Audit Law?

New York City Local Law 144 (the NYC Bias Audit Law) now requires every employer or employment agency that uses automated employment decision tools (AEDTs) to obtain an independent bias audit of these systems. The audit must be repeated at least annually, and organizations must post a summary of the results on their website or include it in a written policy or procedure provided to promotion candidates. AEDTs are defined as systems that rely on machine learning, statistical modeling, artificial intelligence, or data analytics to substantially assist or replace discretionary decision-making. The law imposes legal penalties on those who fail to comply, including significant fines, potential litigation, and human rights claims. Civil penalties range from $500 for a first violation (and each additional violation occurring on the same day) to between $500 and $1,500 for each subsequent violation, with each day an unaudited AEDT is used counting as a separate violation. Holistic AI can help with Local Law 144 compliance and other AI legislation.
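
To show how quickly the figures above can accumulate, here is a rough, purely illustrative sketch. The function name and the assumption that every day of unaudited use is penalised at the stated rates are ours for illustration; actual amounts are determined by enforcement, and this is not legal advice.

```python
def estimated_exposure(days_of_use: int,
                       first_penalty: int = 500,
                       subsequent_min: int = 500,
                       subsequent_max: int = 1500) -> tuple[int, int]:
    """Return a (low, high) range of potential penalties for `days_of_use`
    days of unaudited AEDT use, treating each day as a separate violation."""
    if days_of_use <= 0:
        return (0, 0)
    low = first_penalty + (days_of_use - 1) * subsequent_min
    high = first_penalty + (days_of_use - 1) * subsequent_max
    return (low, high)

print(estimated_exposure(30))  # (15000, 44000) for a month of unaudited use
```

Even under this simplified reading, a single unaudited tool used for a month could expose an organization to tens of thousands of dollars in penalties, before litigation or reputational costs are considered.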

AI and Lawsuits: The Need for AI Risk Management

The use of Artificial Intelligence (AI) in high-risk applications is raising legal and ethical questions, leading to legal battles and class actions. The consequences of AI misuse can be severe, including job loss, asset seizure, and the wrongful placement of children, prompting the need for AI risk management. Several industries have faced legal action over discriminatory outcomes resulting from AI use, including lawsuits in HR tech and insurance tech. To avoid such outcomes, due diligence is crucial when outsourcing AI systems, and it is vital to measure and mitigate bias and to understand additional risks relating to explainability, robustness, security, and privacy.
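
As one example of what measuring bias can look like in practice, the sketch below computes a common metric, the impact ratio (each group's selection rate divided by the highest group's selection rate). The group names and counts are hypothetical, and this is only one of several metrics an audit might use.

```python
# Hypothetical outcomes of an automated screening tool.
selected = {"group_a": 60, "group_b": 30}
applicants = {"group_a": 100, "group_b": 100}

# Selection rate per group, then each rate relative to the best-performing group.
rates = {g: selected[g] / applicants[g] for g in applicants}
best = max(rates.values())
impact_ratios = {g: rate / best for g, rate in rates.items()}

print(impact_ratios)  # {'group_a': 1.0, 'group_b': 0.5}
# Ratios well below 1.0 (the informal 0.8 "four-fifths" benchmark is a common
# reference point) flag potential adverse impact that warrants investigation.
```

Metrics like this are a starting point, not a verdict: a low ratio signals the need to examine the tool, its training data, and its deployment context before drawing conclusions.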

What is the EU AI Act?

The EU AI Act is proposed legislation aimed at creating a global standard for protecting users of AI systems from preventable harm. The Act takes a risk-based approach, establishing obligations for AI systems based on their level of risk. High-risk systems are subject to more stringent requirements, including continuous risk management, data governance practices, technical documentation, and transparency provisions. The Act also prohibits certain practices deemed to pose an unacceptable risk, such as the use of subliminal techniques or exploitative practices. Non-compliance with the regulation can result in steep penalties of up to €40 million or 7% of global annual turnover, whichever is higher. The Act will have far-reaching implications, affecting entities that interact with the EU market even if they are based outside the EU. The enforcement date of the EU AI Act depends on several stages of the EU legislative process.
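
As a toy illustration of the penalty cap described above (the greater of €40 million or 7% of global turnover), with hypothetical turnover figures:

```python
def max_fine_eur(global_turnover_eur: float,
                 fixed_cap: float = 40_000_000,
                 turnover_share: float = 0.07) -> float:
    """Maximum fine is the higher of the fixed cap or the turnover-based cap."""
    return max(fixed_cap, turnover_share * global_turnover_eur)

print(max_fine_eur(2_000_000_000))  # 140,000,000 -> turnover-based cap applies
print(max_fine_eur(100_000_000))    # 40,000,000  -> fixed cap applies
```

For large multinationals, the turnover-based cap will typically be the binding one, which is why the Act's penalty regime is often compared to the GDPR's.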