July 2023

Mapping AI Standards Across AI Governance, Risk and Compliance

The development and establishment of artificial intelligence (AI) standards have become a pressing necessity as the AI ecosystem rapidly evolves. Standards act as common guidelines, principles and technical specifications for the development, deployment and governance of AI systems. Technical standards in AI governance encompass foundational, process, measurement, and performance standards. Adopting standards enables organizations to benchmark, audit, and assess AI systems, ensuring conformity and supporting performance evaluation, to the benefit of developers, consumers, and the data subjects affected by AI technologies. Standards bodies, such as the International Organization for Standardization (ISO) and the International Electrotechnical Commission (IEC), develop consensus-driven standards through multi-stakeholder deliberations, promoting global and regional harmonisation.

Fact or Fiction: What Are the Different LLM Hallucination Types?

Large language models (LLMs) can produce hallucinations: generated content that is neither present in nor correctly inferred from their training data. Hallucinations fall into four primary categories: dialogue history-based hallucinations, hallucinations in abstractive summarisation, generative question answering hallucinations, and general data generation hallucinations. The root cause is the probabilistic nature of LLMs, which lack common sense or factual knowledge and instead rely on pattern recognition and statistics. Holistic AI offers innovative solutions and an AI risk management platform to help organizations navigate the complex terrain of AI and harness the power of LLMs with confidence and efficiency. This blog article is not intended to provide legal advice or opinion and is not a substitute for experienced legal counsel.
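To make the probabilistic point concrete, here is a minimal Python sketch (with a hypothetical three-word vocabulary and made-up probabilities, not taken from any real model) showing how sampling from a next-token distribution yields fluent but sometimes false continuations: tokens are chosen by likelihood, not by truth.

    import random

    # Toy next-token distribution for the prompt "The Eiffel Tower is in ..."
    # Probabilities are invented for illustration; a real LLM assigns a
    # probability to every token in a vocabulary of tens of thousands.
    next_token_probs = {
        "Paris": 0.80,   # factually correct continuation
        "London": 0.12,  # fluent but false
        "Berlin": 0.08,  # fluent but false
    }

    def sample_next_token(probs):
        """Sample a token in proportion to its probability."""
        tokens = list(probs)
        weights = list(probs.values())
        return random.choices(tokens, weights=weights, k=1)[0]

    # Over many generations, roughly a fifth of completions are
    # confident-sounding falsehoods: the sampler optimises likelihood,
    # not factual accuracy.
    samples = [sample_next_token(next_token_probs) for _ in range(1000)]
    print({token: samples.count(token) for token in next_token_probs})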

AI in HR: Comparing the EU AI Act and NYC Local Law 144

The HR industry faces challenges in maintaining compliance with an expanding and shifting set of regulations, particularly around the use of AI and automated decision-making tools. Two key pieces of legislation are New York City's Local Law 144 and the EU AI Act, which both aim to ensure fairness, transparency and accountability in their respective jurisdictions. Local Law 144 applies specifically to employers and employment agencies using automated employment decision tools, while the AI Act covers a broad range of AI systems and will apply to providers that are established within the European Union or whose AI systems are used within it. Both laws carry penalties for non-compliance, including fines as well as reputational and operational risks. Holistic AI offers expertise in AI auditing to help organisations navigate these complexities. Readers should note that this blog article does not offer legal advice or opinion.

What happens if you don't comply with the NYC Bias Audit Law?

New York City Local Law 144 (the NYC Bias Audit Law) now requires every employer or employment agency that uses automated employment decision tools (AEDTs) to obtain an independent bias audit of these systems. The audit must be repeated at least annually, and organizations must post a summary of the results on their website or, for internal promotion candidates, provide notice in a written policy or procedure. AEDTs are defined as systems that rely on machine learning, statistical modeling, artificial intelligence, or data analytics. Non-compliance carries legal penalties: fines of $500 for the first violation and each additional violation occurring on the same day, and between $500 and $1,500 for each subsequent violation, with each day an AEDT is used without a valid audit counting as a separate violation. Failure to comply also exposes organizations to potential lawsuits and human rights claims. Holistic AI can help with Local Law 144 compliance and other AI legislation.
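To show how that penalty schedule compounds, the short sketch below estimates worst-case exposure under two simplifying assumptions that are ours, not the statute's: exactly one violation accrues per day of use, and every subsequent violation is fined at the $1,500 maximum.

    def estimate_penalty_exposure(days_of_use, subsequent_fine=1500):
        """Rough upper-bound estimate under Local Law 144's fine schedule.

        Assumes one violation per day: $500 for the first day, and
        `subsequent_fine` (the statute allows $500-$1,500) for each
        later day.
        """
        if days_of_use <= 0:
            return 0
        return 500 + (days_of_use - 1) * subsequent_fine

    # Thirty days of using an unaudited AEDT could reach
    # $500 + 29 * $1,500 = $44,000.
    print(estimate_penalty_exposure(30))

Even under the minimum $500-per-violation reading, a month of daily use would reach $15,000, which is why auditing before deployment is the cheaper path.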

AI and Lawsuits: The Need for AI Risk Management

The use of artificial intelligence (AI) in high-risk applications is raising legal and ethical questions, leading to legal battles and class actions. The consequences of AI misuse can be severe, including job loss, asset seizure, and the wrongful placement of children, underscoring the need for AI risk management. Several industries have faced legal action over discriminatory outcomes from AI use, including HR tech and insurance tech. To avoid such outcomes, due diligence is crucial when outsourcing AI systems, and it is vital to measure and mitigate bias and to understand additional risks such as explainability, robustness, security, and privacy.
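Where the paragraph says bias must be measured, one common starting point in employment contexts is the impact ratio behind the US "four-fifths rule", which is also the metric at the heart of Local Law 144 bias audits. The sketch below uses hypothetical selection counts and flags any group whose selection rate falls below 80% of the highest group's rate; it is one metric among many, not a complete bias audit.

    # Hypothetical hiring outcomes for two applicant groups.
    outcomes = {
        "group_a": {"selected": 48, "total": 100},
        "group_b": {"selected": 30, "total": 100},
    }

    # Selection rate per group: selected / total applicants.
    rates = {g: d["selected"] / d["total"] for g, d in outcomes.items()}

    # Impact ratio: each group's rate relative to the highest rate.
    best_rate = max(rates.values())
    for group, rate in rates.items():
        ratio = rate / best_rate
        flag = "potential adverse impact" if ratio < 0.8 else "ok"
        print(f"{group}: rate={rate:.2f}, impact ratio={ratio:.2f} ({flag})")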