September 2024
NHeLP, EPIC, and Upturn have filed a complaint with the Federal Trade Commission (FTC) against Deloitte over the accuracy of the Texas Integrated Eligibility Redesign System (TIERS), which is used to evaluate individuals' eligibility for Medicaid. The complaint accuses Deloitte of engaging in deceptive and unfair practices in violation of the FTC Act, and cites several established voluntary AI frameworks, including the OECD AI Principles and the Universal Guidelines for AI, to support its case. It calls on Deloitte to adopt these frameworks, increase transparency, and implement harm-mitigation measures. The complaint underscores the value of voluntarily adopting responsible AI frameworks to reduce reputational and legal risk.
The EU AI Act imposes strict data governance and management requirements on the development of high-risk AI systems and general-purpose AI models. The Act also addresses the interplay between AI governance and personal data protection law, particularly the GDPR: it introduces new legal grounds for personal data processing and mandates impact assessments to address risks to fundamental rights and freedoms. Because AI technologies present unique challenges for privacy and personal data protection, enterprises may need to navigate compliance with multiple regulatory frameworks at once, and proper preparation is vital to avoid the harsh consequences of non-compliance.
A bill introduced in the UK's House of Lords, the Public Authority Algorithmic and Automated Decision-Making Systems Bill (HL Bill 27), would regulate the use of automated and algorithmic tools in decision-making by public authorities. The proposed legislation applies to all algorithmic and automated decision-making systems developed or procured by public authorities, excluding those used for national security and routine calculations. The bill introduces requirements for algorithmic impact assessments, algorithmic transparency records, monitoring and auditing, employee training, and adherence to human rights and democratic values. The legislation aims to align with the Council of Europe's Framework Convention on AI.
The Council of Europe's (CoE) Framework Convention on Artificial Intelligence and Human Rights, Democracy, and the Rule of Law has opened for signature and has already been signed by several parties, including the United States, the European Union, and the United Kingdom. The Convention applies primarily to States and certain international organizations. It establishes a comprehensive set of obligations whose central aim is to ensure that AI systems are developed, deployed, and used, throughout their lifecycle, in ways that respect human rights, democracy, and the rule of law. The Convention also sets out guiding principles and a risk management framework for AI systems. It marks a pivotal step in the global governance of AI, extending the geographic influence of such initiatives through the Council of Europe's broad membership.
The Governor of Illinois signed HB 3773, which amends the Illinois Human Rights Act to limit the use of predictive analytics in the workplace. The bill specifically addresses artificial intelligence, including generative AI, used in employment decisions, and extends the Act's civil rights provisions to cover AI. Under the amendment, it is a civil rights violation to use AI in a way that discriminates in the workplace, or to use AI in employment decisions without notifying employees. The law takes effect on 1 January 2026 and applies to employers, employment agencies, and labor organizations in Illinois.