February 2024
Derek Mobley has filed an amended complaint in the ongoing class action lawsuit against Workday, alleging that the company's algorithm-based applicant screening tools discriminate against individuals on the basis of race, age, and disability. Mobley seeks certification of the case as a class action, with himself as representative of the parties allegedly subjected to discriminatory screening practices, along with preliminary and permanent injunctions restraining Workday from the alleged unlawful conduct. He also requests an order requiring Workday to institute and carry out policies that provide equal employment opportunities for all minorities, as well as monetary compensation in the form of back pay, front pay, and other financial relief.
The Association of Southeast Asian Nations (ASEAN) has released a guide on AI governance and ethics to empower organizations and governments in Southeast Asia to navigate the landscape of artificial intelligence responsibly. The guide outlines three fundamental objectives for responsible AI development: providing practical guidance on designing, developing, and deploying AI systems; promoting consistency and collaboration in AI governance efforts among member states; and empowering policymakers to shape regulatory frameworks that encourage responsible AI practices. It sets out seven guiding principles as foundational pillars of responsible AI governance, including transparency and explainability, fairness and equity, and privacy and data governance. Its recommendations include establishing an ASEAN Working Group on AI Governance, nurturing AI talent, promoting investment in AI startups, investing in AI research and development, promoting the adoption of AI governance tools, and raising awareness among citizens. The guide emphasizes the need for collaboration among governments, the private sector, and educational institutions to foster ethical AI development and adoption within the region.
The EU has proposed the Harmonised Rules on Artificial Intelligence (the EU AI Act) to lead the world in AI regulation and build trust in AI systems. The Act sets out a risk-based approach for AI systems, defines three levels of risk, and subjects certain AI systems to transparency obligations. It also introduces a three-tiered model of penalties for violators, with the heftiest fines reserved for violations of the prohibition on specific AI systems. The Act has gone through an extended consultation process and has been amended throughout. Penalties can be issued to providers, deployers, importers, distributors, and notified bodies, and the Act emphasizes proportionality, offering lower penalties for SMEs and startups. Because there is no Union-wide central authority for imposing fines on AI operators, penalties depend on the national legal systems of the Member States; fines for providers of GPAI models and for Union bodies are imposed by the Commission and the European Data Protection Supervisor, respectively.
Ensuring the integrity, safety, security, and reliability of generative AI models is crucial for organizations developing and deploying them. Two important processes to achieve this are model evaluations and algorithm audits. While model evaluations assess a model's efficacy across various parameters, such as performance levels and risks, algorithm audits involve independent third-party assessments of reliability, risk detection, and regulatory compliance. Both processes should be used jointly to build the evidence base for a model's safety and risk mitigation capabilities. As regulatory momentum to legislate generative models accelerates, companies must proactively ensure they fulfill their obligations. Holistic AI's LLM Auditing product can help by blocking serious risks, detecting hallucinations and stereotypes, preventing offensive language and toxicity, and providing readability scores.
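To make one of these evaluation metrics concrete, the sketch below shows how a readability score can be computed automatically over model output. It uses the standard Flesch Reading Ease formula with a rough heuristic syllable counter; this is an illustrative example only, not Holistic AI's actual methodology, and the function names are hypothetical.

```python
import re


def count_syllables(word: str) -> int:
    """Rough syllable estimate: count groups of consecutive vowels,
    with a small adjustment for a trailing silent 'e'."""
    word = word.lower()
    count = len(re.findall(r"[aeiouy]+", word))
    if word.endswith("e") and count > 1 and not word.endswith(("le", "ee")):
        count -= 1
    return max(count, 1)


def flesch_reading_ease(text: str) -> float:
    """Flesch Reading Ease: higher scores indicate easier text.
    Formula: 206.835 - 1.015*(words/sentences) - 84.6*(syllables/words)."""
    sentences = [s for s in re.split(r"[.!?]+", text) if s.strip()]
    words = re.findall(r"[A-Za-z']+", text)
    if not sentences or not words:
        return 0.0
    syllables = sum(count_syllables(w) for w in words)
    return (206.835
            - 1.015 * (len(words) / len(sentences))
            - 84.6 * (syllables / len(words)))


# Simple prose scores high; dense, polysyllabic prose scores low.
easy_score = flesch_reading_ease("The cat sat on the mat. It was happy.")
hard_score = flesch_reading_ease(
    "Comprehensive organizational accountability necessitates "
    "systematic verification."
)
```

In a real audit pipeline, a metric like this would be one signal among many, run alongside checks for toxicity, hallucination, and stereotyping rather than on its own.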
The Federal Trade Commission (FTC) has issued orders to Alphabet, Amazon, Anthropic, Microsoft, and OpenAI to provide information about their investments and partnerships in generative AI companies, citing concerns about how these investments may distort innovation and undermine fair competition. The companies must provide information on agreements and related documents, interaction and influence, analyses and reports, documents related to exclusivity and access, materials provided to government entities, specifications for document production, use of technology, and contact information and communication. The FTC aims to better understand the competitive landscape and potential implications of AI collaborations to ensure fair competition and prevent practices that could stifle innovation.