August 2023
The Equal Employment Opportunity Commission (EEOC) settled a lawsuit with iTutorGroup for $365,000 over AI-driven age discrimination, the first US settlement involving an AI-powered recruitment tool. In 2020, iTutorGroup used an algorithm that automatically rejected older applicants because of their age, in violation of the Age Discrimination in Employment Act (ADEA). The settlement prohibits iTutorGroup from automatically rejecting tutors over 40, or any applicant based on their sex, and requires the company to comply with all relevant anti-discrimination laws. Providers and users of HR tech are likely to face more lawsuits targeting automated employment decision tools across the US.
The Council of the European Union has released a summary of the provisional agreements, pending items, and future priorities discussed during the second trilogue on the EU AI Act. The Act aims to set the global standard for AI regulation through a risk-based approach, imposing obligations on providers, deployers, and users of high-risk AI systems. The legislative process is expected to conclude by the end of 2023, with the final text likely to be published shortly after. The EU AI Act will affect both EU and non-EU companies operating AI systems within EU borders, and organizations can seek help from Holistic AI to manage their AI risks. This article is for informational purposes only and does not provide legal advice.
Explainable AI (XAI) is a paradigm that brings transparency and understanding to complex machine learning models. Two strategies for measuring global feature importance in ML models are permutation feature importance and surrogacy feature importance. Permutation feature importance systematically shuffles the values of a single feature while keeping the other features unchanged, observing how the shuffling affects the model's predictive accuracy or another performance metric, as shown in the first sketch below. Surrogacy feature importance fits an interpretable surrogate model to the predictions of a complex black-box model to gain insight into its behavior, as in the second sketch. These techniques help stakeholders trust a model's predictions and make informed decisions based on its output, fostering a culture of transparent and trustworthy AI systems. Holistic AI helps organizations validate their machine learning-based systems for safe, transparent, and reliable use of AI.
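To make the mechanics concrete, here is a minimal sketch of permutation feature importance in Python. The scikit-learn random forest and the synthetic dataset are illustrative assumptions, not a system discussed in the article:

```python
# Minimal sketch of permutation feature importance.
# Model and data are illustrative: a random forest on synthetic data.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=1000, n_features=5, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

model = RandomForestClassifier(random_state=0).fit(X_train, y_train)
baseline = accuracy_score(y_test, model.predict(X_test))

rng = np.random.default_rng(0)
for j in range(X_test.shape[1]):
    X_perm = X_test.copy()
    # Shuffle one feature's values while leaving all other features unchanged.
    X_perm[:, j] = rng.permutation(X_perm[:, j])
    score = accuracy_score(y_test, model.predict(X_perm))
    # A large drop from the baseline means the model relies heavily on feature j.
    print(f"feature {j}: importance = {baseline - score:.3f}")
```

scikit-learn also ships this technique as sklearn.inspection.permutation_importance, which repeats the shuffle several times per feature to average out noise.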
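And a companion sketch of surrogacy feature importance, reusing the model and data from the permutation example above. The shallow decision tree and its depth of 3 are illustrative choices:

```python
# Minimal sketch of surrogacy feature importance: train an interpretable
# surrogate (a shallow decision tree) to mimic the black-box model, then
# read global importances from the surrogate.
from sklearn.metrics import accuracy_score
from sklearn.tree import DecisionTreeClassifier

# The surrogate is trained on the black box's *predictions*, not the true labels.
black_box_preds = model.predict(X_train)
surrogate = DecisionTreeClassifier(max_depth=3, random_state=0)
surrogate.fit(X_train, black_box_preds)

# Fidelity: how closely the surrogate reproduces the black box on held-out data.
fidelity = accuracy_score(model.predict(X_test), surrogate.predict(X_test))
print(f"surrogate fidelity: {fidelity:.3f}")

# Global importances read from the interpretable surrogate.
for j, imp in enumerate(surrogate.feature_importances_):
    print(f"feature {j}: surrogate importance = {imp:.3f}")
```

The fidelity check matters: importances read from a surrogate are only trustworthy to the extent that the surrogate actually mimics the black box.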
03 Aug 2023
AI governance refers to the rules and frameworks that ensure the responsible use of AI. It is necessary to mitigate legal, financial, and reputational risks and to promote trust in AI technologies. Effective AI governance involves a multi-layered approach, ranging from organizational structure to regulatory alignment, and it requires the involvement of everyone in an organization. AI governance measures and metrics, such as transparency, bias detection and mitigation, and impact on stakeholders, should be regularly assessed and improved. Effective AI governance offers benefits such as preventing AI harm, meeting legal and regulatory requirements, and promoting scalability and transparency. Holistic AI offers solutions to implement responsible AI governance through independent AI audits, risk assessments, and inventory management.
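As one concrete example of what bias detection can look like in practice, here is an illustrative sketch of a disparate impact ratio, a selection-rate comparison commonly tracked in AI governance programs. The groups, outcomes, and the four-fifths threshold convention are assumptions for illustration, not figures from the article:

```python
# Illustrative bias metric: disparate impact ratio of selection rates
# between two groups, checked against the conventional four-fifths rule.
import numpy as np

# Hypothetical outcomes: 1 = selected by the model, 0 = rejected.
group_a = np.array([1, 1, 0, 1, 0, 1, 1, 0])   # e.g., a reference group
group_b = np.array([1, 0, 0, 0, 1, 0, 0, 0])   # e.g., a protected group

rate_a = group_a.mean()
rate_b = group_b.mean()
disparate_impact = rate_b / rate_a

print(f"selection rates: {rate_a:.2f} vs {rate_b:.2f}")
print(f"disparate impact ratio: {disparate_impact:.2f}")
if disparate_impact < 0.8:   # four-fifths rule of thumb
    print("potential adverse impact: flag for review and mitigation")
```

A single ratio is only a starting point; a governance process would track such metrics over time, across protected attributes, and alongside qualitative assessments.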
There is a push to specifically regulate the use of HR tech for employment decisions because algorithms trained on biased data can perpetuate bias at a far greater scale than individual human prejudices. Algorithmic assessment tools can be more complicated to validate, potentially making it harder to justify a tool's use. Algorithms can also reduce the explainability of hiring decisions, so disclosing to applicants that automated tools are used, and how they make decisions, may be necessary. Well-crafted laws for HR tech can mandate such disclosures, minimise bias through auditing, and require proper validation of these automated systems, complementing broader anti-discrimination laws. Policymakers worldwide are increasingly targeting HR tech for regulation.