August 2023
California is taking significant steps toward regulating AI, with multiple proposed laws aimed at making AI systems safer and fairer. AB-331 seeks to prohibit the use of automated decision tools that contribute to or result in algorithmic discrimination, while the California Workplace Technology Accountability Act would regulate worker information systems and electronic monitoring in the workplace. Modifications have also been proposed to California's existing employment regulations to address the use of AI in employment decisions. Additionally, SB-313 seeks to establish an Office of Artificial Intelligence within the Department of Technology to guide the design, use, and deployment of automated systems by state agencies, with the aim of minimizing bias.
Artificial intelligence (AI) and automation are rapidly transforming the insurance sector, with $8 billion invested in insurtech start-ups between 2018 and 2019. However, the use of algorithms in insurance has come under fire for producing biased outcomes, prompting policymakers to introduce regulation targeting the algorithms insurers use. Colorado's Senate Bill 21-169 prohibits insurers from unfairly discriminating on the basis of protected characteristics, while the European Commission's EU AI Act would require AI systems used in insurance to meet certain obligations. The National Association of Insurance Commissioners has also emphasized the importance of accountability, compliance, and transparency in the use of AI in insurance throughout its entire lifecycle.
The Equal Employment Opportunity Commission (EEOC) settled a lawsuit with iTutorGroup for $365,000 over AI-driven age discrimination, the first US settlement involving an AI-powered recruitment tool. In 2020, iTutorGroup used an algorithm that automatically rejected older applicants because of their age, violating the Age Discrimination in Employment Act. Under the settlement, iTutorGroup is prohibited from automatically rejecting tutors over 40, or rejecting anyone on the basis of sex, and must comply with all relevant anti-discrimination laws. More lawsuits targeting automated employment decision tools are likely to follow across the US.
The Council of the European Union has released a summary of the provisional agreements, pending items, and future priorities discussed during the second trilogue on the EU AI Act. The act aims to set the global standard for AI regulation through a risk-based approach, imposing obligations on providers, deployers, and users of high-risk AI systems. The legislative process is expected to conclude by the end of 2023, with the final text likely to be published shortly after. The EU AI Act will affect both EU and non-EU companies operating AI systems within EU borders, and organizations can seek help from Holistic AI to manage their AI risks. This article is for informational purposes only and does not provide legal advice.
Explainable AI (XAI) is a paradigm that brings transparency and understanding to complex machine learning models. Two strategies for measuring global feature importance in ML models are permutation feature importance and surrogacy feature importance. Permutation feature importance systematically shuffles the values of a single feature, while keeping all other features unchanged, to observe how the shuffling affects the model's predictive accuracy or another performance metric. Surrogacy feature importance relies on fitting an interpretable surrogate model to mimic a complex black-box model and then reading feature importances from the surrogate. These techniques enable stakeholders to trust a model's predictions and make informed decisions based on its output, fostering a culture of transparent and trustworthy AI systems. Holistic AI helps organizations validate their machine learning-based systems to allow safe, transparent, and reliable use of AI.
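To make the two techniques concrete, below is a minimal sketch using scikit-learn; the synthetic dataset, the random forest used as the "black box", and the decision tree surrogate are illustrative assumptions rather than the article's own example.

```python
# Minimal sketch of permutation and surrogate feature importance (assumed
# setup: synthetic data, random forest black box, decision tree surrogate).
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier

# Synthetic tabular data standing in for any real ML task.
X, y = make_classification(n_samples=1000, n_features=8,
                           n_informative=3, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# "Black box" model whose behaviour we want to explain.
model = RandomForestClassifier(random_state=0).fit(X_train, y_train)

# 1) Permutation feature importance: shuffle one feature at a time and
#    measure the drop in held-out accuracy.
result = permutation_importance(model, X_test, y_test,
                                n_repeats=10, random_state=0)
for i in np.argsort(result.importances_mean)[::-1]:
    print(f"feature {i}: {result.importances_mean[i]:.4f} "
          f"+/- {result.importances_std[i]:.4f}")

# 2) Surrogate feature importance: fit an interpretable model to mimic the
#    black box's predictions, then read importances off the surrogate.
surrogate = DecisionTreeClassifier(max_depth=3, random_state=0)
surrogate.fit(X_train, model.predict(X_train))  # trained on black-box outputs
print("surrogate fidelity:", surrogate.score(X_test, model.predict(X_test)))
print("surrogate importances:", surrogate.feature_importances_)
```

Note that the surrogate is trained on the black box's predictions rather than the true labels, so its fidelity score measures how faithfully it mimics the black box; its importances are only meaningful when that fidelity is high.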