August 2023

Artificial intelligence (AI) and automation are rapidly transforming the insurance sector, with $8 billion invested in insurtech start-ups between 2018 and 2019. However, the use of algorithms in insurance has come under fire for biased outcomes, prompting policy makers to introduce regulation targeting the algorithms used in insurance. Colorado's Senate Bill 21-169 and the European Commission's EU AI Act seek to prohibit insurers from unfairly discriminating based on protected characteristics and to ensure that AI systems meet certain obligations. The National Association of Insurance Commissioners has also emphasized the importance of accountability, compliance, and transparency in the use of AI in insurance throughout its entire lifecycle.

The Equal Employment Opportunity Commission (EEOC) settled a lawsuit with iTutorGroup for $365,000 over AI-driven age discrimination, the first such settlement against AI-powered recruitment tools in the US. In 2020, iTutorGroup used an algorithm that automatically rejected older applicants because of their age, violating the Age Discrimination in Employment Act. The settlement prohibits iTutorGroup from automatically rejecting tutors over 40, or anyone based on their sex, and requires the company to comply with all relevant non-discrimination laws. Employers using HR tech tools are likely to face more lawsuits targeting automated employment decision tools across the US.

The Council of the European Union has released a summary of the provisional agreements, pending items, and future priorities discussed during the second trilogue on the EU AI Act. The act aims to set the global standard for AI regulation through a risk-based approach and imposes certain obligations on providers, users, and deployers of high-risk AI systems. The legislative process is expected to finish by the end of 2023, and the final text is likely to be published shortly after. The EU AI Act will impact both EU and non-EU companies operating AI systems within EU borders, and organizations can seek help from Holistic AI to manage their AI risks. This article is for informational purposes only and does not provide legal advice.

Explainable AI (XAI) is a paradigm in AI that brings transparency and understanding to complex machine learning models. Two strategies for global feature importance in ML models are permutation feature importance and surrogate feature importance. Permutation feature importance involves systematically shuffling the values of a single feature, while keeping the other features unchanged, to observe how the shuffling impacts the predictive accuracy or performance metric of the model. Surrogate feature importance relies on fitting interpretable surrogate models to gain insights into complex black-box models. These techniques enable stakeholders to trust the predictions of the model and make informed decisions based on the model's output, fostering a culture of transparent and trustworthy AI systems. Holistic AI is a company that helps organizations validate their machine learning-based systems to allow safe, transparent, and reliable use of AI.
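Both strategies can be sketched in a few lines of Python. The snippet below is an illustrative example, not any particular vendor's implementation: the dataset is synthetic, the random-forest "black box" and the shallow decision tree surrogate are assumptions chosen for demonstration.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.tree import DecisionTreeClassifier
from sklearn.metrics import accuracy_score

rng = np.random.default_rng(0)

# A synthetic dataset and an opaque "black box" model (illustrative choices).
X, y = make_classification(n_samples=500, n_features=5,
                           n_informative=3, random_state=0)
model = RandomForestClassifier(random_state=0).fit(X, y)
baseline = accuracy_score(y, model.predict(X))

# Permutation feature importance: shuffle one feature at a time, keep the
# others fixed, and record how much the model's accuracy drops.
perm_importances = []
for j in range(X.shape[1]):
    X_perm = X.copy()
    X_perm[:, j] = rng.permutation(X_perm[:, j])
    perm_importances.append(baseline - accuracy_score(y, model.predict(X_perm)))

# Surrogate feature importance: train an interpretable model (here a shallow
# decision tree) to mimic the black box's predictions, then read the
# importances off the surrogate.
surrogate = DecisionTreeClassifier(max_depth=3, random_state=0)
surrogate.fit(X, model.predict(X))
surrogate_importances = surrogate.feature_importances_
```

A large accuracy drop under permutation marks a feature the model relies on heavily; the surrogate's importances convey the same idea through a model simple enough to inspect directly. In practice, permutation importance is usually computed on held-out data and averaged over several shuffles (scikit-learn's `sklearn.inspection.permutation_importance` does this) to reduce the variance of a single shuffle.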

03 Aug 2023
AI governance refers to the rules and frameworks that ensure the responsible use of AI. It is necessary to mitigate legal, financial, and reputational risks and to promote trust in AI technologies. Effective AI governance involves a multi-layered approach, ranging from organizational structure to regulatory alignment, and it requires the involvement of everyone in an organization. AI governance measures and metrics, such as transparency, bias detection and mitigation, and impact on stakeholders, should be regularly assessed and improved. Effective AI governance offers benefits such as preventing AI harms, meeting legal and regulatory requirements, and promoting scalability and transparency. Holistic AI offers solutions for implementing responsible AI governance through independent AI audits, risk assessments, and inventory management.