October 2023

10 things you need to know about the Colorado Division of Insurance's Bias Audit Proposals for Life Insurance Underwriting

Colorado has passed a law prohibiting unfair discrimination in insurance practices, targeting insurers' use of external consumer data and information sources, algorithms, and predictive models. The law prohibits discrimination based on race, color, national or ethnic origin, religion, sex, sexual orientation, disability, gender identity, or gender expression. The Colorado Commissioner of Insurance is tasked with developing specific requirements for different types of insurance and insurance practices. Under the proposals, life insurers must carry out quantitative testing, using cumulative data through 31 December of the prior year, to determine whether there is a statistically significant difference in application denial rates, or in premium rates per $1,000 of face amount, for Hispanic, Black, and Asian Pacific Islander insureds compared with White insureds. Noncompliance with the regulation can result in sanctions.
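The proposals leave the detailed testing methodology to the Division's rulemaking, but the comparison they describe can be illustrated with a simple two-proportion test of denial rates. The sketch below is a hypothetical illustration with made-up counts, not the Division's prescribed methodology; the group labels, the 5% threshold, and the use of statsmodels' proportions_ztest are assumptions for demonstration only.

    # Hypothetical illustration of comparing denial rates between two groups.
    # Counts are invented for demonstration; they are not insurer data.
    from statsmodels.stats.proportion import proportions_ztest

    # Cumulative applications and denials through 31 December of the prior year.
    denied = [132, 95]        # [protected-group applicants denied, White applicants denied]
    applied = [1800, 2400]    # [protected-group applications, White applications]

    z_stat, p_value = proportions_ztest(count=denied, nobs=applied)
    print(f"denial rates: {denied[0] / applied[0]:.3f} vs {denied[1] / applied[1]:.3f}")
    print(f"z = {z_stat:.2f}, p = {p_value:.4f}")

    # A small p-value (e.g. below 0.05) would flag a statistically significant
    # difference in denial rates that would warrant further review.

An analogous comparison of premium rates per $1,000 of face amount would use a difference-in-means test rather than a test of proportions.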

EU AI Act: Updates from the Ground

The EU AI Act is expected to pass the European legislative procedure by the end of 2023, followed by a transition period of two to three years before its requirements apply. The Act is framed as product safety legislation rather than consumer protection law and relies heavily on harmonized standards and implementing acts. While agreement has been reached on some provisions, controversial elements such as the definition of an AI system remain unsettled. The EU and the US have adopted different approaches to AI regulation, and negotiations over what would be the world's first comprehensive AI regulation continue. The uncertainty surrounding the Act has created industry interest in risk assessment, compliance tools, and protective measures.

How is AI in Healthcare Being Regulated?

Policymakers around the world are looking to regulate the use of AI in critical applications such as healthcare to address the potential risks and implications for patient care and wellbeing. The EU AI Act establishes a risk-based approach, categorizing systems by their level of risk and outlining specific obligations to be met. In the US, proposed horizontal legislation, such as the federal Algorithmic Accountability Act and Washington, DC's Stop Discrimination by Algorithms Act, addresses issues such as bias and discrimination. However, experts argue that, given the unique risks and needs in healthcare, sector-specific regulation of AI in healthcare is necessary to prevent harm while allowing appropriate consideration of patient demographics.

Does California have an AI Act?

California is among the states leading efforts to regulate AI, alongside initiatives at the local and federal levels. While California does not yet have a dedicated Artificial Intelligence Act, multiple bills have been proposed. Assembly Bill 302 requires the Department of Technology to establish an inventory of all high-risk automated decision systems used by any state agency and to submit an annual report to the designated legislative committees. Senate Bill 313 seeks to regulate the use of AI by state agencies and to establish an Office of Artificial Intelligence within the Department of Technology. Assembly Bill 331 aims to prohibit algorithmic discrimination through automated decision tools and would require developers and deployers to conduct impact assessments and establish governance programs to manage risks. Senate Concurrent Resolution 17 affirms the California Legislature's commitment to the White House's Blueprint for an AI Bill of Rights. It remains unclear whether California will adopt an AI Act, but policymakers clearly recognize the need for AI legislation and are targeting multiple critical applications.

September 2023

How is the FTC Regulating AI?

The Federal Trade Commission (FTC), which is responsible for protecting consumers and enforcing civil antitrust law in the US, is becoming increasingly vocal about the regulation of AI and the risks it poses to consumers and competition. The FTC has published warnings about biased AI, issued guidance on AI and algorithms, and highlighted the applicability of existing laws to AI, including Section 5 of the FTC Act, the Fair Credit Reporting Act, and the Equal Credit Opportunity Act. The FTC has emphasized the need for AI transparency, accountability, and independent standards, as well as the importance of ensuring that training data for algorithms is representative and of high quality. The FTC has also highlighted the role of regulators in preventing a small number of firms from dominating the market and hindering competition. The FTC is investigating OpenAI over consumer protection concerns related to its chatbot, ChatGPT, which has generated false statements about individuals and may have exposed user data. Ongoing compliance and risk management are essential to protect against preventable harms and ensure that AI is an asset, not a liability.