March 2024

EU AI Act and Sustainability: Environmental Provisions in the EU AI Act

The upcoming EU AI Act aims to protect fundamental rights, democracy, the rule of law, and environmental sustainability from high-risk AI while establishing Europe as a leader in the field. The Act includes provisions on the environmental impact and energy consumption of AI systems, such as improving their resource performance, reporting and documenting energy consumption, and encouraging compliance with environmental sustainability rules. It also establishes regulatory sandboxes to promote innovation under specific conditions, including a high level of protection for the environment and energy sustainability. The EU AI Office and Member States will work together to draw up codes of conduct for the voluntary application of specific requirements, including minimizing the impact of AI systems on environmental sustainability, and the Act requires regular evaluation and review of its environmental provisions, including standardization deliverables and voluntary codes of conduct. Providers of general-purpose AI models must provide detailed information on the computational resources used for training and on energy consumption. Anticipating the fast pace of advances in AI, the Act allows exemptions from conformity assessments in specific situations that ensure environmental protection and benefit society overall. Compliance requires a proactive, iterative approach.

How Colorado is Regulating Insurtech with SB21-169

Colorado's Senate Bill 21-169, which seeks to prevent unfair discrimination in insurance practices arising from the use of external consumer data or algorithms, was adopted on 6 July 2021 and came into effect on 1 January 2023. The law requires the Commissioner of Insurance to develop specific rules for different types of insurance and insurance practices in collaboration with relevant stakeholders. Rules have already been adopted for life insurance: life insurers must establish a risk-based governance and risk management framework, supported by policies, procedures, and systems, to determine whether their use of external consumer data or predictive models could result in unfair discrimination. Rules are still being developed for private passenger auto insurance, and the consultation process for health insurance is underway. From 1 April 2024, insurers must submit annual reports to the Division of Insurance summarizing the results of their testing.

NIST AI RMF Core Explained

The National Institute of Standards and Technology (NIST) has released a voluntary risk management framework, the AI Risk Management Framework (AI RMF), to help organizations manage the risks associated with AI systems. The framework is adaptable to organizations of all sizes and comprises four core functions: Govern, Map, Measure, and Manage. It also emphasizes four key themes: adaptability, accountability, diversity, and iteration. The framework is a resource for organizations that design, develop, deploy, or use AI systems, and it was developed through an 18-month consultation process with private- and public-sector groups.

Horizon Scan: The Key AI Laws Targeting Insurance You Need to Know in the US

American policymakers are increasingly regulating the use of AI in the insurance sector to ensure fair and safe deployment. Insurance applications are considered high risk because of their significant impact on consumers' lives. Multiple laws, taking a variety of approaches, have been enacted or proposed to mitigate bias and increase transparency; existing laws also apply to AI, and the regulatory landscape is evolving rapidly.

Balancing Creativity and Regulation: The EU AI Act’s Impact on Generative AI

Generative AI is a rapidly expanding field of AI that creates new content (such as images, text, audio, or other synthetic content) using large datasets and complex algorithms. With the enactment of the EU AI Act, however, generative AI developers are subject to strict regulatory scrutiny, including transparency obligations and additional requirements for high-risk or general-purpose AI models. These obligations include labeling artificially generated content, disclosing deepfakes and AI-generated text, informing natural persons that they are interacting with an AI system, and complying with copyright law. Generative AI developers must carefully evaluate and adapt to these requirements to ensure compliance with the EU AI Act.