March 2024

Horizon Scan: The Key AI Laws Targeting Insurance You Need to Know in the US

US policymakers are increasingly regulating the use of AI in the insurance sector to ensure it is deployed fairly and safely. Insurance applications are considered high-risk because of their significant impact on consumers' lives. A number of laws, taking a range of approaches, have been implemented or proposed to mitigate bias and increase transparency, and existing laws also apply to AI. The regulatory landscape is evolving rapidly.

Balancing Creativity and Regulation: The EU AI Act’s Impact on Generative AI

Generative AI is a rapidly expanding field of AI technology in which new content (such as images, text, audio, or other forms of synthetic content) is created using large datasets and complex algorithms. With the enactment of the EU AI Act, generative AI developers are now subject to strict regulatory scrutiny, including transparency obligations and additional requirements for high-risk or general-purpose AI models. These obligations include labeling artificially generated content, disclosing deepfakes and AI-generated text, informing natural persons when they are interacting with an AI system, and complying with copyright law. Generative AI developers must carefully evaluate and adapt to these requirements to ensure compliance with the EU AI Act.

AI and ESG: Harnessing AI for Sustainable Practices

AI has both positive and negative implications for the environment. While the technology consumes vast amounts of energy, it can also expand sustainable practices if its power is harnessed in the right way. AI developers can reduce their environmental impact by using efficient hardware, reducing inference time, and locating data centers in regions with cleaner energy. Opting for single-purpose LLMs for specific tasks and increasing transparency around energy consumption can also help. Despite its high energy use, AI can support sustainable practices and help achieve the UN Sustainable Development Goals. Jurisdictions around the world, including the US and EU, have begun developing regulation to address AI's environmental impact, and large companies have announced their own sustainability initiatives.

The EU AI Act and General-Purpose AI Systems: What You Need to Know

The EU AI Act imposes distinct and stringent obligations on providers of general-purpose AI (GPAI) models because of their adaptability and potential systemic risks. GPAI models are defined by their broad functionality and ability to perform a wide range of tasks without domain-specific tuning. GPAI models with high-impact capabilities are designated as GPAI models with systemic risk (GPAISR) and are subject to additional obligations around risk management and cybersecurity. The Act provides exemptions for models released under free and open licenses, and GPAISR providers can rely on codes of practice to demonstrate compliance until harmonized EU standards are established. The rules on GPAI models are expected to apply 12 months after the Act enters into force.

EU AI Act approved by the European Parliament

The European Parliament has approved the EU AI Act, but the Act still requires approval from the Council of the European Union and further scrutiny before it becomes law. Once adopted, it will be published in the Official Journal of the EU before becoming enforceable. The Act's provisions will apply in phases, with some likely to apply before the end of this year. Businesses should start preparing now for the Act's enforcement.