March 2024

25 Mar 2024
The National Institute of Standards and Technology (NIST) has released the AI Risk Management Framework (AI RMF), a voluntary framework to help organizations manage the risks associated with AI systems. The framework is adaptable to organizations of all sizes and is built around four core functions: Govern, Map, Measure, and Manage. It also emphasizes four key themes: Adaptability, Accountability, Diversity, and Iteration. The AI RMF is a resource for organizations that design, develop, deploy, or use AI systems, and was developed through an 18-month consultation process with private- and public-sector groups.

American policymakers are increasingly regulating the use of AI in the insurance sector to ensure it is deployed fairly and safely. Insurance applications are considered high-risk because of their significant impact on consumers' lives. Existing laws already apply to AI, several US laws have been enacted or proposed specifically to regulate its use in insurance, and the regulatory landscape is evolving rapidly. These measures take varied approaches to mitigating bias and increasing transparency.

Generative AI is a rapidly expanding field of AI technology in which new content (such as images, text, audio, or other forms of synthetic content) is created using large datasets and complex algorithms. With the enactment of the EU AI Act, generative AI developers are now subject to strict regulatory scrutiny, including transparency obligations and additional requirements for high-risk or general-purpose AI models. These obligations include labeling artificially generated content, disclosing deepfakes and AI-generated text, informing natural persons when they are interacting with an AI system, and complying with copyright laws. Generative AI developers must carefully evaluate and adapt to these requirements to ensure compliance with the EU AI Act.
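To make the labeling obligation concrete, here is a minimal Python sketch of one way a developer might attach a machine-readable disclosure to a generated image, assuming PNG text metadata is an acceptable channel. The Act does not prescribe any particular mechanism, and the field names used below are hypothetical:

```python
from PIL import Image
from PIL.PngImagePlugin import PngInfo

# Stand-in for model output: a blank 64x64 image.
image = Image.new("RGB", (64, 64))

# Attach a machine-readable disclosure as PNG text chunks.
# The keys "ai_generated" and "generator" are hypothetical,
# not prescribed by the EU AI Act.
metadata = PngInfo()
metadata.add_text("ai_generated", "true")
metadata.add_text("generator", "example-model-v1")

image.save("labeled_output.png", pnginfo=metadata)
```

In practice, developers may prefer standardized provenance schemes (such as embedded watermarks or content-credential metadata) over ad hoc keys, but the principle is the same: the disclosure travels with the content in a form that software can detect.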

AI has both positive and negative implications for the environment. While the technology consumes vast amounts of energy, it can also expand sustainable practices and help achieve the UN Sustainable Development Goals if its power is harnessed in the right way. AI developers can reduce their environmental impact by using efficient hardware, reducing inference time, locating data centers in regions with cleaner energy, opting for single-purpose LLMs for specific tasks, and increasing transparency around measurements of energy consumption. Jurisdictions around the world, including the US and the EU, have begun developing regulation targeting AI's environmental impact, and large companies have announced their own sustainability initiatives.
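As an illustration of the transparency point, the open-source codecarbon package offers one way to measure and report the energy use and estimated emissions of a workload. This is a minimal sketch, not a mandated reporting method, and the workload shown is a placeholder rather than a real training or inference job:

```python
from codecarbon import EmissionsTracker

# Track energy use and estimated CO2 emissions for a block of work.
tracker = EmissionsTracker(project_name="example-inference")
tracker.start()
try:
    # Placeholder workload; in practice this would be model
    # training or inference.
    total = sum(i * i for i in range(10_000_000))
finally:
    # stop() returns estimated emissions in kg of CO2-equivalent.
    emissions_kg = tracker.stop()

print(f"Estimated emissions: {emissions_kg:.6f} kg CO2eq")
```

Publishing figures like these alongside model releases is one practical way for developers to increase transparency about energy consumption, although methodologies and reporting formats are not yet standardized.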

The EU AI Act imposes distinct and stringent obligations on providers of general-purpose AI (GPAI) models because of their adaptability and potential for systemic risk. GPAI models are defined by their broad functionality and ability to perform a wide range of tasks without domain-specific tuning. GPAI models with high-impact capabilities are designated GPAI models with systemic risk (GPAISR) and are subject to additional risk management and cybersecurity obligations. The Act provides exemptions for models released under free and open-source licenses, while GPAISR providers can rely on codes of practice to demonstrate compliance until harmonized EU standards are established. The rules on GPAI models are expected to become applicable 12 months after the Act enters into force.