April 2024

How Can AI Enhance ESG Reporting?

Environmental, social, and governance (ESG) standards are becoming increasingly important to investors, consumers, and regulators. However, building a strong ESG reporting practice is challenging because of data sourcing problems and a dynamic global regulatory environment. Companies are turning to Large Language Models (LLMs) to help overcome these challenges: LLMs can support data collection and analysis, improve ratings, and produce estimates and predictions. Regulations on ESG reporting have increased globally, and attention is growing both to the environmental implications of AI and to its potential to benefit ESG. To apply AI safely, appropriate guidelines and guardrails must be in place.

OMB Policy on Governance, Innovation, and Risk Management

The White House Office of Management and Budget (OMB) has issued a policy on advancing governance, innovation, and risk management for agency use of artificial intelligence (AI). The policy directs federal agencies to promote AI innovation and governance while managing the risks the technology poses, especially those affecting the safety and rights of the American public. It aligns with President Biden’s executive order on AI, striking a balance between responsibility and innovation while holding federal agencies accountable for the development and risk management of AI. The policy outlines several AI governance, innovation, and risk management actions for federal agencies, including managing risks in the procurement of AI, addressing risks from the use of AI, expanding transparency of AI use, advancing responsible AI innovation, and growing the AI workforce. Alongside the policy, measures such as expanded reporting in the 2024 Federal AI Use Case Inventory and a Request for Information (RFI) on the responsible procurement of AI in government have been introduced to promote responsible AI use.

March 2024

EU AI Act and Sustainability: Environmental Provisions in the EU AI Act

The upcoming EU AI Act aims to protect fundamental rights, democracy, the rule of law, and environmental sustainability from high-risk AI while establishing Europe as a leader in AI. The Act includes provisions on the environmental impact and energy consumption of AI systems, such as improving their resource performance, reporting and documenting energy consumption, and encouraging compliance with environmental sustainability rules. It also establishes regulatory sandboxes to promote innovation under specific conditions, including a high level of protection of the environment and energy sustainability. The EU AI Office and Member States will work together to draw up codes of conduct for the voluntary application of specific requirements, including minimizing the impact of AI systems on environmental sustainability, and the Act requires regular evaluation and review of its environmental provisions, including standardization deliverables and voluntary codes of conduct. Providers of general-purpose AI models must provide detailed information on the computational resources used for training and on energy consumption. Anticipating the fast pace of advancement in AI, the Act allows exemptions from conformity assessments in specific situations that ensure environmental protection and benefit society overall. Compliance requires a proactive, iterative approach.

How Colorado is Regulating Insurtech with SB21-169

Colorado's Senate Bill 21-169, which seeks to prevent unfair discrimination in insurance practices arising from the use of external consumer data or algorithms, was adopted on 6 July 2021 and came into effect on 1 January 2023. The law requires the Commissioner of Insurance to develop specific rules for different types of insurance and insurance practices in collaboration with relevant stakeholders. Rules have already been adopted for life insurance, requiring life insurers to establish a risk-based governance and risk management framework, supported by policies, procedures, and systems, to determine whether their use of external consumer data or predictive models could result in unfair discrimination. Rules are still being developed for private passenger auto insurance, and the consultation process is underway for health insurance. From 1 April 2024, insurers must provide annual reports to the Division summarizing the results of their testing.

NIST AI RMF Core Explained

The National Institute of Standards and Technology (NIST) has released a voluntary risk management framework, the AI Risk Management Framework (AI RMF), to help organizations manage the risks associated with AI systems. The framework is adaptable to organizations of all sizes and comprises four functions: Govern, Map, Measure, and Manage. It also emphasizes four key themes: Adaptability, Accountability, Diversity, and Iteration. The framework is a resource for organizations that design, develop, deploy, or use AI systems and was developed through an 18-month consultation process with private and public sector groups.