March 2024

AI and ESG: Understanding the Environmental Impact of AI and LLMs

The growing use of Artificial Intelligence (AI) models, particularly Large Language Models (LLMs), has significant environmental implications because of the enormous amounts of energy their computation requires. Emissions from the IT sector, including data centers, cryptocurrency, and AI, are set to rise sharply after 2023, with AI projected to consume as much energy as a country the size of Argentina or the Netherlands by 2027. Chip manufacturing, the training phase, and the inference computations LLMs perform to generate predictions or responses all contribute significantly to this footprint. The issue is a growing concern for society, manufacturers, developers, and policymakers, who must work together to mitigate AI's high energy usage.
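To put the training phase in perspective, a back-of-envelope estimate is sketched below in Python. Every figure in it (accelerator count, power draw, training duration, data-center overhead) is an illustrative assumption rather than a measured value for any particular model.

```python
# Hypothetical estimate of the energy consumed by one LLM training run.
# All inputs are illustrative assumptions, not published figures.
gpu_count = 10_000       # assumed number of accelerators
gpu_power_kw = 0.7       # assumed average draw per accelerator, in kW
training_days = 90       # assumed wall-clock training duration
pue = 1.2                # assumed power usage effectiveness (cooling, etc.)

energy_mwh = gpu_count * gpu_power_kw * 24 * training_days * pue / 1_000
print(f"Estimated training energy: {energy_mwh:,.0f} MWh")
# ~18,144 MWh under these assumptions -- on the order of the annual
# electricity consumption of well over a thousand average households.
```

Even under these assumptions the total lands in the gigawatt-hour range, and the ongoing inference a deployed model performs adds to that over its lifetime, which is why the footprint keeps growing after training ends.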

Singapore’s National AI Strategy 2.0: What you need to know

Singapore aims to boost its AI capabilities and become a global leader in AI advancements, with a focus on three main areas: Activity Drivers, People and Communities, and Infrastructure and Environment. The government will allocate SG$1 billion (about US$743 million) over the next five years to foster AI growth, attract top talent, and strengthen AI infrastructure and governance frameworks. The strategy includes initiatives to support industry, government, and research; to acquire and upskill AI talent; and to create physical spaces for AI activities. Singapore also aims to establish a trusted environment for AI by institutionalizing governance and security frameworks.

How to Identify High-Risk AI Systems According to the EU AI Act

The EU AI Act, the world's first comprehensive legal framework governing AI across use cases, aims to protect fundamental rights and prevent harm by regulating AI use within the European Union. It categorizes AI systems by risk level and assigns different responsibilities to different parties accordingly. Prohibited AI systems are banned outright, high-risk systems are subject to rigorous design and operational requirements, and minimal-risk systems face no mandatory regulatory framework. The Act also delineates transparency obligations for limited-risk AI systems. Furthermore, general-purpose AI (GPAI) models have a dedicated chapter in the Act, and providers of such models are subject to additional, more rigorous technical requirements if their models carry systemic risk. Failure to comply with the Act's provisions can result in hefty penalties depending on the role of the infringer and the seriousness of the infringement. It is crucial for organizations to determine the level of regulatory risk their systems pose and to develop a risk management framework to prevent potential future harm.
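As a starting point for that triage, the Act's tiers can be thought of as a decision ladder checked from most to least restrictive. The Python sketch below is illustrative only: the category sets are hypothetical placeholders, and the Act's actual prohibited-practice and Annex III high-risk lists, along with their exemptions, are considerably more detailed.

```python
from enum import Enum

class RiskTier(Enum):
    PROHIBITED = "prohibited"   # banned outright
    HIGH = "high"               # rigorous design/operational requirements
    LIMITED = "limited"         # transparency obligations
    MINIMAL = "minimal"         # no mandatory requirements

# Hypothetical, simplified category sets for illustration only.
PROHIBITED_PRACTICES = {"social_scoring", "subliminal_manipulation"}
HIGH_RISK_AREAS = {"employment", "education", "credit_scoring",
                   "law_enforcement", "critical_infrastructure"}
TRANSPARENCY_USES = {"chatbot", "deepfake_generation", "emotion_recognition"}

def triage(use_case: str) -> RiskTier:
    """Map a use-case label to an approximate risk tier, checking the
    most restrictive categories first."""
    if use_case in PROHIBITED_PRACTICES:
        return RiskTier.PROHIBITED
    if use_case in HIGH_RISK_AREAS:
        return RiskTier.HIGH
    if use_case in TRANSPARENCY_USES:
        return RiskTier.LIMITED
    return RiskTier.MINIMAL

print(triage("employment"))  # RiskTier.HIGH
```

Checking the most restrictive tiers first mirrors the structure of the Act itself: a system that falls under a prohibited practice is banned regardless of any other classification it might also satisfy.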

The high cost of non-compliance: Penalties issued for AI under existing laws

Organizations are increasingly investing in AI tools and systems, but these systems can cause major harm if appropriate business practices and safeguards are not put in place. Many AI systems fall within the scope of existing laws, and the cost of non-compliance can be very high if an organization is sanctioned. This blog post explores some of the penalties that have been issued against AI systems under existing laws. The majority have been issued in the EU, where authorities have cracked down on the processing of data by AI systems under the GDPR. In the US, multiple regulators have taken action against unlawful AI use under existing laws, and China has begun enforcing its recently enacted AI-specific legislation. For those developing and deploying AI, ensuring compliance with both new and existing laws is essential to avoid legal action and heavy penalties.

Securing the Digital Realm from Foreign Actors through Data Rules: Unlocking Biden's Executive Order on Personal Data and National Security

President Biden has issued the Executive Order on Preventing Access to Americans' Bulk Sensitive Personal Data and United States Government-Related Data by Countries of Concern. The order responds to mounting cybersecurity, national security, and privacy threats as adversaries seek unauthorized access to vast stores of sensitive personal and government data. It imposes prohibitions or limitations on transactions involving the processing and exploitation of sensitive data by foreign adversaries, affecting the development, monitoring, and deployment of AI systems that depend on processing bulk sensitive personal data and government-related data. The order also tasks key officials with formulating recommendations to identify, evaluate, and neutralize national security threats resulting from past data transfers, including healthcare data and human 'omic data.