March 2024
The growing use of Artificial Intelligence (AI) models, particularly Large Language Models (LLMs), has significant environmental implications because of the enormous amounts of energy their computation requires. Emissions from the IT sector, including data centers, cryptocurrency, and AI, are projected to rise sharply after 2023, with AI alone expected to consume as much energy as a country like Argentina or the Netherlands by 2027. Chip manufacturing, the training phase, and the live computing LLMs perform to generate predictions or responses (inference) all contribute substantially to this environmental impact. Mitigating AI's high energy usage is a growing concern that society, manufacturers, developers, and policymakers must work together to address.
Singapore aims to boost its AI capabilities and become a global leader in AI advancements, focusing on three main areas: Activity Drivers, People and Communities, and Infrastructure and Environment. The government will allocate SG$1 billion (about US$743 million) over the next five years to foster AI growth, attract top talent, and strengthen AI infrastructure and governance frameworks. The strategy includes initiatives to support industry, government, and research; AI talent acquisition and upskilling; and the creation of physical spaces for AI activities. Singapore also aims to establish a trusted environment for AI by institutionalizing governance and security frameworks.
The EU AI Act is the first comprehensive legal framework governing AI use across different applications, taking a risk-based approach to different AI systems. It applies to entities based in the EU as well as organizations that employ AI in interactions with EU residents. AI systems are classified as prohibited, high-risk, limited-risk, or minimal-risk, with general-purpose AI (GPAI) models subject to further assessment and separate obligations. High-risk AI systems face design-related requirements, while limited-risk AI systems carry transparency obligations. Non-compliance with the Act carries significant penalties, so it is crucial for organizations to determine their system's classification and establish a risk management framework to prepare for the Act.
Organizations are increasingly investing in AI tools and systems, but the risks associated with them can cause major harm if appropriate business practices and safeguards are not put in place. Many AI systems fall within the scope of existing laws, and the cost of non-compliance can be very high if an organization is sanctioned, making compliance with both existing and AI-specific laws essential for those developing and deploying AI. This blog post explores some of the penalties issued against AI systems under existing laws. The majority have been issued in the EU, where authorities have cracked down on the processing of data by AI systems under the GDPR. In the US, multiple regulators have taken action against illegal AI use under existing laws, and China has begun policing AI misuse following the recent enactment of several AI-related laws. Ensuring compliance with both new and existing laws is essential to avoid legal action and heavy penalties.
President Biden has issued the Executive Order on Preventing Access to Americans' Bulk Sensitive Personal Data and United States Government-Related Data by Countries of Concern. The order addresses mounting cybersecurity, national security, and privacy threats facing the US as adversaries seek unauthorized access to vast stores of sensitive personal and governmental data. It imposes prohibitions or limitations on transactions involving the processing and exploitation of sensitive data by foreign adversaries, affecting the development, monitoring, and deployment of AI systems that depend on processing bulk sensitive personal data and government-related data. The order also tasks key officials with formulating recommendations to identify, evaluate, and mitigate national security threats arising from past data transfers, including healthcare data and human 'omic data.