March 2024

The EU AI Act imposes distinct and stringent obligations on providers of general-purpose AI (GPAI) models because of their adaptability and potential for systemic risk. GPAI models are defined by their broad functionality and their ability to perform a wide range of tasks without domain-specific tuning. GPAI models with high-impact capabilities are designated GPAI models with systemic risk (GPAISR) and are subject to additional obligations covering risk management and cybersecurity. The Act provides exemptions for models released under free and open-source licenses, and GPAISR providers may rely on codes of practice to demonstrate compliance until harmonized EU standards are established. The rules on GPAI models are expected to apply 12 months after the Act enters into force.

The European Parliament has approved the EU AI Act, but the text still requires formal approval from the Council of the European Union. Once adopted, the Act will be published in the Official Journal of the EU before it becomes enforceable. Its provisions will apply in phases, with some likely to take effect before the end of this year, so businesses should start preparing for enforcement now.

The growing use of Artificial Intelligence (AI) models, particularly Large Language Models (LLMs), has significant environmental implications because of the large amounts of energy their computing requires. Emissions from the IT sector, including data centers, cryptocurrency, and AI, are projected to rise sharply after 2023, with AI alone expected to consume as much energy as a country the size of Argentina or the Netherlands by 2027. Chip manufacturing, the training phase, and inference, the live computing LLMs perform to generate predictions or responses, all contribute significantly to this footprint. Mitigating AI's high energy usage is a growing concern that society, manufacturers, developers, and policymakers must address together.

Singapore aims to boost its AI capabilities and become a global leader in AI, focusing on three main areas: Activity Drivers, People and Communities, and Infrastructure and Environment. The government will allocate SG$1 billion (about US$743 million) over the next five years to foster AI growth, attract top talent, and strengthen AI infrastructure and governance frameworks. The strategy includes initiatives supporting industry, government, and research; AI talent acquisition and upskilling; and the creation of physical space for AI activities. Singapore also aims to establish a trusted environment for AI by institutionalizing governance and security frameworks.

The EU AI Act is the first comprehensive legal framework governing the use of AI across applications, taking a risk-based approach to different AI systems. It applies both to entities based in the EU and to organizations that use AI in interactions with EU residents. AI systems are classified as prohibited, high-risk, limited-risk, or minimal-risk, with general-purpose AI (GPAI) models subject to a separate assessment and distinct obligations. High-risk AI systems face design-related requirements, while limited-risk systems carry transparency obligations. Non-compliance with the Act carries significant penalties, so it is crucial for organizations to determine how their systems are classified and to establish a risk management framework in preparation for the Act.