July 2024

Do not pass go: European Commission’s investigations into monopolies under the Digital Markets Act

Europe is leading the way in regulating digital platforms with its trio of laws: the Digital Markets Act (DMA), the Digital Services Act (DSA), and the EU AI Act. The DMA and DSA, both already in effect, aim to keep digital markets competitive and fair and to prevent monopolies, while the EU AI Act imposes stringent obligations on high-risk AI systems. Gatekeepers that fail to comply with the DMA's rules risk hefty fines of up to 10% of their total worldwide annual turnover, rising to 20% for repeated infringements. The European Commission has already initiated compliance investigations into the designated gatekeepers Alphabet, Amazon, Apple, ByteDance, Meta, and Microsoft, and has opened noncompliance investigations into Alphabet, Apple, and Meta over concerns that they are breaching the DMA's rules. Apple is reportedly withholding the release of Apple Intelligence in the EU, citing concerns over DMA compliance.
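For a rough sense of how those fine caps scale, here is a minimal sketch, assuming a hypothetical gatekeeper and a made-up turnover figure; the function name and numbers are illustrative, and actual fines are set case by case and can be far lower than these ceilings.

```python
def max_dma_fine(worldwide_annual_turnover_eur: float,
                 repeated_infringement: bool = False) -> float:
    """Statutory cap on a DMA fine: 10% of total worldwide annual turnover,
    rising to 20% for repeated infringements. Real fines are decided case
    by case and may be well below these ceilings."""
    cap = 0.20 if repeated_infringement else 0.10
    return cap * worldwide_annual_turnover_eur


# Hypothetical gatekeeper with EUR 200 billion in worldwide annual turnover.
turnover_eur = 200e9
print(f"First infringement cap:    EUR {max_dma_fine(turnover_eur):,.0f}")
print(f"Repeated infringement cap: EUR {max_dma_fine(turnover_eur, True):,.0f}")
```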

What you need to know about the Framework Convention on AI and Human Rights, Democracy and the Rule of Law

The EU AI Act is not the first initiative to regulate AI. In 2019, the Council of Europe (CoE) established the Ad Hoc Committee on Artificial Intelligence, later known as the Committee on Artificial Intelligence (CAI). The CAI presented the Draft Framework Convention on Artificial Intelligence, Human Rights, Democracy, and the Rule of Law, which the CoE's Committee of Ministers officially adopted as the Framework Convention on Artificial Intelligence and Human Rights, Democracy and the Rule of Law (FCAI). The FCAI establishes a global legal framework for AI governance, setting out general principles and rules for regulating AI activities. It is intended to address the risks that AI systems pose to human rights, democracy, and the rule of law. The FCAI introduces seven fundamental principles that must be observed throughout the lifecycle of AI systems and requires Contracting States to establish measures to identify, assess, prevent, and mitigate the risks associated with AI systems. The FCAI opens for signature on September 5, 2024, and will enter into force once the remaining procedural steps are completed.

The EU AI Act Published in the Official Journal of the EU

The EU AI Act, which categorizes AI systems into risk levels and attaches specific requirements and obligations to each, has been published in the Official Journal of the EU. Its implementation will be phased: the provisions on prohibited practices take effect six months after the Act enters into force, with the remaining provisions phased in over the following months and years. Organizations should begin preparing for compliance now, and Holistic AI offers assistance in preparing for the AI Act.

Unveiling the Curtain of AI: AI Act and Transparency

Transparency is a key principle in frameworks designed to ensure the safe and reliable development of AI. The EU AI Act takes a multi-pronged approach, tailoring transparency requirements to different kinds of AI systems: high-risk AI systems face stringent transparency obligations, and additional obligations apply to systems with certain functions, such as those that interact directly with individuals. Because the Act also imposes severe monetary penalties for non-compliance, companies must begin preparing for it early.

How can AI work towards an effective and just “green transition” in the mining industry?

The "green transition" towards renewable energy and industrial processes will require a significant increase in the consumption of metals and minerals, which will involve expanding mining operations. The article explores the use of AI to extract valuable resources and mitigate the negative impacts of mining on local communities. AI can aid companies and governments in monitoring mine site violations, issues in mineral processing, and raw material supply chains. An interactive Multi Objective Optimization (iMOO) approach that captures the linkages between investment decisions and environmental, social, and economic outcomes is being developed and piloted. By including outcomes along environmental and social criteria, iMOO can promote more sustainable decision making, while allowing for the consideration of complex data and its linkages. However, information equality and the transparency of those results and access to these tools for companies, communities, and governments is critical. Computing the Pareto fronts that include social and environmental sustainability criteria helps communication among decision-makers and can help in addressing ethical concerns around AI use.