July 2024

The EU's AI Act and Digital Markets Act have the potential to positively influence competition dynamics by promoting transparency, accountability, and fair competition. AI technologies can benefit market competition by enhancing efficiency and innovation, providing deeper insights and personalization, disrupting established markets, and enabling optimized pricing strategies. However, AI can also harm competition by increasing market concentration, enabling algorithmic collusion, facilitating abuse of dominance, and erecting barriers to entry for smaller firms. The AI Act's transparency and risk assessment requirements for AI systems could help reduce concerns about market concentration, while its provisions on information sharing with competition authorities could bolster competition law enforcement. The Digital Markets Act's rules on self-preferencing, data usage, and access rights for business users and third parties may prevent technology giants from gaining unfair competitive advantages from AI technologies. The EU is also investigating competition in virtual worlds and generative AI systems, which may supplement efforts to apply EU competition rules in AI-related contexts. Together, these regulatory frameworks underscore the importance of prioritizing AI Act readiness to mitigate AI's potential negative impacts on competition dynamics.

The EU AI Act aims to establish a trustworthy environment for AI within the EU market and employs a risk-based approach in which obligations are proportionate to the risks posed by AI systems. The Act includes specific measures for small and medium-sized enterprises (SMEs) and start-ups to support innovation among emerging players, such as free access to regulatory sandboxes and lighter documentation requirements. It requires that conformity assessment fees be reduced in proportion to the size and market share of SMEs, and that guidelines and codes of conduct take the interests and needs of SMEs into account. The Act also emphasizes SME participation in its governance structures and processes to ensure their views and interests are represented. Overall, compliance with the AI Act will be costly, but SMEs will benefit from initiatives and support designed to reduce the financial burden.

Europe is leading the way in regulating digital platforms with its trio of laws: the Digital Markets Act (DMA), the Digital Services Act (DSA), and the EU AI Act. The DMA and DSA are already in effect and aim to keep digital markets competitive and fair and to prevent monopolies, while the EU AI Act imposes stringent obligations on high-risk AI systems. Gatekeepers that fail to comply with the DMA's rules risk hefty fines of up to 10% of their total worldwide annual turnover, or up to 20% for repeated infringements. The European Commission has designated Alphabet, Amazon, Apple, ByteDance, Meta, and Microsoft as gatekeepers and has already launched non-compliance investigations into Alphabet, Apple, and Meta over concerns that they are breaching the DMA's rules. Apple is reportedly withholding the release of Apple Intelligence in the EU due to concerns over DMA compliance.

The EU AI Act is not the first initiative to regulate AI: in 2019, the Council of Europe (CoE) established the Ad Hoc Committee on Artificial Intelligence, later reconstituted as the Committee on Artificial Intelligence (CAI). The CAI prepared the Draft Framework Convention on Artificial Intelligence, Human Rights, Democracy, and the Rule of Law, which was officially adopted by the CoE's Committee of Ministers as the Framework Convention on Artificial Intelligence and Human Rights, Democracy and the Rule of Law (FCAI). The FCAI establishes a global legal framework for AI governance, outlining general principles and rules for regulating AI activities, and is intended to address the risks AI systems pose to human rights, democracy, and the rule of law. It introduces seven fundamental principles that must be observed throughout the lifecycle of AI systems and requires Contracting States to establish measures to identify, assess, prevent, and mitigate risks associated with AI systems. The FCAI will become effective once the remaining procedural steps are finalized; it opens for signature on 5 September 2024.

The EU AI Act, which categorizes AI systems into various risk levels and outlines specific requirements and obligations for each, has been published in the Official Journal of the EU. The Act's implementation will be phased, with provisions concerning prohibited practices taking effect six months after the Act's entry into force. Early compliance preparation is therefore essential, and Holistic AI offers assistance in getting ready for the AI Act.