July 2024

What you need to know about the Framework Convention on Artificial Intelligence and Human Rights, Democracy and the Rule of Law

The EU AI Act is not the first initiative to regulate AI. In 2019, the Council of Europe (CoE) established the Ad Hoc Committee on Artificial Intelligence, later reconstituted as the Committee on Artificial Intelligence (CAI). The CAI prepared the Draft Framework Convention on Artificial Intelligence, Human Rights, Democracy, and the Rule of Law, which the Committee of Ministers of the CoE officially adopted as the Framework Convention on Artificial Intelligence and Human Rights, Democracy and the Rule of Law (FCAI). The FCAI establishes a global legal framework for AI governance, setting out general principles and rules for regulating AI activities and addressing the risks that AI systems pose to human rights, democracy, and the rule of law. It introduces seven fundamental principles that must be observed throughout the lifecycle of AI systems and requires Contracting States to establish measures to identify, assess, prevent, and mitigate risks associated with AI systems. The FCAI will enter into force once certain procedural steps are finalized; it opens for signature on September 5th, 2024.

The EU AI Act Published in the Official Journal of the EU

The EU AI Act, which categorizes AI systems into risk levels and imposes corresponding requirements and obligations, has been published in the Official Journal of the EU. The Act's provisions will become applicable in stages, with the prohibitions on certain AI practices taking effect six months after entry into force. Organizations should begin preparing for compliance, and resources are available to help them understand and navigate the Act's provisions.

Unveiling the Curtain of AI: AI Act and Transparency

Transparency is a key principle in frameworks designed to ensure the safe and reliable development of AI. The EU AI Act takes a multi-pronged approach to transparency, imposing stringent obligations on high-risk AI systems and additional, targeted obligations on AI systems with certain functions, such as those that interact directly with individuals. The Act also provides for severe monetary penalties for non-compliance, so companies should prepare for compliance early.

Conformity Assessments in the EU AI Act: What You Need to Know

The EU AI Act introduces a risk-based regulatory framework for AI governance and mandates conformity assessments for high-risk AI systems. Providers may choose between internal and external assessment, although external assessment is mandatory under certain conditions. Conformity assessments are accompanied by related obligations, including issuance of a certificate, a declaration of conformity, CE marking, and registration in the EU database. If a high-risk AI system becomes non-compliant after being placed on the market, corrective actions must be taken. The Commission may adopt delegated acts concerning conformity assessments. Holistic AI can help enterprises adapt to and comply with AI regulation.

June 2024

Decoding the scope of the EU AI Act: How to identify if your organization is a covered entity

The EU AI Act will soon come into effect, and entities need to understand whether they fall within the scope of the law and how it applies to them. The first step is to determine whether their AI system falls under the definitions and prohibitions set out in the Act. Entities must also determine their role in the market, assess the Act's geographical scope, and take note of the key enforcement dates. Compliance is crucial, as non-compliance may result in harsh penalties. Holistic AI's expert team can support businesses in achieving compliance.