July 2024

What you need to know about the Framework Convention on Artificial Intelligence and Human Rights, Democracy and the Rule of Law

The EU AI Act is not the only initiative to regulate AI. In 2019, the Council of Europe (CoE) established the Ad Hoc Committee on Artificial Intelligence, later succeeded by the Committee on Artificial Intelligence (CAI). The CAI prepared the Draft Framework Convention on Artificial Intelligence, Human Rights, Democracy, and the Rule of Law, which was officially adopted by the CoE's Committee of Ministers as the Framework Convention on Artificial Intelligence and Human Rights, Democracy and the Rule of Law (FCAI). The FCAI establishes a global legal framework for AI governance, outlining general principles and rules for regulating AI activities. It is intended to address the risks associated with AI systems, particularly concerning human rights, democracy, and the rule of law. The FCAI introduces seven fundamental principles that must be observed throughout the lifecycle of AI systems and requires Contracting States to establish measures to identify, assess, prevent, and mitigate risks associated with AI systems. The FCAI will become effective after certain procedural steps are finalized, opening for signature on 5 September 2024.

The EU AI Act Published in the Official Journal of the EU

The EU AI Act, which categorizes AI systems into different risk levels and imposes corresponding requirements and obligations, was published in the Official Journal of the EU. The Act's provisions will be phased in over time, with the prohibitions on certain AI practices applying six months after entry into force. Organizations should prepare for compliance now, and resources are available to help them understand and navigate the Act's provisions.

Unveiling the Curtain of AI: AI Act and Transparency

Transparency is a key principle in frameworks designed to ensure the safe and reliable development of AI. The EU AI Act takes a multi-pronged approach to transparency, tailoring requirements to different types of AI systems: high-risk AI systems face stringent transparency obligations, and specific obligations also apply to AI systems with certain functions, such as direct interaction with individuals. The AI Act backs these requirements with severe monetary penalties for non-compliance, so companies should begin preparing for compliance early.

How can AI work towards an effective and just “green transition” in the mining industry?

The "green transition" towards renewable energy and industrial processes will require a significant increase in the consumption of metals and minerals, which will in turn require expanding mining operations. The article explores how AI can be used both to extract valuable resources and to mitigate the negative impacts of mining on local communities. AI can aid companies and governments in monitoring mine-site violations, issues in mineral processing, and raw-material supply chains. An interactive Multi-Objective Optimization (iMOO) approach that captures the linkages between investment decisions and environmental, social, and economic outcomes is being developed and piloted. By including outcomes along environmental and social criteria, iMOO can promote more sustainable decision making while accommodating complex, interlinked data. However, information equality, the transparency of results, and access to these tools for companies, communities, and governments are critical. Computing Pareto fronts that include social and environmental sustainability criteria supports communication among decision-makers and can help address ethical concerns around AI use.
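To make the Pareto-front idea concrete, the sketch below computes the non-dominated set over a handful of hypothetical mine-planning options, each scored on three objectives to be minimized (cost, environmental impact, social disruption). The plan names and figures are invented for illustration and are not taken from the article or the iMOO pilot.

```python
# Minimal sketch of Pareto-front computation over hypothetical mine-planning
# options. All plan names and scores are illustrative, not real data.

def dominates(a, b):
    """Plan a dominates plan b if a is no worse on every objective and
    strictly better on at least one (all objectives are minimized)."""
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

def pareto_front(options):
    """Return the options not dominated by any other option."""
    return [o for o in options
            if not any(dominates(other, o) for other in options if other is not o)]

# Hypothetical plans: (cost in $M, environmental impact score, social disruption score)
plans = {
    "expand_pit":      (120, 8, 7),
    "underground":     (200, 4, 3),
    "reprocess_waste": (90, 5, 4),
    "status_quo":      (60, 9, 6),
}

front = pareto_front(list(plans.values()))
# "expand_pit" is dominated by "reprocess_waste" (cheaper and lower impact on
# every axis), so it drops out; the remaining three plans form the trade-off
# frontier that decision-makers would then deliberate over.
```

Presenting only the frontier, rather than a single "optimal" answer, is what lets environmental and social criteria stay visible in the negotiation instead of being collapsed into one weighted score.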

Conformity Assessments in the EU AI Act: What You Need to Know

The EU AI Act introduces a risk-based regulatory framework for AI governance and mandates conformity assessments for high-risk AI systems. Providers may choose between internal and external assessment, but external assessment is mandatory under certain conditions. Conformity assessments are accompanied by further obligations, such as the issuance of a certificate, a declaration of conformity, CE marking, and registration in the EU database. If a high-risk AI system becomes non-compliant after being placed on the market, corrective actions must be taken. The Commission may also introduce delegated acts concerning conformity assessments. Holistic AI can help enterprises adapt to and comply with AI regulation.