May 2024

AI and IP at the Crossroads: How does the EU AI Act Approach Copyright Law?

The development of artificial intelligence (AI) technologies relies heavily on data, and the use of copyrighted materials to train and test AI systems can raise copyright infringement concerns. The EU AI Act introduces no new exceptions to existing copyright law but imposes obligations on providers of general-purpose AI models to comply with it. These obligations include establishing copyright policies and making publicly available sufficiently detailed summaries of the data used to train the model. The AI Office is tasked with developing guidelines and monitoring compliance with these copyright obligations. EU copyright law already provides several exceptions to copyright protection, including the text and data mining exceptions under the Digital Single Market Directive. Still, non-compliant operators may face harsh penalties under the Act as well as copyright claims under EU copyright law.

Why are copyright laws relevant to AI?

Generative artificial intelligence (GenAI) has raised numerous legal questions, particularly in copyright law, as GenAI systems become increasingly sophisticated at replicating and generating content. The use of copyrighted material to train AI models is a central concern, and lawsuits are proliferating as a result. Several countries have laws that could potentially permit the use of copyrighted materials to train AI. The ongoing legal battles confront unresolved questions: whether training a model on copyrighted material requires a license, whether generative AI output infringes the copyright of the materials the model was trained on, and who bears liability for copyright infringement arising from generative AI. These lawsuits underscore the complex and evolving nature of copyright law in the era of AI, and organizations need to stay on top of developments to remain compliant in a fast-moving legal landscape.

Colorado passes law enacting consumer protections for AI

Colorado has passed SB24-205, a law protecting consumers in their interactions with AI systems that regulates high-risk AI systems and aims to prevent algorithmic discrimination. Developers of high-risk systems must use reasonable care to prevent algorithmic discrimination, including by disclosing information and conducting impact assessments. Deployers of high-risk systems must implement risk management programs, conduct impact assessments, and give consumers the opportunity to correct inaccurate data. The law applies to any person doing business in Colorado, and enforcement authority rests exclusively with the Attorney General. It provides an affirmative defense for developers or deployers that comply with a nationally or internationally recognized AI risk management framework. The law does not itself specify penalties for violations. Compliance efforts can be supported with Holistic AI's Governance Platform.

Towards International Cooperation on Responsible AI

Governments around the world are increasingly coming together to address the challenges posed by the development of AI systems, with safety, security, trustworthiness, and responsible development at the forefront of these efforts. Notable international developments include the US-UK partnership to develop tests for advanced AI models, the United Nations General Assembly's adoption of its first landmark resolution on AI, the upcoming AI Seoul Summit co-hosted by the UK and the Republic of Korea, joint international guidance on the secure deployment of AI systems, updates to the EU-US Terminology and Taxonomy for AI, and the US-China agreement to develop a framework for the responsible development of AI. Together, these efforts underscore the growing importance of compliance in AI governance.

NIST’s AI Risk Management Framework Playbook: A Deep Dive

Enterprises are turning to voluntary frameworks, such as the National Institute of Standards and Technology (NIST) AI Risk Management Framework (AI RMF), to reduce the legal, reputational, and financial risks of their AI deployments. The AI RMF is a flexible framework that helps organizations using AI to manage the associated risks through four core functions: Govern, Map, Measure, and Manage. The Playbook serves as a practical companion to the AI RMF, offering actionable and adaptable guidance for each function, including suggested actions and recommended transparency and documentation practices. The Govern function is cross-cutting and underpins successful AI risk management, informing how the Map, Measure, and Manage functions are carried out. Prioritizing AI governance through risk management frameworks such as the AI RMF can increase trust in and enhance the ROI of AI systems.