About

The Holistic AI Tracker was created to fill a gap in the market for an all-in-one database on AI developments. The Tracker is a central repository for global AI legislation, regulations, legal actions, penalties, incidents, and more, containing almost 3,000 events, with the number growing daily. We actively monitor cutting-edge developments in this fast-evolving field to help professionals, researchers, policy specialists, and technical teams better understand the AI risk landscape, navigate compliance requirements across different use cases and jurisdictions with confidence, strengthen trust, and gain a competitive advantage.

The Holistic AI Tracker comprises two main modules: the Atlas and the Feed. The Atlas is an interactive heat-mapped resource that allows you to navigate developments by jurisdiction. Zero in on specific countries – or, in the case of the US, individual states – track the passage of selected bills, and filter all events by legislation status and sector. The Feed contains insights authored by Holistic AI's policy team as well as a community of experts in the AI field, helping you stay on top of the latest policy developments across industries and markets.

By providing immediate awareness of regulatory changes and developments, the Holistic AI Tracker enables organizations to swiftly adjust compliance strategies and respond to new challenges in a dynamic environment, ensuring they stay up to date with the evolving global AI governance landscape.

Key Terms & Definitions

The Atlas covers eight distinct categories spanning various aspects of AI governance, ranging from legislative actions to specific incidents involving AI systems.

  • Legislation encompasses both draft and enacted laws aimed at regulating AI and related technologies, providing a legal framework for their development and use.
  • Guidance refers to non-binding advice or recommendations that offer implementation strategies and national policies for ethical AI systems, helping entities align with best practices.
  • Standards consist of guidelines, principles, and technical specifications set by Standards Development Organizations (SDOs), designed to ensure the safe, efficient, and interoperable deployment and management of AI systems.
  • Regulations detail specific statutory requirements issued by regulatory bodies to enforce compliance in the AI sector.
  • Legal Actions cover disputes involving AI technologies, such as lawsuits, class actions, and complaints, reflecting the legal challenges that arise in this rapidly evolving field.
  • Penalties are sanctions imposed on entities for violating AI-related regulations, serving as consequences for non-compliance.
  • Incidents document events where the development, use, or malfunction of an AI system or algorithm had a negative impact, even if no actual harm resulted, highlighting the risks associated with AI implementation. Harm can be reputational, financial, physical, psychological, environmental, or related to human rights, and impacts are assessed across verticals including bias, privacy, efficacy, copyright infringement, robustness, transparency, explainability, abuse, psychological harm, physical safety, and research.
  • Investigations are launched by enforcement bodies and regulators to gather evidence following suspicions of non-compliance, and can lead to legal actions and/or penalties.

These categories collectively provide a comprehensive overview of the regulatory landscape and the various governance mechanisms in place to oversee AI technologies.
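
To make the taxonomy concrete, the sketch below models a tracked event as a simple data type. This is purely illustrative: the Tracker does not publish a data model, and every name and field here (AtlasCategory, TrackerEvent, jurisdiction, status, sector) is an assumption based on the categories and Atlas filters described above, not the actual schema.

```typescript
// Hypothetical sketch only; names and fields are illustrative assumptions.

// The eight Atlas categories described above.
type AtlasCategory =
  | "Legislation"
  | "Guidance"
  | "Standards"
  | "Regulations"
  | "Legal Actions"
  | "Penalties"
  | "Incidents"
  | "Investigations";

// One tracked event, with the filter dimensions the Atlas exposes
// (jurisdiction, legislation status, sector).
interface TrackerEvent {
  category: AtlasCategory;
  jurisdiction: string;   // e.g. a country or, for the US, an individual state
  status?: string;        // e.g. "Draft" or "Enacted" for legislation
  sector?: string;
  title: string;
  summary: string;
  date: string;           // ISO 8601 date of the development
}

// Example of narrowing events the way an Atlas category filter would.
function byCategory(events: TrackerEvent[], category: AtlasCategory): TrackerEvent[] {
  return events.filter((e) => e.category === category);
}
```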

Source Types

The Atlas draws on a wide range of source types to ensure a comprehensive and multi-faceted view of AI governance and its implications. These include congressional records, the Federal Register, and legislature databases as the most reliable sources on legislation, as well as reputable legal and news outlets for the most accurate and up-to-date information on AI incidents. This diverse array of sources allows the Atlas to present a well-rounded and informed analysis of the evolving landscape of AI governance. Further, the Holistic AI policy team sits on many working groups across organizations such as the OECD, and we are part of NIST's AI Safety Institute Consortium, advancing efforts for trustworthy AI.

Inclusion Criteria

Our inclusion criteria for the Holistic AI Tracker focus on legislation and regulations that impose obligations on AI systems and related technologies, such as automated decision systems and automated decision tools. The Tracker includes measures aimed at directly regulating AI to reduce risks. However, it does not cover events that merely mention AI without directly regulating its use, such as AI applications in a federal cost-savings plan. Our methodology is constantly evolving. If you believe we're missing information or have any suggestions for us, please submit feedback or email us at tracker@holistiai.com.