January 2024

Traversing AI regulation’s international terrain: Understanding the Council of Europe’s Draft Framework Convention

The Council of Europe has published a Draft Framework Convention on Artificial Intelligence, Human Rights, Democracy and the Rule of Law (DFC), which sets out principles and norms for AI aligned with human rights, democracy, and the rule of law. The Convention's primary objective is to ensure that AI systems uphold these values throughout their entire lifecycle. The DFC aligns with the OECD by adopting the same definition of “AI system,” a significant step toward clarity and consistency in international AI discourse and regulation.

Rather than classifying specific uses of AI systems as prohibited or high-risk, the Framework Convention handles this issue at the level of scope: it covers all AI systems “that have potential to interfere with human rights, democracy, and the rule of law” and requires appropriate risk assessment and mitigation measures for all of them. The DFC does not prescribe particular oversight mechanisms, but it does require each party to establish or designate at least one effective mechanism to oversee compliance. Enforcement of the DFC is multifaceted, combining national implementation, international cooperation, and a follow-up mechanism for oversight and consultation.

How are California and Florida leading the race to regulate AI in 2024?

2023 saw major developments in AI regulation, including political agreement on the EU AI Act, the start of enforcement of New York City Local Law 144, and Colorado's SB-169. In 2024, several US states, including California and Florida, are introducing new laws to regulate AI and increase transparency. With the EU AI Act expected to be finalized this year, compliance with AI regulations is becoming increasingly important. Holistic AI's Global Tracker can help organizations stay up to date on AI initiatives worldwide and gain a competitive advantage. This article is for informational purposes only and does not provide legal advice.

The State of AI Regulations in 2024

In 2024, the EU AI Act is expected to be formally adopted and risk management frameworks will be put in place. In the US, regulators and case law will increasingly target companies whose systems result in algorithmic discrimination or that knowingly rely on flawed data. To help businesses navigate this complex landscape, an ebook, "The State of Global AI Regulations in 2024", has been created. It provides an overview of current global AI regulations, their potential impact on businesses, and where they may be headed next, and can be downloaded via the provided link.

December 2023

New York follows in Colorado's footsteps to regulate Insurtech

Colorado's SB-169 and New York's A08369 aim to protect consumers from unfair discrimination in insurance practices by restricting insurers' use of external consumer data, algorithms, and predictive models. Both laws define external consumer data and information sources, algorithms, and predictive models, and prohibit their use where it results in disproportionately negative outcomes for protected classifications such as race, ethnicity, and gender. They also mandate rules tailored to specific types of insurance and insurance practices, along with a risk management framework and ongoing monitoring. The superintendent or commissioner of insurance is responsible for developing these rules and conducting stakeholder consultations and investigations. Commercial policies, bonds executed by a qualified surety, and title insurance are exempt. With insurtech facing increasing regulation globally, businesses must stay compliant to gain a competitive edge.

ISO/IEC 42001:2023 – AI Standard on Establishing, Maintaining and Improving AI Management Systems Published

The International Organization for Standardization (ISO) and the International Electrotechnical Commission (IEC) have published a voluntary standard, ISO/IEC 42001, that establishes a process for implementing AI Management Systems (AIMS) in organisations. ISO/IEC 42001 is the first process standard to outline a comprehensive governance framework for the responsible and trustworthy deployment of AI systems. The standard is scalable and auditable, providing a foundation for external entities to certify and audit AI systems in line with its risk assessment framework. Policymakers behind the EU AI Act and US legislative efforts on AI may adopt the standard's objectives and modalities to guide regional standardisation efforts. Holistic AI, a regulatory compliance solutions provider, offers services to assist organisations in complying with applicable AI regulations and industry standards.