January 2024
The Federal Artificial Intelligence Risk Management Act of 2024 has been introduced in the US Congress, requiring federal agencies to comply with the Artificial Intelligence Risk Management Framework (AI RMF) developed by the National Institute of Standards and Technology (NIST). The framework, designed to help organizations prevent, detect, mitigate, and manage AI risks, is built around four core functions: govern, map, measure, and manage. The Act also directs agencies on incorporating the AI RMF, sets reporting requirements, and regulates AI acquisition. Compliance with the AI RMF may soon become a legal requirement, as several state and federal laws already draw on it.
US lawmakers are proposing requirements on the use of automated employment decision tools (AEDTs) to reduce potential risks and mitigate bias and discrimination. New York City Local Law 144 set the precedent for bias audits of AEDTs, with Pennsylvania, New Jersey, and New York State following suit with similar bills. These laws require independent, annual bias audits of AEDTs and publication of a summary of the results on the employer's or employment agency's website. They also mandate notifying candidates and employees before an AEDT is used, with varying approaches to obtaining consent.
The Council of Europe has published a Draft Framework Convention on Artificial Intelligence, Human Rights, Democracy and the Rule of Law (DFC), which sets out principles and norms for AI aligned with human rights, democracy, and the rule of law. The Convention's primary objective is to ensure that AI systems uphold these values throughout their entire lifecycle. The DFC adopts the OECD's definition of "AI system," which is significant for clarity and consistency in international AI discourse and regulation. Rather than classifying specific uses of AI as prohibited or high-risk, the DFC addresses this at the level of scope: it covers all AI systems "that have potential to interfere with human rights, democracy, and the rule of law" and requires appropriate risk assessment and mitigation measures for all of them. The DFC does not prescribe particular oversight mechanisms but requires each party to establish or designate at least one effective mechanism to oversee compliance. Enforcement is multifaceted, combining national implementation, international cooperation, and a follow-up mechanism for oversight and consultation.
2023 brought several important developments in AI law, including political agreement on the EU AI Act, the start of enforcement of New York City Local Law 144, and Colorado's SB-169. In 2024, various US states, such as California and Florida, are introducing new AI laws to regulate AI and increase transparency. The EU AI Act is expected to be finalized this year, and compliance with AI regulations is becoming increasingly important. Holistic AI's Global Tracker can help organizations stay up to date on global AI initiatives and gain a competitive advantage. This article is for informational purposes only and does not provide legal advice.
05 Jan 2024
In 2024, the EU AI Act is expected to be formally adopted and risk management frameworks will be put in place. In the US, regulators and case law will increasingly target companies whose algorithms produce discriminatory outcomes or that knowingly use flawed data. To help businesses navigate this complex landscape, an ebook, "The State of Global AI Regulations in 2024," provides an overview of current global AI regulations, their potential impact on businesses, and where they may be headed next. The ebook can be downloaded via a provided link.