January 2024

Navigating the 2024 Online Safety Landscape: A Comparative Analysis of the EU’s Digital Services Act and UK’s Online Safety Act

Online safety is becoming increasingly important, particularly given the risks posed by social media platforms. Governments are enacting legislation to mitigate these risks, including the EU's Digital Services Act (DSA) and the UK's Online Safety Act (OSA). The DSA focuses on establishing accountability and transparency, while the OSA emphasizes a "duty of care" and categorizes entities by reach and risk. The DSA is enforced by national Digital Services Coordinators with European Commission oversight, while the OSA is enforced by Ofcom. The DSA covers a broader range of digital services and illegal content, whereas the OSA takes a more tailored approach to specific online harms. Both acts provide for fines and other penalties for non-compliance, as well as audits to verify compliance. Companies must implement both proactive and reactive strategies to comply with these regulations and reduce harm on their platforms. Holistic AI offers solutions for compliance obligations, including independent audits for very large online platforms (VLOPs) and very large online search engines (VLOSEs) under the DSA.

US Federal Artificial Intelligence Risk Management Act of 2024 introduced

The Federal Artificial Intelligence Risk Management Act of 2024 has been introduced in the US Congress, requiring federal agencies to comply with the Artificial Intelligence Risk Management Framework (AI RMF) developed by the National Institute of Standards and Technology. The framework, designed to help organizations prevent, detect, mitigate, and manage AI risks, sets out four core functions: govern, map, measure, and manage. The Act also includes guidance for agencies on incorporating the AI RMF, reporting requirements, and regulations on AI acquisition. Compliance with NIST's AI Risk Management Framework may soon become a legal requirement, as several state and federal laws already draw on it.

Bias audit laws in the US: The state of play for automated employment decision tools

Lawmakers in the US are proposing laws that would impose requirements on the use of automated employment decision tools (AEDTs) to reduce potential risks and mitigate bias and discrimination. New York City Local Law 144 set the precedent for bias audits on AEDTs, with Pennsylvania, New Jersey, and New York State following suit with similar proposals. These laws require independent, annual bias audits of AEDTs and the publication of a summary of the results on the employer's or employment agency's website. They also mandate notifications to candidates and employees before an AEDT is used, with varying approaches to obtaining consent.

Traversing AI regulation’s international terrain: Understanding the Council of Europe’s Draft Framework Convention

The Council of Europe has published a Draft Framework Convention on Artificial Intelligence, Human Rights, Democracy and the Rule of Law (DFC), which sets out principles and norms for AI aligned with human rights, democracy, and the rule of law. The Convention's primary objective is to ensure that AI systems uphold these values throughout their entire lifecycle. The DFC adopts the OECD's definition of "AI system," which is significant for clarity and consistency in international AI discourse and regulation. Rather than classifying specific uses of AI systems as prohibited or high-risk, the Framework Convention handles this issue at the level of scope: it covers all AI systems "that have potential to interfere with human rights, democracy, and the rule of law" and requires appropriate risk assessment and mitigation measures for all of them. The DFC does not prescribe particular oversight mechanisms, but it requires each party to establish or designate at least one effective mechanism to oversee compliance. Enforcement of the DFC is multifaceted, combining national implementation, international cooperation, and a follow-up mechanism for oversight and consultation.

How are California and Florida leading the race to regulate AI in 2024?

Several important AI laws reached major milestones in 2023, including the EU AI Act, New York City Local Law 144, and Colorado's SB21-169. In 2024, various US states, such as California and Florida, are introducing new AI laws to regulate AI and increase transparency. The EU AI Act is expected to be finalized this year, and compliance with AI regulations is becoming increasingly important. Holistic AI's Global Tracker can help organizations stay up-to-date on global AI initiatives and gain a competitive advantage. This article is for informational purposes only and does not provide legal advice.