January 2024
The European Commission has announced the creation of the European Artificial Intelligence Office (AI Office), which will play a key role in implementing and enforcing the forthcoming AI Act. The office will sit within the Commission's DG CNECT and will be financed by the Digital Europe Programme. Through the Commission and the AI Office, the EU is expected to promote early voluntary compliance with the AI Act, which is likely to come into force in the coming months.
Generative AI sits at the peak of the AI hype cycle and is being applied across a wide range of use cases. Because generative AI models require vast amounts of training data, often scraped from the internet, copyright lawsuits have surged, challenging the legality of training these models on copyrighted content such as books. The article summarizes key lawsuits from the past six months and predicts that similar suits will continue to emerge. It also presents Holistic AI as a solution for keeping track of AI regulations and lawsuits, and its disclaimer notes that the article is for informational purposes only and does not constitute legal advice.
Online safety is becoming increasingly important, particularly given the risks and hazards on social media platforms. Governments are enacting legislation to mitigate these risks, such as the EU's Digital Services Act (DSA) and the UK's Online Safety Act (OSA). The DSA focuses on establishing accountability and transparency, while the OSA emphasizes a "duty of care" and categorizes entities by reach and risk. The DSA is enforced by Digital Services Coordinators with European Commission oversight, while the OSA is enforced by Ofcom. The DSA covers digital services and illegal content more broadly, whereas the OSA takes a more tailored approach to specific online harms. Both acts provide for fines and penalties for non-compliance, along with audits to verify compliance. Companies must implement both proactive and reactive strategies to comply with these regulations and reduce harm on their platforms. Holistic AI offers solutions for meeting these compliance obligations, including independent audits for very large online platforms (VLOPs) and very large online search engines (VLOSEs) under the DSA.
The Federal Artificial Intelligence Risk Management Act of 2024 has been introduced in the US Congress, requiring federal agencies to comply with the Artificial Intelligence Risk Management Framework (AI RMF) developed by the National Institute of Standards and Technology (NIST). The framework, designed to help organizations prevent, detect, mitigate, and manage AI risks, is structured around four core functions: govern, map, measure, and manage. The Act also includes guidance for agencies on incorporating the AI RMF, reporting requirements, and regulations on AI acquisition. Compliance with NIST's AI RMF may soon become a legal requirement, as several state and federal laws already draw on it.
Lawmakers in the US are proposing laws that impose requirements on the use of automated employment decision tools (AEDTs) to reduce potential risks and mitigate bias and discrimination. New York City Local Law 144 set the precedent for bias audits of AEDTs, with Pennsylvania, New Jersey, and New York State proposing similar legislation. These laws require independent, annual bias audits of AEDTs and the publication of a summary of the results on the employer's or employment agency's website. They also mandate notifying candidates and employees before an AEDT is used, with varying approaches to obtaining consent.