January 2024

The Australian government has published an interim response outlining its plans to regulate high-risk AI systems in the country. The response is guided by key principles, including a risk-based approach, collaboration and transparency, and a community-centric approach. Specific measures proposed include mandatory guardrails, testing and transparency initiatives, an AI safety standard, and funding to support AI adoption and development. The government aims to strike a balance between fostering innovation and protecting community interests, particularly privacy and security, while addressing the potential harms posed by high-risk AI systems. The response reflects Australia's commitment to responsible AI practices and international cooperation.

Various laws have been proposed at different levels of government in the US to regulate the use of AI and reduce potential harm. While many of these laws focus on sectors such as HR and insurance, there is increasing attention on the use of AI in healthcare, which requires unique considerations and policies due to the novel risks it introduces. At the federal level, proposed AI laws in healthcare include the Better Mental Health Care for Americans Act, the Health Technology Act of 2023, and the Pandemic and All-Hazards Preparedness and Response Act; at the state level, they include Illinois' Safe Patients Limit Act and Massachusetts' An Act regulating the use of artificial intelligence in providing mental health services. A Virginia law regulating the use of intelligent personal assistants by hospitals, nursing homes, and certified nursing facilities has already gone into effect. In addition, the World Health Organization has published guidelines to promote responsible AI practices in healthcare.

Autonomous vehicles, or self-driving cars, are estimated to generate between $300 billion and $400 billion in revenue by 2035, but concerns over safety persist. The National Highway Traffic Safety Administration (NHTSA) has identified five levels of autonomous capability, ranging from Momentary Driver Assistance to Full Automation. The Safely Ensuring Lives Future Deployment and Research In Vehicle Evolution (SELF DRIVE) Act, passed by the House of Representatives in 2017, paved the way for state-level legislation on autonomous vehicles. Many states have since introduced their own autonomous vehicle laws, with some taking effect in 2024. These laws set out the conditions under which an autonomous vehicle may operate, such as complying with traffic laws and submitting proof of financial responsibility. It is important to stay on top of these developments, and Holistic AI’s Global Tracker can help navigate the landscape with confidence.

The European Commission has announced the creation of the European Artificial Intelligence Office (AI Office), a key part of the forthcoming AI Act. The Office will contribute to the implementation and enforcement of the Act and will sit within the Commission's Directorate-General for Communications Networks, Content and Technology (DG CNECT). It will be financed under the Digital Europe Programme. The EU is expected to promote early voluntary compliance with the AI Act through the Commission and the AI Office, and the Act is likely to come into force in the coming months.

Generative AI is at the peak of the AI hype cycle and is being used for a wide range of applications, but copyright lawsuits have been on the rise because generative AI requires vast amounts of training data, often scraped from the internet. These lawsuits challenge the legality of training generative AI models on copyrighted content such as books. The article summarizes some of the key lawsuits from the past six months and predicts that similar ones will continue to emerge. It also points to Holistic AI as a solution for keeping track of AI regulations and lawsuits, and notes in a disclaimer that it is for informational purposes only and does not constitute legal advice.