January 2024
The Australian government has published an interim response outlining its plans to regulate high-risk AI systems in the country. The response is guided by key principles including a risk-based approach, collaboration and transparency, and a community-centric approach. Specific measures proposed include mandatory guardrails for high-risk systems, testing and transparency initiatives, an AI safety standard, and funding for initiatives to support AI adoption and development. The government aims to strike a balance between fostering innovation and protecting community interests, particularly privacy and security, while addressing potential harms caused by high-risk AI systems. The response reflects Australia's commitment to responsible AI practices and international cooperation.
November 2023
Countries have taken markedly different approaches to promoting responsible AI. The EU leads the way with its trio of laws targeting AI and algorithms, the UK has taken a light-touch approach through white papers, and the US has introduced laws at the state, federal, and local levels. China and Brazil have also passed multiple laws regulating AI. Australia's AI Ethics Framework Discussion Paper proposes eight core principles for responsible AI, but the Australian government has yet to translate them into regulatory or legal requirements. Australia's AI Action Plan outlines the government's vision to position the country as a global leader in responsible AI, yet legal action has been taken against the government over the failures of its automated debt recovery tool, Robodebt. Responsible AI practices are becoming increasingly important for avoiding harm, minimizing liability, and preventing reputational damage.