March 2024
Organizations are increasingly investing in AI tools and systems, but these investments carry risks that can cause major harm if appropriate business practices and safeguards are not put in place. Many AI systems fall within the scope of existing laws, and the cost of non-compliance can be very high if an organization is sanctioned. Compliance with both existing and AI-specific laws is therefore essential for those developing and deploying AI. This blog post explores some of the penalties that have been issued over AI systems under existing laws. The majority have been imposed in the EU, where authorities have cracked down on the processing of data by AI systems under the GDPR. The US has also taken a number of enforcement actions against unlawful AI tools under existing laws, involving multiple regulators, and China has begun pursuing AI misuse following the recent enactment of several laws relating to AI. Ensuring compliance with both new and existing laws is essential to avoid legal action and heavy penalties.
February 2024
Regulating artificial intelligence (AI) has become urgent, with countries proposing legislation to ensure responsible and safe application of AI to minimize potential harm. However, there is a lack of consensus on how to define AI, which poses a challenge for regulatory efforts. This article surveys the definitions of AI across multiple regulatory initiatives, including the ICO, EU AI Act, OECD, Canada’s Artificial Intelligence and Data Act, California’s proposed amendments, and more. While the definitions vary, they generally agree that AI systems have varying levels of autonomy, can have a variety of outputs, and require human involvement in defining objectives and providing input data.
December 2023
The US and UK have jointly published guidelines for secure AI system development, which aim to reduce AI cybersecurity risks across four areas: secure design, secure development, secure deployment, and secure operation and maintenance. The guidelines highlight the risk of adversarial machine learning and focus specifically on systems whose compromise could result in significant physical or reputational damage, disrupt business operations, or leak sensitive or confidential information. They encourage secure-by-design principles and promote the sharing of best practices. The joint publication is a step towards global cooperation, but it will take more than non-binding guidance to have a real global impact. Policymakers, lawmakers, and regulators are taking the risks of AI seriously, and AI risk management is now a competitive necessity.
November 2023
09 Nov 2023
More than a third of companies use artificial intelligence (AI) in their business practices, with an additional 42% exploring how the technology can be utilised, but there are risks involved if appropriate safeguards are not implemented, according to a blog post by Holistic AI. The potential for AI to breach existing laws has attracted the attention of regulators worldwide, with the EU AI Act aiming to become the global standard for AI regulation. Existing laws, including non-discrimination and data protection laws, are often applied to AI cases, resulting in significant penalties.
The G7 nations have unveiled International Guiding Principles on Artificial Intelligence (AI) and a voluntary Code of Conduct for AI developers, comprising 11 actionable guidelines for organisations developing advanced foundation models. The guidelines include taking appropriate measures to identify and mitigate risks across the AI lifecycle, publicly reporting AI systems’ capabilities and limitations, and prioritising research to mitigate societal, safety, and security risks. The development is particularly relevant given the urgency among policymakers worldwide to chart regulatory pathways to govern AI responsibly, as highlighted by several initiatives, including the Biden-Harris Administration’s Executive Order on AI and the establishment of the United Nations’ High-Level Advisory Body on AI.