March 2024

Organizations are increasingly investing in AI tools and systems, but without appropriate business practices and safeguards, the associated risks can cause major harm. Many AI systems fall within the scope of existing laws, and the cost of non-compliance can be very high if an organization is sanctioned. Compliance with both existing and AI-specific laws is therefore essential for those developing and deploying AI. This blog post explores some of the penalties issued over AI systems under existing laws. The majority have been issued in the EU, where authorities have cracked down on the processing of data by AI systems under the GDPR. In the US, multiple regulators have taken enforcement action against illegal AI use under existing laws, and China has begun enforcing its recently enacted AI-specific laws. Ensuring compliance with both new and existing laws is essential to avoid legal action and heavy penalties.

President Biden has issued the Executive Order on Preventing Access to Americans' Bulk Sensitive Personal Data and United States Government-Related Data by Countries of Concern. The order responds to mounting cybersecurity, national security, and privacy threats as foreign adversaries seek unauthorized access to vast stores of sensitive personal and governmental data. It prohibits or restricts transactions involving the processing and exploitation of sensitive data by countries of concern, affecting the development, monitoring, and deployment of AI systems that depend on processing bulk sensitive personal data and government-related data. The order also tasks key officials with formulating recommendations to identify, assess, and mitigate national security risks arising from past transfers of data, including healthcare data and human 'omic data.
February 2024

Artificial intelligence (AI)-driven medical devices are transforming healthcare by improving diagnostics, enabling personalized treatment plans, and supporting surgical and therapeutic procedures. Because these systems can have significant implications for an individual's health, AI used in healthcare is regulated through both sector-specific and horizontal legislation. The EU AI Act, which takes a risk-based approach to obligations for AI systems used in the European Union, will affect AI-driven medical devices by classifying some as high-risk, imposing stringent responsibilities on medical device market participants, and requiring compliance with sectoral medical device regulations. Market operators and enterprises will need to adapt their AI models and operations to meet the Act's requirements or risk penalties and reputational damage.

The use of AI in business is becoming more widespread, with about 42% of companies currently using AI in some form. While AI offers benefits such as automation, process streamlining, and personalization, it also carries risks when used without appropriate safeguards and risk management. Lawmakers are introducing legislation to codify responsible AI practices: horizontal bills that cover many AI applications, such as the Algorithmic Accountability Act and the Federal Artificial Intelligence Risk Management Act, as well as initiatives targeting specific applications, including HR tech, generative AI, and online platforms and communications. Compliance with these laws is crucial for organizations, and early preparation is the best way to achieve it.

The Federal Communications Commission (FCC) has issued a Declaratory Ruling clarifying how the Telephone Consumer Protection Act of 1991 (TCPA) applies to artificial intelligence (AI) technologies, particularly those capable of generating human-like voices. The ruling confirms that the TCPA's restrictions on the use of an "artificial or pre-recorded voice" encompass current AI technologies that simulate human voices or generate call content using pre-recorded voices. It gives the FCC authority to fine companies that use AI voices in their calls, with fines potentially exceeding $23,000 per call, and to block the service providers facilitating them. Call recipients may also pursue legal action and potentially recover up to $1,500 for each unwanted call. The FCC's decision follows a Notice of Inquiry recognizing the need to address AI's potential to emulate human speech and the new challenges this poses to existing regulatory frameworks.