March 2024
The EU AI Act is the first comprehensive legal framework governing AI, taking a risk-based approach that tailors obligations to different AI systems. Its scope covers entities based in the EU as well as organizations elsewhere whose AI systems are used in interactions with EU residents. AI systems are classified as prohibited (unacceptable risk), high-risk, limited risk, or minimal risk, with general-purpose AI (GPAI) models subject to further assessment and a separate set of obligations. High-risk AI systems face design-related requirements, while limited-risk systems carry transparency obligations. Non-compliance carries significant penalties, reaching up to €35 million or 7% of worldwide annual turnover for the most serious violations. To prepare for the Act, organizations should determine their system's classification and establish a risk management framework.
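As a loose illustration of how an organization might structure a first-pass internal triage of its systems (a minimal sketch in Python; the flags, criteria, and triage_risk_tier function are hypothetical simplifications for screening purposes, not the Act's actual legal tests, which require case-by-case legal analysis):

```python
from enum import Enum

class RiskTier(Enum):
    PROHIBITED = "prohibited (unacceptable risk)"
    HIGH = "high-risk"
    LIMITED = "limited risk (transparency obligations)"
    MINIMAL = "minimal risk"

def triage_risk_tier(uses_prohibited_practice: bool,
                     is_high_risk_use_case: bool,
                     interacts_with_natural_persons: bool) -> RiskTier:
    """Hypothetical first-pass triage; real classification under the
    Act depends on the specific system, use case, and legal review."""
    if uses_prohibited_practice:        # e.g. social scoring by public authorities
        return RiskTier.PROHIBITED
    if is_high_risk_use_case:           # e.g. employment screening, credit scoring
        return RiskTier.HIGH
    if interacts_with_natural_persons:  # e.g. chatbots -> disclosure duties
        return RiskTier.LIMITED
    return RiskTier.MINIMAL

# Example: a customer-service chatbot outside the high-risk use cases
print(triage_risk_tier(False, False, True))  # RiskTier.LIMITED
```

A screen like this can help route systems into a risk management framework early, but the output of any such helper should feed into, not replace, a formal legal assessment.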
Organizations are increasingly investing in AI tools and systems, but without appropriate business practices and safeguards, the associated risks can cause serious harm. Many AI systems already fall within the scope of existing laws, and the cost of non-compliance can be very high if an organization is sanctioned. Compliance with both existing and AI-specific laws is therefore essential for anyone developing or deploying AI. This blog post explores some of the penalties issued over AI systems under existing laws. The majority have been issued in the EU, where authorities have cracked down on the processing of personal data by AI systems under the GDPR. In the US, multiple regulators have taken enforcement actions against unlawful AI tools under existing laws. China has also begun enforcement against AI misuse following its recent enactment of several AI-related laws. Ensuring compliance with both new and existing laws is essential to avoid legal action and heavy penalties.
President Biden has issued the Executive Order on Preventing Access to Americans' Bulk Sensitive Personal Data and United States Government-Related Data by Countries of Concern. The order responds to mounting cybersecurity, national security, and privacy threats facing the US as foreign adversaries seek unauthorized access to vast stores of sensitive personal and government data. It prohibits or restricts transactions that would allow countries of concern to process and exploit such data, affecting the development, monitoring, and deployment of AI systems that depend on processing bulk sensitive personal data and government-related data. The order also tasks key officials with formulating recommendations to identify, assess, and mitigate national security threats arising from past transfers of data, including healthcare data and human 'omic data.
February 2024
Artificial intelligence (AI)-driven medical devices are transforming healthcare by enhancing diagnostics, enabling personalized treatment plans, and improving surgical procedures and therapeutic strategies. Because these systems can have significant implications for an individual's health, AI used in healthcare is regulated through both sector-specific and horizontal legislation. The EU AI Act, which takes a risk-based approach to obligations for AI systems used in the European Union, will affect AI-driven medical devices by classifying many of them as high-risk, imposing stringent obligations on medical device market participants, and requiring compliance alongside sectoral medical device regulations. Market operators and enterprises will need to adapt their AI models and operations to meet the Act's requirements or risk penalties and reputational damage.
The use of AI in business is becoming more widespread, with about 42% of companies currently using AI in some way. While AI can deliver benefits such as automation, process streamlining, and personalization, it also carries risks when used without appropriate safeguards and risk management. Laws are being introduced to codify responsible AI practices: horizontal AI legislation that spans multiple applications of AI systems, such as the Algorithmic Accountability Act and the Federal Artificial Intelligence Risk Management Act, as well as initiatives targeting specific AI applications, including HR tech, generative AI, and online platforms and communications. Compliance with these laws is crucial for organizations, and preparing early is the best way to achieve it.