February 2024
Artificial intelligence (AI)-driven medical devices are transforming healthcare by enhancing diagnostics, enabling personalized treatment regimens, and supporting surgical procedures and therapeutic strategies. Because they can significantly affect an individual's health, AI systems used in healthcare are regulated through both sector-specific and horizontal legislation. The EU AI Act, which takes a risk-based approach to obligations for AI systems used in the European Union, will affect AI-driven medical devices by classifying some as high-risk, imposing stringent responsibilities on medical device market participants, and requiring compliance with sectoral medical device regulations. Market operators and enterprises will need to adapt their AI models and operations to meet the Act's requirements or risk penalties and reputational damage.
The use of AI in businesses is becoming more widespread, with about 42% of companies currently using AI in some way. While AI can offer benefits such as automation, process streamlining, and personalization, it also carries risks if used without appropriate safeguards and risk management. Laws are being introduced to codify responsible AI practices, including horizontal AI legislation affecting multiple applications of AI systems, such as the Algorithmic Accountability Act and the Federal Artificial Intelligence Risk Management Act. There are also initiatives targeting specific AI applications, including HR tech, generative AI, and online platforms and communications. Compliance with these laws is crucial for organizations, and early preparation is the best way to ensure it.
The Federal Communications Commission (FCC) has issued a Declaratory Ruling clarifying how the Telephone Consumer Protection Act of 1991 (TCPA) applies to artificial intelligence (AI) technologies, particularly those capable of generating human-like voices. The ruling confirms that the TCPA's restrictions on the use of an "artificial or prerecorded voice" encompass current AI technologies that simulate human voices or generate call content using prerecorded voices. The ruling gives the FCC authority to fine companies that use AI-generated voices in their calls, or to block the carriers that facilitate them, with fines potentially exceeding $23,000 per call. It also allows call recipients to pursue legal action and potentially recover up to $1,500 for each unwanted call. The FCC's decision follows a Notice of Inquiry recognizing that AI's ability to emulate human speech poses new challenges to existing regulatory frameworks.
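The per-call figures above compound quickly across a calling campaign. A minimal sketch of that arithmetic, using only the two amounts cited in the summary (the call volume and the function name are hypothetical illustrations):

```python
# Rough TCPA exposure arithmetic based on the figures cited above.
# Per-call amounts come from the ruling summary; call counts are hypothetical.

FCC_FINE_PER_CALL = 23_000        # cited upper bound for FCC fines per call
PRIVATE_DAMAGES_PER_CALL = 1_500  # cited maximum recovery per unwanted call


def tcpa_exposure(num_calls: int) -> dict:
    """Return worst-case regulatory and private-action exposure for a campaign."""
    return {
        "fcc_fines": num_calls * FCC_FINE_PER_CALL,
        "private_damages": num_calls * PRIVATE_DAMAGES_PER_CALL,
    }


# A hypothetical 10,000-call robocall campaign:
print(tcpa_exposure(10_000))
# {'fcc_fines': 230000000, 'private_damages': 15000000}
```

Even a modest campaign thus produces nine-figure worst-case exposure on the regulatory side alone, which is why the ruling is seen as a significant deterrent.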
The European Union's upcoming Artificial Intelligence Act (EU AI Act) will affect the integration of AI systems into autonomous vehicles (AVs), particularly with respect to safety, by classifying them according to their risk level. AI is the technological backbone of the automation capabilities in AVs, and the EU AI Act recognizes the risks these systems pose if they fail. Because of the Act's references to Union harmonisation legislation, many AV-related AI systems will be classified as high-risk AI systems (HRAIS); however, they will primarily be governed by their sectoral legislation rather than the ordinary HRAIS requirement and obligation regime. The EU AI Act also amends the sectoral legislation applicable to AVs and requires that implementing acts adopted under that legislation take the HRAIS requirements into account. Two primary regulations apply to AVs in the EU: the Type-Approval Framework Regulation (TAFR) and the General Safety Regulation (GSR).
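The routing described above — AV systems caught by the high-risk classification via the harmonisation references, but then handed off to their sectoral regime — can be sketched as a small decision function. This is an illustrative simplification of the summary, not the Act's actual test; all names and flags are hypothetical:

```python
from dataclasses import dataclass


@dataclass
class AISystem:
    """Hypothetical, simplified model of an AI system for illustration only."""
    name: str
    covered_by_union_harmonisation: bool  # e.g. picked up via TAFR/GSR references
    is_av_component: bool


def applicable_regime(system: AISystem) -> str:
    """Sketch of the classification-and-routing logic described in the summary."""
    if not system.covered_by_union_harmonisation:
        return "not high-risk via harmonisation legislation"
    if system.is_av_component:
        # Classified as HRAIS, but governed by sectoral law (TAFR/GSR), with
        # HRAIS requirements folded into future implementing acts instead.
        return "sectoral regime (TAFR/GSR)"
    return "ordinary HRAIS regime under the EU AI Act"


print(applicable_regime(AISystem("lane-keeping model", True, True)))
# sectoral regime (TAFR/GSR)
```

The point of the sketch is the asymmetry: classification as high-risk and application of the ordinary HRAIS obligations are two separate questions for AV systems.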
The Federal Artificial Intelligence Environmental Impacts Act of 2024 has been introduced by Senator Markey, with several co-sponsors, to address concerns about the sustainability and environmental impact of AI technologies. The bill defines AI as a machine-based system that can make decisions and recommendations influencing real or virtual environments, and distinguishes it from an AI model. It has four key provisions: a study on the environmental impacts of AI, the establishment of a consortium of stakeholders, a system for voluntary reporting, and a report to Congress on findings about AI's environmental impact. The bill emphasizes transparency and accountability throughout the process and aims to promote responsible innovation in the AI sector toward a greener and more resilient future.