March 2024

Securing the Digital Realm from Foreign Actors through Data Rules: Unlocking Biden's Executive Order on Personal Data and National Security

President Biden has issued the Executive Order on Preventing Access to Americans' Bulk Sensitive Personal Data and United States Government-Related Data by Countries of Concern. The order responds to mounting cybersecurity, national security, and privacy threats as adversaries seek unauthorized access to vast stores of sensitive personal and governmental data. It imposes prohibitions or limitations on transactions that would allow foreign adversaries to process and exploit sensitive data, affecting the development, monitoring, and deployment of AI systems that depend on processing bulk sensitive personal data and government-related data. The order also tasks key officials with formulating recommendations to identify, evaluate, and mitigate national security risks arising from past transfers of data, including healthcare data and human 'omic data.

February 2024

Tech and Treatment: Comprehending the EU AI Act's Impact on Medical Devices

Artificial intelligence (AI)-driven medical devices are transforming healthcare by enhancing diagnostics, enabling personalized treatment regimens, and improving surgical procedures and therapeutic strategies. Because these systems can have significant implications for an individual's health, AI used in healthcare is regulated through both sector-specific and horizontal legislation. The EU AI Act, which takes a risk-based approach to obligations for AI systems used in the European Union, will affect AI-driven medical devices by classifying some as high-risk, imposing stringent obligations on medical device market participants, and requiring compliance alongside the sectoral regulations for medical devices. Market operators and enterprises will need to adapt their AI models and operations to meet the Act's requirements or risk penalties and reputational damage.

US Federal AI Legislation in 2024: The Current Landscape

The use of AI in business is becoming more widespread, with roughly 42% of companies reporting that they use AI in some way. While AI offers benefits such as automation, process streamlining, and personalization, it also carries risks when deployed without appropriate safeguards and risk management. Legislators are introducing laws to codify responsible AI practices: horizontal legislation that affects many applications of AI systems, such as the Algorithmic Accountability Act and the Federal Artificial Intelligence Risk Management Act, as well as initiatives targeting specific AI applications, including HR tech, generative AI, and online platforms and communications. Compliance with these laws is crucial for organizations, and preparing early is the best way to achieve it.

FCC Declares AI-Generated Robocalls Illegal: A Step Towards Consumer Protection

The Federal Communications Commission (FCC) has issued a Declaratory Ruling clarifying how the Telephone Consumer Protection Act of 1991 (TCPA) applies to artificial intelligence (AI) technologies, particularly those capable of generating human-like voices. The ruling confirms that the TCPA's restrictions on the use of an "artificial or pre-recorded voice" encompass current AI technologies that simulate human voices or generate call content using pre-recorded voices. The ruling empowers the FCC to fine companies that use AI voices in their calls, or to block the service providers facilitating them, with fines potentially exceeding $23,000 per call. It also allows call recipients to pursue legal action and potentially recover up to $1,500 for each unwanted call. The FCC's decision follows a Notice of Inquiry recognizing the need to address AI's potential to emulate human speech and pose new challenges to existing regulatory frameworks.

Driving Innovation: Navigating the EU AI Act's Impact on Autonomous Vehicles

The European Union's upcoming Artificial Intelligence Act (EU AI Act) will shape the integration of AI systems into autonomous vehicles (AVs), particularly with respect to safety, classifying those systems according to their risk level. AI serves as the technological backbone for the automation capabilities in AVs, and the EU AI Act recognizes the risks these systems pose if they fail. Many AV-related AI systems will qualify as high-risk AI systems (HRAIS) under the Act because of its reference to Union harmonization legislation, but the ordinary HRAIS requirement and obligation regime will not apply to them; instead, they will primarily be governed by their sectoral legislation. The EU AI Act also amends the sectoral legislation applicable to AVs and requires that the implementing acts adopted under that legislation take the high-risk AI system requirements into account. Two primary regulations apply to AVs in the EU: the Type-Approval Framework Regulation (TAFR) and the General Safety Regulation (GSR).