November 2024

AI and Data Privacy: Key Challenges and Regulations

Generative AI models, particularly large language models (LLMs), pose privacy risks because they are trained on vast datasets that often include sensitive information, creating compliance challenges under regulations such as the European Union's General Data Protection Regulation (GDPR). The GDPR and similar regulations emphasize responsible data use in AI, with specific rules for handling personally identifiable information (PII) and provisions for data minimization and privacy in AI-generated content. As privacy regulations multiply worldwide, companies must navigate an increasingly complex landscape to avoid fines and other compliance failures. Clear and comprehensive privacy policies, encryption, anonymization, regular compliance audits, and AI ethics frameworks are essential. Evolving regulations seek to ensure responsible data governance and risk management in AI, addressing privacy concerns, potential biases, and unforeseen impacts on individuals. Given how quickly the field is changing, organizations that fail to prepare properly risk serious consequences.
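
Of the safeguards listed above, anonymization in particular is often implemented as PII redaction before text is stored or used for training. The Python sketch below is illustrative only and not drawn from any specific regulation or tool; the patterns and placeholder labels are hypothetical:

```python
import re

# Illustrative patterns for common PII types; production systems typically
# combine vetted libraries or NER models with rules like these.
PII_PATTERNS = {
    "EMAIL": re.compile(r"[\w.+-]+@[\w-]+\.[\w.-]+"),
    "PHONE": re.compile(r"\b\d{3}[-.\s]?\d{3}[-.\s]?\d{4}\b"),
    "SSN":   re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
}

def anonymize(text: str) -> str:
    """Replace each matched PII span with a typed placeholder so the
    text can be logged or processed with reduced privacy risk."""
    for label, pattern in PII_PATTERNS.items():
        text = pattern.sub(f"[{label}]", text)
    return text

print(anonymize("Contact Jane at jane.doe@example.com or 555-867-5309."))
# -> Contact Jane at [EMAIL] or [PHONE].
```

Redaction of this kind supports data minimization, but it is only one layer; encryption, access controls, and regular audits remain necessary.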

October 2024

ML4H Auditing: From Paper to Practice

Healthcare systems are adopting digital technologies, generating large volumes of data that machine-learning algorithms can analyze to aid in the diagnosis, prognosis, triage, and treatment of disease. However, the translation of these algorithms into medical practice is hindered by a lack of careful evaluation across different settings. Guidelines for evaluating machine learning for health (ML4H) tools have been created to assess models for bias, interpretability, robustness, and possible failure modes. This study applied an ML4H audit framework to three use cases; the findings varied across cases but consistently highlighted the importance of case-adapted quality assessment and fine-grained evaluation. The paper suggests improvements for future ML4H reference evaluation frameworks and discusses the challenges of assessing bias, interpretability, and robustness. Standardized evaluation and reporting of ML4H quality are essential to facilitate the translation of machine-learning algorithms into medical practice.
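
As one concrete illustration of the kind of subgroup bias check such an audit can include (this sketch is not from the paper; the data and the choice of metric are assumptions), a demographic parity gap compares positive-prediction rates across patient subgroups:

```python
from collections import defaultdict

def demographic_parity_gap(predictions, groups):
    """Return the largest difference in positive-prediction rates across
    subgroups, plus the per-group rates; a simple subgroup bias check."""
    totals, positives = defaultdict(int), defaultdict(int)
    for pred, group in zip(predictions, groups):
        totals[group] += 1
        positives[group] += int(pred)
    rates = {g: positives[g] / totals[g] for g in totals}
    return max(rates.values()) - min(rates.values()), rates

# Hypothetical triage predictions (1 = flagged for review) and subgroups.
preds  = [1, 0, 1, 1, 0, 1, 0, 0]
groups = ["A", "A", "A", "A", "B", "B", "B", "B"]
gap, rates = demographic_parity_gap(preds, groups)
print(rates)  # {'A': 0.75, 'B': 0.25}
print(gap)    # 0.5
```

A large gap does not by itself prove unfairness, which is one reason the paper argues for case-adapted rather than one-size-fits-all evaluation.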

Risk Management Profile for Artificial Intelligence and Human Rights

The US Department of State has published a Risk Management Profile for Artificial Intelligence and Human Rights, voluntary guidance for governmental, private, and civil society entities on using AI technologies in a manner consistent with international human rights. The Profile builds on the National Institute of Standards and Technology's AI Risk Management Framework 1.0, which provides a lifecycle approach to managing AI risk along with examples of common organizational functions. The Profile is not exhaustive, but it illustrates situations implicating human rights that organizations may encounter when using AI systems. It also offers a normative rationale for adopting the US approach to AI governance and the risk mitigation strategies that will shape long-term considerations in this arena.

The implications of the EU AI Act for financial services

The use of AI technologies in financial institutions is growing in areas such as marketing, process automation, and back-office functions. The EU AI Act introduces new regulations for AI-related risks in financial services, including strict requirements for high-risk AI systems used in credit assessments and in life and health insurance evaluations. Financial institutions must observe transparency rules for all AI systems and comply with new AI-specific requirements for high-risk use cases. The Act allows exclusion from high-risk status under certain conditions, and financial institutions must correctly determine their role under the Act to identify which obligations apply to them. Some of these obligations can be integrated into existing legal frameworks for financial services. Proper preparation is essential for navigating the complex dimensions of AI governance in finance.

Shaping the Future of General-Purpose AI: Europe Begins Drafting the First Code of Practice for General-Purpose AI Models

The European AI Office has initiated the drafting process for the first-ever Code of Practice for general-purpose AI (GPAI) models under the EU AI Act. The Code will serve as a guiding framework for meeting the Act's stringent requirements, giving GPAI model providers a way to demonstrate compliance with their legal obligations, including identifying and addressing systemic risks. Over 1,000 stakeholders are involved in the drafting process, which will span four rounds of reviews and consultations, with the final version expected in April 2025. If the Code of Practice is not ready, or is deemed inadequate, by 2 August 2025, the European Commission may introduce common rules to ensure compliance with the AI Act.