November 2024

AI Tracker enhanced with interactive Atlas: Keep up to date with AI governance developments around the world

Holistic AI has launched the Holistic AI Tracker 2.0, which features an interactive world map, the Atlas, that tracks AI governance activity across eight key areas. The Atlas offers filters for drilling down into specific regions, events, and development sectors, along with the ability to track changes and receive notifications of updates. The AI Tracker is designed as a community resource, and users are invited to suggest additional events for review by Holistic AI.

AI and Data Privacy: Key Challenges and Regulations

Generative AI models, particularly large language models (LLMs), pose privacy risks because they are trained on vast datasets that often include sensitive information, creating challenges for companies seeking to comply with regulations such as the European Union's General Data Protection Regulation (GDPR). The GDPR and similar regulations emphasize responsible data use in AI, with specific rules for handling personally identifiable information (PII) and provisions for data minimization and privacy in AI-generated content. As privacy regulations multiply worldwide, companies must navigate this increasingly complex landscape to avoid fines and compliance failures. Clear and comprehensive privacy policies, encryption, anonymization, regular compliance audits, and AI ethics frameworks are essential safeguards. Evolving regulations aim to ensure responsible data governance and risk management in AI, addressing privacy concerns, potential biases, and unforeseen impacts on individuals. Given how quickly this landscape is changing, organizations that fail to prepare properly face serious consequences.
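To make the anonymization and data-minimization practices mentioned above concrete, the sketch below shows one possible pre-processing step before records reach an AI pipeline: direct identifiers are replaced with salted hashes, email addresses are redacted from free text, and fields the task does not need are dropped. The field names, salt handling, and redaction pattern are illustrative assumptions, not a complete GDPR compliance solution.

```python
import hashlib
import re

# Illustrative pattern for redacting email addresses from free text.
EMAIL_RE = re.compile(r"[\w.+-]+@[\w-]+\.[\w.-]+")

def pseudonymize(record: dict, salt: str = "rotate-me") -> dict:
    """Replace direct identifiers with salted hashes, redact emails,
    and keep only the fields the task actually needs (data minimization)."""
    return {
        # A stable pseudonym lets records be linked without exposing the name.
        "user_id": hashlib.sha256((salt + record["name"]).encode()).hexdigest()[:16],
        # Fields like date of birth are dropped entirely; only task-relevant
        # text is retained, with contact details redacted.
        "text": EMAIL_RE.sub("[REDACTED_EMAIL]", record["text"]),
    }

record = {"name": "Jane Doe", "dob": "1990-01-01",
          "text": "Contact me at jane@example.com about the claim."}
print(pseudonymize(record))
```

In practice the salt would be stored and rotated securely, and redaction would cover more identifier types (phone numbers, addresses, medical record numbers) than this single pattern.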

October 2024

ML4H Auditing: From Paper to Practice

Healthcare systems are increasingly digitized, generating large volumes of data that machine-learning algorithms can analyze to aid in the diagnosis, prognosis, triage, and treatment of disease. However, translating these algorithms into medical practice is hindered by a lack of careful evaluation across different settings. Guidelines for evaluating machine learning for health (ML4H) tools have been created to assess models for bias, interpretability, robustness, and possible failure modes. This study applied an ML4H audit framework to three use cases; findings varied across cases but consistently highlighted the importance of case-adapted quality assessment and fine-grained evaluation. The paper suggests improvements for future ML4H reference evaluation frameworks and discusses the challenges of assessing bias, interpretability, and robustness. Standardized evaluation and reporting of ML4H quality are essential to translating machine-learning algorithms into medical practice.
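As a minimal sketch of one check such a bias audit might include, the snippet below computes the demographic-parity difference, i.e. the gap in positive-prediction rates between two patient subgroups. The data and group labels are synthetic, and this is only one of many metrics a real ML4H audit framework would apply alongside robustness and interpretability assessments.

```python
def demographic_parity_diff(preds, groups):
    """Absolute difference in positive-prediction rates between two groups."""
    rates = {}
    for g in set(groups):
        members = [p for p, gg in zip(preds, groups) if gg == g]
        rates[g] = sum(members) / len(members)
    a, b = rates.values()
    return abs(a - b)

# Synthetic binary triage decisions and the subgroup of each patient.
preds  = [1, 0, 1, 1, 0, 1, 0, 0]
groups = ["A", "A", "A", "A", "B", "B", "B", "B"]

# Group A receives positive decisions at rate 3/4, group B at 1/4,
# so the disparity is 0.5.
print(demographic_parity_diff(preds, groups))  # prints 0.5
```

An audit would typically set a tolerance threshold for this gap and flag the model for case-adapted review when it is exceeded.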

Risk Management Profile for Artificial Intelligence and Human Rights

The US Department of State has published a Risk Management Profile for Artificial Intelligence and Human Rights, voluntary guidance for governmental, private, and civil society entities on using AI technologies consistently with international human rights. The Profile builds on the National Institute of Standards and Technology's AI Risk Management Framework 1.0, which provides a lifecycle approach to AI risk management and examples of common organizational functions. Although not exhaustive, the Profile outlines situations organizations may encounter in which AI systems implicate human rights. It also offers a normative rationale for adopting the US's approach to AI governance and the risk mitigation strategies that will shape long-term considerations in this arena.

The implications of the EU AI Act for financial services

Financial institutions are increasingly using AI technologies in areas such as marketing, process automation, and back-office functions. The EU AI Act introduces new rules for AI-related risks in financial services, including strict requirements for high-risk AI systems used in credit assessments and in life and health insurance evaluations. Financial institutions must observe transparency rules for all AI systems and comply with additional AI-specific requirements for high-risk use cases. The Act allows certain systems to be excluded from high-risk status under specific conditions, and institutions must correctly determine their role under the Act to identify the obligations that apply to them. Some of these obligations can be integrated into existing legal frameworks for financial services. Proper preparation is essential for navigating the complex dimensions of AI governance in finance.