November 2024
The EU AI Act addresses the use of biometric technologies and their implications for privacy, security, and fundamental rights. Biometric data is defined as personal data resulting from specific technical processing of a natural person's physical, physiological, or behavioral characteristics. The Act prohibits certain biometric AI practices, such as real-time remote biometric identification in publicly accessible spaces and biometric categorization systems that infer sensitive characteristics. Biometric verification is not prohibited but remains subject to the GDPR and other applicable laws. High-risk biometric AI systems, such as emotion recognition and remote biometric identification, must undergo a stricter conformity assessment involving a notified body. Organizations must evaluate their use cases against the Act's provisions to ensure they operate lawfully and ethically within the EU.
Holistic AI has launched the Holistic AI Tracker 2.0, which features an interactive world map called the Atlas to track activity across eight key areas. The Atlas offers filters to help users drill down into specific regions, events, and development sectors, along with the ability to track changes and receive notifications of updates. The AI Tracker is designed as a resource for the community, and users are invited to suggest event additions for review by Holistic AI.
Generative AI models, particularly large language models (LLMs), pose privacy risks because they are trained on vast datasets that often include sensitive information, creating compliance challenges under regulations such as the European Union's General Data Protection Regulation (GDPR). The GDPR and similar regulations emphasize responsible data use in AI, with specific rules for handling personally identifiable information (PII) and provisions for data minimization and privacy in AI-generated content. As privacy regulations multiply worldwide, companies must navigate this increasingly complex landscape carefully to avoid fines and compliance failures. Clear, comprehensive privacy policies, encryption, anonymization, regular compliance audits, and AI ethics frameworks are essential. Evolving regulations seek to ensure responsible data governance and risk management in AI, addressing privacy concerns, potential biases, and unforeseen impacts on individuals. Given how rapidly the AI landscape is changing, organizations must prepare properly to avoid serious consequences.
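As a concrete illustration of the anonymization practices mentioned above, the sketch below pseudonymizes email addresses and phone numbers before text reaches a downstream model. The regex patterns, the salt handling, and the `pseudonymize` helper are illustrative assumptions, not a compliance mechanism; a production pipeline would rely on vetted PII-detection tooling and cover many more identifier types.

```python
import hashlib
import re

# Deliberately simplistic patterns for illustration only.
PHONE_RE = re.compile(r"\+?\d[\d\s().-]{7,}\d")
EMAIL_RE = re.compile(r"[\w.+-]+@[\w-]+\.[\w.-]+")


def pseudonymize(text: str, salt: str = "rotate-me") -> str:
    """Replace phone numbers and emails with salted hash tokens.

    The raw values never reach the model, while each value maps to a
    stable token, which preserves deduplication and audit trails.
    """
    def token(match: re.Match) -> str:
        digest = hashlib.sha256((salt + match.group(0)).encode()).hexdigest()[:10]
        return f"<PII:{digest}>"

    # Phones first, then emails, so hash tokens are never re-matched.
    text = PHONE_RE.sub(token, text)
    return EMAIL_RE.sub(token, text)


redacted = pseudonymize("Contact jane.doe@example.com or +44 20 7946 0958.")
```

Salted hashing rather than plain deletion is one common design choice here: it supports later reconciliation against an access-controlled lookup table while keeping identifiers out of model inputs.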
October 2024
Healthcare systems are adopting digital technologies, generating large amounts of data that machine-learning algorithms can analyze to aid in the diagnosis, prognosis, triage, and treatment of disease. However, translating these algorithms into medical practice is hindered by a lack of careful evaluation across different settings. Guidelines for evaluating machine learning for health (ML4H) tools have been created to assess models for bias, interpretability, robustness, and possible failure modes. The study applied an ML4H audit framework to three use cases; findings varied across them but consistently highlighted the importance of case-adapted quality assessment and fine-grained evaluation. The paper suggests improvements for future ML4H reference evaluation frameworks and discusses the challenges of assessing bias, interpretability, and robustness. Standardized evaluation and reporting of ML4H quality are essential to translating machine-learning algorithms into medical practice.
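One simple facet of the bias assessments such guidelines call for is comparing model performance across patient subgroups. The sketch below is a minimal illustration with invented data; the `subgroup_accuracy` and `max_accuracy_gap` helpers are assumptions for this example, and a real ML4H audit would use validated cohorts, clinically meaningful metrics, and uncertainty estimates.

```python
from collections import defaultdict


def subgroup_accuracy(records):
    """records: iterable of (group, y_true, y_pred) triples.

    Returns a dict mapping each group to its classification accuracy.
    """
    hits = defaultdict(int)
    totals = defaultdict(int)
    for group, y_true, y_pred in records:
        totals[group] += 1
        hits[group] += int(y_true == y_pred)
    return {g: hits[g] / totals[g] for g in totals}


def max_accuracy_gap(records):
    """Largest accuracy difference between any two subgroups."""
    acc = subgroup_accuracy(records)
    return max(acc.values()) - min(acc.values())


# Toy predictions for two patient subgroups, "A" and "B".
records = [
    ("A", 1, 1), ("A", 0, 0), ("A", 1, 1), ("A", 0, 1),
    ("B", 1, 0), ("B", 0, 0), ("B", 1, 1), ("B", 0, 1),
]
# Group A is 3/4 correct, group B is 2/4 correct, so the gap is 0.25.
```

A large gap flags a model for closer inspection rather than proving unfairness on its own, which is why the guidelines pair such checks with case-adapted, fine-grained evaluation.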
The US Department of State has published a Risk Management Profile for Artificial Intelligence and Human Rights, voluntary guidance aimed at providing guidelines for governmental, private, and civil society entities on AI technologies, consistent with international human rights. The Profile builds on the National Institute of Standards and Technology's AI Risk Management Framework 1.0, which provides an AI lifecycle risk-management approach and examples of common organizational functions. The Profile is not exhaustive, but it outlines situations with possible human rights implications that organizations may encounter when using AI systems. It also offers a normative rationale for adopting the US approach to AI governance and the risk-mitigation strategies that will drive long-term considerations in this arena.