October 2024
Healthcare systems are increasingly adopting digital technologies, generating large amounts of data that machine-learning algorithms can analyze to aid in the diagnosis, prognosis, triage, and treatment of disease. However, the translation of these algorithms into medical practice is hindered by a lack of careful evaluation across different settings. Guidelines for evaluating machine learning for health (ML4H) tools have been developed to assess models for bias, interpretability, robustness, and potential failure modes. This study applied an ML4H audit framework to three use cases; the findings varied across cases but consistently highlighted the importance of case-adapted quality assessment and fine-grained evaluation. The paper suggests improvements for future ML4H reference evaluation frameworks and discusses the challenges of assessing bias, interpretability, and robustness. Standardized evaluation and reporting of ML4H quality are essential to facilitate the translation of machine-learning algorithms into medical practice.
The US Department of State has published a Risk Management Profile for Artificial Intelligence and Human Rights, voluntary guidance for governmental, private, and civil society entities on using AI technologies in a manner consistent with international human rights. The Profile builds on the National Institute of Standards and Technology's AI Risk Management Framework 1.0, which provides an approach to managing risk across the AI lifecycle along with examples of common organizational functions. The Profile is not exhaustive, but it identifies situations with potential human rights implications that organizations may encounter when using AI systems. It offers a normative rationale for adopting the US approach to AI governance and risk mitigation, and will shape long-term considerations in this arena.
The use of AI technologies in financial institutions is expanding, with applications in marketing, process automation, and risk management. The EU AI Act introduces regulations for AI development and deployment in the sector, with specific requirements for high-risk use cases such as credit assessment and evaluation for life and health insurance. Financial institutions using AI must observe transparency rules and comply with the requirements for high-risk AI systems (HRAIS), including drawing up technical documentation, implementing data governance measures, and establishing a risk management system. Compliance can be integrated into existing financial regulatory processes, with penalties for non-compliance calculated on the basis of worldwide annual turnover. Financial institutions must prepare properly for compliance with the EU AI Act to avoid penalties.
The European AI Office has initiated the drafting process for the first-ever Code of Practice for general-purpose AI (GPAI) models under the EU AI Act. The Code of Practice will serve as a guiding framework for meeting the Act's stringent requirements, providing guidelines for GPAI model providers to demonstrate compliance with their legal obligations, including identifying and addressing systemic risks. Over 1,000 stakeholders are involved in the drafting process, which will span four rounds of review and consultation, with the final version expected to be published in April 2025. If the Code of Practice is not ready or is deemed inadequate by 2 August 2025, the European Commission may introduce common rules to ensure compliance with the AI Act.
September 2024
California Governor Gavin Newsom has vetoed the Safe and Secure Innovation for Frontier Artificial Intelligence Models Act, or SB 1047, which aimed to regulate the development and deployment of large-scale AI models in the state. The bill, which set strict safety standards for developers of AI models costing over $100m to train and empowered the California Attorney General to hold them accountable for negligence that caused harm, was opposed by Big Tech. Newsom acknowledged the need for regulatory measures on AI development but criticized the bill for not being informed by "an empirical trajectory analysis of AI systems".