October 2024
Healthcare systems are increasingly adopting digital technologies, generating large volumes of data that machine-learning algorithms can analyze to aid the diagnosis, prognosis, triage, and treatment of disease. However, translating these algorithms into medical practice is hindered by a lack of careful evaluation across settings. Guidelines for evaluating machine learning for health (ML4H) tools have been created to assess models for bias, interpretability, robustness, and possible failure modes. This study applied an ML4H audit framework to three use cases; findings varied across the cases but consistently highlighted the importance of case-adapted quality assessment and fine-grained evaluation. The paper suggests improvements for future ML4H reference evaluation frameworks and discusses the challenges of assessing bias, interpretability, and robustness. Standardized evaluation and reporting of ML4H quality are essential to translating machine-learning algorithms into medical practice.
May 2024
Artificial intelligence (AI) is increasingly integrated into many areas of daily life, including healthcare, where it is streamlining administrative tasks, improving diagnostics, and accelerating drug discovery. However, there are concerns about bias and discrimination perpetuated by AI algorithms and decision-making systems; biases in healthcare AI have already led to misdiagnoses and disparities in care. Regulatory initiatives are under way in the US to address these concerns, including the Final Rule on Non-Discrimination in Health Programs and Activities and the Health Data, Technology, and Interoperability regulation. States are also taking proactive measures to regulate AI in healthcare. To mitigate AI bias in healthcare, organizations are implementing strategies such as diverse supervisory groups, obtaining additional data, and conducting bias risk assessments.
February 2024
Artificial intelligence (AI)-driven medical devices are transforming the healthcare industry by enhancing diagnostic processes, formulating personalized treatment regimens, and supporting surgical procedures and therapeutic strategies. Because they can have significant implications for an individual's health, AI systems used in healthcare are regulated through both sector-specific and horizontal legislation. The EU AI Act takes a risk-based approach to obligations for AI systems used in the European Union; it will affect AI-driven medical devices by classifying some as high-risk, imposing stringent obligations on medical device market participants, and requiring compliance with sectoral medical-device regulations. Market operators and enterprises will need to adapt their AI models and operations to meet the Act's requirements or face penalties and reputational damage.
January 2024
Various laws have been proposed at different levels of US government to regulate the use of AI and reduce potential harm. While many of these focus on sectors such as HR and insurance, increasing attention is being paid to AI in healthcare, which requires distinct considerations and policies because of the novel risks it introduces. Proposed AI laws affecting healthcare include, at the federal level, the Better Mental Health Care for Americans Act, the Health Technology Act of 2023, and the Pandemic and All-Hazards Preparedness and Response Act; and, at the state level, the Safe Patients Limit Act in Illinois and "An act regulating the use of artificial intelligence in providing mental health services" in Massachusetts. A Virginia law regulating the use of intelligent personal assistants by hospitals, nursing homes, and certified nursing facilities has already taken effect. Additionally, the World Health Organization has published guidelines to promote responsible AI practices in healthcare.
October 2023
Policymakers around the world are moving to regulate the use of AI in critical applications such as healthcare, given the potential risks and implications for patient care and wellbeing. The EU AI Act establishes a risk-based approach, categorizing systems by risk level and outlining corresponding obligations. In the US, horizontal legislation such as the Algorithmic Accountability Act and DC's Stop Discrimination by Algorithms Act addresses issues such as bias and discrimination. However, experts argue that, given the unique risks and needs in healthcare, sector-specific regulation of AI in healthcare is necessary to prevent harm while still allowing appropriate consideration of patient demographics.