June 2024

AI Governance: What you need to know

Adoption of AI has grown rapidly in recent years, making it a competitive necessity for businesses. However, with AI incidents and harms on the rise, it is important to mitigate risks through AI governance. AI governance encompasses the technical and non-technical measures that make AI safer, more secure, and more ethical; even low-risk applications of AI require governance to ensure the technology can be harnessed for business benefit and innovation. Its benefits include better visibility over AI deployments, reduced risk, improved performance, and increased trust. AI governance is an ongoing process that requires regular evaluation, and Holistic AI's Governance platform offers a comprehensive solution for AI trust and safety.

May 2024

Advancing Healthcare: AI Adoption, Bias, and Regulatory Initiatives in the US

Artificial intelligence (AI) is increasingly being integrated into many areas of daily life, including healthcare, where it is streamlining administrative tasks, improving diagnostics, and accelerating drug discovery. However, there are concerns that AI algorithms and decision-making systems can perpetuate bias and discrimination; biases in healthcare AI have already contributed to misdiagnoses and disparities in care. Regulatory initiatives are underway in the US to mitigate these concerns, including the Final Rule on Non-Discrimination in Health Programs and Activities and the Health Data, Technology, and Interoperability regulation, and several states are also taking proactive measures to regulate AI in healthcare. To mitigate AI bias in healthcare, strategies such as establishing diverse supervisory groups, obtaining additional data, and conducting bias risk assessments are being implemented.

Singapore Unveils Comprehensive Framework for Governing Generative AI

Singapore released the Model AI Governance Framework for Generative AI in May 2024, which offers a comprehensive approach to managing the challenges of generative AI and encourages a global dialogue on the topic. The framework covers nine dimensions related to AI governance, including accountability, data, trusted development and deployment, incident reporting, testing and assurance, security, content provenance, safety and alignment R&D, and AI for public good. The framework stresses the importance of collaboration between policymakers, industry stakeholders, researchers, and like-minded jurisdictions. It calls for accountability in AI development and usage, responsible data governance, and democratization of AI access.

Navigating the Governance Architecture of the EU AI Act

The EU AI Act introduces a governance structure to ensure coordinated and effective implementation and enforcement of AI regulations at the national and Union levels. The governance framework comprises four entities, each with distinct roles and responsibilities: the AI Office, the AI Board, the Advisory Forum, and the Scientific Panel. The AI Office leads the implementation and enforcement of the Act, while the AI Board advises and assists in its consistent application across the EU. The Advisory Forum provides technical expertise and stakeholder input, and the Scientific Panel supports the Act's implementation with scientific insights and guidance. Experts selected for these entities must possess relevant competencies, independence, and scientific or technical expertise in the field of AI. Compliance with the EU AI Act is crucial, and early adoption of its principles can smooth the path to compliance.

Setting The Standards for AI: The EU AI Act’s Scheme for the Standardization of AI Systems

The EU AI Act introduces standardization instruments, such as harmonized standards, to facilitate compliance with the Act's requirements and obligations. Providers of high-risk AI systems and general-purpose AI (GPAI) models enjoy a presumption of conformity if they follow these standards. However, standardization is voluntary: providers who do not adopt the standards must demonstrate compliance through other means, facing additional workload and exposure to penalties if they fall short of the Act's requirements. Harmonized standards are expected to cover the requirements for high-risk AI systems as well as the obligations of providers of GPAI models, including those with systemic risk. Following these standards can also allow certain high-risk AI systems to bypass third-party conformity assessments, although providers must still ensure compliance with any requirements and obligations outside the scope of the harmonized standards. The EU AI Act does not become fully applicable until mid-2026, but market operators should prepare in advance to comply with the evolving regulatory framework around AI.