June 2024
Colorado has introduced several laws aimed at regulating the use of AI. SB169 prohibits discrimination in insurance practices, SB205 provides consumer protections against algorithmic discrimination resulting from AI, HB1147 addresses the use of deepfake content in electoral communications, and SB22-113 mandates governance of AI-driven facial recognition technologies used by state entities. Together, these laws impose transparency, governance, and risk management obligations on insurers, developers, and deployers, and they introduce penalties for non-compliance. As proposals for AI laws continue to accelerate, mapping exposure risks and staying on top of regulatory developments is essential for ensuring compliance.
The entertainment industry has rapidly embraced AI, leveraging its capabilities to revolutionize content creation, distribution, and user engagement. AI enables companies to better understand and cater to modern consumer-centric marketplaces, significantly enhancing operational practices and user experiences. One of the most prominent uses of AI in entertainment is personalized content recommendations. AI is also transforming content creation and production by identifying trends and preferences that guide creators in developing shows, movies, and music; it supports content distribution and regulatory compliance; and AI-driven automation is reshaping the industry's workforce. However, the authenticity of AI-generated content carries legal and ethical implications, and the complexities of intellectual property rights remain pivotal issues. As AI continues to blur the lines between human and machine authorship, it raises questions about originality, ownership, and the protection of creative works. Despite these challenges, AI's potential to democratize content creation and open new opportunities for innovation remains promising. Entertainment organizations therefore need to track the evolving AI regulatory landscape.
The EU AI Act has been approved and will be phased in gradually. The European Commission has launched the AI Pact to encourage industry players to comply with the forthcoming Act ahead of schedule. The Pact offers a framework for collaboration, early adoption of the Act's requirements, and responsible AI practices. Participants will play a central role by committing to declarations of engagement and sharing their policies and best practices. The Pact will run during the transition period before the EU AI Act becomes enforceable and may continue afterward. Compliance with the Act is necessary to avoid penalties and reputational damage. Holistic AI can help organizations comply with the EU AI Act safely and confidently.
The adoption of AI has grown exponentially in recent years, becoming a competitive necessity for businesses. However, with AI incidents and harms on the rise, it is important to mitigate risks through AI governance. AI governance covers the technical and non-technical measures that make AI safer, more secure, and more ethical. Even low-risk applications of AI require governance to ensure the technology can be harnessed for business benefit and innovation. AI governance brings benefits such as better visibility over AI deployments, reduced risk, improved performance, and increased trust. It is an ongoing process that requires regular evaluation. Holistic AI's Governance platform is a comprehensive solution for AI trust and safety.
May 2024
Artificial intelligence (AI) is increasingly being integrated into many areas of daily life, including healthcare. AI is revolutionizing healthcare by streamlining administrative tasks, improving diagnostics, and accelerating drug discovery. However, there are concerns about bias and discrimination perpetuated by AI algorithms and decision-making systems, and biases in healthcare AI have led to increased misdiagnoses and disparities in care. Regulatory initiatives are being undertaken in the US to mitigate these concerns, including the Final Rule on Non-Discrimination in Health Programs and Activities and the Health Data, Technology, and Interoperability rule. States are also taking proactive measures to regulate AI in healthcare. To mitigate AI bias in healthcare, strategies such as establishing diverse supervisory groups, obtaining additional data, and conducting bias risk assessments are being implemented.
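As a minimal illustration of the last of those strategies, the Python sketch below shows one simple way a bias risk assessment might compare model outcome rates across patient groups. The function names, sample data, and the disparate impact ratio used here are hypothetical choices for illustration, not requirements drawn from the regulations discussed.

```python
# Minimal sketch of one bias risk assessment step: comparing favourable-outcome
# rates across patient demographic groups. All names and data are hypothetical.
from collections import defaultdict

def group_positive_rates(records):
    """Return the favourable-outcome rate for each demographic group.

    `records` is an iterable of (group, prediction) pairs, where prediction
    is 1 for a favourable outcome (e.g., referral to additional care).
    """
    totals, positives = defaultdict(int), defaultdict(int)
    for group, prediction in records:
        totals[group] += 1
        positives[group] += prediction
    return {g: positives[g] / totals[g] for g in totals}

def disparate_impact_ratio(rates):
    """Ratio of the lowest to the highest group rate (closer to 1 is more equal)."""
    return min(rates.values()) / max(rates.values())

# Hypothetical model outputs: (demographic group, favourable prediction)
sample = [("A", 1), ("A", 1), ("A", 0), ("B", 1), ("B", 0), ("B", 0)]
rates = group_positive_rates(sample)
print(rates)                          # roughly {'A': 0.67, 'B': 0.33}
print(disparate_impact_ratio(rates))  # 0.5, flagging a potential disparity
```

In practice, such a check would be one input among many; the choice of metric, groups, and acceptable thresholds would depend on the clinical context and the applicable regulatory requirements.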