May 2024
Colorado has passed SB24-205, a law protecting consumers in their interactions with AI systems by regulating high-risk AI systems and guarding against algorithmic discrimination. Developers of high-risk systems must use reasonable care to prevent algorithmic discrimination, including disclosing required information and conducting impact assessments. Deployers of high-risk systems must implement risk management programs, conduct impact assessments, and give consumers the opportunity to correct inaccurate data. The law applies to any person who does business in Colorado, and enforcement authority rests exclusively with the Attorney General. Developers and deployers have a defense if they comply with a nationally or internationally recognized AI risk management framework. The law does not specify penalties for violations, and compliance can be streamlined with Holistic AI's Governance Platform.
Governments around the world are increasingly coming together to address the challenges posed by the development of AI systems, with discussions on safety, security, trustworthiness, and responsible development at the forefront of these efforts. Notable international developments include the US-UK partnership to develop tests for advanced AI models, the United Nations General Assembly's adoption of its first landmark resolution on AI, the upcoming AI Seoul Summit co-hosted by the UK and the Republic of Korea, joint international guidance on the secure deployment of AI systems, updates to the EU-US Terminology and Taxonomy for AI, and the US and China's agreement to develop a framework for the responsible development of AI. Together, these developments underscore the growing importance of compliance in AI governance.
Enterprises are turning to voluntary frameworks, such as the National Institute of Standards and Technology (NIST) AI Risk Management Framework (AI RMF), to reduce the legal, reputational, and financial risks of their AI deployments. The AI RMF is a flexible framework that helps organizations manage the risks associated with AI through four core functions: Govern, Map, Measure, and Manage. The accompanying Playbook serves as a practical companion to the AI RMF, offering actionable and adaptable guidance. The Govern function is the foundation of successful AI risk management and informs the Map, Measure, and Manage functions; each function includes suggested actions and recommended transparency and documentation practices. Prioritizing AI governance through risk management frameworks such as the AI RMF can increase trust in AI systems and enhance their ROI.
April 2024
In October 2023, President Biden signed the Executive Order (EO) on Safe, Secure, and Trustworthy Artificial Intelligence, which establishes AI safety and security standards, promotes consumer protection, enables AI innovation, and advances American technology leadership. Federal departments and agencies have successfully completed all tasks due within 90, 150, and 180 days of the order. These tasks included establishing disclosure requirements for developers of the most advanced AI systems, evaluating potential AI risks to critical infrastructure, preventing foreign entities from developing AI for malicious purposes, expanding investment in AI innovation and talent attraction, and creating an AI task force to expand AI innovation in healthcare. Further actions include establishing AI Governance Boards, submitting a report on authenticating, labeling, and detecting synthetic content, and identifying clinical errors arising from the use of AI in healthcare. The EO lays the foundation for AI risk management, regulation, and innovation in the US.
The integration of AI into business is expanding globally: approximately 42% of companies use AI in some way, and AI investment is expected to reach $200 billion worldwide by 2025. AI is transforming creative industries, particularly through generative AI. While AI in marketing is not considered high-risk to society, improper use can have significant business implications. AI is commonly used in marketing for synthetic content generation, audience segmentation and sentiment analysis, SEO, email marketing, chatbots for customer relationship management, predictive analytics and dynamic pricing, and programmatic advertising. Organizations using AI tools in marketing must operationalize responsible AI practices to protect themselves from reputational, financial, and legal damage. Doing so can include external third-party auditing, a focus on compliance, creating a real-time inventory of AI systems, and commitment from C-suite executives, with Chief Marketing Officers playing a pivotal role. Beyond complying with relevant regulations, voluntarily implementing AI governance frameworks and fostering cross-functional collaboration from the top can give organizations a competitive advantage.