April 2024

Biden Executive Order 14110 – A 6-Month Update

In October 2023, President Biden signed the Executive Order (EO) on Safe, Secure, and Trustworthy Artificial Intelligence, which established AI safety and security standards, promoted consumer protection, enabled AI innovation, and advanced American technology leadership. Federal departments and agencies have completed all of the tasks due within 90, 150, and 180 days of the EO on schedule. These tasks included establishing disclosure requirements for developers of the most advanced AI systems, evaluating potential AI risks to critical infrastructure, preventing foreign entities from developing AI for malicious purposes, expanding investment in AI innovation and talent attraction, and creating an AI task force to expand AI innovation in healthcare. Further actions include the establishment of AI Governance Boards, the submission of a report on authenticating, labeling, and detecting synthetic content, and the identification of clinical errors arising from the use of AI in healthcare. The EO lays the foundations for AI risk management, regulation, and innovation in the US.

How can we ensure responsible AI in marketing?

AI adoption is expanding globally, with approximately 42% of companies using AI in some way and global AI investment expected to reach $200 billion by 2025. AI is transforming creative industries, particularly through generative AI. While AI in marketing is not generally considered high-risk to society, improper use of it can have significant business implications. AI is commonly used in marketing for synthetic content generation, audience segmentation and sentiment analysis, SEO, email marketing, chatbots for customer relationship management, predictive analytics and dynamic pricing, and programmatic advertising. Organizations using AI tools in marketing must therefore operationalize responsible AI practices to protect themselves from reputational, financial, and legal damage. Operationalizing responsible AI in marketing can include external third-party auditing, a focus on compliance, creating a real-time inventory of AI systems (sketched below), and commitment from C-suite executives, with Chief Marketing Officers playing a pivotal role. Beyond complying with relevant regulations, voluntarily implementing AI governance frameworks can give organizations a competitive advantage, and fostering cross-functional collaboration from the top helps ensure responsible AI.
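
To make the AI-inventory idea concrete, here is a minimal sketch in Python of how a marketing team might keep a living register of its AI systems and flag those due for a governance review. The record fields, risk labels, and review cadence are illustrative assumptions, not requirements drawn from any regulation or standard.

```python
# Minimal sketch of an AI system inventory for a marketing team.
# All field names, risk labels, and the review cadence are illustrative
# assumptions, not a prescribed standard.
from dataclasses import dataclass
from datetime import date

@dataclass
class AISystemRecord:
    name: str                  # e.g. "email-subject-line-generator"
    vendor: str                # internal team or third-party supplier
    use_case: str              # marketing function the system supports
    personal_data_used: bool   # whether the system processes personal data
    risk_level: str            # e.g. "low", "medium", "high" (illustrative scale)
    owner: str                 # accountable person or team
    last_reviewed: date        # date of the most recent governance review

class AIInventory:
    """A living register of AI systems, queryable for governance reviews."""
    def __init__(self) -> None:
        self._records: list[AISystemRecord] = []

    def register(self, record: AISystemRecord) -> None:
        self._records.append(record)

    def due_for_review(self, today: date, max_age_days: int = 90) -> list[AISystemRecord]:
        """Return systems whose last review is older than the chosen cadence."""
        return [r for r in self._records
                if (today - r.last_reviewed).days > max_age_days]

inventory = AIInventory()
inventory.register(AISystemRecord(
    name="churn-predictor",
    vendor="in-house",
    use_case="predictive analytics for retention campaigns",
    personal_data_used=True,
    risk_level="medium",
    owner="marketing-analytics",
    last_reviewed=date(2024, 1, 15),
))
print([r.name for r in inventory.due_for_review(date(2024, 4, 30))])
```

Keeping the register as structured data rather than a spreadsheet makes it easy to query in real time, which is what turns an inventory into an operational governance tool rather than a one-off audit artifact.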

Individuals at the Heart of the EU AI Act: Decoding the Fundamental Rights Impact Assessment

The EU AI Act defines fundamental rights according to the Charter of Fundamental Rights of the European Union and, to strengthen their protection, introduces a requirement for deployers of high-risk AI systems to conduct a Fundamental Rights Impact Assessment (FRIA) to identify potential threats to individuals' fundamental rights and implement adequate responsive actions against such threats. FRIAs must be conducted before the deployment of high-risk AI systems and must include essential details such as the risks of harm that could affect the individuals or groups potentially affected by the system's use. FRIAs must also be revised if pertinent factors, such as the risks of harm or the frequency of use, change according to the deployer's evaluation.
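
As a rough illustration of what a deployer might document, here is a minimal Python sketch of a FRIA record built around the elements the Act calls for (affected groups, risks of harm, responsive measures, and a revision trigger). The class layout, field names, and example values are assumptions for illustration only, not the Act's prescribed format.

```python
# Minimal sketch of a FRIA record, capturing the kinds of elements the
# EU AI Act requires deployers to document before deploying a high-risk
# AI system. The structure and example values are illustrative assumptions.
from dataclasses import dataclass

@dataclass
class FRIARecord:
    system_name: str
    deployment_context: str         # intended purpose and frequency of use
    affected_groups: list[str]      # individuals/groups potentially affected
    risks_of_harm: list[str]        # identified threats to fundamental rights
    mitigation_measures: list[str]  # responsive actions against those threats
    completed_before_deployment: bool = False

    def needs_revision(self, risks_changed: bool, usage_changed: bool) -> bool:
        """A revision is needed when pertinent factors change."""
        return risks_changed or usage_changed

fria = FRIARecord(
    system_name="cv-screening-tool",
    deployment_context="weekly screening of job applications",
    affected_groups=["job applicants"],
    risks_of_harm=["indirect discrimination in shortlisting"],
    mitigation_measures=["periodic bias audit", "human review of rejections"],
    completed_before_deployment=True,
)
print(fria.needs_revision(risks_changed=False, usage_changed=True))  # True
```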

Navigating the Nexus: The EU's Cybersecurity Framework and AI Act in Concert

The increasing intertwining of artificial intelligence (AI) systems with digital networks has led to a rise in cyber threats against these systems. With cyberattacks projected to cost around EUR 9 trillion in 2024, the European Union's forthcoming Artificial Intelligence Act (EU AI Act) aims to fortify AI systems and models with robust cybersecurity measures. The EU AI Act imposes mandatory cybersecurity requirements on high-risk AI systems and on general-purpose AI (GPAI) models with systemic risk. Certification of high-risk AI systems under the Cybersecurity Act's voluntary certification schemes may provide a presumption of conformity with the cybersecurity requirements of the EU AI Act, reducing duplicated compliance costs. The EU AI Act also interacts with other cybersecurity legislation, such as the Cyber Resilience Act and the Cybersecurity Act, reinforcing the EU's cybersecurity framework.

The EU AI Act's cybersecurity provisions mandate that high-risk AI systems demonstrate resilience against unauthorized attempts by third parties to manipulate their usage, outputs, or performance by exploiting system vulnerabilities. GPAI models with systemic risk are considered capable of triggering additional risks compared to basic GPAI models, and cybersecurity vulnerabilities in these models may amplify those risks or increase the likelihood of harmful consequences. Providers of GPAI models with systemic risk are therefore obligated to ensure an adequate level of cybersecurity protection for the model and its physical infrastructure.
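
To give the resilience requirement a concrete flavor, below is a minimal Python sketch of one kind of robustness smoke test a provider might run: measuring how often small input perturbations flip a classifier's output. The placeholder model, perturbation size, and pass criterion are all assumptions for illustration; the EU AI Act does not prescribe any specific test.

```python
# Minimal sketch of a robustness smoke test: does a small perturbation of the
# input change the model's decision? The model, epsilon, and trial count are
# illustrative assumptions; the EU AI Act prescribes no specific test.
import numpy as np

def predict(x: np.ndarray) -> int:
    """Placeholder classifier: a fixed linear decision rule for illustration."""
    weights = np.array([0.8, -0.5, 0.3])
    return int(x @ weights > 0.0)

def perturbation_flip_rate(inputs: np.ndarray, epsilon: float = 0.05,
                           trials: int = 100, seed: int = 0) -> float:
    """Fraction of random L-infinity perturbations (size epsilon) that flip a prediction."""
    rng = np.random.default_rng(seed)
    flips = 0
    for _ in range(trials):
        x = inputs[rng.integers(len(inputs))]   # pick a random sample
        noise = rng.uniform(-epsilon, epsilon, size=x.shape)
        if predict(x) != predict(x + noise):    # decision changed under noise
            flips += 1
    return flips / trials

samples = np.random.default_rng(1).normal(size=(50, 3))
rate = perturbation_flip_rate(samples)
print(f"flip rate under eps=0.05 perturbations: {rate:.2%}")
```

A low flip rate is only weak evidence of resilience, of course; in practice such a check would sit alongside targeted adversarial testing and security measures for the surrounding infrastructure.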

Elements of NIST’s AI Risk Management Framework: An Overview

The NIST AI RMF is a voluntary risk management framework developed under a mandate from the National Artificial Intelligence Initiative Act of 2020. It is designed to help organizations manage the risks of AI and promote the trustworthy and responsible development and use of AI systems, and it is rights-preserving and non-sector-specific. The framework is operationalized through a combination of five tools or elements: the AI RMF Core, the Playbook, the Roadmap, the Crosswalks, and Use-Case Profiles. The Core provides the foundation for trustworthy AI systems through four key functions (Govern, Map, Measure, and Manage) that guide organizations in development and deployment across various domains. The Playbook offers actionable guidance for implementing the Core's functions through detailed sub-actions. The Roadmap outlines NIST's strategy for advancing the AI RMF, focusing on collaboration and key activities to keep the framework relevant. The Crosswalks map the AI RMF to other risk frameworks, showing how adopting one framework can help meet the criteria of another. Finally, Use-Case Profiles provide tailored implementations of the AI RMF's functions and actions for particular sectors and use cases.
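
For readers who track adoption programmatically, here is a minimal Python sketch of the Core's four functions represented as a simple progress checklist. The four function names come from the framework itself; the example actions are loose paraphrases for illustration, not the official Playbook text.

```python
# Minimal sketch of the AI RMF Core as a progress checklist. The four
# function names (Govern, Map, Measure, Manage) come from the framework;
# the example actions are paraphrased illustrations, not Playbook text.
from dataclasses import dataclass, field

@dataclass
class CoreFunction:
    name: str
    actions: list[str]
    completed: set[str] = field(default_factory=set)

    def mark_done(self, action: str) -> None:
        if action in self.actions:
            self.completed.add(action)

    def progress(self) -> float:
        return len(self.completed) / len(self.actions) if self.actions else 0.0

rmf_core = [
    CoreFunction("Govern", ["establish AI risk policies", "assign accountability"]),
    CoreFunction("Map", ["document intended use and context", "identify stakeholders"]),
    CoreFunction("Measure", ["track trustworthiness metrics", "test for known risks"]),
    CoreFunction("Manage", ["prioritize identified risks", "plan incident response"]),
]

rmf_core[0].mark_done("assign accountability")
for fn in rmf_core:
    print(f"{fn.name}: {fn.progress():.0%} of example actions complete")
```

In practice, an organization's Use-Case Profile would replace the placeholder actions with the specific sub-actions it selects from the Playbook for its sector.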