April 2024

What you need to know about the proposed American Privacy Rights Act for AI

The American Privacy Rights Act (APRA) proposal, released by two bipartisan committee chairs, could establish the first national data privacy framework in the US. It aims to resolve the patchwork created by the absence of a national standard: it includes several consumer privacy provisions, restricts data collection and use, and creates a national registry of data brokers. The APRA does not specifically address AI, but its broad scope means it effectively covers AI systems that process personal data. Industry leaders have responded positively, but some lawmakers are disappointed by the lack of protections for minors and may introduce complementary legislation. The bill has not yet been formally introduced, and Holistic AI can help organizations prepare for compliance with the proposed requirements.

US Federal Agencies Release a Joint Statement on Automated Systems

Several US federal agencies, including the EEOC, the Consumer Financial Protection Bureau, and the Federal Trade Commission, have issued a joint statement emphasizing their commitment to enforcing legal protections against discrimination and bias in automated systems and AI. The agencies stress that existing laws apply fully to automated systems and encourage responsible innovation. The statement details how each agency has already enforced legal protections in relation to AI and automated systems, underscoring that organizations must comply with existing laws as well as AI-specific ones. It also warns of potential sources of unlawful discrimination in automated systems, including training data, lack of transparency, and flawed assumptions about users and societal impact.

AI Red Flags: Navigating Prohibited Practices under the AI Act

The EU's Artificial Intelligence Act (AI Act) introduces a risk-based framework that categorizes AI systems as low-risk, high-risk, or prohibited. The AI Act prohibits AI systems that violate human dignity, freedom, equality, and privacy. Eight key practices are prohibited, including subliminal, manipulative, or deceptive AI techniques; exploitative systems that materially distort behavior; AI systems that classify or score people based on behavior or personality characteristics; predictive policing based solely on AI profiling; real-time remote biometric identification in publicly accessible spaces for law enforcement purposes; and AI technologies aimed at inferring individuals' emotional states in workplaces and educational settings. The rules on prohibited practices will be the first to apply, six months after the Act enters into force. Non-compliance with these prohibitions carries significant administrative fines of up to €35,000,000 or up to 7% of an offender's total worldwide annual turnover, whichever is higher.
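To illustrate how the penalty cap works in practice, here is a minimal sketch, assuming the higher-of-the-two rule for undertakings (the fine cap is the greater of the fixed amount and the turnover-based amount); the function name and figures used in the example are illustrative only.

```python
def prohibited_practice_fine_cap(worldwide_annual_turnover_eur: float) -> float:
    """Upper bound on administrative fines for prohibited AI practices.

    Assumes the cap is whichever is higher: EUR 35,000,000 or 7% of the
    offender's total worldwide annual turnover.
    """
    FIXED_CAP_EUR = 35_000_000
    TURNOVER_SHARE = 0.07  # 7% of worldwide annual turnover
    return max(FIXED_CAP_EUR, TURNOVER_SHARE * worldwide_annual_turnover_eur)


# A firm with EUR 1 billion turnover: 7% is EUR 70 million, above the fixed cap
print(prohibited_practice_fine_cap(1_000_000_000))  # 70000000.0

# A firm with EUR 100 million turnover: 7% is EUR 7 million, so the fixed cap applies
print(prohibited_practice_fine_cap(100_000_000))  # 35000000
```

For large companies the turnover-based limb dominates, which is why the 7% figure, rather than the fixed €35 million, is the headline exposure for major providers.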

How can AI enhance ESG Reporting?

Environmental, social, and governance (ESG) standards are becoming increasingly important to investors, consumers, and regulators. However, building a strong ESG reporting practice is challenging due to difficulties in sourcing reliable data and a rapidly changing global regulatory environment. Companies are turning to Large Language Models (LLMs) to help overcome these challenges: LLMs can support data collection and analysis, improve ratings, and generate estimates and predictions. Regulations on ESG reporting have increased globally, and there is growing focus both on the environmental implications of AI and on its potential to benefit ESG. To apply AI safely, appropriate guidelines and guardrails must be in place.

OMB Policy on Governance, Innovation, and Risk Management

The White House Office of Management and Budget (OMB) has issued a policy on advancing governance, innovation, and risk management for agency use of artificial intelligence (AI). The policy directs federal agencies to promote AI innovation and governance while managing the risks associated with the technology, especially those affecting the safety and rights of the American public. It aligns with President Biden's executive order on AI, balancing responsibility and innovation while holding federal agencies accountable for the development and risk management of AI. The policy outlines several governance, innovation, and risk management actions for federal agencies, including managing risks in the procurement of AI, addressing risks from the use of AI, expanding transparency around AI use, advancing responsible AI innovation, and growing the AI workforce. Alongside the policy, the OMB has introduced complementary measures, such as expanded reporting through the 2024 Federal AI Use Case Inventory and a Request for Information (RFI) on the responsible procurement of AI in government, to promote responsible AI use.