April 2024

Several US federal agencies, including the EEOC, the Consumer Financial Protection Bureau, and the Federal Trade Commission, have issued a joint statement emphasizing their commitment to enforcing legal protections against discrimination and bias in automated systems and AI. The agencies stress that existing laws apply to automated systems and encourage responsible innovation. The statement details how each agency has already enforced legal protections in relation to AI and automated systems, underscoring that compliance is required under existing laws as well as any AI-specific ones. It also warns of potential sources of unlawful discrimination in the use of automated systems, including problems with training data, lack of transparency, and flawed assumptions about users and societal impact.

The EU's Artificial Intelligence Act (AI Act) introduces a risk-based framework that categorizes AI systems by level of risk and prohibits certain practices outright. The Act bans AI systems that violate human dignity, freedom, equality, and privacy. Eight AI practices are prohibited, including subliminal, manipulative, or deceptive AI techniques; exploitative systems that significantly distort a person's behavior; scoring or classification of people based on behavior or personality characteristics; predictive policing based solely on AI profiling; real-time remote biometric identification in publicly accessible spaces for law enforcement purposes; and AI systems that infer or interpret individuals' emotional states in workplaces and educational settings. Non-compliance with these prohibitions carries hefty penalties: administrative fines of up to €35,000,000 or up to 7% of the offender's global annual turnover, whichever is higher. The rules on prohibited practices will be the first to apply, six months after the Act enters into force.
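As a rough illustration of how the fine ceiling above scales with company size (a minimal sketch; the €35,000,000 floor and 7% rate are taken from the prohibited-practices penalty described above, applied as "whichever is higher"):

```python
def max_fine_eur(global_annual_turnover_eur: float) -> float:
    """Upper bound of the administrative fine for a prohibited-practice
    violation: EUR 35,000,000 or 7% of global annual turnover,
    whichever is higher."""
    return max(35_000_000.0, 0.07 * global_annual_turnover_eur)

# For a company with EUR 1 billion in turnover, 7% (EUR 70M) exceeds the flat cap:
print(max_fine_eur(1_000_000_000))  # 70000000.0
```

For smaller companies the €35,000,000 figure dominates; the percentage-based cap only bites above €500 million in turnover.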

04 Apr 2024
Environmental, social, and governance (ESG) standards are becoming increasingly important to investors, consumers, and regulators. However, building a strong ESG reporting practice is difficult, owing to data-sourcing problems and a rapidly evolving global regulatory environment. Companies are turning to large language models (LLMs) as a potential way to overcome these challenges: LLMs can help with data collection and analysis, improve ratings, and support estimates and predictions. ESG reporting regulations have multiplied globally, and attention is also growing both to the environmental footprint of AI itself and to AI's potential to advance ESG goals. To apply AI safely, appropriate guidelines and guardrails must be in place.
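A minimal sketch of the data-collection use case mentioned above: prompting an LLM to pull ESG figures out of unstructured report text into a structured record. `call_llm` is a hypothetical stand-in for whichever model API a company actually uses, stubbed here so the surrounding parsing logic is runnable; the metric names are illustrative, not a standard schema.

```python
import json

def call_llm(prompt: str) -> str:
    # Hypothetical stand-in for a real LLM API call; a production system
    # would send `prompt` to a model and return its text completion.
    # Stubbed with a fixed response for illustration only.
    return '{"scope_1_emissions_tco2e": 12400, "renewable_energy_pct": 38}'

def extract_esg_metrics(report_excerpt: str) -> dict:
    """Ask the model for ESG figures as JSON, then validate the parse."""
    prompt = (
        "Extract Scope 1 emissions (tCO2e) and renewable energy share (%) "
        "from this report excerpt. Respond with JSON only.\n\n" + report_excerpt
    )
    raw = call_llm(prompt)
    return json.loads(raw)  # fail loudly on malformed model output

print(extract_esg_metrics("example sustainability report text"))
```

Forcing a JSON-only response and parsing it strictly is one simple guardrail of the kind the passage calls for: malformed model output raises an error instead of silently entering the reporting pipeline.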

The White House Office of Management and Budget (OMB) has issued a policy on advancing governance, innovation, and risk management for agency use of artificial intelligence (AI). The policy directs federal agencies to promote AI innovation and governance while managing the risks associated with the technology, especially those affecting the safety and rights of the American public. It implements President Biden's executive order on AI, striking a balance between responsibility and innovation while holding federal agencies accountable for how they develop AI and manage its risks. The policy outlines several AI governance, innovation, and risk management actions for federal agencies, including managing risks related to the procurement of AI, addressing risks from the use of AI, expanding transparency of AI use, advancing responsible AI innovation, and growing the AI workforce. Alongside the policy, measures such as expanded reporting in the 2024 Federal AI Use Case Inventory and a Request for Information (RFI) on the responsible procurement of AI in government have been introduced to promote responsible AI use.

March 2024

The upcoming EU AI Act aims to protect fundamental rights, democracy, the rule of law, and environmental sustainability from high-risk AI while establishing Europe as a leader in AI. The Act includes provisions on the environmental impact and energy consumption of AI systems, such as improving the resource performance of AI systems, reporting and documenting energy consumption, and encouraging compliance with environmental sustainability rules. It also establishes regulatory sandboxes to promote innovation under specific conditions, including a high level of protection for the environment and energy sustainability. The EU AI Office and Member States will work together to draw up codes of conduct for the voluntary application of specific requirements, including minimizing the impact of AI systems on environmental sustainability. The Act further requires regular evaluation and review of its environmental provisions, including standardization deliverables and voluntary codes of conduct. Providers of general-purpose AI models must provide detailed information on the computational resources used for training and on energy consumption. Anticipating the fast pace of AI development, the Act allows exemptions from conformity assessments in specific situations, provided environmental protection is ensured and society benefits overall. Compliance requires a proactive, iterative approach.
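One way a general-purpose AI provider might structure the compute and energy disclosure described above (a hypothetical record format for illustration; the Act requires the information but does not prescribe this schema, and all field names and figures here are invented):

```python
from dataclasses import dataclass, asdict

@dataclass
class TrainingComputeReport:
    """Hypothetical record for the training-compute and energy-consumption
    disclosures required of general-purpose AI model providers."""
    model_name: str
    training_flops: float        # total floating-point operations for training
    gpu_hours: float             # accelerator-hours consumed
    energy_consumption_kwh: float
    measurement_method: str      # e.g. metered vs. estimated

# Illustrative figures only, not drawn from any real model.
report = TrainingComputeReport(
    model_name="example-gpai-model",
    training_flops=3.2e23,
    gpu_hours=250_000,
    energy_consumption_kwh=180_000,
    measurement_method="estimated from accelerator power draw and utilization logs",
)
print(asdict(report))
```

Recording the measurement method alongside the figures matters for the "regular evaluation and review" the Act envisages: estimated and metered energy numbers are not directly comparable.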