October 2023

Does California have an AI Act?

California is among the states leading the push to regulate AI. While California does not yet have a dedicated Artificial Intelligence Act, multiple initiatives have been proposed. Assembly Bill 302 requires the Department of Technology to establish an inventory of all high-risk automated decision systems used by state agencies and to submit an annual report to the designated committees. Senate Bill 313 seeks to regulate the use of AI by state agencies and to establish an Office of Artificial Intelligence within the Department of Technology. Assembly Bill 331 aims to prohibit algorithmic discrimination by automated decision tools and would require developers and deployers to conduct impact assessments and establish governance programs to manage risks. Senate Concurrent Resolution 17 affirms the California Legislature's commitment to the White House's Blueprint for an AI Bill of Rights. Whether California will ultimately adopt a dedicated AI Act remains unclear, but policymakers are recognizing the need for AI legislation and are targeting multiple critical applications.

September 2023

How is the FTC Regulating AI?

The Federal Trade Commission (FTC), which is responsible for protecting consumers and enforcing civil antitrust law in the US, is becoming increasingly vocal about the regulation of AI and the risks it poses to consumers and competition. The FTC has published warnings about biased AI, issued guidance on AI and algorithms, and highlighted the applicability of existing laws to AI, including Section 5 of the FTC Act, the Fair Credit Reporting Act, and the Equal Credit Opportunity Act. The FTC has emphasized the need for AI transparency, accountability, and independent standards, as well as the importance of ensuring that training data for algorithms is representative and high-quality. The FTC has also highlighted the role of regulators in preventing a small number of firms from dominating the market and hindering competition. The agency is investigating OpenAI over consumer protection concerns related to its chatbot, ChatGPT, which has made false claims about individuals and may have leaked data. Ongoing compliance and risk management are essential to protect against preventable harms and to ensure that AI is an asset, not a liability.

ISO and IEC Make Foundational Standard on Artificial Intelligence Publicly Available

The International Organization for Standardization (ISO) and the International Electrotechnical Commission (IEC) have made a new AI standard, ISO/IEC 22989, publicly available. The standard aims to provide a shared vocabulary and framework for essential AI concepts, promoting dialogue between stakeholders and paving the way for technical standards focused on responsible AI development and deployment. It defines more than 110 key concepts in the field of AI and underlines the importance of multi-stakeholder consultation and the need for standardisation, given the lack of global alignment and consensus on crucial issues. Holistic AI can assist organisations in operationalising technical standards at scale to ensure AI systems are developed and deployed safely, effectively, and in line with compliance obligations.

EU AI Act: Leading Academics Call for Strengthened Fundamental Rights Impact Assessment and Audit Mechanisms

Over 110 academics in AI, data governance, and civil rights have signed an appeal calling for all AI systems under the EU AI Act to undergo a Fundamental Rights Impact Assessment (FRIA). The appeal, from the Brussels Privacy Hub, aims to mitigate the risks associated with AI deployment through comprehensive ex-ante safeguards. Following the European Parliament's adoption of its negotiating position on 14 June 2023, the final version of the EU AI Act is expected to be agreed by the end of the year, with penalties of up to €40 million or 7% of global turnover for non-compliance. The appeal also calls for the development of criteria for evaluating the impact of AI on fundamental rights, transparency regarding FRIA results, and the participation of end-users and independent public authorities in the impact assessment process, among other provisions.

How is Switzerland planning to regulate AI?

Switzerland is actively monitoring AI regulatory developments in the EU and has issued its own guidelines for AI adoption in the federal administration, but it has yet to propose binding regulations on AI use. The Federal Council has called for increased efforts to converge with global initiatives and to close the gap between Switzerland's approach and other international approaches to AI regulation. Switzerland is also investing in AI research and development to position itself as a leader in the field. Holistic AI can help organisations navigate global AI regulations. This article is for informational purposes only and does not provide legal advice.