November 2023
09 Nov 2023
More than a third of companies use artificial intelligence (AI) in their business practices, with an additional 42% exploring how the technology can be utilised, but there are risks involved if appropriate safeguards are not implemented, according to a blog post by Holistic AI. The potential for AI to breach existing laws has attracted the attention of regulators worldwide, with the EU AI Act aiming to become the global standard for AI regulation. Existing laws, including non-discrimination and data protection laws, are already being applied to AI cases, resulting in significant penalties.
The G7 nations have unveiled International Guiding Principles on Artificial Intelligence (AI) and a voluntary Code of Conduct for AI Developers, with 11 actionable guidelines for organisations involved in developing advanced foundation models. The guidelines include taking appropriate measures to identify and mitigate risks across the AI lifecycle, publicly reporting AI systems’ capabilities and limitations, and prioritising research to mitigate societal, safety, and security risks. The development is particularly relevant given the urgency among policymakers worldwide to chart regulatory pathways to govern AI responsibly, highlighted by several initiatives, including the Biden-Harris Administration’s Executive Order on AI and the establishment of the United Nations’ High-Level Advisory Body on AI.
The Consumer Financial Protection Bureau (CFPB) is signaling its intention to regulate AI in the financial sector. It released a joint statement on AI and automated systems with other federal agencies in April 2023, proposed a new rule for AI in home appraisals in June 2023, and issued a spotlight on the use of AI-driven chatbots in banking in the same month. On 19 September 2023, the CFPB published Circular 2023-03, clarifying that adverse action notices provided by creditors must be specific and accurate. The financial services sector must ensure steps are taken to manage the risks of AI.
October 2023
On 30 October 2023, the Biden Administration signed an Executive Order on artificial intelligence aimed at establishing safety and security standards to protect Americans' privacy and civil rights. The order sets strict standards for AI testing, requires agencies to establish standards for biological synthesis screening and cybersecurity programs, and directs action on the safe use of AI by the military and intelligence community. It also aims to protect the equal opportunity and non-discrimination rights of U.S. citizens, provide resources to shape the transformative potential of AI in education and healthcare, and address job displacement caused by AI. To promote innovation and competition, the order provides support for small developers and entrepreneurs and expands visas for skilled immigrants, and it directs actions for international cooperation towards safe, secure, and trustworthy AI. Notably, it requires AI companies to disclose their safety testing procedures and results to the U.S. Federal government. The Executive Order follows other recent efforts towards responsible AI by the Biden Administration.
Legislation and regulation are increasingly recognized as important to promote safety, fairness, and ethics in the use of AI tools. While the US has made progress on vertical legislation targeting specific use cases, Europe has made strides in horizontal legislation covering multiple use cases at once, with the EU AI Act seeking to become the global gold standard for AI regulation. The Algorithmic Accountability Act, introduced for the third time in the US, targets automated decision systems used in augmented critical decision processes and applies to entities over which the Federal Trade Commission has jurisdiction. Covered entities would be required to conduct ongoing algorithmic impact assessments and submit annual summary reports to the FTC. The US appears determined to impose more conditions on the use of algorithms and AI, leaving enterprises to navigate an influx of new rules.