November 2023
Lawmakers around the world are proposing laws to codify responsible artificial intelligence (AI) practices. Brazil has proposed three AI laws, with the latest bill, 2338/2023, taking a risk-based approach to AI regulation and placing human rights at its center. The bill requires that entities ensure transparency and mitigate biases, particularly in high-risk AI systems. Obligations under the bill depend on the level of risk posed by a system, with penalties for violations ranging from fines to suspension of the development or supply of the AI system. Companies developing and deploying AI will soon have a wave of legal requirements to navigate, and compliance is vital to promote safe and ethical AI.
US President Joe Biden signed Executive Order 14110 on the Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence on 30 October 2023, in a bid to promote responsible AI use and encourage innovation while avoiding bias, discrimination, and harm. The order defines AI as "a machine-based system that can, for a given set of human-defined objectives, make predictions, recommendations, or decisions influencing real or virtual environments" and requires the National Institute of Standards and Technology to establish guidelines for trustworthy AI systems. The order also covers dual-use foundation models, Infrastructure as a Service products, synthetic content, equity and civil rights, and healthcare.
09 Nov 2023
More than a third of companies use artificial intelligence (AI) in their business practices, with an additional 42% exploring how the technology can be utilised, but there are risks involved if appropriate safeguards are not implemented, according to a blog post by Holistic AI. The potential for AI to breach existing laws has attracted the attention of regulators worldwide, with the EU AI Act aiming to become the global standard for AI regulation. Existing laws, such as non-discrimination and data protection laws, are already being applied to AI cases, resulting in significant penalties.
The G7 nations have unveiled International Guiding Principles on Artificial Intelligence (AI) and a voluntary Code of Conduct for AI Developers, with 11 actionable guidelines for organisations involved in developing advanced foundation models. The guidelines include taking appropriate measures to identify and mitigate risks across the AI lifecycle, publicly reporting AI systems’ capabilities and limitations, and prioritising research to mitigate societal, safety, and security risks. The development is particularly relevant given the urgency among policymakers worldwide to chart regulatory pathways to govern AI responsibly, highlighted by several initiatives, including the Biden-Harris Administration’s Executive Order on AI and the establishment of the United Nations’ High-Level Advisory Body on AI.
The Consumer Financial Protection Bureau (CFPB) is signaling its intention to regulate AI in the financial sector. It released a joint statement on AI and automated systems with other federal agencies in April 2023, proposed a new rule on AI in home appraisals in June 2023, and issued a spotlight on the use of AI-driven chatbots in banking in the same month. On 19 September 2023, the CFPB published Circular 2023-03, clarifying that adverse action notices provided by creditors must be specific and accurate. The financial services sector must therefore take steps to manage the risks of AI.