February 2023

The Need for Risk Management in AI Systems

Organizations are investing in AI tools and strategies to streamline their processes and gain a competitive edge, but this growing reliance on AI comes with heightened risk that must be managed. The National Institute of Standards and Technology (NIST) defines AI risks as the potential harms resulting from developing and deploying AI systems. Effective AI governance, risk, and compliance processes enable organizations to identify and manage these risks. AI risk management involves identifying, assessing, and mitigating both the technical and non-technical risks associated with using AI technologies. Organizations must understand the potential risks and benefits of AI and develop strategies and policies to mitigate those risks.

AI regulation is coming, and transparency regarding AI algorithms is a crucial first step. Organizations that implement a risk management framework can move away from a costly, reactive, ad hoc approach to compliance, increase trust, and scale with confidence. While AI adoption is soaring, risk management is lagging; companies should implement responsible AI programs sooner rather than later to avoid reputational damage and facilitate legal compliance.

Overview of Large Language Models: From Transformer Architecture to Prompt Engineering

AI-based conversational agents such as ChatGPT and Bard are becoming increasingly popular and are entering our daily lives through browsers and communication platforms. The key to staying ahead is awareness of new technology trends, particularly the Transformer architecture of deep learning models, which has redefined the way we process natural language text. Language modeling approaches such as Masked Language Modeling (MLM), used by BERT, and Causal Language Modeling (CLM), used by GPT, represent a significant leap forward in NLP technology, but each has its limitations. To make language models more scalable for commercial solutions, researchers and engineers have developed new approaches such as InstructGPT and LaMDA, which apply fine-tuning and reinforcement learning from human feedback to meet users' requests more accurately.
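The difference between the two pre-training objectives mentioned above can be sketched in a few lines of Python. This is a toy illustration with a hypothetical whitespace tokenizer, not any model's actual training code: MLM hides a token and lets the model see context on both sides, while CLM only ever sees the tokens to the left of the one it predicts.

```python
def mlm_example(tokens, mask_index, mask_token="[MASK]"):
    """Masked Language Modeling (BERT-style): hide one token;
    the model sees context on BOTH sides of the gap."""
    inputs = list(tokens)
    target = inputs[mask_index]
    inputs[mask_index] = mask_token
    return inputs, target

def clm_examples(tokens):
    """Causal Language Modeling (GPT-style): at each position the
    model sees only the tokens to the LEFT and predicts the next one."""
    return [(tokens[:i], tokens[i]) for i in range(1, len(tokens))]

sentence = "the model predicts the next word".split()

masked_input, masked_target = mlm_example(sentence, mask_index=2)
# masked_input  -> ['the', 'model', '[MASK]', 'the', 'next', 'word']
# masked_target -> 'predicts'

for context, target in clm_examples(sentence):
    print(context, "->", target)
```

In a real model, each (context, target) pair becomes a training signal; the left-only constraint in CLM is what makes GPT-style models natural text generators, while MLM's two-sided context suits BERT-style understanding tasks.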

AI Regulation Around the World: The Netherlands

The Dutch government is increasing oversight of AI systems following a scandal involving a biased algorithm used by its tax office. The government is committed to a new statutory regime that ensures AI systems are checked for transparency and discrimination, and the data protection regulator will receive extra funding for algorithm oversight. The Dutch government wants more transparency about AI systems deployed in the public sector and is proposing a legal requirement to use an assessment framework, a register of high-risk AI systems, and specific measures for human oversight and non-discrimination. The proposals currently apply only to the public sector, but they will likely impact businesses supplying AI systems to public bodies and create greater public awareness of the use of AI systems.

The Rise of Large Language Models: Galactica, ChatGPT, and Bard

Large language models (LLMs) such as Galactica, ChatGPT, and Bard are gaining popularity and being integrated into various aspects of daily life. While they have many benefits, society must understand their limitations, biases, and regulatory issues. The use of LLMs raises concerns about responsibility for generated responses and about measuring and mitigating bias and discrimination in the models. Regulation could include data representativeness audits and human evaluations to ensure fair and accurate results. It is essential to comprehend the workings of LLMs and put governance mechanisms in place to address the associated risks.

Transparency in HR Business Practices: A Legislative Overview

Human resources teams in business organizations have been using artificial intelligence (AI) technology to innovate their practices, particularly in talent sourcing and management. However, such practices are now being targeted by regulations promoting transparency, such as pay transparency laws. In 2021, Colorado became the first state to enact a pay transparency law requiring employers to disclose salary or pay ranges for open positions; other states, including California, Washington, and Rhode Island, have followed suit. New York City has also mandated bias audits for automated employment decision tools used to evaluate employees for promotion or candidates for employment, and the legislation requires employers subject to the audit to make a summary of its results publicly available on their website. Companies are increasingly expected to inform consumers about their use of AI and when they are interacting with it. Holistic AI, a responsible AI pioneer, can help enterprises adopt and scale AI confidently by identifying and mitigating risks and by using proposed regulations to inform its products.