December 2022

21 Dec 2022
The article discusses the current regulatory environment surrounding artificial intelligence (AI) and the need for AI auditing to ensure the safety, legality, and ethics of AI systems. AI auditing proceeds in four stages: triage, assessment, mitigation, and assurance. The assessment stage evaluates the system's efficacy, robustness and safety, bias, explainability, and algorithmic privacy. The audit outcomes inform the residual risk of the system, and mitigation actions are suggested to address the identified risks. Conducting an audit of an AI system brings benefits such as improved stakeholder confidence and trust, and future-proofing systems against regulatory change.
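As a loose illustration of how the four stages might be tracked, here is a minimal sketch in code. The class shape, the 0-to-1 risk scale, and the method names are assumptions made for illustration only; they are not taken from any specific audit framework discussed above.

```python
# Illustrative sketch only: the class, scoring scale, and method names are
# hypothetical, not drawn from any standard or cited audit framework.
from dataclasses import dataclass, field


@dataclass
class AuditRecord:
    """Tracks one system through triage, assessment, mitigation, assurance."""
    system_name: str
    # Assessment-stage risk scores per vertical, 0 (none) to 1 (severe).
    risks: dict = field(default_factory=dict)

    def assess(self, vertical: str, score: float) -> None:
        """Assessment: record a risk score for one vertical."""
        self.risks[vertical] = score

    def mitigate(self, vertical: str, reduction: float) -> None:
        """Mitigation: reduce a stored risk, never below zero."""
        self.risks[vertical] = max(0.0, self.risks.get(vertical, 0.0) - reduction)

    def residual_risk(self) -> float:
        """Assurance: report the worst risk still remaining."""
        return max(self.risks.values(), default=0.0)


audit = AuditRecord("loan-scoring-model")
audit.assess("bias", 0.7)
audit.assess("robustness", 0.4)
audit.mitigate("bias", 0.5)
print(audit.residual_risk())  # 0.4: robustness is now the dominant risk
```

The point of the sketch is the flow the article describes: assessment produces per-vertical risk scores, mitigation reduces them, and assurance reports what risk remains.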

The Artificial Intelligence Training for the Acquisition Workforce Act (AI Training Act) has been signed into law by President Biden, with the aim of educating federal agency personnel on the procurement and adoption of AI. The Act requires the Office of Management and Budget (OMB) to create or provide an AI training program to aid informed acquisition of AI by federal executive agencies, covering topics such as the science of AI, its benefits and risks, and future trends. The AI Training Act is part of a wider national commitment to trustworthy AI, including Executive Order 13960 and the Blueprint for an AI Bill of Rights.

EU ministers have greenlit a general approach to the EU AI Act, which aims to balance the protection of fundamental rights with the promotion of AI innovation by defining AI, expanding the scope of the act, clarifying governance, extending the prohibition of social scoring to private actors, designating high-risk systems, and clarifying how high-risk systems can feasibly comply. The final text includes several changes to increase transparency and simplify the required conformity assessments. The European Council will now negotiate with the European Parliament, with an agreement expected by early 2024. Businesses are advised to start managing the risks of their AI systems now so they can embrace AI with greater confidence.

Ethical AI refers to the safe and responsible use of artificial intelligence (AI). It involves three main approaches: principles, processes, and ethical consciousness. Ethical AI operationalizes AI ethics with a focus on four key verticals: safety, privacy, fairness, and transparency. Algorithm auditing is a key practice for determining how well a system performs on each of these verticals. While AI has many applications, such as conversational AI, ethics must be prioritized to prevent poorly designed systems from reaching deployment. The EU High-Level Expert Group on AI and the IEEE have formulated moral values that should be adhered to in the design and deployment of artificial intelligence; however, regulatory oversight and AI auditing are needed to carry AI ethics from theory to practice.
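To make the vertical-by-vertical auditing idea concrete, a hypothetical sketch follows. The thresholds and the example metrics in the comments are invented for illustration; they are not taken from the EU HLEG or IEEE guidance mentioned above.

```python
# Hypothetical illustration: thresholds and metric interpretations are
# invented for this sketch, not taken from any real auditing guidance.
VERTICAL_THRESHOLDS = {
    "safety": 0.9,        # e.g. fraction of stress tests passed
    "privacy": 0.9,       # e.g. resistance to membership-inference attacks
    "fairness": 0.8,      # e.g. 1 minus the demographic parity gap
    "transparency": 0.7,  # e.g. share of decisions with usable explanations
}


def audit_verticals(scores: dict) -> dict:
    """Flag each vertical as passing or failing its (illustrative) threshold."""
    return {v: scores.get(v, 0.0) >= t for v, t in VERTICAL_THRESHOLDS.items()}


result = audit_verticals(
    {"safety": 0.95, "privacy": 0.92, "fairness": 0.74, "transparency": 0.81}
)
print(result)  # fairness fails its threshold; the other three pass
```

In practice each score would come from a battery of tests rather than a single number, but the shape is the same: the audit measures every vertical and reports where the system falls short.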
November 2022

The District of Columbia has introduced the Stop Discrimination by Algorithms Act to prohibit the use of algorithms that make decisions based on protected characteristics such as race, sex, gender, disability, religion, and age. The legislation would require annual audits and transparency from covered organizations, with failure to comply punishable by fines of $10,000 per violation. The Act takes a three-pronged approach to mitigating algorithmic bias and discrimination, with the penalties applying to businesses possessing or controlling information on over 25,000 Washington, DC residents, data brokers processing personal information, and service providers. While the Act has received support from policymakers and academics, industry groups have criticized it as a compliance burden that could result in decreased access to credit and higher-cost loans. The Act sets a national precedent that other states may follow.