December 2022
President Biden has signed the Artificial Intelligence Training for the Acquisition Workforce Act (AI Training Act) into law, with the aim of educating federal agency personnel on the procurement and adoption of AI. The Act requires the Office of Management and Budget (OMB) to create or provide an AI training program to support informed acquisition of AI by federal executive agencies, covering topics such as the science of AI, its benefits and risks, and future trends. The AI Training Act is part of a wider national commitment to trustworthy AI, alongside Executive Order 13960 and the Blueprint for an AI Bill of Rights.
November 2022
The District of Columbia has introduced the Stop Discrimination by Algorithms Act, which would prohibit the use of algorithms that make decisions based on protected characteristics such as race, sex, gender, disability, religion, and age. The legislation would require annual audits and transparency reporting from covered organizations, with violations punishable by fines of $10,000 each. This approach aims to mitigate algorithmic bias and discrimination, with the requirements applying to three groups: businesses that possess or control personal information on more than 25,000 DC residents, data brokers processing personal information, and service providers. While the Act has received support from policymakers and academics, industry groups have criticized it as a compliance burden that could reduce access to credit and lead to higher-cost loans. If passed, the Act could set a national precedent that other states may follow.
Canada has proposed the Digital Charter Implementation Act, which would establish three acts to promote trust and privacy: the Consumer Privacy Protection Act, the Personal Information and Data Protection Tribunal Act, and the Artificial Intelligence and Data Act (AIDA). AIDA would set governance and transparency requirements for businesses that use or develop AI in Canada, including requirements relating to bias, transparency, risk mitigation, and record-keeping. The act would also establish the role of an AI and Data Commissioner to monitor compliance and order third-party audits as required. It is uncertain whether AIDA will pass, but if it does, businesses that develop AI systems which collect or use personal information would have to meet federal transparency and privacy protection expectations.
October 2022
AI risk management is becoming a global priority due to high-profile instances of harm resulting from the use of artificial intelligence, and jurisdictions at the federal, state, and city levels in the US have proposed legislation and frameworks to regulate AI. Illinois and the New York City Council have enacted laws requiring notification and bias audits when AI is used to make employment decisions, while legislation enacted in Colorado prevents insurance providers from using biased algorithms or data to make decisions. At the federal level, the White House Office of Science and Technology Policy has released a Blueprint for an AI Bill of Rights to protect US citizens from potential AI harm. The Blueprint rests on five key pillars: safe and effective systems; algorithmic discrimination protections; data privacy; notice and explanation; and human alternatives, consideration, and fallback. A companion handbook, From Principles to Practice, was also published to help implement the framework. With AI risk management more important than ever, Holistic AI can help businesses take command and control of their AI systems.