March 2023
Artificial Intelligence (AI) is projected to add $15.7 trillion to global GDP by 2030, but with great power comes responsibility. Responsible AI is an emerging area of AI governance that addresses the ethical, moral, and legal dimensions of developing and deploying beneficial AI. The growing interest in AI has been accompanied by concerns over unintended consequences and risks, such as biased outcomes and poor decision-making. Governments worldwide are tightening regulations targeting AI, and businesses will need to comply with these rules and take a more responsible approach to remain competitive and avoid liability. Building responsibility into AI helps ensure that a system is effective, operates according to ethical standards, and avoids reputational and financial damage down the road.
The use of AI in high-stakes applications has raised concerns about its associated risks. AI algorithms can introduce novel sources of harm and can amplify and perpetuate existing problems such as bias. Several controversies over the misuse of AI have already touched different sectors, including the Northpointe COMPAS tool, which produced flawed predictions of reoffending risk for Black defendants in the legal system, and Amazon's scrapped resume-screening tool, which was biased against female applicants. These cases highlight the importance of risk management frameworks and explainable algorithms. Upcoming laws will require companies to minimize the risks of their AI systems and use them safely.
OpenAI has launched GPT-4, the latest iteration of its conversational AI, which can process both text and image-based prompts, although its outputs remain text-only for now. Despite the ethical safeguards OpenAI has implemented, the model has come under fire for biases and factual inconsistencies. Legal questions also arise over who owns the content generated by AI models and who is responsible for their outputs. Given restrictions on sharing personal data, businesses must take extra precautions when integrating similar models into their products, and users should keep the limitations and potential dangers of these tools in mind rather than relying entirely on their outputs.
The US Department of State has released a "Political Declaration on Responsible Military Use of Artificial Intelligence and Autonomy", outlining 12 best practices for responsible AI in military applications. The declaration emphasizes the importance of using AI in accordance with international law and developing auditable methodologies to avoid unintended consequences and bias. It has been signed by 60 countries, including the US and China. The US Department of Defense adopted its own Ethical Principles for Artificial Intelligence in 2020, aiming to facilitate the lawful use of AI systems in both combat and non-combat functions. The Department's approach to Responsible Artificial Intelligence largely focuses on supplementing existing laws, regulations, and norms to address the novel issues AI raises, with an emphasis on reliability, risk management, and ethics.
Connecticut lawmakers have proposed a bill that would establish an Office of Artificial Intelligence and a government task force to develop an AI Bill of Rights. The bill would require government oversight, mandate inventory and testing of state-used algorithms, close existing data privacy loopholes, and enumerate citizen protections through the AI Bill of Rights. It seeks to regulate the use of AI by state agencies and outlines protocols and processes for developing, procuring, and implementing automated decision systems, with the Office of Artificial Intelligence conducting periodic re-evaluations of those systems to ensure compliance. The bill underscores the need for governments to keep pace with AI regulation to safeguard individuals from potential harm.