March 2023

The Dangers of ChatGPT: It’s All Fun and Games, Until It’s Not

OpenAI has launched GPT-4, the latest iteration of its conversational AI, which can process both text and image-based prompts, though its outputs remain text-only for now. Despite built-in ethical safeguards, the model has come under fire for biases and factual inconsistencies. Legal questions also remain over who owns AI-generated content and who is responsible for a model's outputs. Because of restrictions on sharing personal data, businesses must take extra precautions when integrating such models into their products. Users should keep the limitations and potential dangers of these tools in mind and not rely entirely on their outputs.

The US Pushing for Responsible AI in Military Use

The US Department of State has released a "Political Declaration on Responsible Military Use of Artificial Intelligence and Autonomy", outlining 12 best practices for responsible AI in military applications. The declaration emphasizes using AI in accordance with international law and developing auditable methodologies to avoid unintended consequences and bias. It was unveiled at the REAIM summit in The Hague, where more than 60 countries, including the US and China, endorsed a joint call to action on the responsible military use of AI. The US Department of Defense adopted its own Ethical Principles for Artificial Intelligence in 2020, aiming to facilitate the lawful use of AI systems in both combat and non-combat functions. The Department's approach to Responsible Artificial Intelligence largely focuses on supplementing existing laws, regulations, and norms to address novel issues stemming from AI, with an emphasis on reliability, risk management, and ethics.

SB 1103 – Connecticut’s Call for AI Regulation

Connecticut lawmakers have proposed a bill that would establish an Office of Artificial Intelligence and a government task force charged with developing an AI Bill of Rights. The bill would require government oversight, mandate an inventory and testing of algorithms used by the state, close existing data privacy loopholes, and enumerate citizen protections through the AI Bill of Rights. It seeks to regulate the use of AI by state agencies and outlines protocols and processes for developing, procuring, and implementing automated decision systems; the Office of Artificial Intelligence would periodically re-evaluate those systems to ensure compliance. The bill underscores the need for governments to keep pace with AI developments in order to safeguard individuals from potential harm.

SHAP Values: An Intersection Between Game Theory and Artificial Intelligence

Explainable AI has become increasingly important due to the need for transparency and security in AI systems. SHAP (SHapley Additive exPlanations), inspired by the Shapley value from cooperative game theory, is widely used for interpreting machine learning models. The algorithm attributes a model's prediction to its input features by measuring each feature's average marginal contribution across all possible feature coalitions, enabling both local and global explanations. Model-agnostic approximation methods rely on simplifying assumptions, such as feature independence, that make these calculations tractable but may not hold for every model. The article demonstrates the interdisciplinary nature of explainable AI and its potential to identify and correct errors and bias, as well as to ensure that AI models comply with ethical and legal standards.
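To make the game-theory connection concrete, here is a minimal sketch of an exact Shapley-value computation in pure Python. The model, weights, instance, and single baseline vector are all hypothetical toy choices for illustration; the `shap` library approximates the same quantity far more efficiently (e.g., KernelSHAP averages over a background dataset rather than a single baseline):

```python
from itertools import combinations
from math import factorial

def shapley_values(f, x, baseline):
    """Exact Shapley values for model f at instance x, relative to a baseline.

    The coalition value v(S) is the model output when features in S take
    their values from x and all other features fall back to the baseline.
    """
    n = len(x)

    def v(S):
        z = [x[i] if i in S else baseline[i] for i in range(n)]
        return f(z)

    phi = []
    for i in range(n):
        others = [j for j in range(n) if j != i]
        total = 0.0
        for size in range(n):
            # Shapley weight for coalitions of this size
            weight = factorial(size) * factorial(n - size - 1) / factorial(n)
            for S in combinations(others, size):
                # Marginal contribution of feature i to coalition S
                total += weight * (v(set(S) | {i}) - v(set(S)))
        phi.append(total)
    return phi

# Toy hand-written linear model (hypothetical weights, for illustration only)
def model(z):
    return 2.0 * z[0] + 1.0 * z[1] - 3.0 * z[2]

x = [1.0, 4.0, 0.5]          # instance to explain
baseline = [0.0, 2.0, 1.0]   # reference point standing in for E[x]
phi = shapley_values(model, x, baseline)

# Efficiency property: contributions sum exactly to f(x) - f(baseline)
assert abs(sum(phi) - (model(x) - model(baseline))) < 1e-9
```

For a linear model on independent features this recovers the closed form w_i * (x_i - baseline_i); the brute-force enumeration above is exponential in the number of features, which is precisely why practical tools rely on the approximation methods discussed in the article.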

AI Regulation in the Public Sector: Regulating Governments’ Use of AI

Governments and public sector entities are increasingly using artificial intelligence (AI) to automate tasks, from virtual assistants to defense activities. However, AI use carries risks, and steps must be taken to reduce them and promote safe and trustworthy deployment. Policymakers worldwide are proposing regulations to make AI systems safer, targeting both business applications and government use of AI. The US, UK, and EU have taken different approaches to regulating AI in the public sector, with efforts ranging from guidelines to laws. These include the Algorithm Registers in the Netherlands, the UK's guidelines for AI procurement, and the US's AI Training Act. Compliance with these requirements and principles is necessary for governments and businesses when deploying or procuring AI systems.