March 2023
Explainable AI has become increasingly important due to the need for transparency and security in AI systems. SHAP (SHapley Additive exPlanations), a method inspired by cooperative game theory, is widely used for interpreting machine learning models. The SHAP algorithm calculates the contribution of each feature to the model's final prediction, enabling both local and global explanations. The assumptions behind model-agnostic approximation methods simplify the calculation of explanations, but they may not hold for all models. The article demonstrates the interdisciplinary nature of explainable AI and its potential to identify and correct errors and bias, as well as to ensure that AI models comply with ethical and legal standards.
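As a minimal sketch of how per-feature contributions can be computed in practice, the snippet below uses the shap Python package with a scikit-learn tree model; the synthetic data and feature names are illustrative assumptions, not from the article.

```python
# Minimal sketch: local and global SHAP explanations for a tree model.
# Assumes the `shap`, `scikit-learn`, `numpy`, and `pandas` packages.
import numpy as np
import pandas as pd
import shap
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(0)
X = pd.DataFrame({
    "age": rng.integers(18, 70, 500),          # illustrative features
    "income": rng.normal(50_000, 15_000, 500),
    "tenure": rng.integers(0, 30, 500),
})
y = 0.3 * X["age"] + 0.0001 * X["income"] + rng.normal(0, 1, 500)

model = RandomForestRegressor(n_estimators=50, random_state=0).fit(X, y)

# TreeExplainer computes Shapley values efficiently for tree ensembles.
explainer = shap.TreeExplainer(model)
shap_values = explainer.shap_values(X)

# Local explanation: contribution of each feature to one prediction.
print(dict(zip(X.columns, shap_values[0])))

# Global explanation: mean absolute contribution of each feature.
print(dict(zip(X.columns, np.abs(shap_values).mean(axis=0))))
```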
Governments and public sector entities are increasingly using artificial intelligence (AI) to automate tasks, from virtual assistants to defense activities. However, there are risks associated with AI use, and steps must be taken to reduce these risks and promote safe and trustworthy use. Policymakers worldwide are proposing regulations to make AI systems safer, targeting both AI applications by businesses and government use of AI. The US, the UK, and the EU have taken different approaches to regulating AI in the public sector, with efforts ranging from guidelines to binding laws; examples include the Netherlands' Algorithm Register, the UK's guidelines for AI procurement, and the US AI Training Act. Compliance with these requirements and principles is necessary for governments and businesses when deploying or procuring AI systems.
A central challenge with artificial intelligence (AI) models is the trade-off between accuracy and explainability: the more complex, and often more accurate, a model becomes, the harder its results are for humans to understand. Explainable AI, the ability to explain an AI model's decisions in humanly understandable terms, has therefore gained importance. The article discusses the practical and societal dimensions of explainable AI. In the practical dimension, engineers and data scientists need to implement explainable solutions in their models, and end users must be able to comprehend AI-generated outcomes to make informed decisions. In the societal dimension, explanations need to be provided to all relevant stakeholders to ensure that AI systems' decisions are fair and ethical. Finally, explainability is crucial for AI risk management: it enables auditors to understand how AI models arrive at their decisions and to verify that they are ethical, transparent, and free of bias.
February 2023
Organizations are investing in AI tools and strategies to streamline their processes and gain a competitive edge, but this growing reliance on AI comes with heightened risks that must be managed. The National Institute of Standards and Technology (NIST) defines AI risks as potential harms resulting from developing and deploying AI systems. Effective AI governance, risk, and compliance processes enable organizations to identify and manage these risks. AI risk management involves identifying, assessing, and mitigating the risks associated with AI technologies, addressing both technical and non-technical concerns. Organizations must understand the potential risks and benefits of AI and develop strategies and policies to mitigate those risks. AI regulation is coming, and transparency regarding AI algorithms is a crucial first step. Organizations that implement a risk management framework can move away from a costly, reactive, ad hoc approach to regulation, increase trust, and scale with confidence. While AI adoption is soaring, risk management is lagging; companies need to implement responsible AI programs sooner rather than later to avoid reputational damage and facilitate legal compliance.
AI-based conversational agents such as ChatGPT and Bard have become increasingly popular and are entering our daily lives through browsers and communication platforms. The key to staying ahead is awareness of new technology trends, particularly the Transformer architecture of deep learning models, which has redefined the way we process natural language text. Language modeling approaches such as Masked Language Modeling (MLM) from BERT and Causal Language Modeling (CLM) from GPT represented a significant leap forward in NLP technology, but each has its limitations. To make language models more scalable for commercial solutions, researchers and engineers have turned to new approaches such as InstructGPT and LaMDA. These models use fine-tuning and reinforcement learning from human feedback to meet users' requests more accurately.
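To make the difference between the two pre-training objectives concrete, here is a small sketch using the Hugging Face transformers library; the library, the bert-base-uncased and gpt2 checkpoints, and the prompts are assumptions for illustration, not code from the article.

```python
# Illustrative sketch of MLM vs. CLM using Hugging Face `transformers`
# (assumed dependency; model weights are downloaded on first use).
from transformers import pipeline

# Masked Language Modeling (BERT): predict a hidden token from both
# its left and right context.
fill_mask = pipeline("fill-mask", model="bert-base-uncased")
print(fill_mask("The Transformer architecture has redefined how we [MASK] text."))

# Causal Language Modeling (GPT): predict the next token from the left
# context only, which makes it natural for open-ended generation.
generate = pipeline("text-generation", model="gpt2")
print(generate("Conversational agents such as ChatGPT", max_new_tokens=20))
```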