February 2023
06 Feb 2023
The increasing integration of artificial intelligence (AI) into many aspects of our lives demands transparency about the data these systems use to generate outputs, along with decisions that are explainable and whose implications are communicated to relevant stakeholders. AI transparency comprises three levels: explainability of the technical components, governance of the system, and transparency of impact. The goal of AI transparency is to establish an ecosystem of trust around the use of AI, particularly among citizens and users of these systems, and especially in communities at the greatest risk of harm from them. AI transparency and explainability can build trust in AI systems, give individuals more agency over decisions that affect them, and deliver several business benefits.
The National Institute of Standards and Technology (NIST) has launched the first version of its Artificial Intelligence Risk Management Framework (AI RMF 1.0), designed to help organizations 'prevent, detect, mitigate, and manage AI risks'. The AI RMF aims to promote the adoption of trustworthy AI systems: ones that are safe, valid, reliable, fair, privacy-enhancing, transparent and accountable, secure and resilient, and explainable and interpretable. The framework is built around four core functions: map, measure, manage, and govern. NIST recommends applying the AI RMF from the beginning of the AI lifecycle and involving diverse groups of stakeholders. Rather than relying on computational metrics alone, the framework emphasizes the socio-technical context of AI systems' development, deployment, and impact. The end goal is to strengthen public trust in AI and address negative impacts such as societal bias, discrimination, and inequality.
January 2023
Speech recognition technology has many applications, but bias can lead to poor performance for certain groups, such as non-native speakers, older adults, and people with disabilities. Mitigating this bias requires diverse training data and continual evaluation and improvement of the system's performance on underrepresented groups. Diagnosing bias requires annotated data, together with metrics such as Character Error Rate (CER), Word Error Rate (WER), and the Dialect Density Measure (DDM). Several datasets are available for analyzing bias in ASR systems, including the Speech Accent Archive, the ACL Anthology, the Santa Barbara Corpus of Spoken American English, Datatang's British English Speech Dataset, and the Artie Bias Corpus.
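By way of illustration, WER and CER are both normalized edit distances: the minimum number of word-level (or character-level) insertions, deletions, and substitutions needed to turn the system's output into the reference transcript, divided by the reference length. Below is a minimal Python sketch of computing these metrics per speaker group; the sample transcripts and group labels are hypothetical, and DDM is omitted since it requires dialect-annotated speech.

def edit_distance(ref, hyp):
    # Levenshtein distance between two sequences, using a rolling row.
    prev = list(range(len(hyp) + 1))
    for i, r in enumerate(ref, start=1):
        curr = [i]
        for j, h in enumerate(hyp, start=1):
            curr.append(min(prev[j] + 1,              # deletion
                            curr[j - 1] + 1,          # insertion
                            prev[j - 1] + (r != h)))  # substitution
        prev = curr
    return prev[-1]

def wer(reference, hypothesis):
    # Word Error Rate: word-level edits / number of reference words.
    ref_words = reference.split()
    return edit_distance(ref_words, hypothesis.split()) / len(ref_words)

def cer(reference, hypothesis):
    # Character Error Rate: character-level edits / reference length.
    return edit_distance(list(reference), list(hypothesis)) / len(reference)

# Hypothetical per-group evaluation: (speaker group, reference, ASR output).
samples = [
    ("native",     "turn on the kitchen lights", "turn on the kitchen lights"),
    ("non_native", "turn on the kitchen lights", "turn of the kitchen light"),
]
by_group = {}
for group, ref, hyp in samples:
    by_group.setdefault(group, []).append(wer(ref, hyp))
for group, scores in sorted(by_group.items()):
    print(f"{group}: mean WER = {sum(scores) / len(scores):.2f}")

Comparing mean WER (or CER) across annotated groups in this way is what surfaces the performance gaps described above: a system that scores 0.05 for one group and 0.40 for another is biased in practice, whatever its aggregate accuracy.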
The Society for Industrial and Organizational Psychology (SIOP) has released guidelines on the validation and use of AI-based assessments in employee selection. The guidelines rest on five principles: accurate prediction of job performance, consistent scores, fair and unbiased scores, appropriate use, and adequate documentation for decision-making. Complying with these principles requires validating tools, treating groups equitably, identifying and mitigating both predictive and measurement bias, and using informed approaches. The guidelines also recommend increasing transparency and fairness in AI-driven assessments, documenting decision-making processes, and complying with the bias-audit requirements of New York City Local Law 144. The article is informational and not intended to provide legal advice.
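For concreteness, the bias audits required under Local Law 144 center on impact ratios: each group's selection (or scoring) rate divided by the rate of the most-favored group, with ratios below roughly 0.8 echoing the EEOC's four-fifths rule. The following Python sketch shows that calculation on hypothetical assessment outcomes; the group labels and data are illustrative and do not come from the SIOP guidelines.

from collections import defaultdict

def impact_ratios(records):
    # Selection rate per group, divided by the highest group's rate.
    selected = defaultdict(int)
    total = defaultdict(int)
    for group, was_selected in records:
        total[group] += 1
        selected[group] += was_selected
    rates = {g: selected[g] / total[g] for g in total}
    best = max(rates.values())
    return {g: rate / best for g, rate in rates.items()}

# Hypothetical assessment outcomes: (group label, 1 if advanced, 0 if not).
outcomes = [("group_a", 1)] * 40 + [("group_a", 0)] * 60 \
         + [("group_b", 1)] * 25 + [("group_b", 0)] * 75

for group, ratio in sorted(impact_ratios(outcomes).items()):
    flag = "" if ratio >= 0.8 else "  <- below four-fifths threshold"
    print(f"{group}: impact ratio = {ratio:.2f}{flag}")

A flagged ratio is a signal to investigate, not a legal conclusion; the SIOP principles pair such statistics with validation evidence and documented decision-making.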
The National Institute of Standards and Technology (NIST) has launched the first version of its Artificial Intelligence Risk Management Framework (AI RMF) after 18 months of development. The framework is designed to help organizations prevent, detect, mitigate, and manage AI risks and to promote the adoption of trustworthy AI systems. The AI RMF emphasizes flexibility, measurement, and trustworthiness, and asks organizations to cultivate a risk management culture. NIST anticipates that feedback from organizations applying the framework will help it mature into a global gold standard, in line with emerging EU regulation.