February 2023
California State Senator Bill Dodd introduced Senate Bill 313 to regulate the use of AI in California. The Bill would establish the Office of Artificial Intelligence within the Department of Technology to guide the design and deployment of automated systems used by state agencies, ensuring compliance with state and federal regulations and minimizing bias. It also prioritizes fairness, transparency, and accountability to prevent discrimination and protect privacy and civil liberties. The Bill does not yet specify concrete requirements or enforcement mechanisms, though future amendments are likely to address this. Holistic AI offers compliance services to help organizations navigate AI regulations.
The Equal Employment Opportunity Commission (EEOC) has published a draft Strategic Enforcement Plan for 2023-2027 that focuses on the use of algorithms and artificial intelligence (AI) in hiring and how they may lead to employment discrimination. The EEOC recently held a public hearing exploring the implications of AI-driven employment decisions for US employees and job candidates. Key takeaways included concerns about the adequacy of the four-fifths rule as a metric for determining adverse impact, the importance of auditing to mitigate potential biases, and the need to update the scope of Title VII liability to keep pace with technological advancements. Employers and vendors should be aware of and manage the risks associated with using AI for employee recruitment and selection, as enforcement actions are likely to follow.
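For context, the four-fifths rule compares selection rates across groups: if the rate for one group falls below 80% of the rate for the most-selected group, potential adverse impact is flagged. The sketch below, using hypothetical selection counts, shows the calculation that hearing participants argued is too coarse on its own.

```python
# Minimal sketch of the four-fifths (80%) rule for adverse impact,
# using illustrative selection counts (not real data).

def selection_rate(selected: int, applicants: int) -> float:
    """Fraction of applicants who were selected."""
    return selected / applicants

# Hypothetical hiring outcomes for two groups.
rate_group_a = selection_rate(selected=48, applicants=100)  # 0.48
rate_group_b = selection_rate(selected=30, applicants=100)  # 0.30

# Adverse impact ratio: lower selection rate divided by the higher one.
impact_ratio = min(rate_group_a, rate_group_b) / max(rate_group_a, rate_group_b)

print(f"Impact ratio: {impact_ratio:.2f}")  # 0.62
if impact_ratio < 0.8:
    print("Potential adverse impact under the four-fifths rule")
```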
As artificial intelligence (AI) becomes increasingly integrated into our lives, there is a growing need for transparency around the data that systems use to generate outputs, and for the decisions they make to be explainable and their implications communicated to relevant stakeholders. AI transparency comprises three levels: explainability of the technical components, governance of the system, and transparency of impact. The goal of AI transparency is to establish an ecosystem of trust around the use of AI, particularly among citizens and users of systems, and especially in communities most at risk of harm from AI systems. AI transparency and explainability can build trust in AI systems, give individuals more agency over decisions that affect them, and bring several business benefits.
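As an illustration of the first level, explainability of the technical components, one common technique is to surface which inputs most influence a model's outputs. The sketch below uses scikit-learn feature importances on synthetic data with hypothetical feature names; it is one possible approach, not a prescribed method.

```python
# A minimal sketch of technical explainability: reporting which inputs
# a trained model relies on. Synthetic data and hypothetical feature
# names are used purely for illustration.
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier

X, y = make_classification(n_samples=500, n_features=4, random_state=0)
feature_names = ["income", "age", "tenure", "region"]  # hypothetical

model = RandomForestClassifier(random_state=0).fit(X, y)

# Global feature importances: a simple, communicable summary of what
# the system relies on when generating outputs.
for name, importance in zip(feature_names, model.feature_importances_):
    print(f"{name}: {importance:.2f}")
```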
The National Institute of Standards and Technology (NIST) has launched the first version of its Artificial Intelligence Risk Management Framework (AI RMF 1.0), designed to help organizations ‘prevent, detect, mitigate, and manage AI risks’. The AI RMF promotes the adoption of trustworthy AI systems that are safe, valid and reliable, fair, privacy-enhancing, transparent and accountable, secure and resilient, and explainable and interpretable. The framework is built around four key functions: map, measure, manage, and govern. NIST recommends that the AI RMF be applied from the beginning of the AI lifecycle and involve diverse groups of stakeholders. The focus is on moving beyond purely computational metrics towards the socio-technical context of the development, deployment, and impact of AI systems. The end goal is to build public trust in AI and address negative impacts such as societal biases, discrimination, and inequality.
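By way of illustration only, the four functions can be thought of as organizing concrete activities across the AI lifecycle. The sketch below records hypothetical activities under each function; it is not a NIST-prescribed structure.

```python
# An illustrative (not NIST-prescribed) way to track AI RMF activities,
# keyed by the framework's four core functions. All entries are
# hypothetical examples.
ai_rmf_register = {
    "map":     ["Identify system context and stakeholders",
                "Catalog intended and foreseeable uses"],
    "measure": ["Evaluate bias and robustness metrics",
                "Track performance across demographic groups"],
    "manage":  ["Prioritize and mitigate identified risks",
                "Plan incident response"],
    "govern":  ["Assign accountability for AI risk",
                "Document organizational risk tolerance"],
}

for function, activities in ai_rmf_register.items():
    print(f"{function.upper()}:")
    for activity in activities:
        print(f"  - {activity}")
```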
January 2023
Automatic speech recognition (ASR) technology has many applications, but bias can lead to poor performance for certain groups, such as non-native speakers, older adults, and people with disabilities. To mitigate bias, it is essential to use diverse training data and to continually evaluate and improve the system's performance on underrepresented groups. Diagnosing bias requires annotated data, and metrics such as Character Error Rate (CER), Word Error Rate (WER), and the Dialect Density Measure (DDM) can be used, as sketched below. Several datasets are available for analyzing bias in ASR systems, including the Speech Accent Archive, ACL Anthology, the Santa Barbara Corpus of Spoken American English, Datatang's British English Speech Dataset, and the Artie Bias Corpus.
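To make the metrics concrete: WER is the word-level edit distance between a reference transcript and the system's output, normalized by the reference length, and CER is the same computation over characters. A minimal sketch with toy transcripts:

```python
# Minimal Word Error Rate (WER) sketch: edit distance between a
# reference transcript and an ASR hypothesis, divided by the number
# of reference words. Transcripts here are toy examples.

def word_error_rate(reference: str, hypothesis: str) -> float:
    ref, hyp = reference.split(), hypothesis.split()
    # Levenshtein distance over words via dynamic programming.
    d = [[0] * (len(hyp) + 1) for _ in range(len(ref) + 1)]
    for i in range(len(ref) + 1):
        d[i][0] = i
    for j in range(len(hyp) + 1):
        d[0][j] = j
    for i in range(1, len(ref) + 1):
        for j in range(1, len(hyp) + 1):
            cost = 0 if ref[i - 1] == hyp[j - 1] else 1
            d[i][j] = min(d[i - 1][j] + 1,         # deletion
                          d[i][j - 1] + 1,         # insertion
                          d[i - 1][j - 1] + cost)  # substitution
    return d[len(ref)][len(hyp)] / len(ref)

# Comparing WER across speaker groups is the kind of per-group
# evaluation the datasets above support.
print(word_error_rate("the cat sat on the mat", "the cat sit on mat"))  # ~0.33
```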