November 2022

Spain's Rider Law: Algorithmic Transparency and Worker Rights

Spain has launched the first regulatory sandbox to test new rules for artificial intelligence (AI) and algorithmic systems under the EU AI Act. The country has also introduced a rider law, Royal Decree-Law 9/2021, to safeguard the employment rights of delivery workers who work through digital platforms. The law includes a presumption of employment for riders whose work is managed by a digital platform's algorithm, giving them additional job security and safety protections. Employers are required to inform works councils of the parameters, rules, and instructions underlying the AI systems the platform uses to manage workers. The Ministry of Labour has also published guidelines for complying with algorithmic transparency obligations, including those under the GDPR where applicable.

Regulating AI: The EU AI Act vs California’s Employment Legislation

AI regulation is expanding rapidly, with governments developing regulations, policies, and strategies to manage the risks and harms of AI. The EU has proposed the AI Act, which takes a risk-based approach and establishes four categories of AI systems according to their potential for harm. California has proposed regulations that extend non-discrimination requirements to automated decision systems and regulate the day-to-day use of automated tools in the workplace. Both frameworks require ongoing monitoring and re-evaluation whenever significant changes are made to a system. The EU AI Act is more expansive and takes a sector-agnostic approach, while California's proposed laws focus narrowly on automated employment decision tools. Holistic AI offers a risk management platform to help enterprises identify risks and recommend steps to mitigate them. This blog article is not intended to provide legal advice or a legal opinion.

Department of Consumer and Worker Protection’s Public Hearing on the NYC Bias Audit Law (Local Law 144): Key Takeaways 

The Department of Consumer and Worker Protection (DCWP) in New York City held a public hearing on the proposed rules for the NYC Bias Audit Law (Local Law 144). Attendees were given the opportunity to testify, and many called for independent third-party audits to ensure impartiality and compliance with the legislation. Concerns were raised about the reliability of impact ratios as a bias metric when sample sizes are small, and more clarity is needed on notice requirements and on the role of vendors of automated employment decision tools (AEDTs). While support for the legislation was overwhelming, attendees noted the need for additional legislation to hold developers and employers accountable and to ensure the safety of AI systems. Holistic AI can help businesses identify risks, establish a risk management framework, and comply with relevant legislation.
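To make the impact ratio concern concrete, the sketch below shows how such a ratio is typically computed: each category's selection rate is divided by the selection rate of the most-selected category. The candidate data and group names are hypothetical, and this illustrates the general metric rather than the DCWP's prescribed methodology.

```python
# Minimal sketch of impact ratio calculation (hypothetical data).
from collections import Counter

# Hypothetical AEDT outcomes: (demographic category, selected?)
outcomes = [
    ("group_a", True), ("group_a", True), ("group_a", False),
    ("group_b", True), ("group_b", False), ("group_b", False),
]

totals, selected = Counter(), Counter()
for group, was_selected in outcomes:
    totals[group] += 1
    selected[group] += was_selected  # True counts as 1

# Selection rate = number selected / number of candidates per category
rates = {group: selected[group] / totals[group] for group in totals}

# Impact ratio = a category's selection rate divided by the rate of the
# most-selected category; values below 0.8 are often flagged under the
# four-fifths rule of thumb.
best = max(rates.values())
for group, rate in sorted(rates.items()):
    print(f"{group}: selection rate={rate:.2f}, impact ratio={rate / best:.2f}")
```

With only a handful of candidates per category, a single additional selection can swing the ratio dramatically, which is exactly the small-sample concern raised at the hearing.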

October 2022

10 Things You Need to Know About the California Workplace Technology Accountability Act

The proposed California Workplace Technology Accountability Act (AB-1651) aims to increase accountability around the use of technology in the workplace and to reduce potential harms. The Act restricts the data that can be collected about workers to that relating to proven business activities, gives workers access to their data, and requires data protection and algorithmic impact assessments. It defines automated decision systems, outlines workers' rights concerning their data, and sets notification requirements for data collection and electronic monitoring. It also specifies impact assessment requirements and consultation processes for workers potentially affected by automated decision tools. The Act applies to employers in California that use technology to make employment-related decisions about workers or to collect data about them, as well as to vendors acting on behalf of those employers.

How to Manage the Risk of AI Bias in Identity Verification

The increasing use of remote identity verification (IDV) technology has created new risks and ethical implications, including barriers to participation in banking and to time-critical products such as credit. Machine learning (ML) models enable IDV by extracting relevant data from an identity document and validating that the document is genuine, then performing facial verification between the photo on the identity document and a selfie taken within the IDV app. However, poor-quality or unrepresentative training datasets can introduce algorithmic bias and inaccuracies into these models, resulting in individuals being treated unfairly. Managing the potential risks of AI bias in IDV requires technical assessment of the AI system's code and data; independent auditing, testing, and review against bias metrics; and policies and processes to govern the use of AI.
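As an illustration of reviewing an IDV system against a bias metric, the sketch below compares false non-match rates (the rate at which genuine same-person pairs are wrongly rejected) across demographic groups. The similarity scores, threshold, and group labels are all hypothetical; a real evaluation would use large, carefully labelled datasets and metrics appropriate to the deployment context.

```python
# Minimal sketch: checking a face verification model for disparate
# false non-match rates (FNMR) across groups (hypothetical data).
THRESHOLD = 0.75  # assumed operating threshold for declaring a match

# Hypothetical similarity scores for genuine (same-person) pairs,
# grouped by demographic category.
genuine_scores = {
    "group_a": [0.91, 0.88, 0.72, 0.95, 0.80],
    "group_b": [0.70, 0.86, 0.68, 0.77, 0.73],
}

for group, scores in genuine_scores.items():
    # A genuine pair scoring below the threshold is a false non-match
    fnm = sum(score < THRESHOLD for score in scores)
    fnmr = fnm / len(scores)
    print(f"{group}: FNMR={fnmr:.2f} ({fnm}/{len(scores)} genuine pairs rejected)")
```

A materially higher false non-match rate for one group means legitimate users from that group fail verification more often, which is precisely the kind of barrier to banking and credit described above.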