July 2023

Assembly Bill A07859: New York’s Steps Towards Transparency in HR Tech

New York is leading the way in regulating HR tech, with Local Law 144 requiring bias audits of automated employment decision tools (AEDTs) and two assembly bills proposed to increase transparency. Assembly Bill A07859 imposes notification requirements similar to those of Local Law 144: it requires employers to notify candidates when an AEDT will be used to evaluate them and to provide information about the tool. If passed, AB A07859 will come into effect on 1 January of the year following its approval. Holistic AI can help companies prepare for new laws and regulations.

10 Things You Need to Know About the NYC Mandatory Bias Audits

The New York City Council has passed legislation mandating bias audits of automated employment decision tools (AEDTs) to address concerns about discriminatory outcomes. The legislation requires impartial evaluations of AEDTs by independent auditors, assessing disparate impact against protected characteristics such as race and gender. Employers using AEDTs must inform candidates of the tool's use, provide a summary of the bias audit, and disclose the characteristics and data used to make judgments. Penalties for noncompliance range from $500 for a first violation to $1,500 for each subsequent violation. The law applies to employers and employment agencies using AEDTs to evaluate candidates or employees who reside in New York City, and it has been enforced since 5 July 2023.
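The disparate-impact assessment at the heart of these audits can be illustrated with the impact-ratio calculation Local Law 144 describes: the selection rate for each demographic category is divided by the selection rate of the most-selected category. The sketch below assumes a simple table of selection counts; the category names and numbers are illustrative, not drawn from any real audit.

```python
# Minimal sketch of the impact-ratio calculation used in AEDT bias audits.
# For each category: selection rate = selected / total applicants, and
# impact ratio = that rate divided by the highest selection rate observed.
# Categories and counts below are hypothetical examples.

def impact_ratios(selected, total):
    """selected, total: dicts mapping category name -> counts."""
    rates = {g: selected[g] / total[g] for g in total}
    top_rate = max(rates.values())  # rate of the most-selected category
    return {g: rates[g] / top_rate for g in rates}

selected = {"group_a": 50, "group_b": 30}
total = {"group_a": 100, "group_b": 100}
print(impact_ratios(selected, total))  # group_a: 1.0, group_b: 0.6
```

An impact ratio well below 1.0 for a category (a common rule of thumb is below 0.8, the "four-fifths rule") is what an auditor would flag as potential disparate impact.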

Recommendation Systems: Ethical Challenges and the Regulatory Landscape

Recommendation systems are algorithms that suggest content or products to users based on their preferences, leveraging vast amounts of user data. They use techniques such as collaborative filtering, content-based filtering, and hybrid approaches to rank and suggest relevant items. Left unchecked, however, these systems may pose privacy risks and introduce algorithmic biases, compromising user autonomy and agency. Legal actions have already been brought against recommendation algorithms, and regulatory efforts in Europe and the United States aim to ensure transparency, risk assessment, and user control in these systems. At Holistic AI, a comprehensive approach to AI governance, risk, and compliance is followed, prioritising AI systems that embed ethical principles.
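The collaborative-filtering idea mentioned above can be sketched in a few lines: find the user whose rating history is most similar to the target user's, then suggest items that neighbour rated highly but the target has not yet seen. The tiny ratings matrix below is made up purely for demonstration.

```python
# Illustrative sketch of user-based collaborative filtering using cosine
# similarity. Rows are users, columns are items; 0 means "not yet rated".
# The data is hypothetical, for demonstration only.
import numpy as np

ratings = np.array([
    [5, 4, 0, 1],
    [4, 5, 1, 0],
    [1, 0, 5, 4],
], dtype=float)

def cosine_sim(a, b):
    return a @ b / (np.linalg.norm(a) * np.linalg.norm(b))

def recommend(user, k=1):
    # Similarity of every other user to the target user.
    sims = [cosine_sim(ratings[user], ratings[u]) if u != user else -1.0
            for u in range(len(ratings))]
    neighbour = int(np.argmax(sims))           # most similar other user
    unseen = np.where(ratings[user] == 0)[0]   # items the target hasn't rated
    # Rank the unseen items by how highly the neighbour rated them.
    return [int(i) for i in sorted(unseen, key=lambda i: -ratings[neighbour][i])[:k]]

print(recommend(0))  # user 0's nearest neighbour is user 1, who rated item 2
```

Content-based filtering would instead compare item feature vectors to a profile of the user's past preferences, and hybrid systems combine both signals.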

The UK’s AI Regulation: From Guidance to Strategies

Despite emerging regulation on artificial intelligence (AI) around the world, the UK government has yet to introduce any AI-specific legislation. Instead, individual departments have published a series of guidance papers and strategies to provide a framework for those using and developing AI within the UK. These include an AI auditing framework, guidance on explaining decisions made with AI, guidance on AI and data protection, a national data strategy, and a national AI strategy. The latest developments include a pro-innovation approach to regulating AI, the publication of an AI action plan, and the launch of the Centre for Data Ethics and Innovation's portfolio of AI assurance techniques. The UK government's proposals aim to cement the UK's role as an AI superpower over the next 10 years by investing in infrastructure and education and by adopting a dynamic, adaptable approach to regulation.

What is Ethical AI?

Ethical AI involves the development and deployment of artificial intelligence systems that emphasise fairness, transparency, accountability, and respect for human values. The aim is to promote safe and responsible AI use, mitigate AI's novel risks, and prevent harm. The main verticals of ethical AI are bias, explainability, robustness, and privacy. Ethical AI matters because AI introduces novel risks that ethical principles can help safeguard against. As regulation comes into force, companies will increasingly be legally required to incorporate ethical AI into their work, and steps can always be taken to make AI systems more ethical. Holistic AI is a thought leader in AI ethics, offering expertise in computer science, AI policy and governance, and algorithm assurance and auditing.