June 2023
Governments worldwide are implementing laws to regulate the use of AI and other automated systems in the HR Tech sector. In the US, new laws are being proposed and enacted at the federal, state, and local levels to address bias and discrimination and to increase transparency in employment decisions. Existing laws also apply to these technologies, creating further requirements for HR Tech companies. The regulatory landscape is evolving rapidly, so companies must stay up to date with the latest laws to avoid legal exposure.
California has proposed legislation to limit workplace monitoring and address the use of automated decision tools (ADTs) to make AI safer and fairer. The latest initiative, AB-331, seeks to curb algorithmic discrimination by prohibiting the use of ADTs that disfavor individuals based on a protected classification. Deployers must perform an annual impact assessment of any ADT used to make consequential decisions and must notify affected individuals that an ADT was used. Non-compliance may result in an administrative fine of up to $10,000 per violation or civil action. Exemptions apply to developers with fewer than 25 employees or whose ADTs impact fewer than 1,000 individuals per year.
The article discusses the varying definitions of artificial intelligence (AI) and its impact on society, with real-world examples of AI applications in industries such as healthcare and finance. It highlights the benefits and risks of AI, including ethical concerns about job displacement, privacy, and AI-generated misinformation. The article also explores the future of AI, including generative AI and the theoretical concept of artificial general intelligence, which could one day help solve the world's most complex problems. It stresses the importance of understanding AI's applications and impact and of engaging in ethical discussions around bias and responsible deployment.
The European Parliament has voted to move forward with the EU AI Act, which seeks to lead the world in AI regulation and create an ‘ecosystem of trust’ that manages AI risk and prioritises human rights. The act will have implications for providers, deployers and distributors of AI systems used in the EU. It takes a risk-based approach to regulating AI, with obligations proportional to the risk a system poses across four risk categories (unacceptable, high, limited, and minimal risk). The act seeks to set the global standard for AI regulation, affecting entities around the world that operate in the EU or interact with the EU market. Businesses must use the preparatory period to build up their readiness: establishing robust governance structures, building internal competencies, and implementing the requisite technologies. Holistic AI can assist organisations in achieving compliance with the EU AI Act through its comprehensive suite of solutions.
California has also proposed amendments to its employment regulations to extend non-discrimination protections to automated-decision systems (ADSs) and address bias and discrimination in hiring. Employers with five or more employees are subject to the regulation, and vendors acting on behalf of an employer are themselves considered employers under it. ADSs are defined as computational processes that screen, evaluate, categorize, recommend, or otherwise make or facilitate employment-related decisions, and using an ADS to screen out applicants based on protected characteristics is restricted. The amendments also prohibit medical or psychological examinations, including those administered via an ADS, before an offer is extended to an applicant. Protected characteristics under the proposed amendments include race, national origin, gender, and age; selection criteria tied to such characteristics are impermissible unless shown to be job-related for the position in question. The most recent updates extend record-retention requirements, and companies are encouraged to audit their automated-decision systems to identify bias and reduce harm and legal risk.