May 2023

HR Tech Regulations: New York City vs California’s Approaches to Regulating Bias and Discrimination

Policymakers in the US are starting to prioritize the regulation of automated employment decision tools and systems, with Illinois's Artificial Intelligence Video Interview Act taking effect in 2020. New York City has passed legislation mandating bias audits of such tools, and California has proposed amendments and new laws to regulate their use. The New York City law requires independent, impartial bias audits of automated tools used in hiring, assessment, and promotion, as well as notification to candidates and employees that such tools are in use. California's approach focuses on making it unlawful to use automated tools that discriminate on the basis of protected characteristics, and proposes restrictions on the electronic monitoring of employees. Both jurisdictions impose strict notification, data collection, and retention requirements on employers and vendors. Employers and vendors using AI employment tools are advised to adopt robust governance and auditing practices to prevent discriminatory use and stay ahead of emerging regulations.
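To make the audit requirement concrete, here is a minimal sketch of the impact-ratio calculation that such bias audits typically center on: each category's selection rate divided by the rate of the most-selected category. The sample data and function names are illustrative assumptions, not drawn from the legislation itself.

```python
# Illustrative impact-ratio calculation of the kind used in bias audits
# of automated employment decision tools. Data and names are hypothetical.

def selection_rates(outcomes: dict[str, tuple[int, int]]) -> dict[str, float]:
    """outcomes maps category -> (candidates selected, candidates screened)."""
    return {cat: sel / total for cat, (sel, total) in outcomes.items()}

def impact_ratios(outcomes: dict[str, tuple[int, int]]) -> dict[str, float]:
    """Each category's selection rate relative to the most-selected category."""
    rates = selection_rates(outcomes)
    best = max(rates.values())
    return {cat: rate / best for cat, rate in rates.items()}

# Hypothetical audit data: category -> (advanced to interview, total screened)
data = {"group_a": (48, 100), "group_b": (30, 100)}
for cat, ratio in impact_ratios(data).items():
    # Ratios well below 1.0 flag potential adverse impact worth investigating.
    print(f"{cat}: impact ratio = {ratio:.2f}")
```

A ratio near 1.0 indicates comparable selection rates across categories; audits generally report these ratios per category rather than a single pass/fail score.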

Unveiling the Power, Challenges, and Impact of Large Language Models

Large Language Models (LLMs) have come a long way: pre-trained models now serve as a foundation for a wide variety of applications and tasks. Their potential is immense, but it is essential to remain aware of the equity, fairness, and ethical issues they present, as well as the limitations they must overcome on the path toward Artificial General Intelligence (AGI). Widespread adoption of LLMs must be balanced against potential risks to society and humanity. Data is becoming a significant constraint on LLM performance, necessitating approaches that balance model size against the number of training tokens rather than simply scaling parameters. Such strategies, along with alternative solutions to data scarcity, offer a promising path to overcoming data constraints and improving the effectiveness and versatility of language models.
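As a rough illustration of the model-size versus training-token trade-off, the sketch below applies the compute-optimal heuristic from the "Chinchilla" work (Hoffmann et al., 2022), which suggests roughly 20 training tokens per parameter. The constant and the FLOPs ≈ 6·N·D approximation are assumptions drawn from that line of research, not from this article.

```python
# Hedged sketch: compute-optimal model/data sizing per the "Chinchilla"
# heuristic (~20 training tokens per parameter). The 6*N*D FLOPs estimate
# and the constant are assumptions from Hoffmann et al. (2022).

def compute_optimal_split(flops_budget: float, tokens_per_param: float = 20.0):
    """Given a training compute budget in FLOPs, estimate the
    compute-optimal parameter count N and token count D, assuming
    FLOPs ~ 6 * N * D and D ~ tokens_per_param * N."""
    # Solve 6 * N * (tokens_per_param * N) = flops_budget for N.
    n_params = (flops_budget / (6.0 * tokens_per_param)) ** 0.5
    n_tokens = tokens_per_param * n_params
    return n_params, n_tokens

if __name__ == "__main__":
    # Example: a 1e23 FLOP budget yields ~29B parameters and ~577B tokens,
    # i.e., far more data per parameter than early LLMs used.
    n, d = compute_optimal_split(1e23)
    print(f"~{n / 1e9:.1f}B parameters, ~{d / 1e9:.0f}B tokens")
```

The practical upshot is that, for a fixed compute budget, training a smaller model on more data can outperform a larger model trained on less, which is why data availability has become the binding constraint.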

April 2023

How to Make Artificial Intelligence Safer

There has been discussion about the need for a pause on generative artificial intelligence, but a pause is impractical because AI models are already embedded in many aspects of daily life. Instead, it is essential to approach AI with a nuanced understanding of its potential benefits and risks, and to prioritize responsible and ethical practices. Fairness, bias mitigation, model transparency, robustness, and privacy are crucial elements that increase trust in AI systems and contribute to a more trustworthy ecosystem. Consumers value companies that adopt responsible AI policies, so prioritizing these practices also enhances brand reputation. Continued research and collaboration among researchers, policymakers, and stakeholders are necessary to create more responsible and transparent AI systems, address potential risks, and ensure that AI is developed and deployed ethically.

The EEOC Releases a Joint Statement on AI and Automated Systems

The Equal Employment Opportunity Commission (EEOC) has joined forces with the Consumer Financial Protection Bureau (CFPB), the Department of Justice's Civil Rights Division (DOJ), and the Federal Trade Commission (FTC) to issue a joint statement on the use of artificial intelligence (AI) and automated systems. The statement emphasizes the need to ensure that the use of AI and automated systems does not violate federal laws related to fairness, equality, and justice. The EEOC has also launched an AI and algorithmic fairness initiative, published guidance on AI-driven assessments, and drafted a strategic enforcement plan for 2023-2027. The statement warns of the risk of discriminatory outcomes from automated systems trained on biased, imbalanced, or erroneous data, or developed without regard to social context.

Massachusetts HD 3051: An Act Preventing a Dystopian Work Environment

The Massachusetts HD 3051 bill regulates four categories of systems: automated decision systems (ADS), worker information systems (WIS), productivity systems, and electronic monitoring. The bill applies to employers who collect worker data, use electronic monitoring, or use ADS tools to make employment-related decisions. Employers are required to provide notice of data collection and electronic monitoring activities and to conduct algorithmic or data protection impact assessments. Workers gain rights over their data, including the right to request information about it, to access it, and to correct inaccuracies. As AI becomes more prevalent in the workplace, the need for transparency grows, and businesses must act early to comply with legal requirements and ensure the responsible use of algorithms.