April 2023
There has been discussion about pausing the development of generative artificial intelligence, but a pause is impractical: AI models are already embedded in many aspects of daily life. Instead, AI should be approached with a nuanced understanding of its potential benefits and risks, prioritizing responsible and ethical practices. Fairness, bias mitigation, model transparency, robustness, and privacy are crucial elements that increase trust in AI systems and contribute to a more trustworthy ecosystem. Consumers value companies that adopt responsible AI policies, so prioritizing these practices also enhances brand reputation. Continued research and collaboration between researchers, policymakers, and other stakeholders are necessary to create more responsible and transparent AI systems, address potential risks, and ensure that AI is developed and deployed ethically.
The Equal Employment Opportunity Commission (EEOC) has joined forces with the Consumer Financial Protection Bureau (CFPB), the Department of Justice's Civil Rights Division (DOJ), and the Federal Trade Commission (FTC) to issue a joint statement on the use of artificial intelligence (AI) and automated systems. The statement emphasizes the need to ensure that the use of AI and automated systems does not violate federal laws related to fairness, equality, and justice. The EEOC has also launched an AI and algorithmic fairness initiative, published guidance on AI-driven assessments, and drafted a strategic enforcement plan for 2023-2027. The statement warns of the risk of discriminatory outcomes when automated systems are trained on biased, imbalanced, or erroneous data, or deployed without considering their social context.
The Massachusetts HD 3051 bill regulates four categories of systems: automated decision systems (ADS), worker information systems (WIS), productivity systems, and electronic monitoring. The bill applies to employers who collect worker data, use electronic monitoring, or use ADS tools to make employment-related decisions. Employers are required to provide notice of data collection and electronic monitoring activities, and to conduct algorithmic or data protection impact assessments. Workers have rights concerning their data, including the right to request information, the right to correct inaccurate information, and the right to access their data. The need for transparency in AI is growing as its use becomes more prevalent in the workplace, and businesses must take early action to comply with legal requirements and ensure responsible use of algorithms.
Algorithms are increasingly used in social media platforms for purposes such as recommendations and amplifying movements, but they can also act as vectors of harm. The misuse of generative AI to create deepfakes, voice clones, and synthetic media can spread misleading content, and algorithmic overdependence can create filter bubbles and echo chambers, disproportionately affecting marginalized communities. Governments are taking measures to mitigate these harms through regulation, such as the EU AI Act, the Digital Services Act, and legislation in the US. Lawsuits against social media platforms over algorithmic harms are also emerging, potentially setting precedents for holding platforms liable. The article emphasizes the need for trustworthy AI systems developed with ethics and harm mitigation in mind.
New York City has enacted Local Law 144, regulating automated employment decision tools (AEDTs) used to evaluate applicants or employees. The law requires annual bias audits to assess a tool's disparate impact on marginalized groups. Employers and employment agencies must also comply with notification requirements and provide notice of the use of the tool. Penalties for non-compliance start at $500 for the first violation. New Jersey has proposed a similar bill, and the New York State Assembly has also introduced legislation requiring annual bias audits. Holistic AI recommends taking steps early to ensure compliance before these laws come into effect.
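To make the bias audit requirement concrete, the sketch below shows one common way disparate impact is quantified: computing each group's selection rate and dividing it by the highest selection rate across groups (the "impact ratio"). The group names and applicant counts are entirely hypothetical, and this is a minimal illustration of the metric, not a template for a compliant audit under any specific law.

```python
# Minimal sketch of an impact-ratio (disparate impact) calculation,
# a metric commonly used in bias audits of hiring tools.
# All group names and counts below are illustrative assumptions.

def selection_rate(selected: int, total: int) -> float:
    """Fraction of applicants in a group who were selected."""
    return selected / total if total else 0.0

def impact_ratios(pools: dict) -> dict:
    """Impact ratio per group: the group's selection rate divided by
    the highest selection rate observed across all groups."""
    rates = {g: selection_rate(s, t) for g, (s, t) in pools.items()}
    best = max(rates.values())
    return {g: (r / best if best else 0.0) for g, r in rates.items()}

# Hypothetical applicant pools: {group: (selected, total)}
pools = {
    "group_a": (48, 100),
    "group_b": (30, 100),
}

for group, ratio in impact_ratios(pools).items():
    # Under the common "four-fifths" rule of thumb, a ratio
    # below 0.8 may be flagged as evidence of disparate impact.
    print(group, round(ratio, 3))
```

With these illustrative numbers, group_b's selection rate (0.30) divided by group_a's (0.48) gives an impact ratio of 0.625, which would fall below the four-fifths threshold and warrant closer scrutiny.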