March 2023
The Society for Human Resource Management (SHRM) and the Society for Industrial and Organizational Psychology (SIOP) held an event on the legal and practical implications of using AI-based assessments in hiring. The panel offered guidance on how to evaluate and implement AI-based recruitment tools and examined the legal and ethical questions these tools raise. Key themes included compliance with federal EEO laws, practical challenges in deploying AI-based assessments, and difficulties in satisfying the Uniform Guidelines on Employee Selection Procedures. The use of AI and other automated and algorithmic tools in recruitment will soon be regulated even more strictly than traditional hiring practices, with policymakers across the US and EU introducing legislation that will have important implications for employers worldwide.
23 Mar 2023
Spain is actively regulating AI through several initiatives: launching the first regulatory sandbox for the EU AI Act to create a controlled environment for experimenting with AI obligations, publishing a National AI Strategy, establishing Europe's first AI Supervisory Agency, and passing the Rider Law to grant delivery riders employment rights. The Spanish government is investing in these regulatory efforts and has set specific objectives to reduce social inequality and promote innovation while protecting individual and collective rights. These measures aim to increase transparency and accountability for algorithmic systems and prepare organizations for compliance with upcoming AI legislation.
Artificial Intelligence (AI) is projected to add $15.7 trillion to global GDP by 2030, but with great power comes great responsibility. Responsible AI is an emerging area of AI governance that addresses the ethical, moral, and legal dimensions of developing and deploying beneficial AI. The growing interest in AI has been accompanied by concerns over unintended consequences and risks, such as biased outcomes and poor decision-making. Governments worldwide are tightening regulations targeting AI, and businesses will need to comply with global AI rules and take a more responsible approach to remain competitive and avoid liability. Building responsibility into AI helps ensure that a system operates effectively and according to ethical standards, and it prevents potential reputational and financial damage down the road.
The use of artificial intelligence (AI) in high-stakes applications has raised concerns about its risks. AI algorithms can introduce novel sources of harm and can amplify and perpetuate existing problems such as bias. Several controversies over the misuse of AI have emerged across different sectors, including the Northpointe COMPAS tool's biased predictions of reoffending for Black defendants in the criminal justice system and Amazon's scrapped resume-screening tool, which discriminated against female applicants. These cases highlight the importance of risk management frameworks and explainable algorithms. Upcoming laws will soon require companies to minimize the risks of their AI and to use it safely.
OpenAI has launched GPT-4, the latest iteration of its conversational AI, which can process both text and image-based prompts. Its outputs, however, will remain text-based for now. Despite its built-in ethical safeguards, the model has come under fire for biases and factual inconsistencies. Legal questions also arise over who owns the content generated by AI models and who is responsible for their outputs. Because of restrictions on sharing personal data, businesses must take extra precautions when integrating similar models into their products, and users should keep the limitations and potential dangers of these tools in mind rather than relying completely on their outputs.