March 2023

In critical areas such as healthcare and self-driving cars, where AI is increasingly used, the efficacy of algorithms is crucial. How efficacy is measured depends on the type of system and its output. Classification systems rely on metrics such as true and false positives and negatives, accuracy, precision, recall, F1 scores, and the area under the receiver operating characteristic (ROC) curve. For regression systems, correlations and root mean square error are used to compare outputs with ground-truth scores. The choice of metric depends on the context and the type of model being used. Holistic AI's open-source library provides built-in metrics for measuring model performance.
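To make these metrics concrete, here is a minimal hand-rolled sketch of the classification and regression metrics mentioned above. This is purely illustrative and does not reflect the Holistic AI library's actual API; the function names and return format are assumptions for this example.

```python
def classification_metrics(y_true, y_pred):
    """Compute accuracy, precision, recall, and F1 from binary labels.

    Counts true/false positives and negatives, then derives the
    standard classification metrics from them.
    """
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 1)
    tn = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 0)
    fp = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 1)
    fn = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 0)
    accuracy = (tp + tn) / len(y_true)
    precision = tp / (tp + fp) if (tp + fp) else 0.0
    recall = tp / (tp + fn) if (tp + fn) else 0.0
    f1 = (2 * precision * recall / (precision + recall)
          if (precision + recall) else 0.0)
    return {"accuracy": accuracy, "precision": precision,
            "recall": recall, "f1": f1}


def rmse(y_true, y_pred):
    """Root mean square error: compares regression outputs
    against ground-truth scores."""
    n = len(y_true)
    return (sum((t - p) ** 2 for t, p in zip(y_true, y_pred)) / n) ** 0.5
```

For instance, `classification_metrics([1, 0, 1, 1], [1, 0, 0, 1])` yields an accuracy of 0.75 with perfect precision but a recall of two-thirds, illustrating why a single metric rarely tells the whole story.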

The Society for Human Resource Management (SHRM) and the Society for Industrial and Organizational Psychology (SIOP) held an event on the legal and practical implications of using AI-based assessments in hiring. The panel discussed guidelines for evaluating and implementing AI-based recruitment tools, as well as the legal and ethical implications of using such assessments in hiring practices. Key themes that emerged were compliance with federal EEO laws, practical challenges in using AI-based assessments, and difficulties in complying with the Uniform Guidelines on Employee Selection Procedures. The use of AI and other automated and algorithmic tools in recruitment will soon be regulated even more strictly than traditional hiring practices, with policymakers across the US and EU introducing legislation that will have important implications for employers worldwide using these tools.

23 Mar 2023
Spain is actively regulating AI through various initiatives, including launching the EU's first regulatory sandbox for the AI Act to create a controlled environment for experimenting with its obligations, publishing a National AI Strategy, establishing Europe's first AI supervisory agency, and passing a Rider Law giving delivery riders employment rights. The Spanish government is investing in these regulatory efforts and has set specific objectives to reduce social inequality and promote innovation while protecting individual and collective rights. These regulations aim to increase transparency and accountability for algorithmic systems and to ensure compliance with upcoming AI legislation.

Artificial Intelligence (AI) is projected to add $15.7 trillion to global GDP by 2030, but with great power comes responsibility. Responsible AI is an emerging area of AI governance that addresses the ethical, moral, and legal dimensions of developing and deploying beneficial AI. However, the growing interest in AI has been accompanied by concerns over unintended consequences and risks, such as biased outcomes and poor decision-making. Governments worldwide are tightening regulations targeting AI, and businesses will need to comply with global AI regulations and take a more responsible approach to remain competitive and avoid liability. Ensuring responsibility in AI helps assure that a system will operate efficiently and according to ethical standards, preventing potential reputational and financial damage down the road.

The use of artificial intelligence (AI) in high-stakes applications has raised concerns about its associated risks. AI algorithms can introduce novel sources of harm, amplifying and perpetuating issues such as bias. There have been several controversies around the misuse of AI across different sectors. These include Northpointe's COMPAS tool, which overestimated the likelihood of reoffending for Black defendants in the criminal justice system, and Amazon's scrapped resume-screening tool, which was biased against female applicants. Such cases highlight the importance of risk management frameworks and explainable algorithms. Upcoming laws will soon require companies to minimize the risks of their AI and use it safely.