July 2023

What is Ethical AI

Ethical AI involves the development and deployment of artificial intelligence systems that emphasize fairness, transparency, accountability, and respect for human values. The aim is to promote safe and responsible AI use, mitigate AI's novel risks, and prevent harm. The main verticals of ethical AI are bias, explainability, robustness, and privacy. Ethical AI matters because AI introduces novel risks, and ethical principles can safeguard against the resulting harms. Companies will soon be legally required to incorporate ethical AI into their work, and steps can always be taken to make AI more ethical. Holistic AI is a thought leader in AI ethics, offering expertise in computer science, AI policy and governance, and algorithm assurance and auditing.

June 2023

Responsible AI: 7 Best Practices

Responsible AI is the practice of developing and deploying AI in a fair, ethical, and transparent way to ensure that AI aligns with human values and does not harm individuals or society. Holistic AI identifies seven best practices for responsible AI: data governance, stakeholder communication, board-level engagement and collaboration, developing human-centred AI, complying with relevant regulation, explainable AI, and taking steps towards external assurance. By employing a responsible AI approach, businesses can seamlessly integrate AI into their operations while reducing potential harms. This is becoming increasingly important as the expanding use cases and normalization of AI in everyday life and industry have brought increased regulation intended to future-proof the technology and protect consumers from the potential harms of unchecked AI adoption.

How to Prepare for the EU AI Act

The EU AI Act is a landmark piece of legislation that will comprehensively regulate AI in the European Union. The Act takes a risk-based approach, grading AI systems according to four levels of risk. Businesses have around two-and-a-half years to prepare before the Act is enforced in 2026. The Act covers a range of entities that must prepare, including providers of AI systems, deployers, and providers and deployers located in third countries. To prepare for the Act, organisations need to create an inventory of their AI systems, develop governance procedures and guidelines, educate their employees, invest in expertise and talent acquisition, and invest in the necessary technologies and infrastructure. Holistic AI offers a comprehensive solution for preparing for the EU AI Act.
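To make the inventory step more concrete, the short Python sketch below models a minimal AI system register organised around the Act's four risk tiers. The field names, the RiskTier labels, and the review logic are illustrative assumptions made for this summary, not terminology or requirements taken from the Act's text.

from dataclasses import dataclass
from enum import Enum


class RiskTier(Enum):
    # The four risk levels used by the Act's risk-based approach
    UNACCEPTABLE = "unacceptable"   # prohibited practices
    HIGH = "high"                   # strictest obligations
    LIMITED = "limited"             # transparency obligations
    MINIMAL = "minimal"             # largely unregulated


@dataclass
class AISystemRecord:
    # One entry in an organisation's AI system inventory (illustrative fields)
    name: str
    owner: str                      # accountable team or role
    purpose: str                    # intended use of the system
    risk_tier: RiskTier
    conformity_assessment_done: bool = False


def systems_needing_attention(inventory):
    # Flag prohibited systems and high-risk systems without a completed assessment
    return [
        record for record in inventory
        if record.risk_tier is RiskTier.UNACCEPTABLE
        or (record.risk_tier is RiskTier.HIGH and not record.conformity_assessment_done)
    ]


# Example usage with hypothetical systems
inventory = [
    AISystemRecord("cv-screening", "HR", "Rank job applicants", RiskTier.HIGH),
    AISystemRecord("spam-filter", "IT", "Filter inbound email", RiskTier.MINIMAL),
]
for record in systems_needing_attention(inventory):
    print(f"Review required: {record.name} ({record.risk_tier.value} risk)")

Even a register this simple makes the later steps (governance procedures, training, external assurance) easier to scope, because each obligation can be tied to a named system and owner.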

Horizon Scan: The Key HR Tech Laws You Need to Know in the US

Governments worldwide are implementing laws to regulate the use of AI and other automated systems in the HR Tech sector. In the US, new laws are being proposed and implemented at the federal, state, and local levels to address bias and discrimination and increase transparency in employment decisions. Existing laws also apply to these technologies, imposing further requirements that HR Tech companies must comply with. The regulatory landscape is rapidly evolving, making it crucial for companies to stay up to date with the latest laws to avoid legal issues.

California’s AB 331 Automated Decision Tools Bill: 10 Things You Need to Know

California has proposed legislation to limit workplace monitoring and address the use of automated decision systems in order to make AI safer and fairer. The latest initiative, AB 331, seeks to regulate tools that contribute to algorithmic discrimination, prohibiting the use of automated decision tools (ADTs) that disfavor individuals based on their protected classification. Deployers must annually perform an impact assessment of ADTs used to make consequential decisions and must notify individuals affected by such decisions that an ADT was used. Non-compliance may result in an administrative fine of up to $10,000 per violation or civil action. An exemption from the impact assessment requirement applies to deployers with fewer than 25 employees, provided the ADT impacts fewer than 1,000 individuals per year.
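To illustrate how the thresholds above interact, the Python sketch below checks the small-deployer exemption as described in this summary. The function name, record fields, and constants are hypothetical and only mirror the figures quoted here; this is a sketch of the logic, not an implementation of the bill's legal tests.

from dataclasses import dataclass

# Thresholds summarised from the AB 331 overview above (illustrative constants)
SMALL_DEPLOYER_EMPLOYEE_LIMIT = 25
ANNUAL_IMPACT_LIMIT = 1_000


@dataclass
class DeployerProfile:
    # Hypothetical record of a deployer's size and ADT usage
    employees: int
    individuals_impacted_per_year: int


def impact_assessment_required(profile):
    # Exempt only if the deployer is small AND the tool's annual impact stays under the limit
    exempt = (
        profile.employees < SMALL_DEPLOYER_EMPLOYEE_LIMIT
        and profile.individuals_impacted_per_year < ANNUAL_IMPACT_LIMIT
    )
    return not exempt


# Example: a 10-person deployer whose tool affects 5,000 people a year still needs an assessment
print(impact_assessment_required(DeployerProfile(employees=10, individuals_impacted_per_year=5_000)))  # True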