July 2023
Recommendation systems are algorithms that suggest content or products to users based on their preferences, leveraging vast amounts of user data. They use techniques such as collaborative filtering, content-based filtering, and hybrid approaches to rank and surface relevant items. However, unchecked systems may pose privacy risks and introduce algorithmic bias, compromising user autonomy and agency. Legal actions have been taken against recommendation algorithms, and regulatory efforts in Europe and the United States aim to ensure transparency, risk assessment, and user control in these systems. At Holistic AI, a comprehensive approach to AI governance, risk, and compliance is followed, prioritising AI systems that embed ethical principles.
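To make the techniques named above concrete, the following is a minimal sketch of user-based collaborative filtering: items are scored for a user by weighting other users' ratings by how similar their rating histories are. The function names and the toy ratings matrix are illustrative assumptions, not any particular vendor's system.

```python
from math import sqrt

def cosine_similarity(a, b):
    """Cosine similarity between two rating vectors (0 = unrated)."""
    dot = sum(x * y for x, y in zip(a, b))
    denom = sqrt(sum(x * x for x in a)) * sqrt(sum(y * y for y in b))
    return dot / denom if denom else 0.0

def recommend(ratings, user, k=2):
    """Rank the items `user` has not rated by similarity-weighted scores."""
    n_items = len(ratings[user])
    scores = [0.0] * n_items
    for other, other_ratings in enumerate(ratings):
        if other == user:
            continue
        sim = cosine_similarity(ratings[user], other_ratings)
        for item, rating in enumerate(other_ratings):
            scores[item] += sim * rating  # weight ratings by user similarity
    unrated = [i for i in range(n_items) if ratings[user][i] == 0]
    return sorted(unrated, key=lambda i: scores[i], reverse=True)[:k]

# Toy data: rows are users, columns are items; 0 means "not yet rated".
ratings = [
    [5, 4, 0, 1, 0],
    [4, 5, 1, 0, 0],
    [1, 0, 5, 4, 5],
    [0, 1, 4, 5, 4],
]
print(recommend(ratings, user=0))  # -> [2, 4]
```

Content-based filtering would instead compare item attributes to a user's profile, and hybrid approaches blend the two signals; the privacy concerns raised above stem from the detailed user histories such matrices encode.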
Despite emerging regulation on artificial intelligence (AI) around the world, the UK government has yet to propose any AI-specific regulation. Instead, individual departments have published a series of guidance papers and strategies to provide a framework for those using and developing AI within the UK. These include an AI auditing framework, guidance on explaining decisions made with AI, guidance on AI and data protection, a national data strategy, and a national AI strategy. The latest developments include a pro-innovation approach to regulating AI, the publication of an AI action plan, and the launch of the Centre for Data Ethics and Innovation’s portfolio of AI assurance techniques. The UK government's proposals aim to cement the UK's role as an AI superpower over the next 10 years by investing in infrastructure and education, and adopting a dynamic and adaptable approach to regulation.
04 Jul 2023
Ethical AI involves the development and deployment of artificial intelligence systems that emphasise fairness, transparency, accountability, and respect for human values. The aim is to promote safe and responsible AI use, mitigate AI's novel risks, and prevent harm. The main verticals of ethical AI are bias, explainability, robustness, and privacy. Embedding these principles matters because they safeguard against the harms AI can introduce, companies will increasingly be legally required to incorporate ethical AI into their work, and steps can always be taken to make AI more ethical. Holistic AI is a thought leader in AI ethics, offering expertise in computer science, AI policy and governance, and algorithm assurance and auditing.
June 2023
30 Jun 2023
Responsible AI is the practice of developing and deploying AI in a fair, ethical, and transparent way to ensure that it aligns with human values and does not harm individuals or society. Holistic AI identifies the pillars of responsible AI as data governance; stakeholder communication; board-level engagement and collaboration; developing human-centred AI; complying with relevant regulation; explainable AI; and taking steps towards external assurance. By adopting a responsible AI approach, businesses can integrate AI into their operations while reducing potential harms. This is becoming increasingly important: as AI use cases multiply and AI becomes normalised in everyday life and industry, regulation has grown in response, aiming to future-proof businesses and protect consumers from the potential harms of unchecked AI adoption.
29 Jun 2023
The EU AI Act is a landmark piece of legislation that will comprehensively regulate AI in the European Union. The Act takes a risk-based approach, grading AI systems according to four levels of risk. Businesses have around two and a half years to prepare before the Act is enforced in 2026. Entities covered by the Act, including providers and deployers of AI systems as well as those located in third countries whose systems are used in the EU, must prepare now. To do so, organisations need to create an inventory of their AI systems, develop governance procedures and guidelines, educate their employees, invest in expertise and talent acquisition, and invest in the necessary technologies and infrastructure. Holistic AI offers a comprehensive solution for preparing for the EU AI Act.