August 2022

Assessing the Impact of Algorithmic Systems

Impact assessments, including Algorithmic Impact Assessments (AIAs) and Data Protection Impact Assessments (DPIAs), can be used to identify the potential risks and harms associated with a system. AIAs are used to identify the risks associated with the use of an automated decision system, while DPIAs are used to assess data management strategies. These assessments are increasingly being used to determine the impact of AI systems on users and stakeholders. The EU AI Act requires AIAs to determine the significant risks that high-risk systems pose to fundamental rights, health, or safety. Higher-impact systems are subject to stricter regulatory requirements, while lower-impact systems face more lenient ones.

Auditing vs Assurance: What’s the difference?

The rise in AI use has led to concerns about its legal, ethical, and safety implications, resulting in the emergence of AI ethics. Algorithm auditing is an approach to mitigating the potential harms that can arise from high-risk AI systems, such as those used in healthcare, recruitment, and housing. Auditing involves assessing an algorithm's safety, ethics, and legality across five stages of development: data and setup, feature pre-processing, model selection, post-processing and reporting, and production and deployment. Assurance, which builds on auditing, is the process of determining whether an AI system conforms to regulatory, governance, and ethical standards, and encompasses mechanisms such as certification, impact assessments, and insurance. Whilst auditing contributes to assuring an algorithmic system, governance and impact assessments are also essential.

AI Ethics 101

AI ethics is an emerging field concerned with ensuring that AI is developed and used ethically, drawing on philosophical principles, computer science practices, and law. Its main considerations include human agency, safety, privacy, transparency, fairness, and accountability. There are three major approaches to AI ethics: principles, processes, and ethical consciousness. These encompass the use of guidelines, legislative standards and norms, ethics by design, governance, and the integration of codes of conduct and compliance. AI ethics aims to address concerns raised by the development and deployment of new digital technologies, such as AI, big data analytics, and blockchain technologies.

July 2022

Pro-innovation: The UK’s Framework for AI Regulation

The UK government has proposed a pro-innovation framework for regulating artificial intelligence (AI) that is context-specific and based on the use and impact of the technology. The government plans to set out broad direction on key principles relating to transparency, accountability, safety, security, and privacy, along with mechanisms for redress and contestability of AI, while delegating responsibility for developing enforcement strategies to the appropriate regulators. The approach aims to provide flexibility for regulators and avoid introducing unnecessary barriers to innovation, with regulators focusing on high-risk applications. The government is welcoming stakeholder views on the framework until 26 September 2022, ahead of a white paper with more granular details and implementation plans towards the end of the year.

Regulatory Sandboxes and The EU AI Act

The EU Artificial Intelligence (AI) Act introduces regulatory sandboxes, which provide a controlled environment in which providers can test the compliance of their product before it is launched on the market. The Act gives priority access to small and medium-sized enterprises and encourages member states to develop their own sandboxes. Spain will be piloting a regulatory sandbox aimed at testing the requirements of the legislation and how conformity assessments and post-market activities may be overseen. The pilot is expected to begin in October 2022, with results published by the end of 2023 and other member states potentially joining or developing their own sandboxes. Sandboxes can benefit both businesses and regulators, but they also have limitations, such as the risk of abuse and the potential to delay innovation.