July 2023
The development and establishment of artificial intelligence (AI) standards have become a pressing necessity as the AI ecosystem rapidly evolves. Standards act as common guidelines, principles and technical specifications for the development, deployment and governance of AI systems. Technical standards in AI governance encompass foundational, process, measurement, and performance standards. Adopting standards enables organisations to benchmark, audit, and assess AI systems, supporting conformity assessment and performance evaluation to the benefit of developers, consumers, and data subjects affected by AI technologies. Standards bodies, such as the International Organization for Standardization (ISO) and the International Electrotechnical Commission (IEC), facilitate the development of consensus-driven standards through multi-stakeholder deliberations, promoting global and regional harmonisation.
March 2023
Governments and public sector entities are increasingly using artificial intelligence (AI) to automate tasks, from virtual assistants to defence activities. AI use carries risks, however, and steps must be taken to reduce these risks and promote safe and trustworthy adoption. Policymakers worldwide are proposing regulations to make AI systems safer, targeting both AI applications by businesses and government use of AI. The US, UK, and EU have taken different approaches to regulating AI in the public sector, with measures ranging from guidelines to binding laws. Examples include the Algorithm Registers in the Netherlands, the UK's guidelines for AI procurement, and the US's AI Training Act. Governments and businesses must comply with these requirements and principles when deploying or procuring AI systems.