December 2023

Ethical AI in Africa: An overview

There is a growing focus around the world on responsible AI practices and reducing harm, with the EU and US leading the way. African countries are developing similar initiatives, including Mauritius' AI strategy, which calls for a regulatory framework and ethical AI guidelines. Kenya has published a report recommending policy and regulatory initiatives to promote ethical AI, while South Africa's report outlines key AI policy considerations and calls for the establishment of an AI institute to promote innovation. Finally, Rwanda's National AI Policy places a strong emphasis on ethical AI guidelines and calls for a Responsible AI Office to coordinate the policy's implementation. Prioritizing responsible AI not only reduces risk and legal liability but can also create a competitive advantage.

Ground Zero Unfolds: The Landmark Provisional Agreement on the EU AI Act

European co-legislators have reached a provisional agreement on the EU AI Act, which seeks to harmonise AI regulations across the EU. The agreement balances innovation with fundamental rights and safety, and positions the EU as a leader in digital regulation. The EU AI Act includes prohibitions on certain biometric identification practices, updated requirements for high-risk AI systems, and a two-tiered approach to general-purpose AI systems. It is expected to enter into force gradually, with the prohibitions taking effect within six months and the transparency and governance provisions after twelve months. The EU AI Act will likely set a global benchmark for the ethical and responsible design, development, and deployment of AI systems.

Responsible AI initiatives in the Middle East: A roundup of activities

Countries in the Middle East have launched various initiatives to promote ethical and responsible AI, but have yet to introduce binding AI-specific regulation or legislation. Tunisia's National AI Strategy is still under development, while the UAE National Strategy for AI 2031 aims to set international benchmarks for ethical and responsible AI, and the Smart Dubai AI Ethics Guidelines provide a roadmap for ethical AI practices. Qatar's National AI Strategy focuses on AI adoption and innovation, while Lebanon emphasizes investment in R&D and digital skills. Saudi Arabia's National Strategy for Data & AI calls for education and research, and Turkey's National AI Strategy 2021-2025 emphasizes ethical AI development, human rights, and international collaboration. Egypt's National Artificial Intelligence Strategy aims for a balanced approach that harnesses AI for economic growth while mitigating social risks, and Cyprus emphasizes cooperation to maximize investments in AI. Although the region has yet to adopt AI-specific laws, intensifying regulatory efforts worldwide will soon have a global effect.

U.S. Department of Homeland Security and UK National Cyber Security Centre Guidelines on Secure AI

The US and UK have jointly published guidelines for secure AI system development, which aim to reduce AI cybersecurity risks across four areas: secure design, secure development, secure deployment, and secure operation and maintenance. The guidelines highlight the risk of adversarial machine learning and focus on systems whose compromise could cause significant physical or reputational damage, disrupt business operations, or leak sensitive or confidential information. They encourage secure-by-design principles and promote the sharing of best practices. The joint publication is a step towards global cooperation, but it will take more than non-binding guidance to have a real global impact. Policymakers, lawmakers, and regulators are taking the risks of AI seriously, and AI risk management is now a competitive necessity.

November 2023

California Privacy Protection Agency Releases Draft Rules for Automated Decision-Making Technologies

The California Privacy Protection Agency has released draft regulations on the use of automated decision-making technologies (ADTs), defining them as any system, software, or process that processes personal information and uses computation to make or execute decisions or to facilitate human decision-making. Under the proposed rules, consumers have the right to access information about the technologies employed and the methodologies by which decisions were developed, while businesses must disclose to consumers how personal information is used in ADTs and provide opt-out mechanisms. The move is part of California's wider effort to regulate the use of AI within the state.