Contributor portrait

Ayesha Gulley

United States
Policy Product Manager
Holistic AI

About

Ayesha Gulley is a Policy Product Manager at Holistic AI. Her research focuses on AI regulation, fairness, and responsible practices. Before joining Holistic AI, Ayesha worked at the Internet Society (ISOC), advising policymakers on the importance of protecting and promoting strong encryption. She holds a Master of Public Administration in Technology Policy from University College London and a Bachelor's degree in Law from the University of California, Santa Cruz.

Ayesha Gulley's articles (24)

The AI Act specifies considerations to be made for small and medium-sized enterprises and start-ups. These range from free access to regulatory sandboxes to more lenient documentation requirements.

The regulation of artificial intelligence (AI) has become an urgent priority, with countries around the world proposing legislation aimed at promoting the responsible and safe application of AI and minimising the harms it can pose. However, while these initiatives all aim to regulate the same technology, they diverge in how they define AI, leaving key concepts lost in translation.

In the rapidly evolving ecosystem of artificial intelligence (AI), the development and establishment of AI standards has become a pressing necessity. Such standards provide a set of common guidelines, principles, and technical specifications for the development, deployment, and governance of AI systems.

Lawsuits involving the misuse of AI are increasingly being filed, particularly in the US. Many of the harmful impacts of AI are avoidable through appropriate safeguards and AI risk management.

First proposed on 21 April 2021, the European Commission's Harmonised Rules on Artificial Intelligence, colloquially known as the EU AI Act, seeks to lead the world in AI regulation. Likely to become the global gold standard for AI regulation, much as the General Data Protection Regulation (GDPR) did for privacy regulation, the rules aim to create an 'ecosystem of trust' that manages AI risk and prioritises human rights in the development and deployment of AI.