Contributor portrait

Siddhant Chatterjee

United Kingdom
Product Manager, AI Ethical Innovation
Adobe

About

Siddhant is a Product Manager, AI Ethical Innovation at Adobe. He was previously a Policy Manager at Holistic AI, where he helped integrate policy and regulatory requirements into the company's proprietary AI Governance platform. Prior to this, Siddhant was TikTok's first policy analyst in South Asia. More recently, Siddhant has served as an Advisor on AI Ethics and Disinformation to the Australian Government and advised the Centre for Data Ethics and Innovation (CDEI) on the algorithmic ethics of climate technologies. Additionally, Siddhant consults for the Government of India on AI and Online Safety regulations. He is a member of the Internet Society's Working Groups on digital literacy and accessibility. Siddhant holds a Master's in Technology Policy from UCL and a Bachelor's in Economics.

Siddhant Chatterjee's articles (22)

The EU AI Act takes a risk-based approach to AI regulation, under which AI systems considered high-risk must undergo conformity assessments to demonstrate compliance with the associated requirements.

AI governance comprises the many technical and non-technical guardrails and tools that make AI safer, more secure, and more ethical.

The NIST AI RMF is operationalised through a combination of five tools or elements, which help establish the principles a trustworthy AI system should embody, the actions needed to ensure trustworthiness across an AI system's development and deployment lifecycle, and practical guidance for doing so.

The European Commission aims to lead the world in Artificial Intelligence (AI) regulation with the proposed EU AI Act. This article explores the penalties the EU AI Act proposes for organisations that fail to comply with it.

Two crucial processes that contribute to the assurance of generative AI are model evaluations and algorithm audits, each of which serves a unique purpose in the journey towards responsible AI deployment.