November 2024

AI and Data Privacy: Key Challenges and Regulations

Generative AI models, particularly large language models (LLMs), pose privacy risks because they rely on vast datasets that often include sensitive information, creating compliance challenges under regulations like the European Union's General Data Protection Regulation (GDPR). The GDPR and other regulations emphasize responsible data use in AI, with specific rules for handling personally identifiable information (PII) and provisions for data minimization and privacy in AI-generated content. As privacy regulations multiply worldwide, companies must navigate this complex landscape carefully to avoid fines and compliance failures. Clear and comprehensive privacy policies, encryption, anonymization, regular compliance audits, and AI ethics frameworks are essential safeguards. Evolving regulations seek to ensure responsible data governance and risk management in AI, addressing privacy concerns, potential biases, and unforeseen impacts on individuals. Given how quickly the AI landscape is changing, organizations that fail to prepare risk regulatory penalties and reputational harm.
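
To make the anonymization and data-minimization point concrete, the sketch below shows one way direct identifiers might be pseudonymized before text enters an AI training or inference pipeline. It is illustrative only: the regex patterns, the salt, and the helper names are assumptions, and a production system would rely on a vetted PII-detection library rather than hand-rolled rules.

```python
import hashlib
import re

# Hypothetical patterns for illustration; real systems use dedicated
# PII-detection tooling, not hand-written regexes.
EMAIL_RE = re.compile(r"[\w.+-]+@[\w-]+\.[\w.-]+")
PHONE_RE = re.compile(r"\+?\d[\d\s().-]{7,}\d")

def pseudonymise(value: str, salt: str = "rotate-me") -> str:
    """Replace a PII value with a salted, truncated hash so records stay
    linkable without exposing the raw identifier (salt is a placeholder)."""
    digest = hashlib.sha256((salt + value).encode()).hexdigest()
    return f"<PII:{digest[:10]}>"

def redact(text: str) -> str:
    """Strip direct identifiers from free text before it enters an AI
    pipeline, in the spirit of data minimization."""
    text = EMAIL_RE.sub(lambda m: pseudonymise(m.group()), text)
    text = PHONE_RE.sub(lambda m: pseudonymise(m.group()), text)
    return text

print(redact("Contact Jane at jane.doe@example.com or +44 20 7946 0958."))
```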

August 2024

How effective is watermarking for AI-generated content?

Regulators and policymakers are grappling with the challenges posed by AI-generated content, from deepfakes used to create non-consensual imagery to bots spreading disinformation. To distinguish synthetic from human-generated content, several approaches are being developed, including AI watermarking, content provenance, retrieval-based detectors, and post-hoc detectors. AI watermarking has attracted particular attention, but it lacks standardization and raises privacy concerns. Jurisdictions are taking different approaches: the USA is moving to mandate watermarks on AI-generated material, the EU imposes mandatory disclosures, and China and Singapore require prominent marking and technical measures such as watermarking. Holistic AI offers technical assessments that can help organizations stay ahead of regulatory changes.
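
To illustrate how statistical AI watermarking can work in principle, here is a toy detector in the style of "green list" schemes (e.g., Kirchenbauer et al., 2023): the generator biases sampling toward a keyed, pseudorandom subset of the vocabulary at each step, and the detector checks whether that subset appears more often than chance. The hashing rule, key, and function names below are assumptions for illustration, not any deployed vendor scheme.

```python
import hashlib
import math

GREEN_FRACTION = 0.5  # fraction of the vocabulary marked "green" at each step

def is_green(prev_token: str, token: str, key: str = "secret-key") -> bool:
    """Pseudorandomly assign `token` to the green list, keyed by the previous
    token and a private key (a toy stand-in for the real hashing scheme)."""
    digest = hashlib.sha256(f"{key}|{prev_token}|{token}".encode()).digest()
    return digest[0] < 256 * GREEN_FRACTION

def watermark_z_score(tokens: list[str], key: str = "secret-key") -> float:
    """z-score of the observed green-token count: unwatermarked text should
    land near 0, watermarked text several standard deviations above it."""
    n = len(tokens) - 1
    if n < 1:
        return 0.0
    hits = sum(is_green(p, t, key) for p, t in zip(tokens, tokens[1:]))
    expected = n * GREEN_FRACTION
    stdev = math.sqrt(n * GREEN_FRACTION * (1 - GREEN_FRACTION))
    return (hits - expected) / stdev

print(watermark_z_score("this is some ordinary unwatermarked text".split()))
```

A z-score of roughly four or more over a few hundred tokens would be strong evidence of a watermark. Note that detection requires knowing the key, which is one source of the standardization and privacy questions noted above.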

April 2024

AI and Elections: Policy Makers Crack Down

The growing use of AI in elections has raised concerns that misinformation and deepfakes could be used to manipulate public opinion. Governments and tech companies have taken steps to curb the spread of AI-generated election content, including laws requiring disclaimers on AI-generated political advertisements and guidelines obliging tech platforms to mitigate election-related risks. However, the efficacy of these measures remains uncertain. Tech giants have also joined forces to combat AI-generated election disinformation, but their agreement lacks binding requirements. Clear disclosures and watermarking remain potential safeguards in the ongoing struggle against AI-driven misinformation.

March 2024

Balancing Creativity and Regulation: The EU AI Act’s Impact on Generative AI

Generative AI is a rapidly expanding field of AI technology that involves creating new content (such as images, text, audio, or other forms of synthetic content) using large datasets and complex algorithms. With the enactment of the EU AI Act, however, generative AI developers are subject to strict regulatory scrutiny, including transparency obligations and additional requirements for high-risk or general-purpose AI models. These obligations include labeling artificially generated content, disclosing deepfakes and AI-generated text, informing natural persons when they are interacting with an AI system, and complying with copyright law. Generative AI developers must carefully evaluate and adapt to these requirements to ensure compliance with the EU AI Act.
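
As a minimal illustration of what machine-readable labeling could look like, the sketch below attaches an "AI-generated" disclosure to a PNG via Pillow's metadata support. The key names are assumptions; the AI Act does not prescribe this exact format, and real deployments would more likely use a standard such as C2PA content credentials.

```python
from PIL import Image, PngImagePlugin

def save_with_disclosure(image: Image.Image, path: str, model_name: str) -> None:
    """Save a generated image with a machine-readable AI disclosure."""
    meta = PngImagePlugin.PngInfo()
    meta.add_text("ai_generated", "true")   # labeling flag (illustrative key)
    meta.add_text("generator", model_name)  # provenance hint (illustrative key)
    image.save(path, "PNG", pnginfo=meta)

# Usage with a placeholder image and a hypothetical model name:
save_with_disclosure(Image.new("RGB", (256, 256)), "output.png", "example-model-v1")
```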

February 2024

Operationalising Safety in Generative AI: Model Evaluations and Algorithm Audits

Ensuring the integrity, safety, security, and reliability of generative AI models is crucial for organizations developing and deploying them. Two important processes for achieving this are model evaluations and algorithm audits. Model evaluations assess a model's efficacy across parameters such as performance levels and risks, while algorithm audits are independent third-party assessments of reliability, risk detection, and regulatory compliance. Used together, the two build the evidence base for a model's safety and risk-mitigation capabilities. As regulatory momentum to legislate generative models accelerates, companies must proactively ensure they fulfill their obligations. Holistic AI's LLM Auditing product can help by blocking serious risks, detecting hallucinations and stereotypes, preventing offensive language and toxicity, and providing readability scores.
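
As a taste of what a lightweight model evaluation can involve, the sketch below scores a single model output for readability (the standard Flesch reading-ease formula) and flags terms from a toy blocklist. The blocklist and function names are assumptions for illustration; real toxicity detection relies on trained classifiers rather than word lists, and this is not Holistic AI's implementation.

```python
import re

BLOCKLIST = {"idiot", "stupid"}  # toy stand-in for a trained toxicity model

def _syllables(word: str) -> int:
    # Crude vowel-group heuristic; adequate only for a rough readability score.
    return max(1, len(re.findall(r"[aeiouy]+", word.lower())))

def flesch_reading_ease(text: str) -> float:
    """Flesch formula: 206.835 - 1.015*(words/sentence) - 84.6*(syllables/word)."""
    sentences = max(1, len(re.findall(r"[.!?]+", text)))
    words = re.findall(r"[A-Za-z']+", text)
    n_words = max(1, len(words))
    n_syllables = sum(_syllables(w) for w in words)
    return 206.835 - 1.015 * (n_words / sentences) - 84.6 * (n_syllables / n_words)

def evaluate(output: str) -> dict:
    """Produce one row of a toy evaluation report for a single model output."""
    tokens = {w.lower() for w in re.findall(r"\w+", output)}
    return {
        "readability": round(flesch_reading_ease(output), 1),
        "flagged_terms": sorted(BLOCKLIST & tokens),
    }

print(evaluate("The model answered clearly. No issues were found."))
```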