August 2023

Regulating Foundation Models and Generative AI: The EU AI Act Approach

The European Union has updated the draft EU AI Act with provisions for regulating foundation models and generative AI, technologies that carry the potential for both benefit and harm. While foundation models are multi-purpose and versatile, they can produce dangerous content and biased results and can expose personal data. Generative AI can likewise produce copyright-infringing content and disinformation. The EU AI Act imposes obligations on providers of foundation models and generative AI, requiring risk mitigation, data governance, transparency, and cooperation across the AI value chain. It also defines foundation models as AI models developed for versatility and ease of deployment across multiple contexts, and generative AI as AI systems capable of producing complex content with varying levels of autonomy.

June 2023

Generative AI: A Regulatory Overview

Generative AI, which can create new outputs from raw data, is seeing widespread use across applications, but concerns persist about its misuse and about the harvesting of personal data without informed consent. Governments worldwide are accelerating efforts to understand and govern these models. The European Union is seeking comprehensive regulatory governance through the AI Act; the United States is exploring frameworks for "earned trust" in AI systems; and India and the UK have so far taken a light-touch approach. China has issued draft rules to regulate generative AI, requiring compliance with measures on data governance, bias mitigation, transparency, and content moderation. The key takeaway is that regulation is coming, and it is crucial to develop ethical AI systems that prioritize fairness and harm mitigation.

March 2023

The Dangers of ChatGPT: It’s All Fun and Games, Until It’s Not

OpenAI has launched GPT-4, the latest iteration of its conversational AI, which can process both text and image-based prompts, although its outputs remain text-only for now. Despite built-in ethical safeguards, the model has come under fire for biases and factual inconsistencies. Legal questions also arise over who owns the content generated by AI models and who is responsible for their outputs. Because of restrictions on sharing personal data, businesses must take extra precautions when integrating such models into their products, as sketched below. Users should keep the limitations and potential dangers of these tools in mind and not rely completely on their outputs.
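
As one illustration of such a precaution, the minimal sketch below redacts obvious personal identifiers from a prompt before it would be sent to any external model. The regular expressions, placeholder labels, and example prompt are all illustrative assumptions, not a prescribed method; real deployments would need far more robust detection (e.g. named-entity recognition) and appropriate data processing agreements with the provider.

```python
import re

# Illustrative patterns only; they will miss many real-world formats.
PII_PATTERNS = {
    "EMAIL": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),
    "PHONE": re.compile(r"(?<!\w)\+?\d[\d\s().-]{7,}\d"),
}

def redact_pii(text: str) -> str:
    """Replace matches of each PII pattern with a typed placeholder."""
    for label, pattern in PII_PATTERNS.items():
        text = pattern.sub(f"[{label}]", text)
    return text

prompt = "Summarise this email from jane.doe@example.com, phone +1 555-123-4567."
safe_prompt = redact_pii(prompt)
print(safe_prompt)
# -> "Summarise this email from [EMAIL], phone [PHONE]."
# Only safe_prompt would ever be passed to the external model.
```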

December 2022

We Asked ChatGPT to Write an Article About Ethical AI, Here's What It Said

Ethical AI refers to the safe and responsible use of artificial intelligence (AI). It involves three main approaches: principles, processes, and ethical consciousness. Ethical AI operationalizes AI ethics, with a focus on four key verticals: safety, privacy, fairness, and transparency. Algorithm auditing is a key practice for determining how well a system performs on each of these verticals; a simple example follows below. While AI has many applications, such as conversational AI, ethics must be prioritized so that poorly designed systems do not make it into deployment. The EU High-Level Expert Group on AI and the IEEE have articulated ethical principles that should guide the design and deployment of artificial intelligence. However, regulatory oversight and AI auditing are needed to move AI ethics from theory to practice.
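
To make the auditing idea concrete, the sketch below checks a single slice of the fairness vertical: whether a binary classifier selects different groups at noticeably different rates. The toy data, group labels, and the 80% threshold (the familiar "four-fifths" rule of thumb) are illustrative assumptions, not part of any cited audit framework.

```python
from collections import defaultdict

def selection_rates(predictions, groups):
    """Fraction of positive (favourable) predictions per group."""
    totals, positives = defaultdict(int), defaultdict(int)
    for pred, group in zip(predictions, groups):
        totals[group] += 1
        positives[group] += pred
    return {g: positives[g] / totals[g] for g in totals}

def demographic_parity_ratio(predictions, groups):
    """Ratio of lowest to highest group selection rate (1.0 = parity)."""
    rates = selection_rates(predictions, groups)
    return min(rates.values()) / max(rates.values())

# Toy example: 1 = favourable outcome (e.g. loan approved)
preds  = [1, 0, 1, 1, 0, 1, 0, 0, 1, 0]
groups = ["A", "A", "A", "A", "A", "B", "B", "B", "B", "B"]

ratio = demographic_parity_ratio(preds, groups)
print(f"Demographic parity ratio: {ratio:.2f}")
if ratio < 0.8:  # four-fifths rule of thumb
    print("Potential disparity flagged for human review")
```

A full audit would of course cover many more metrics and the other three verticals, but even a check this small shows how "fairness" can be turned from a principle into a measurable, reviewable quantity.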