September 2023
The U.S. Senate Subcommittee on Privacy, Technology, and the Law held a hearing titled "Oversight of AI: Legislating on Artificial Intelligence" to discuss the need for AI regulation. Senators Blumenthal and Hawley announced a bipartisan legislative framework addressing five key areas: establishing a licensing regime, ensuring legal accountability for harms caused by AI, defending national security and international competition, promoting transparency, and protecting consumers and kids. The hearing also covered the need for effective enforcement, international coordination, and protection against election interference, surveillance, and job displacement. Compliance requirements for companies using AI are expected to evolve as this framework takes shape.
August 2023
The European Union has updated the EU AI Act with provisions for regulating foundation models and generative AI, technologies with the potential for both benefit and harm. While foundation models are multi-purpose and versatile, they can generate dangerous content and biased results and create data-protection risks; generative AI can also produce copyright-infringing content and disinformation. The EU AI Act imposes obligations on providers of foundation models and generative AI, requiring risk mitigation, data governance, transparency, and cooperation across the AI value chain. It also defines foundation models as AI models developed for versatility and ease of deployment across multiple contexts, and generative AI as AI systems capable of producing complex content with varying levels of autonomy.
June 2023
Generative AI, which can create new outputs from the data it is trained on, is seeing widespread use across applications, but there are concerns about its misuse and about the harvesting of personal data without informed consent. Governments worldwide are accelerating efforts to understand and govern these models. The European Union is seeking to establish comprehensive regulatory governance through the AI Act; the United States is exploring how to build "earned trust" in AI systems; and India and the UK are taking a light-touch approach. China has issued draft rules to regulate generative AI, requiring compliance with measures on data governance, bias mitigation, transparency, and content moderation. The key takeaway is that regulation is coming, and it is crucial to develop ethical AI systems that prioritize fairness and harm mitigation.
March 2023
OpenAI has launched GPT-4, the latest iteration of its conversational AI, which can process both text and image-based prompts, although its outputs remain text-only for now. Despite implemented ethical safeguards, the model has come under fire for biases and factual inconsistencies. Legal questions arise over who owns content generated by AI models and who is responsible for their outputs. Given restrictions on sharing personal data, businesses must take extra precautions when integrating similar models into their products, and users should keep the limitations and potential dangers of these tools in mind rather than relying entirely on their outputs.
December 2022
Ethical AI refers to the safe and responsible use of artificial intelligence (AI). It involves three main approaches: principles, processes, and ethical consciousness. Ethical AI operationalizes AI ethics, with a focus on four key verticals: safety, privacy, fairness, and transparency. Algorithm auditing is a key practice for determining how well a system performs on each of these verticals; a minimal fairness example is sketched below. While AI has many applications, such as conversational AI, ethics must be prioritized to prevent poorly designed systems from being developed. The EU High-Level Expert Group on AI and the IEEE have formulated ethical principles that should be adhered to in the design and deployment of artificial intelligence, but regulatory oversight and AI auditing are needed to bridge AI ethics from theory to practice.
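To make the auditing idea slightly more concrete, here is a minimal sketch of one narrow slice of the fairness vertical: measuring the demographic parity difference, i.e. the gap between two groups' positive-prediction rates. This is an illustrative example only, not a prescribed audit methodology; the function names and toy data are hypothetical.

```python
# Minimal, illustrative fairness check: demographic parity difference.
# A real algorithm audit covers safety, privacy, fairness, and transparency
# with far more rigor; everything here is a hypothetical toy example.

def selection_rate(predictions: list[int], groups: list[str], group: str) -> float:
    """Share of positive (1) predictions received by members of `group`."""
    member_preds = [p for p, g in zip(predictions, groups) if g == group]
    return sum(member_preds) / len(member_preds)

def demographic_parity_difference(predictions, groups, group_a, group_b) -> float:
    """Absolute gap in selection rates between two groups (0.0 = parity)."""
    return abs(
        selection_rate(predictions, groups, group_a)
        - selection_rate(predictions, groups, group_b)
    )

if __name__ == "__main__":
    # Hypothetical model outputs (1 = approved) and protected-group labels.
    preds = [1, 0, 1, 1, 0, 1, 0, 0, 1, 0]
    grps = ["A", "A", "A", "A", "A", "B", "B", "B", "B", "B"]
    gap = demographic_parity_difference(preds, grps, "A", "B")
    print(f"Demographic parity difference: {gap:.2f}")  # 0.60 - 0.40 = 0.20
```

A gap near zero indicates parity on this one metric; in practice, auditors combine many such quantitative checks with qualitative review across all four verticals.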