July 2024

Conformity Assessments in the EU AI Act: What You Need to Know

The EU AI Act introduces a risk-based regulatory framework for AI governance and mandates conformity assessments for high-risk AI systems. Providers may choose between internal and external assessment, although external assessment is mandatory under certain conditions. Conformity assessments must be combined with other obligations, including issuance of a certificate, a declaration of conformity, CE marking, and registration in the EU database. If a high-risk AI system becomes non-compliant after it has been placed on the market, corrective action must be taken. The Commission may also adopt delegated acts concerning conformity assessments. Holistic AI can help enterprises adapt to and comply with AI regulation.

Navigating the Intersection of AI and Patent Law: The Global Debate on AI Inventorship

Patenting AI-related inventions presents unique challenges, chief among them whether an AI system can be considered an inventor under patent law. The United States Patent and Trademark Office (USPTO) has issued guidance stating that only natural persons can be named as inventors on US patents and patent applications. For an AI-assisted invention to qualify for patent protection, a natural person must have made a significant contribution to its conception beyond the mere operation or use of AI technology; the guidance thus underscores the importance of genuine human inventive involvement in patentable innovations. The UK Supreme Court and other patent offices have similarly upheld the requirement of human inventorship. South Africa, however, has granted a patent naming an AI system as the inventor, raising questions about the future of patent law and whether legislative updates are needed to accommodate AI inventorship as the technology and the inventions it generates continue to evolve.

June 2024

Artificial Intelligence in Syndicated Lending

The adoption of artificial intelligence (AI) in financial services, and in syndicated lending in particular, can reduce costs, streamline the production and completion of documentation, and improve risk management. However, lenders and other market participants adopting AI must consider the regulatory landscape across jurisdictions, whose approaches to regulating AI can vary considerably. To operationalize AI in a compliant manner, institutions must develop internal guidelines encompassing an AI governance policy, an AI model lifecycle management policy, a policy on the compliant and ethical use of AI, and a risk identification and management policy. While concerns exist about AI's impact on jobs, its adoption in syndicated lending can ultimately free employees to focus on higher-value tasks.

AI Alignment: Risks, Approaches, Challenges and Benefits

Artificial general intelligence (AGI) has the potential to revolutionize science and technology, but responsible management is crucial to ensure that it aligns with human values and does not harm human interests. AI alignment focuses on the internal workings of AI systems, while AI governance addresses the broader regulatory and policy framework for AI's integration into society. Misalignment can pose a range of risks, including safety, ethical, economic, employment, and security risks. Several approaches to AI alignment address different aspects of the problem, with varying degrees of effectiveness and scalability. The central challenge is scalable alignment: methodologies must evolve to manage increasingly complex and capable AI systems while preserving human control and harnessing them for societal benefit.

Regulating AI in the Asia-Pacific Region

Regulatory frameworks for AI are developing rapidly in the Asia-Pacific (APAC) region, where the AI market is projected to reach $356.13 billion by 2029. Many countries in the region have introduced guidelines and laws to govern AI, including legislative efforts in the Philippines, Thailand, and South Korea. Others have adopted voluntary principles and guidelines to promote ethical AI use, such as Japan's Governance Guidelines for AI and Australia's AI Ethics Principles. Across the region, a shared commitment to responsible AI use is evident, whether through binding legislation or non-binding guidance. Compliance with these regulations can be challenging, and businesses are encouraged to explore solutions like Holistic AI to navigate the rapidly evolving regulatory landscape.