Decoding the legal status of GenAI models and the emerging legal responses


In the previous articles, we examined the nuances of copyright protection for AI-generated content and the intricate matters surrounding potential copyright infringement, including training datasets used in generative AI models.

The distinction between the software code used to train a model and the resulting weights that make up the trained AI model is crucial for understanding how GenAI models are protected under copyright law. While software is protected as an original work of authorship, it is unclear whether the trained model itself qualifies for copyright protection. Ultimately, whether GenAI models should be protected under copyright law depends on the specific characteristics of these models. In most cases, conventional copyright law may not apply directly to the structure of neural networks, because they lack the originality required for copyright protection.

Yet the substantial investment of time, resources, and expertise required to produce these neural networks may call for alternative legal mechanisms, such as the sui generis rights granted to databases in Europe. Sui generis database rights protect against unauthorized use and extraction of data from a database, provided that its creation required a substantial investment. Applied to GenAI models, claiming sui generis rights would thus require assessing the different typologies of models (e.g. transformers, adapters, classifiers) and the associated techniques (e.g. full-parameter fine-tuning, Low-Rank Adaptation, Direct Preference Optimization) to demonstrate the substantial investment incurred in both the creation and the design or structuring of these models.

This need for tailored legal recognition reflects a broader historical pattern, where technological advances often prompt a reevaluation of copyright law’s scope to accommodate or adapt to emerging technologies.

Such evolution has been evident in areas such as software, which has been held to qualify for copyright protection, and databases, where sui generis rights were established to address gaps left by traditional copyright. Similar patterns emerged with the advent of photography, movies, designs, electronic circuits, and other novel forms of creative expression. Likewise, today, the development of GenAI presents new opportunities for value creation and demands for new legal protections to promote innovation and investment in the field of GenAI.

The European Union has recently reached an agreement on the world’s first AI Act, designed to address the complex challenges and ethical considerations posed by AI technologies, including GenAI. The Act represents a significant step toward a comprehensive legal framework for overseeing the development and use of AI systems, ensuring they are safe, transparent, and respectful of fundamental rights and existing legal frameworks, including intellectual property.

The AI Act will impose several constraints on providers of AI systems, not only regarding the extent to which their GenAI models can generate illicit or harmful content, but also regarding the obligation to ensure that AI-generated content can be detected and traced. This means that any synthetic audio, video, text, image, or other AI-generated content will have to be marked in a machine-readable format as artificially generated or manipulated (e.g. through the use of watermarking techniques). Moreover, foundation models must meet transparency obligations before entering the market, including putting in place a policy to comply with Union copyright law and making publicly available a detailed summary of the content used for training, aimed at helping rights holders determine whether their works were used.

In China, a new law effective from August 2023 imposes restrictions on the training data used and the outputs produced by public-facing GenAI models. The law mandates that providers of generative artificial intelligence services respect intellectual property rights, protect trade secrets, and refrain from creating monopolies or engaging in unfair competition. Specifically, GenAI providers must ensure that data and models come from lawful sources, avoid infringing others’ intellectual property rights, secure consent for the use of personal information, and take effective measures to enhance the quality of training data (ensuring its authenticity, accuracy, objectivity, and diversity).

Conversely, Japan has opted for a more permissive approach, choosing not to enforce copyright law on datasets used to train generative AI models. Japan’s GenAI policy allows AI models to process any data “whether for non-profit or commercial purposes, whether it is an act other than reproduction, or whether it is content obtained from illegal sites or otherwise.”

In the US, although no federal AI legislation has been passed yet, the White House issued an executive order in October 2023 setting out key principles and actions aimed at ensuring the safe development and use of AI. At the state level, legislative acts are emerging in response to the legal challenges posed by GenAI, particularly in states such as Florida, New York, and Tennessee. The latter, in late March 2024, became the first US state to enact a law prohibiting the use of AI to replicate an individual’s voice without permission. These provisions came just one week before OpenAI announced the release of Voice Engine, a tool capable of cloning human voices from short audio samples. In Florida, upcoming regulations will require political campaigns to disclose the use of AI in any “images, video, audio, text and other digital content” used in ads, and will give individuals recourse if AI-generated content portrays them in a false light.

The UK has delayed any regulatory intervention regarding AI, embracing a “pro-innovation” approach, while nonetheless acknowledging that legislation will be needed at some point in the future, particularly with regard to General Purpose AI Systems (GPAI). Last February, the UK government called for the adoption of voluntary principles by AI providers, with regard to safety, security, robustness, transparency, explainability, fairness, accountability, governance, contestability, and redress. While tech companies like Google and Microsoft welcomed this move, others called for binding legal requirements, considering voluntary commitments from key AI companies insufficient.

This article, the fourth in our series, has outlined the legal status of Generative AI models and the evolving legal responses. Our forthcoming final article will propose specific private order solutions to effectively navigate these challenges, aiming to balance innovation with intellectual property rights in the GenAI landscape.

by Primavera De Filippi