AI and intellectual property law have reached a critical juncture: generative technologies are rapidly transforming creative output and challenging conventional concepts of intellectual property (IP) ownership. Traditional IP laws were designed to protect human creativity, but they struggle with AI-generated content, raising complex questions of authorship, ownership, and responsibility.
Aspidistra spoke with Nora Haapala, Attorney-at-Law and Associate Partner at Rödl & Partner, and a specialist in Intellectual Property and IT Law, about how IP regulations and norms are keeping up with a transforming creative landscape.
Follow us as we delve into IP’s evolving role in the AI era: the key issues and misconceptions, and the strategies that can strengthen innovation while protecting creators.

Nora Haapala is an Attorney-at-Law and Associate Partner at Rödl & Partner, a global advisory company.
How has the definition of IP changed in the context of AI and generative technologies?
The definition of intellectual property has not fundamentally changed, but its application and interpretation have become significantly more complex. Traditionally, IP laws were designed to protect human-created works by granting exclusive rights to creators. However, the rise of generative AI, which can produce outputs that mimic human creativity, has challenged these frameworks.
What are the most pressing intellectual property challenges posed by generative AI today?
Generative AI brings several IP challenges that current regulation is not yet prepared for. Traditional IP regulation assumes that the actor is a human, but this assumption does not hold when AI is involved. This raises questions such as who is considered the author of a work and who owns the rights when content is created with the help of AI.
One of the most pressing IP issues posed by generative AI is the use of scraped data for model training without clear consent or licensing. This practice raises legal concerns around copyright infringement, database rights and moral rights, especially when AI systems reproduce or mimic protected content.
What are the most common misunderstandings or misconceptions about IP ownership and responsibility when it comes to AI-generated content?
One of the most common misconceptions is that copyright automatically applies to all AI-generated content. In practice, copyright traditionally arises for a natural person when the work meets the originality threshold. Content generated entirely by a machine does not qualify for copyright protection.
Another misunderstanding is assuming that any human involvement guarantees ownership. While minimal input, such as a simple prompt, is unlikely to create rights, significant human contribution, such as crafting a highly detailed prompt or extensively editing the output, may lead to copyright protection, at least partially. In such cases, the AI is treated as a tool for realising the human creator’s vision.
Who should be considered the legal creator or owner of AI-generated content, and how should IP law adapt to reflect this?
Current IP frameworks, particularly copyright law, are based on a human-centred concept of authorship, under which a work must demonstrate originality and reflect the creative choices of its author. Both legislation and case law assume that only a natural person can be recognised as the author. The same principle applies to inventions. Under current Finnish and EU copyright law, only human creators can be considered authors, meaning that fully AI-generated works are not eligible for copyright protection.
Generative AI challenges these foundations because its outputs often lack direct human authorship in the traditional sense. If no meaningful human creative input exists, such works may fall outside the scope of existing copyright protection. While some ownership issues can be addressed through contractual arrangements between developers, users and organisations, the current legal framework does not provide a comprehensive solution.
To address these gaps, IP law may need to evolve in several ways:
- First, it should clarify the threshold of human involvement by defining what constitutes sufficient creative input for authorship when AI tools are used.
- Second, lawmakers could consider introducing new legal categories, such as sui generis (“of their own kind”) rights for AI-generated works, to protect economically valuable outputs that lack human authorship but involve substantial investment.
- Third, contractual frameworks should be strengthened to ensure clear agreements on ownership and liability between AI developers, users and businesses.
- Finally, transparency obligations should be implemented to mitigate infringement risks, particularly when AI models are trained on copyright-protected material without authorisation.
How should copyright law address AI models trained on data that includes copyrighted works without explicit permission?
Training AI models often involves copying large volumes of data, which may include protected works, raising concerns about potential infringement of authors’ exclusive rights to reproduce and make their works available.
EU law already offers a framework through the Directive on Copyright in the Digital Single Market[i], which allows text and data mining of lawfully accessible works unless the rights holder has expressly opted out. This exception is widely understood to cover machine learning processes, meaning that training on copyrighted material is generally lawful if these conditions are met.
To ensure a balance between innovation and the protection of creators’ rights, copyright law should maintain the opt-out mechanism for rights holders, impose transparency obligations on developers to disclose the sources of training data, and clarify the scope of permissible use under text and data mining exceptions, especially for commercial AI development. At the same time, it should prohibit outputs that reproduce protected works verbatim or in a substantially similar form, as lawful training does not justify generating infringing copies.
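The Directive does not prescribe a technical format for the opt-out, but in practice many rights holders express the reservation in machine-readable form, for example through robots.txt rules aimed at AI-training crawlers. As an illustrative sketch only (the crawler token and URLs below are hypothetical examples, not a statement of what the Directive requires), Python’s standard urllib.robotparser can check such a reservation:

```python
import urllib.robotparser

# Hypothetical robots.txt a rights holder might publish to signal a
# machine-readable opt-out from AI-training crawlers. The user-agent
# token "GPTBot" is an illustrative example; each crawler documents
# its own token.
robots_txt = """\
User-agent: GPTBot
Disallow: /

User-agent: *
Allow: /
"""

rp = urllib.robotparser.RobotFileParser()
rp.parse(robots_txt.splitlines())

# A training crawler honouring the opt-out must skip the whole site,
# while other agents remain unaffected.
training_allowed = rp.can_fetch("GPTBot", "https://example.com/articles/1")
other_allowed = rp.can_fetch("SomeOtherBot", "https://example.com/articles/1")
```

Here `training_allowed` evaluates to False and `other_allowed` to True: the opt-out binds only the named training crawler. Whether a given robots.txt entry amounts to a valid reservation under the Directive is ultimately a legal question, not a purely technical one.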
If an AI system produces content that infringes on someone’s IP, who should be held legally responsible — the developer, the user, or another party?
Determining legal responsibility when AI-generated content infringes intellectual property is complex because current copyright law was designed for a world where humans are the creators. Copyright protection only arises for works created by a natural person, and AI itself cannot hold rights or obligations. Therefore, liability cannot be assigned to the AI system.
Responsibility will likely fall on human actors involved in the process. If a user employs AI as a tool and exercises creative control, such as providing detailed prompts or editing outputs, they may be considered the author and thus bear responsibility for infringement. Developers, on the other hand, could face liability if the infringement results from how the system was trained or if they failed to implement safeguards against unlawful use.
Ultimately, the allocation of responsibility depends on the degree of human involvement and contractual arrangements, but under current law, responsibility rests with people, not machines.
Which countries or regions are leading in creating AI-specific intellectual property laws or guidelines?
The EU is at the forefront of AI regulation globally, particularly with its Artificial Intelligence Act[ii], which came into force in August 2024. While not an IP law per se, the Act includes explicit requirements related to copyright compliance, especially for general-purpose AI systems.
Developers of general-purpose AI systems must publish a summary of the training datasets used and ensure that training data complies with EU copyright law, including opt-out mechanisms under the Copyright Directive. The EU’s broader digital strategy also integrates IP considerations through GDPR[iii], the Digital Services Act[iv] and the Data Act[v].
Finland is implementing the AI Act through national legislation and supervisory structures, including AI sandboxes coordinated by Traficom (Finnish Transport and Communications Agency). While no AI-specific IP law exists nationally, Finnish authorities are expected to enforce EU copyright rules in AI contexts.
The UK and Japan have also taken steps: the UK has issued guidance on copyright and AI-generated works, while Japan permits broad data mining for AI training. These approaches reflect differing priorities, but the EU currently offers the most comprehensive regulatory framework integrating IP into AI governance.
Should there be a unified global framework for AI and intellectual property law issues?
A unified global framework for managing IP issues related to AI would be necessary, as current national regulatory systems are fragmented and often outdated in relation to the rapid development of AI technologies. The data used to train AI systems is collected and utilised across borders, which creates legal uncertainties. Since AI systems operate globally, national regulations alone are insufficient.
According to the OECD[vi], one key solution to this fragmented and inadequate regulatory landscape would be an internationally coordinated, voluntary code of conduct. Such a code could include commonly agreed definitions for data collection methods, standardised contractual terms for data use, technical tools to protect rights holders and documentation practices that promote transparency.
How can companies design ethical AI practices that take into account intellectual property?
AI is a tool and should be treated like any other tool. The user is responsible for the tool’s actions. Even though the functioning of AI is not fully transparent and the user may do their best to guide it appropriately, the output should still be reviewed before it is used as a basis for decision-making or shared outside the organisation, whether in its original or modified form. Using AI is a deliberate choice that carries responsibility. Because its behaviour is difficult to control fully, the focus must remain on how and for what purposes it is used.
Companies can design ethical AI practices that respect intellectual property by ensuring that sensitive or proprietary materials are not exposed to unnecessary risks. Uploading work files or client-provided materials into an online AI tool can effectively amount to disclosing those materials to a third party without permission, which may violate confidentiality and IP obligations.
To mitigate these risks, organisations should always use paid API or professional versions of AI tools that explicitly prevent prompts and uploaded data from being used to train language models. Additionally, it is essential to review and configure the settings and terms of service of these tools to understand how prompt data is processed and ensure compliance with IP and data protection requirements.
For more legal insight, read:
A guide to how the AI Act will impact marketing communications
Main image: Pawel Czerwinski, Unsplash
About Nora Haapala:
Nora Haapala is an Attorney-at-Law and Associate Partner at Rödl & Partner, a global advisory company. Nora specialises in Intellectual Property and IT Law, with a particular focus on the legal challenges of AI and generative technologies. With 15 years of experience, she also advises on Corporate Law and Compliance, and is a Certified IP Scan Expert and Data Protection Officer.
References:
i Directive (EU) 2019/790 of the European Parliament and of the Council of 17 April 2019 on copyright and related rights in the Digital Single Market.
ii Regulation (EU) 2024/1689 of the European Parliament and of the Council of 13 June 2024 laying down harmonised rules on artificial intelligence.
iii Regulation (EU) 2016/679 of the European Parliament and of the Council of 27 April 2016 on the protection of natural persons with regard to the processing of personal data and on the free movement of such data, and repealing Directive 95/46/EC (General Data Protection Regulation).
iv Regulation (EU) 2022/2065 of the European Parliament and of the Council of 19 October 2022 on a Single Market For Digital Services and amending Directive 2000/31/EC (Digital Services Act).
v Regulation (EU) 2023/2854 of the European Parliament and of the Council of 13 December 2023 on harmonised rules on fair access to and use of data and amending Regulation (EU) 2017/2394 and Directive (EU) 2020/1828 (Data Act).
vi OECD: Organisation for Economic Co-operation and Development, an intergovernmental organisation.