Who Owns AI Creativity? A Legal Analysis of Copyright and Liability in India

Posted On - 16 February, 2026 • By - Shambhavi Sharma

Introduction

Can a machine commit copyright infringement? What once seemed like a speculative question confined to science fiction is now a pressing legal reality. With the rapid rise of generative AI systems such as Midjourney, ChatGPT and DALL·E, the boundary between creative inspiration and unlawful imitation has grown increasingly blurred. These technologies can replicate distinctive artistic styles, generate text, music, and images within seconds, and do so without human consciousness, intent, or traditional skill.

India’s intellectual property framework, particularly under the Copyright Act, 1957, was conceived in an era where creativity was inseparable from human authorship. Judicial precedents such as Eastern Book Company v. D.B. Modak reinforce the requirement of a “modicum of creativity” rooted in human effort. Yet AI-generated works challenge this foundational assumption. When a machine produces content strikingly similar to a known artist’s style, difficult questions arise: Who owns the output? Who bears liability for infringement? And can originality exist without human agency?

These questions lie at the heart of an evolving legal debate, global in scope yet distinctly Indian in its contours, on AI-generated works and the future of authorship, ownership, and accountability in the age of artificial intelligence.

The legal dilemma surrounding AI-generated content is no longer theoretical; it surfaces in routine, everyday use of generative tools. A simple prompt such as requesting a “Van Gogh-style” depiction of Mumbai streets can instantly produce imagery that is visually compelling, stylistically recognisable, and yet unmistakably derivative. This raises a fundamental question: can such output truly be regarded as “original,” and if so, who is its author: the human prompter or the machine?

Under Indian copyright jurisprudence, originality is anchored in the exercise of human skill, labour, and judgment, not in algorithmic computation or predictive modelling. The distinction between AI-assisted creation (where human input meaningfully shapes the final work) and fully autonomous AI generation therefore becomes legally decisive. Where the machine operates with minimal creative intervention, the traditional threshold of authorship under the Copyright Act, 1957 is strained.

Further complexity arises from moral rights under Section 57, which protect the author’s right to integrity and attribution. While copyright law does not protect artistic “style” per se, the large-scale replication of a distinctive aesthetic by AI systems tests the boundaries between lawful imitation and ethical misappropriation. The law may permit stylistic emulation, but it sits uneasily with the spirit of artistic authenticity.

An even more contentious issue concerns the datasets used to train generative models. The ongoing proceedings in ANI Media Pvt. Ltd. v. OpenAI [1] have brought this question into sharp focus, with the court examining whether large-scale, unlicensed data scraping for AI training violates India’s “fair dealing” framework, a doctrine considerably narrower than the U.S. fair use standard. The outcome of this litigation may significantly influence how Indian law reconciles technological innovation with copyright protection in the AI era.

Indian Laws and Their Loopholes

India’s intellectual property regime is conceptually robust, but unmistakably human-centric. The Copyright Act, 1957 protects original literary and artistic works premised on human authorship; the Patents Act, 1970 excludes abstract ideas and algorithms from patentability; the Trade Marks Act, 1999 safeguards commercial source identifiers; and the Designs Act, 2000 protects novel aesthetic features. Collectively, these statutes form a comprehensive framework, but one designed for a world in which creativity and infringement are products of human agency.

What they do not anticipate is the liability vacuum created by non-sentient creators. Indian jurisprudence, including The Coca-Cola Company v. Bisleri International Pvt. Ltd. [2], traditionally anchors infringement in intention, misrepresentation, or at least identifiable human conduct. Generative AI disrupts this assumption. These systems can produce content that is substantially similar to protected works without intent, awareness, or even direct human direction.

The result is a doctrinal paradox: AI-generated outputs may struggle to qualify for protection due to the absence of human authorship, yet the same outputs can infringe existing copyrighted material. In such cases, the law offers limited clarity on the attribution of liability: whether it should rest with the user who entered the prompt, the developer who designed the model, or the platform that deployed it. This absence of explicit statutory guidance creates regulatory uncertainty, weakens enforcement predictability, and opens the door to large-scale commercial exploitation without a clearly accountable actor.

A Fragmented Regulatory Picture

The regulatory environment for copyright in AI-generated content is fragmented across jurisdictions. The World Intellectual Property Organization (WIPO) has opened the conversation on this issue but has not mandated any specific rules, acknowledging instead that traditional human-centred authorship may not translate well to an environment dominated by AI-created works. The United States (US) adheres to a conception of authorship limited to human beings and has refused copyright protection for works created solely by an AI, as reflected in the U.S. Copyright Office’s decision in Zarya of the Dawn; at the same time, the fair use doctrine has been held to permit large-scale copying of publicly available works for transformative purposes (as in Authors Guild v. Google Inc. [3]). In contrast, the European Union (EU) follows a structured oversight approach through the EU Artificial Intelligence Act, 2024 and the EU Copyright Directive, which permits text and data mining for research purposes while allowing rightsholders to opt out of commercial data mining. The UK has developed a distinctive position on computer-generated works: Section 9(3) of the Copyright, Designs and Patents Act, 1988 deems the author to be the person who undertakes the arrangements necessary for the creation of the work. Each of these approaches can inform India’s regulatory framework, but none provides a complete solution for India’s unique social and legal circumstances.

Challenges in Applying Indian IP Law to Artificial Intelligence

Indian IP law presumes that authorship and infringement are distinctly human acts. AI systems unsettle this presumption: they rarely leave a verifiable digital trail linking a given output to identifiable human conduct or to a defined training dataset. Existing copyright enforcement mechanisms were designed around direct, physical reproduction of works; they do not account for AI outputs that draw on countless reference points, making it difficult to establish a direct relationship with any single protected work. Jurisdiction adds a further layer of complexity where the allegedly infringing AI model is developed abroad, as the ANI Media proceedings illustrate, raising concerns about the cross-border applicability and enforcement of copyright law. The dynamic nature of algorithms renders rigid legal categories inadequate, necessitating modification of the current laws to accommodate the algorithmic creative process.

Conclusion

As AI increasingly shapes creative industries, India’s intellectual property framework must evolve beyond its purely human-centric foundations. Rather than conferring legal personality on AI, the law should clarify ownership and establish a structured hierarchy of liability among users, developers, and platforms. Greater transparency in training data, mandatory labelling of AI-generated content, and specialised adjudicatory mechanisms can strengthen accountability without stifling innovation. The objective is not to resist technological change, but to ensure that creativity, attribution, and responsibility remain legally coherent in the age of artificial intelligence.

  1. ANI Media Pvt. Ltd. v. OpenAI, 2024 SCC OnLine Del 8120. ↩︎
  2. The Coca-Cola Company v. Bisleri International Pvt. Ltd., 2009 SCC OnLine Del 3275. ↩︎
  3. Authors Guild v. Google Inc., 2015 SCC OnLine US CA 2C 1. ↩︎