
Issues and challenges involved in ownership of copyrighted content in Artificial Intelligence


In the last few years, the world has witnessed the generation of creative works by artificial intelligence (AI). The development of artificial intelligence towards technologies capable of autonomous creation brings to the fore several interesting yet muddled copyright questions. The questions include whether a man-made machine, or intelligent agent, may be regarded as an “author” in the eyes of copyright law. This question has already sparked debates and differing views.
Closely associated with the authorship issue, further questions inevitably arise concerning the duration of copyright in such works and the moral rights of their authors.
Generative AI has had a very good year. Corporations like Microsoft, Adobe, and GitHub are integrating the tech into their products; startups are raising hundreds of millions to compete with them; and the software even has cultural clout, with text-to-image AI models spawning countless memes. But listen in on any industry discussion about generative AI, and you’ll hear, in the background, a question whispered by advocates and critics alike in increasingly concerned tones: is any of this actually legal?
The question arises because of the way generative AI systems are trained. Like most machine learning software, they work by identifying and replicating patterns in data. But because these programs are used to generate code, text, music, and art, that data is itself created by humans, scraped from the web and copyright protected in one way or another.
For AI researchers in the far-flung misty past (aka the 2010s), this wasn’t much of an issue. At the time, state-of-the-art models were only capable of generating blurry, fingernail-sized black-and-white images of faces. This wasn’t an obvious threat to humans. But in the year 2022, when a lone amateur can use software like Stable Diffusion to copy an artist’s style in a matter of hours, or when companies are selling AI-generated prints and social media filters that are explicit knock-offs of living designers, questions of legality and ethics have become much more pressing.
Take the case of Hollie Mengert, a Disney illustrator who found that her art style had been cloned as an AI experiment by a mechanical engineering student in Canada. The student downloaded 32 of Mengert’s pieces and took a few hours to train a machine learning model that could reproduce her style.
As Mengert told technologist Andy Baio, who reported the case: “For me, personally, it feels like someone’s taking work that I’ve done, you know, things that I’ve learned — I’ve been a working artist since I graduated art school in 2011 — and is using it to create art that that [sic] I didn’t consent to and didn’t give permission for.”
But is that fair? And can Mengert do anything about it? Some said with confidence that these systems were certainly capable of infringing copyright and could face serious legal challenges in the near future. Others suggested, with equal confidence, that the opposite was true: that everything currently happening in the field of generative AI is legally above board and any lawsuits are doomed to fail.
One of the major challenges that AI poses for Indian copyright law is the issue of ownership of the copyright. Since AI systems and algorithms can generate original works without human intervention, the question arises of who owns the copyright in those works. For instance, if an AI system generates a painting, a piece of music, or a piece of literature, does the copyright belong to the developer of the AI system, the user of the AI system, or the AI system itself? This issue is further complicated by the fact that copyright law grants exclusive rights to the copyright owner, including the rights to reproduce, distribute, and display the copyrighted work. It is unclear how these rights can be exercised if ownership is in question, or if the copyright is held by an AI system that is incapable of exercising those rights.
Another challenge is the issue of fair use and infringement. Fair use allows limited use of copyrighted works for purposes such as commentary, criticism, news reporting, teaching, scholarship, or research. However, if an AI system reproduces a significant portion of a copyrighted work for its own purposes, it may run afoul of copyright infringement laws.
Moreover, the use of AI in enforcing copyright law has also raised privacy concerns, as automated algorithms may produce false positives and infringe on individuals’ rights to autonomy and freedom of expression.
While AI-generated content may not be intentional or malicious in nature, it can still violate copyright laws if it closely resembles existing works. This raises questions about who should be held liable when AI systems generate infringing content.
Currently, the legal framework for AI-generated content is still developing. Some experts argue that AI should be considered a legal entity with its own liability, while others advocate for holding the developers or users of AI systems responsible for any infringement.
One possible solution currently being explored is the use of licences or digital fingerprints to ensure that AI-generated content is original or properly attributed. This would provide a way to track and identify AI-generated content, much as authors and artists are identified through their copyright ownership.
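To make the fingerprinting idea concrete, the minimal Python sketch below shows one hypothetical way attribution could be recorded: hash the exact bytes of a generated work and store who produced it and with which model. The function names (fingerprint, register_work, lookup) and the in-memory registry are assumptions made purely for illustration; they do not refer to any existing system or standard.

```python
# Illustrative sketch only: a hypothetical "digital fingerprint" registry
# for AI-generated content. Names and structure are assumptions for this example.
import hashlib
import json
from datetime import datetime, timezone

def fingerprint(content: bytes) -> str:
    """Return a SHA-256 hex digest that uniquely identifies this exact content."""
    return hashlib.sha256(content).hexdigest()

def register_work(registry: dict, content: bytes, model: str, operator: str) -> str:
    """Record which AI system generated a work and who operated it."""
    fp = fingerprint(content)
    registry[fp] = {
        "model": model,          # AI system used to generate the work
        "operator": operator,    # person or organisation that ran the system
        "registered_at": datetime.now(timezone.utc).isoformat(),
    }
    return fp

def lookup(registry: dict, content: bytes) -> dict | None:
    """Check whether a piece of content was previously registered."""
    return registry.get(fingerprint(content))

if __name__ == "__main__":
    registry: dict = {}
    image_bytes = b"...raw bytes of an AI-generated image..."
    fp = register_work(registry, image_bytes, model="text-to-image-model", operator="studio-x")
    print(json.dumps({"fingerprint": fp, "record": lookup(registry, image_bytes)}, indent=2))
```

Note that an exact hash of this kind only identifies bit-identical copies; it cannot detect style imitation or lightly edited derivatives, which is one reason the approach may not work for all types of AI-generated content.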
However, this approach may not be feasible for all types of AI-generated content and may require significant resources to implement. As such, there is still a need for further research and dialogue to develop a comprehensive approach to addressing liability issues related to AI-generated content. In addition to legal and technical solutions, there is also a need for ethical considerations in the development and use of AI systems. Organizations must take responsibility for ensuring that their AI systems are designed and trained in compliance with copyright law, and that they are transparent about the potential risks associated with AI-generated content.
Overall, addressing liability issues related to AI-generated content is a complex and multi-faceted problem that will require collaboration and innovation from a variety of stakeholders, including policymakers, industry leaders, and legal experts.
Truly, AI-generated works are a copyright nightmare for authorities throughout the world, as no common ground has yet been reached on this issue. Weak AI such as Siri and Alexa has already been adopted by the general public, and industry-specific AI is already being used to generate works in music, journalism, film, and gaming. The use of AI by artists and the general public will only become more widespread as AI grows more powerful, sophisticated, and independent, blurring the line of distinction between works produced by humans and those produced by AI. As for the Indian position on copyright, in order to make AI more inclusive within Indian copyright law, the legislature should frame AI-specific copyright law; a first step could be recognising the legal status of AI, i.e. granting AI a legal personality, thus making it capable of holding its own rights. In conclusion, the need of the hour is for all countries to come together and reach a common ground on who should hold the copyright in an AI-generated work, and for countries at the domestic level to start making AI-specific copyright laws to prevent future complications, as it will only become more difficult to prevent misuse of this legal vacuum.

Dr S. Krishnan is an Associate Professor in Seedling School of Law and Governance, Jaipur National University, Jaipur. Gunjan Agarwal is Senior Associate at GC Garg and Associates, Jaipur.
