Authors File a Lawsuit against OpenAI for Unlawfully ‘Ingesting’ Their Books
ARTIFICIAL INTELLIGENCE-AI, 10 Jul 2023
Ella Creamer | The Guardian - TRANSCEND Media Service
Mona Awad and Paul Tremblay allege that their books, which are copyrighted, were ‘used to train’ ChatGPT because the chatbot generated ‘very accurate summaries’ of the works.
5 Jul 2023 – Two authors have filed a lawsuit against OpenAI, the company behind the artificial intelligence tool ChatGPT, claiming that the organisation breached copyright law by “training” its model on novels without the permission of authors.
Mona Awad, whose books include Bunny and 13 Ways of Looking at a Fat Girl, and Paul Tremblay, author of The Cabin at the End of the World, filed the class action complaint in a San Francisco federal court last week.
ChatGPT allows users to ask questions and type commands into a chatbot, which responds with text that resembles human language patterns. The model underlying ChatGPT is trained with data that is publicly available on the internet.
Yet, Awad and Tremblay believe their books, which are copyrighted, were unlawfully “ingested” and “used to train” ChatGPT because the chatbot generated “very accurate summaries” of the novels, according to the complaint. Sample summaries are included in the lawsuit as exhibits.
This is the first lawsuit against ChatGPT that concerns copyright, according to Andres Guadamuz, a reader in intellectual property law at the University of Sussex. The lawsuit will explore the uncertain “borders of the legality” of actions within the generative AI space, he adds.
Books are ideal for training large language models because they tend to contain “high-quality, well-edited, long-form prose,” said the authors’ lawyers, Joseph Saveri and Matthew Butterick, in an email to the Guardian. “It’s the gold standard of idea storage for our species.”
The complaint says that OpenAI “unfairly” profits from “stolen writing and ideas” and calls for monetary damages on behalf of all US-based authors whose works were allegedly used to train ChatGPT. Though authors with copyrighted works have “great legal protection”, said Saveri and Butterick, they are confronting companies “like OpenAI who behave as if these laws don’t apply to them”.
However, it may be difficult to prove that authors have suffered financial losses specifically because of ChatGPT being trained on copyrighted material, even if the latter turned out to be true. ChatGPT may work “exactly the same” if it had not ingested the books, said Guadamuz, because it is trained on a wealth of internet information that includes, for example, internet users discussing the books.
OpenAI has become “increasingly secretive” about its training data, said Saveri and Butterick. In papers released alongside early iterations of ChatGPT, OpenAI gave some clues as to the size of the “internet-based books corpora” it used as training material, which it called only “Books2”. The lawyers deduce that the size of this dataset – estimated to contain 294,000 titles – means the books could only be drawn from shadow libraries such as Library Genesis (LibGen) and Z-Library, through which books can be secured in bulk via torrent systems.
This case will “likely rest on whether courts view the use of copyright material in this way as ‘fair use’”, said Lilian Edwards, professor of law, innovation and society at Newcastle University, “or as simple unauthorised copying.” Edwards and Guadamuz both emphasise that a similar lawsuit brought in the UK would not be decided in the same way, because the UK does not have the same “fair use” defence.
The UK government has been “keen on promoting an exception to copyright that would allow free use of copyright material for text and data mining, even for commercial purposes,” said Edwards, but the reform was “spiked” after authors, publishers and the music industry were “appalled”.
Since ChatGPT was launched in November 2022, the publishing industry has been in discussion over how to protect authors from the potential harms of AI technology. Last month, the Society of Authors (SoA) published a list of “practical steps for members” to “safeguard” themselves and their work. Yesterday, the SoA’s chief executive, Nicola Solomon, told the trade magazine the Bookseller that the organisation was “very pleased” to see authors suing OpenAI, having “long been concerned” about the “wholesale copying” of authors’ work to train large language models.
Richard Combes, head of rights and licensing at the Authors’ Licensing and Collecting Society (ALCS), said that current regulation around AI is “fragmented, inconsistent across different jurisdictions and struggling to keep pace with technological developments”. He encouraged policymakers to consult principles that the ALCS has drawn up which “protect the true value that human authorship brings to our lives and, notably in the case of the UK, our economy and international identity”.
Saveri and Butterick believe that AI will eventually resemble “what happened with digital music and TV and movies” and comply with copyright law. “They will be based on licensed data, with the sources disclosed.”
The lawyers also noted it is “ironic” that “so-called ‘artificial intelligence’” tools rely on data made by humans. “Their systems depend entirely on human creativity. If they bankrupt human creators, they will soon bankrupt themselves.”
OpenAI were approached for comment.
___________________________________________
Ella Creamer is a freelance politics and culture journalist.
Go to Original – theguardian.com