Background on Anthropic and the Lawsuit
Anthropic, a prominent artificial intelligence (AI) company, recently reached a settlement in a copyright lawsuit over the unauthorized use of copyrighted books to train its AI model, Claude. The settlement, which covers approximately 500,000 works, aims to compensate authors whose books were used without permission.
Key Parties Involved
- Anthropic: A leading AI research and safety company, known for developing Claude, an advanced language model that competes with ChatGPT.
- Plaintiffs: Authors Andrea Bartz, Charles Graeber, and Kirk Wallace Johnson, who claimed Anthropic illegally copied their books to train Claude.
The Lawsuit and Court Rulings
In June, a U.S. federal court ruled partially in favor of Anthropic, holding that training AI models on books, whether purchased or pirated, qualified as fair use under U.S. copyright law.
Judge William Alsup wrote in his decision that the use of the books to train Claude was "exceedingly transformative" and therefore fair use, comparing the AI training process to how humans learn by reading books.
Court’s Finding on Pirated Books
However, the court found that Anthropic's practice of downloading millions of pirated books to build a digital library violated copyright law.
Aparna Sridhar, Anthropic's Deputy General Counsel, noted that the court's historic June ruling establishing AI training as transformative fair use remains intact.
Settlement Details
The settlement, reportedly totaling about $1.5 billion, covers around 500,000 books, or approximately $3,000 per work. That figure is four times the statutory minimum damages of $750 per infringed work under U.S. copyright law.
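For readers checking the arithmetic, the per-work and total figures follow directly from the widely reported statutory minimum of $750 per work and the roughly 500,000 covered works:

$$ 4 \times \$750 = \$3{,}000 \qquad\text{and}\qquad 500{,}000 \times \$3{,}000 = \$1.5\ \text{billion} $$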
Reactions and Future Implications
“We are pleased that the court has given preliminary approval to the settlement,” said Aparna Sridhar, Anthropic’s Deputy General Counsel, in response to an AFP query.
“This decision will allow us to focus on developing safe AI systems,” she added.
Key Questions and Answers
- What is the lawsuit about? The authors Andrea Bartz, Charles Graeber, and Kirk Wallace Johnson sued Anthropic for allegedly using their books without permission to train its AI model, Claude.
- What was the court’s ruling? The court ruled that training AI models on books constitutes transformative fair use under U.S. copyright law, but held that downloading pirated books to build a digital library violated copyright.
- What does the settlement entail? Anthropic will compensate the affected authors approximately $3,000 per work for around 500,000 books used in training Claude.