Anthropic has agreed to pay $1.5 billion to settle a class-action lawsuit filed by authors who alleged the company used pirated copies of their works to train its AI chatbot, Claude. The settlement, which could be approved as early as Monday, covers approximately 500,000 books, with authors or publishers receiving around $3,000 per book. A federal judge previously found that while training AI on copyrighted books wasn’t illegal, Anthropic had wrongfully acquired millions of books through pirate websites. This landmark settlement sends a message to the AI industry regarding the consequences of using authors’ works to train AI.
That is a significant sum by any measure. It represents a massive financial commitment to resolve a class-action lawsuit brought by authors who claimed Anthropic used pirated copies of their books to train its AI chatbot. The settlement could prove a major turning point in the legal battles between AI companies and the creative professionals raising concerns about copyright infringement.
Under the agreement, Anthropic will pay roughly $3,000 for each of the estimated 500,000 books covered by the settlement. That's potentially the largest copyright recovery ever, and the first of its kind in the AI era, according to one of the lawyers representing the authors. But it does make you wonder: is $3,000 per book enough?
It's also worth asking whether this settlement could shape how future copyright lawsuits against AI companies are decided. Because it's a settlement rather than a verdict, this case doesn't establish a legal precedent in the traditional sense, but it's still hugely important. The case hinged on how Anthropic obtained the books, not on what they were used for. The company got them from pirated sources, period. The use of those books as AI training data was almost beside the point.
The implications are worth unpacking. At its core, the case focused on the illegal acquisition of copyrighted material, regardless of its subsequent use in AI training. That means the settlement leaves open the question of whether it's acceptable to use copyrighted works for AI training when those works are acquired legally, such as by buying them at retail.
Then there's the question of the future. The idea of a payout being viewed as just the cost of doing business is concerning. Is $1.5 billion enough of a deterrent? And if the piracy had never occurred, if the works had simply been purchased, would that have resolved the underlying dispute?
The core issue boils down to copyright infringement. Although copyright attaches automatically to published work, the settlement covers only works registered with the U.S. Copyright Office. And with authors receiving about $3,000 a book, it's fair to ask whether that is adequate compensation, especially given the vast amounts of money generated by the technology trained on these works.
Nor does the arrangement guarantee a win for authors going forward. They don't "get the rights" here. The settlement isn't about whether the company gets to continue using the data; it's about resolving the fact that the training data was stolen.
There's a key distinction underpinning all of this: a federal judge previously ruled that AI training itself was fair use, but the method by which the training data was obtained, piracy, is what led to this settlement. It's fair use to train, but not to steal. That earlier ruling shaped the outcome: the fair-use finding protected the training, while the reliance on pirated material created the liability. Had Anthropic simply purchased the books it used for training in the first place, it likely wouldn't have had a problem.
