New York Times sues Perplexity AI for ‘illegal’ copying of content, and this feels like a significant moment in the ongoing debate about AI and copyright. The core of the issue is pretty straightforward: The New York Times is accusing Perplexity AI of essentially lifting their content, repackaging it, and sometimes even making stuff up while attributing it to the NYT. It’s like a digital version of plagiarism, but on a massive, automated scale.
The crux of the matter seems to be that Perplexity is not just quoting and citing, which is generally permissible, but actively reproducing and re-presenting the NYT’s articles. This act is the central point of contention in the lawsuit. If the NYT wins, the implications could be huge, potentially forcing AI companies to license the content they use or drastically change how they gather information. It could be a defining moment, setting legal precedents that either allow or outright deny AI companies the use of copyrighted content without permission.
A common sentiment I’m picking up is the feeling that this isn’t just about the NYT’s rights; it’s about the future of copyright in the age of AI. Some argue that if media companies can’t win these AI lawsuits, it could effectively cripple copyright enforcement, potentially leading to a more widespread acceptance of digital piracy. There’s a concern that AI companies, backed by venture capital, are essentially “hoovering up” massive amounts of content, which is a very different thing from, say, someone downloading a movie that’s no longer readily available.
The financial aspect is also significant. If the NYT wins, it could create another revenue stream that could go toward paying writers more or hiring more journalists. That would be a positive outcome for those writers, though existing NYT staff would presumably push for their share of any windfall. Of course, there’s always the possibility that the company would simply pocket the extra money.
One of the issues is that AI systems like Perplexity have a tendency to “hallucinate”: they invent information and present it as fact, sometimes attributing it to sources like the NYT. This is a key part of the lawsuit: the AI not only copies, but also fabricates and misrepresents the NYT’s work. It’s essentially “AI slop,” as one commenter put it.
The argument that AI is merely “indexing” or “quoting” content doesn’t quite fly when it is reprinting, repackaging, and re-presenting the NYT’s articles wholesale. Perplexity, already facing suits from multiple publishers, might lose its competitive edge, potentially cratering its valuation. The NYT also has several legal angles it could pursue to seek compensation.
This lawsuit really highlights the disparity between the actions of a billionaire-backed tech company and those of an individual. Using copyrighted content without permission to train AI models is a stark contrast to someone saving an image off Google. The sentiment is that this isn’t derivative work. If an individual sold t-shirts based on an artist’s style, they would be sued immediately. So why are AI companies allowed to do the same thing with content at scale?
The expectation is that lawsuits like this are usually settled out of court. However, some hope the NYT will instead push for a legal precedent that clearly defines the boundaries of content usage in AI. If the NYT wins, it could force AI companies to drastically change how they source information, which would be a big deal for content creators and everyone whose livelihood depends on that content.