The National Assembly has passed a partial amendment to the Libraries Act, addressing concerns over the influx of AI-generated “one-click publications.” The legislation excludes such works from legal deposit requirements, preventing potential budget waste and fraudulent compensation claims. The amendment grants the director of the National Library of Korea the authority, after review, to refuse legal deposit of AI-generated publications, and it establishes a framework for recovering compensation obtained through deception.
Korea’s recent decision to block AI-generated books from library deposits marks a significant moment in the ongoing conversation about artificial intelligence and its place in our cultural landscape. As a librarian, I’ve seen firsthand the growing deluge of AI-created content, and this policy resonates deeply with the challenges we face. The problem isn’t limited to AI-generated books themselves; it extends to the proliferation of patron requests for books that don’t actually exist, titles conjured up by tools like ChatGPT and then recommended to unsuspecting readers. It feels like a digital mirage, offering the illusion of knowledge without substance.
The core of this issue, as I see it, is about preserving the integrity and value of human creation. Libraries have always been sanctuaries for genuine human thought and expression. Allowing AI-generated content to mingle unchecked with these curated collections risks diluting that essence. It feels like a cheap plastic knockoff of creativity, and much like plastic, its long-term ripple effects could prove detrimental to our intellectual ecosystem. Why would we even want to fill our libraries with books that someone couldn’t be bothered to write with genuine human effort and inspiration?
The sentiment that we don’t want to read “slop that someone couldn’t be bothered to write” is a powerful one. Set such output next to the creative writing of a ten-year-old, and the difference is stark. AI-generated content, in its current form, often lacks the depth, originality, and emotional resonance that come from human experience. It’s understandable to question who would choose to read a generated book when there are countless authentic human stories waiting to be discovered. The very act of creation, the struggle and the triumph, is embedded in a human-authored book in a way that an AI simply cannot replicate.
There’s also the practical concern about the sheer volume of AI-generated content flooding online marketplaces. Platforms like Amazon are becoming inundated with these books, particularly in children’s literature and fiction. This creates an environment where it’s easy for creators to churn out thousands of titles, potentially making a quick buck without genuine literary merit. The sheer volume can overwhelm legitimate works, making it harder for human authors to gain visibility and for readers to find quality content. It’s a “sloppified slopness” that devalues the entire publishing industry.
However, it’s important to acknowledge the nuances. While the primary concern is the proliferation of low-quality, inauthentic content, AI also has its uses. As a librarian, I can appreciate how AI tools can sometimes assist in patron queries, like helping to identify a book based on fragmented memories of its plot, time period, and even color. This is a helpful application, demonstrating that AI isn’t inherently bad, but its output needs careful consideration and context. The danger arises when its capabilities are exploited to bypass the fundamental process of human creation.
The argument that generating books with AI is a simple solution to a patron’s request for a book they can’t quite recall is a double-edged sword. While it might offer an immediate answer, it bypasses the genuine discovery and the richness that comes from librarians using their expertise and vast knowledge to find existing works. It also sidesteps the question of whether a book that takes less time to create than it does to experience holds the same intrinsic value. This is where the distinction between a quick fix and true literary contribution becomes critical.
The notion that AI-generated books are akin to a “cheap plastic knockoff” of creativity is a compelling analogy. It suggests a superficial imitation rather than genuine innovation. While some might argue that this is a resistance to progress, there are instances where holding firm against a technological shift is the more sensible path. The current wave of AI-generated content feels like a technological shift that, for once, makes the “uptight, angry old scholar” archetype the reasonable one. It forces us to re-evaluate what we value in art and literature.
The comparison to Tetris, a game coded in a few hundred hours but played for thousands, raises an interesting point about the value of experience versus creation time. However, the crucial difference lies in the intent and the source of creativity. Tetris emerged from human ingenuity and provided a unique, engaging experience that continues to resonate. AI-generated books, in their current form, often lack that spark of human experience and original intent. They are a product of algorithms, not of lived emotions and unique perspectives. The question of whether experience changes after 100 hours is valid, but it doesn’t negate the fundamental question of *who* or *what* is creating the content being experienced.
Ultimately, Korea’s decision to block AI-generated books from library deposits is a necessary step in protecting the integrity of our literary heritage. It’s a recognition that libraries should be spaces for authentic human expression, not repositories for algorithmically produced content that prioritizes quantity over quality. It’s a statement that while AI has its place, the realm of books, with their power to connect us to human experience, deserves a higher standard.
