The toaster. A humble device that has silently revolutionized our everyday lives. Every morning when the first sunlight floods their bedrooms, millions of people stumble into their kitchen half asleep, place a piece of bread into the magical appliance and wait for a crisp, perfectly brown slice to emerge and complement their peanut butter or jam. But who was the genius inventor behind this device, the person who permanently altered our breakfast landscape? The person who — according to absolutely no serious historian — has shaped the course of history forever? Do you happen to know? If not, don’t worry. Neither did major news outlets, the Scottish government, or a US museum. Or any person relying on the corresponding Wikipedia article, for that matter.
In 2012, a group of British students was bored during a lecture and came up with an interesting way of having some innocent fun: They edited the Wikipedia article about the electric toaster to claim that one of them, a man called Alan MacMasters, had invented the toaster back in 1893. To their surprise, the change was accepted.
Fueled by their success, they later created an entire, completely made-up Wikipedia page on MacMasters himself, including plenty of fake biographical details. This page featured an absolute masterpiece (or shall I say MacMasterpiece) of photo-editing, a black-and-white filtered version of a photograph they took of the student:
Edited photograph of Alan MacMasters, which used to be the main image for the now-deleted article on the supposed inventor of the toaster. In hindsight, it seems very curious it was thought to be genuine. (Source: Wikimedia)
The result? The article was online for more than a decade, a primary school came up with a celebration day for MacMasters, and numerous news articles cited him as the inventor of the toaster.
This is a striking example of how history can be created out of nothing: The students didn’t even put much effort into faking the photo because it was supposed to be a joke — but thanks to this lack of effort, a Reddit user grew suspicious of the article and sparked an investigation that eventually led to its deletion.
But obviously, such a trivial error couldn’t happen anymore in our technologically advanced age of ever-improving artificial intelligence — right? Let’s ask the large language model Claude to clear up this hoax once and for all:
Screenshot of the author showing a conversation with Claude 3.7 Sonnet on 18 March 2025.
Oh… This is bad. But at the same time, it’s not that surprising: After all, the hoax spread everywhere and thus appeared many times in the data used to train the AI. A similar phenomenon is actually observed in humans, where it is called the illusory truth effect. This cognitive bias suggests that when we repeatedly hear the same wrong claim across different sources, we are more likely to believe it’s true.
Psychologists believe it happens because our brains feel more at ease processing information that we have encountered before, and this ease is then mistaken for truthfulness. Consider, for example, how urban legends like “we only use 10% of our brains” persist despite being thoroughly debunked by neuroscientists. The claim appears in films (it’s the entire premise of the 2014 film Lucy starring Scarlett Johansson), casual conversations, and motivational content. With each repetition, it burrows deeper into our collective memory — not because it’s accurate, but because it’s familiar. This raises an uncomfortable question: If it is this easy to fabricate incorrect information, and at the same time so tricky to remove it again (both from the internet and from our brains), how do we stop fake stories from spreading about scientific discoveries, global news, or health treatments?
With the rise of image generators that create near-indistinguishable photorealistic images and text generators that compose sophisticated human-like prose, distinguishing fact from fiction will only grow more challenging from now on. To better understand this problem, let us have a quick look at how we ended up in this situation.
Books, a Temporary Occurrence?
From the dawn of humanity, people mostly shared knowledge by talking to each other — spreading news (and gossip) by word of mouth, passing on traditions through rituals and handing down practical skills like how to build a house through training from one generation to the next.
Everything changed in the year 1440 when a German craftsman named Gutenberg invented a magical device called the printing press. Until then, only the privileged few could afford to hire scribes to create their own copies of the Bible or other texts. With the printing press, books and pamphlets suddenly became the standard for spreading information, and printed texts soon found their way into bookshelves all around the world.
Everything changed yet again about 500 years later: Scientists at a particle collider developed a curious invention called the World Wide Web, WWW for short. While transferring knowledge using books was comparatively expensive and slow, personal blogs, Wikipedia, and social media suddenly allowed it to travel instantly while mostly ignoring country borders. In a way, today’s situation is closer to the era in which people talked to each other directly than to the age of the printed book. After all, with books, information only flows in one direction: from the people making the books to the public. But the modern invention of social media is comparable to shouting across an ancient marketplace and getting into a heated discussion — just with more global reach.
For this reason, it almost seems like the age of the book was just a short transition stage, a period journalist Jeff Jarvis calls The Gutenberg Parenthesis:
According to Jarvis, the invention of the printing press forms the opening parenthesis of the knowledge transfer via books, while the advent of the internet provides the closing parenthesis. Of course, both the beginning and ending of the period are fuzzier than I made it look here.
The digital age also allows us to transfer knowledge without the expensive filter of printing: It costs cents to host the complete works of Shakespeare, every Harry Potter book and the entire Encyclopedia Britannica on your own web server, and anyone can access the content without the need to change out of their pajamas and head to the community center. You can now take online courses about advanced pickle fermentation, cat photography, or any other subject you are interested in.
We have also gained another capability: In George Orwell’s novel 1984, employees of the Ministry of Truth rewrite newspaper articles by creating new physical copies and destroying the old ones — usually when a previous prediction turned out to be wrong. In a sense (though luckily a less dystopian one), we are able to do the same now: When a text is released on a blog or an online newspaper, it can, for the most part, be edited without anyone noticing. [Edit: Unless it is pointed out, of course.] Imagine how unusual all of this is from a book-focused perspective: Once a book is printed and out in the world, there is no way to reliably call it back. No matter how annoyed you are at the needless typo you made or the clear factual error that was pointed out to you 5 minutes after the printing process began, that text is going to reach people as long as the book is passed around — which could be decades or even centuries.
But maybe this is not a disadvantage after all, but rather a book’s greatest strength? I can grab my jacket at this very moment and walk into a nearby second-hand bookshop to pick up a book from the 1800s. As I turn those weathered pages, I’m transported directly into the thoughts and world of someone who lived centuries before me. Sure, the book will have collected some damage over time and some of the vocabulary might prompt me to consult my (probably online) dictionary, but the concept of the book and the way I interact with it haven’t changed.
Books Don’t Need Software Updates
Imagine for a second doing what I just described with a digital medium even from only 20 years ago — after all, there are entire businesses transferring data from now unreadable tapes, floppy disks and CDs onto hard drives to recover the otherwise lost childhood memories.
Digital documents pose a similar problem: People who wrote texts during the 1990s in the now obscure program PageMaker will notice that their precious creations can no longer be opened on modern systems — simply because the company behind the software has discontinued the product needed to open them.
This means not even digital files are guaranteed to be viewable by a reader in 50 years’ time. I don’t remember the last time a change in book page format made me unable to lift its cover and read its contents. When it comes to preserving information, books simply have a significantly longer shelf life. (These days, the main use left for floppy disks is age verification: Ask a person whether they recognize one and you can immediately tell whether they are under 25.)

Yet despite these advantages, the book has been pronounced dead many times — usually with a hint at the electronic reader as its successor. While that prediction has not come true as of today, new technologies are starting to affect books in a different way: Authors no longer necessarily need a traditional publisher to distribute their books for them; they can instead act as their own publishers using a technique called print on demand. Here, books are only printed when a customer orders them, meaning they don’t take up expensive warehouse storage because they are shipped directly to the buyer’s address. While this approach certainly makes it easier for anyone to share their book without the traditional barriers, we can’t dismiss the fact that publishers serve an important role beyond simply distributing books: They act as quality filters, ensuring the text does not have three times more typos than there are words on the page. They also make sure the author had some original thought and the text is somewhat worthwhile to read.
It seems that, unfortunately, this lack of a filter is now being abused by some people: The largest self-publishing platform in the world has recently banned users from publishing more than three books per day. As you might guess, these users are not actually writing three books per day. Instead, they are generating large swaths of text using AI to maximize the chance that someone will buy at least one of these books on Amazon. They often specifically target the children’s book market, presumably because creating a coloring book is easier than writing the next best-selling thriller.
I recently witnessed this myself when I came across a travel guide which later turned out to be entirely AI-generated. You can imagine it as a 150-page-long conversation with ChatGPT including all the typical ingredients: listicles, weird-sounding subheadings, and straight-up hallucinations — in this case, supposedly famous restaurants and sights that turn out not to exist. Just imagine for a second readers who are not expecting AI in their book relying on these fake locations for their holiday plans.
The Library of Everything
The Infinity Book Tower in Prague is an impressive way of visualizing the vastness of human knowledge. (Source: Pixabay)
The situation reminds me of a short story by Jorge Luis Borges called The Library of Babel, in which the author describes an unimaginably large library. Each book in the library contains 410 pages filled with random sequences of characters drawn from a small alphabet of 25 symbols: 22 letters, the comma, the period, and the space. The library is so large, in fact, that it contains a book with every possible combination of characters that fits into 410 pages, which means anything that can ever possibly be said will be shelved somewhere — provided it fits the page constraints.
Necessarily, most of these books are filled with incoherent gibberish, such as unpronounceable combinations of the letters l, k and j, or an endless repetition of the letter A. At the same time however, we will find all works by Shakespeare, Agatha Christie and Stephen King — or any other author who has ever been and will ever be. Funnily enough, we will also find all possible misspelled versions of their works (including a Shakespeare play in which Hamlet is an insect exclaiming “to bee or not to bee”).
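Just how large would such a library have to be? A quick back-of-the-envelope calculation, using the figures from Borges’s story (410 pages, 40 lines per page, roughly 80 characters per line, and an alphabet of 25 symbols), gives a number of books so enormous that we can only sensibly count its digits:

```python
import math

# Dimensions of a single book, as described in Borges's story
pages = 410
lines_per_page = 40
chars_per_line = 80
alphabet_size = 25  # 22 letters plus the comma, the period, and the space

chars_per_book = pages * lines_per_page * chars_per_line  # 1,312,000 characters

# The number of distinct books is 25 ** 1,312,000 -- far too large to print,
# so we count its decimal digits via logarithms instead.
num_digits = math.floor(chars_per_book * math.log10(alphabet_size)) + 1

print(f"Characters per book: {chars_per_book:,}")
print(f"Digits in the number of possible books: {num_digits:,}")
```

The count comes out to a number with over 1.8 million digits — for comparison, the estimated number of atoms in the observable universe has only about 80.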
At this rate, with our information landscape increasingly dominated by machine-generated material, it almost feels like we are creating our own Library of Babel, mostly filled with gibberish, but also containing the occasional gem of human insight — though the gems are becoming increasingly difficult to find in the digital clutter.
Why Are We Clogging the Internet and Our Bookshelves?
It is easy to see that if generating text using a large language model is easier than writing a text yourself and there is some economic value in it (e.g. by allowing you to sell AI-generated books or generate clicks for a website), a large number of people are going to do just that. While in my opinion, there is no problem with using AI as a grammar checker or a sounding board, simply mass-generating and publishing text without giving it critical thought is a different matter entirely.
You may have noticed it yourself, but search results on the internet are increasingly filling up with fully AI-generated blogs that offer readers potentially inaccurate answers across all aspects of life, from relatively harmless topics like gardening and home improvement to potentially serious matters such as medical symptoms and tax advice. An AI trying to summarize search results recently recommended eating at least one small rock per day as part of a healthy diet, presumably because it mistook a satirical article for fact. Of course, it was never a good idea to blindly trust information on the internet, but the proportion of convincing-sounding yet subtly wrong content is increasing — simply because we now have the tools to generate it with very little effort.
It is only a matter of time, then, until this type of content also makes its way into our bookshelves. This is a more serious problem since books have traditionally received our trust and respect: More effort goes into printing and distributing books compared to publishing an article on the internet, which suggests to the reader that someone has gone through the effort to make sure the presented information is reasonably correct.
While print-on-demand services offer one obvious pathway for this content to cross over from the digital realm to the bookshelf, traditional authors and publishers may also increasingly rely on AI, simply because they have too much trust in the technology and it is convenient to use.
The Human Parenthesis
Despite all these challenges, I’m not a Luddite arguing that AI is inherently evil and we would all be better off by pulling its plug — and also deleting the internet while we are at it. But as with many other technologies, it’s how you use it that counts: When AI is used without enough serious thought — as we see happening at the moment — we will end up clogging the internet and now even bookshelves with potentially unoriginal and factually incorrect writing produced at massive scale.
I don’t want to wake up in a future where we have given up on the internet while a robot named WALL-E is left behind to clean up our digital wasteland, compacting endless cubes of content no human will ever read again. After all, what most of us value in human writing is that it offers something that algorithms cannot — authentic expressions of real-world experiences and the original thoughts those experiences inspire.
This human element is the difference between writing that follows known patterns, uses the most statistically likely words in sequence, builds sentences according to established formulas — and writing that takes unexpected turns and concludes in ways that no one could possibly bananas. We may even find a sudden jump of thought, a misplaced comma,, or even a bit of roughness in sentence structure. But who knows, maybe the next big step in AI is introducing random errors into writing?
If all else fails and we have had enough of it, we can always turn back to books printed before the age of AI — at least with those, we know that they are entirely human creations. I sometimes wonder whether, in an ironic twist of fate, the future won’t belong to shiny new digital devices, but rather to a return to physical libraries — libraries filled with original books and certified copies as more reliable sources of information.
Will we develop authentication techniques to verify authorship and put “human-crafted” stickers on future books? Will we give up on the divide between human and AI writing altogether? Who knows.
What I do know is that if we are all more careful in what kind of writing we value, we may never have to experience The Human Parenthesis — a brief part of history in which texts were primarily composed by humans.
There’s actually a digital version of the Library of Babel that you can try out to get a feel for it — it’s very spooky to know that anything that can be said within 410 pages has its place on a shelf there. Here’s the link: https://libraryofbabel.info/

Edit: As a reader pointed out to me, my article of course also has its place in the Library of Babel. If only I had known its location before — that would have spared me the hassle of having to write it.