The old web is dying, and the new web struggles to be born


A Dali-esque plate of eye candy generated using Google's Deep Dream

The web is changing, and AI is playing a big role in that change. AI systems can generate text and images in abundance, output that could overrun or outcompete the platforms we rely on for news, information, and entertainment. Yet the quality of this machine-generated content is often poor, and it is built in a way that is parasitic on the web as it exists today.

Google is trying to kill the 10 blue links. Twitter is being abandoned to bots and blue ticks. There’s the junkification of Amazon and the enshittification of TikTok. Layoffs are gutting online media. A job posting looking for an “AI editor” expects “output of 200 to 250 articles per week.” ChatGPT is being used to generate whole spam sites. Etsy is flooded with “AI-generated junk.” Chatbots cite one another in a misinformation ouroboros. LinkedIn is using AI to stimulate tired users. Snapchat and Instagram hope bots will talk to you when your friends do not. Redditors are staging blackouts. Stack Overflow mods are on strike. The Internet Archive is fighting off data scrapers, and “AI is tearing Wikipedia apart.” The old web is dying, and the new web struggles to be born.

The problem, in extremely broad strokes, is this. Years ago, the web was a place where individuals made things. They made homepages, forums, and mailing lists, and made a little money from them. Then companies decided they could do things better. They created slick and feature-rich platforms and threw their doors open for anyone to join. They put boxes in front of us, and we filled those boxes with text and images, and people came to see the contents of those boxes. The companies chased scale, because once enough people gather anywhere, there is usually a way to make money off them. But AI changes these assumptions.
Google Search underwrites the economy of the modern web, distributing attention and revenue across much of the internet. Google has been spurred into action by the popularity of Bing AI and ChatGPT as alternative search engines, and it's experimenting with replacing its traditional 10 blue links with AI-generated summaries. If the company goes ahead with this plan, the changes would be seismic.

A writeup of Google’s AI search beta from Avram Piltch, editor-in-chief of tech site Tom’s Hardware, highlights some of the problems. Piltch says Google’s new system is essentially a “plagiarism engine.” Its AI-generated summaries often copy text from websites word-for-word but place this content above source links, starving them of traffic. It’s a change that Google has been pushing for a long time, but look at the screenshots in Piltch’s piece and you can see how the balance has shifted firmly in favour of excerpted content. If this new model of search becomes the norm, it could damage the entire web, writes Piltch. Revenue-strapped sites would likely be pushed out of business and Google itself would run out of human-generated content to repackage. 

It is this dynamic, AI producing cheap content based on others' work, that is underwriting this change, and if Google goes ahead with its current AI search experience, the effects would be difficult to predict. Potentially, it would damage whole swathes of the web that most of us find useful: product reviews, recipe blogs, hobbyist homepages, news outlets, and wikis. Sites could protect themselves by locking down entry and charging for access, but that would also be a huge reordering of the web's economy. In the end, Google might kill the ecosystem that created its value, or change it so irrevocably that its own existence is threatened.


The evidence so far suggests it will degrade the quality of the web in general. As Piltch notes in his review, for all AI’s vaunted ability to recombine text, it is people who ultimately create the underlying data — whether that’s journalists picking up the phone and checking facts or Reddit users who have had exactly that battery issue with the new cordless ratchet and are happy to tell you how they fixed it. By contrast, the information produced by AI language models and chatbots is often incorrect. The tricky thing is that when it is wrong, it is wrong in ways that are difficult to spot. 

In the end, the future of the web is uncertain. AI may degrade the quality of the information available online, or it may yet enable new and genuinely useful forms of content. For now, the old web is dying, and the new web struggles to be born.
