Allow me to explain my toilet theory of the internet. The premise, while unprovable, is quite simple: at any given moment, much of the teeming, frenetic activity we experience online (clicks, views, posts, comments, likes, and shares) comes from people scrolling their phones in the bathroom.
Of course, the toilet theory is not strictly literal. Mindless scrolling isn’t limited to the bathroom; plenty of other idle moments involve bored, absent swiping, whether waiting in line or sitting in heavy traffic. Right now, someone somewhere is probably reading an article or liking an Instagram post with a phone in one hand and an irritable child in the other.
Above all, the toilet theory reminds me that the Internet is a huge place, visited countless times every day by billions of people in between and during all the mundane things they have to do. As a writer, I use this framework to check my ego and remember that I have very little time to engage a reader with whatever I’m trying to get them to read, but also that my imagined audience of undistracted, fully engaged readers is an idealized one. I’m distracted like everyone else: sometimes I read deeply, but most of my non-work browsing consists of mindlessly scrolling through articles to find the bit that catches my attention, or pecking a typo-filled query about a home-improvement product into Google while walking from the parking lot into a Lowe’s and nearly getting hit by a car.
I’ve been thinking about my toilet theory this week, after Google announced its new suite of generative-AI tools, including an updated version of its search engine that will “do the Googling for you.” The company has been experimenting with generative AI at the top of search results for a while now, with mixed results: Occasionally the service “hallucinates” and confidently answers questions with made-up or incorrect information. Now the company is adding AI Overviews, a way for Google to collect and sort information in response to a question. (If you’re looking for a restaurant, options can be sorted into categories, such as ambience.) Ultimately, generative search simply summarizes information from sources around the web and presents it in an easily digestible format.
Organizations that rely on Google to send people to their websites (publishers, for example) are concerned about this shift. Analytics companies have dubbed such searches “zero-click searches”: if the answer is right there in the search results, why would most people bother to follow a link to the website the summary came from? And publishers have reason to be wary. Over the past fifteen years, the Internet has been remade in Google’s image, giving rise to an entire cottage industry of search engine optimization dedicated to studying subtle shifts in the company’s algorithms and then, in some cases, gaming them to rank higher in Google results. And although the search engine was once beloved, a consensus has recently begun to form: people, including search experts, believe that the quality of Google’s results has deteriorated, thanks in part to the abundance of low-quality SEO bait.
Google doesn’t seem concerned. Liz Reid, the company’s head of search, wrote on the company’s blog that “the links in AI Overviews generate more clicks than if the page appeared as a traditional web listing for that search.” And in an interview with the Associated Press, Reid argued: “The reality is that people do want to click to the web, even if they have an AI overview. They start with the AI overview and then want to dig deeper into it.” She also noted that Google will try to use the tool to “send the most useful traffic to the web.” The implication is that Google would rather not destroy the Internet. If people are no longer encouraged to publish information, where will the AI get its answers?
But the quote from Reid that I find most illuminating is one from earlier this week. “People’s time is valuable, right? They are dealing with difficult things,” she told Wired. “If you have the ability with technology to help people get answers to their questions, to get more work out of it, why wouldn’t we want to pursue that?” Although I doubt she would put it that way, Reid offered her own definition of the toilet theory. People use Google to find information in a hurry: the average Googler looks less like an opposition researcher or a librarian and more like a concerned parent typing barely comprehensible fragments into their phone’s browser, along the lines of milk bird flu safe? Some people spend a lot of time searching as deeply as possible and sifting through results to compare information. But a recent analysis shows that most people visit just one page when they Google; the same analysis found that about half of all search sessions are completed in less than a minute. For this reason, it is in the company’s best interest to make searching as quick and hassle-free as possible.
Yet this is a sensitive topic for the search giant. People are naturally wary of generative AI, and the perception that Google could serve some people better by simply giving them an answer, rather than expecting them to click through to another website, has also surfaced in antitrust complaints against the company. It’s no surprise that Google has gone to some lengths to explain how the new technology could encourage more web browsing. The company also unveiled LearnLM, an AI feature that it says could function like a tutor, breaking down information people encounter while using Google services such as Search and YouTube. In an interview, a Google executive told my colleague Matteo Wong that LearnLM is “an interactive information experience,” one that serves users who want more than a summary and are more likely to click on a plethora of links. Whether LearnLM and similar products work as described is an open question, as is whether people will want to collaborate with a large language model to do research (or whether they will enable the feature at all).
I can’t claim to know Google’s true ambitions, but recent history has shown that tech companies often paint a rosy, unrealistic picture of how people will actually use their products. I’m reminded of Facebook’s move, beginning in 2017, to shift the company’s focus from the news feed to groups and private “meaningful communities.” To celebrate, Mark Zuckerberg gave a speech highlighting many of the uplifting communities on the platform: support groups for women and disabled veterans, groups for fans of the video game Civilization. He said the company would use AI technology to recommend groups to people based on their interests. “When you bring people together, you never know where it will lead,” he told the crowd.
The quote turned out to be telling. A lasting legacy of Facebook’s community push is that it effectively helped connect large groups of vaccine skeptics, election deniers, and disinformation peddlers, who could then coordinate and pollute the Internet with lies and propaganda. In 2020, Facebook began removing or restricting thousands of QAnon-related groups and pages, some with thousands of users, after the conspiracy movement grew unchecked on the site. Just before the 2020 election, the company was implicated when an FBI complaint revealed that a plot to kidnap Michigan Governor Gretchen Whitmer had been organized in part in a Facebook group.
Likewise, sales pitches for generative AI tend to frame the products as assistants and productivity tools. ChatGPT and other chatbots are romanticized as creative partners and sounding boards: ways to stress-test ideas or eliminate busywork. In some cases that is certainly true, but a plethora of examples from schools and universities show that many students see the products as a shortcut, a way to cheat and escape the drudgery of writing term papers. And content farms don’t use the tools as creative partners; they use generative AI to replace writers and churn out questionable nonsense optimized for search engines (which Google could eventually summarize with its own generative AI). In this case, what is marketed as an intelligent productivity tool is actually a race to the bottom, one that could lead to the Internet eating itself.
And that brings us back to my toilet theory. My intention is not to scold or moralize; this is just an attempt to see the internet for what it is: a collection of people using the services at their disposal to get through their busy, messy lives.
As I watch Google roll out these tools, knowing full well how people will use them in the real world, I struggle to find any logic beyond a cynical short-term profit motive (or the desire not to be seen as losing the AI race). Google’s zero-click effect could soon create a CliffsNotes version of the web, and any attempt to prevent this would likely mean turning away from generative AI altogether.
It’s also possible (and somewhat scary) that Google doesn’t see a future for the Internet at all, at least not for the Internet as we know it. In an interview last year with The Verge, CEO Sundar Pichai extolled the virtues of the web-page-based Internet, but also offered a line that struck me when I revisited it this week: “Mobile has come, video is here to stay, and so there will be a lot of different types of content. The web is no longer as central to everything as it once was. And I think that’s been the case for a while.” What’s left unsaid is that the web may no longer be at the center of everything because of Google itself: the slow decline of its search results and its power over how and what websites publish.
Google depends on the Internet: the endless array of sites it indexes to “organize the world’s information.” But what happens to the Internet when Google feels that it has succeeded in accomplishing the task outlined in its mission statement? Maybe we’ll find out soon.