About a month ago, I wrote about a viral book of “lost” herbal remedies that had sold 60,000 copies on the TikTok store at the time, despite appearing to violate some of the app’s policy on health misinformation. Sales of the book were boosted by popular videos from wellness influencers on the app, some with millions of views, falsely claiming that the once-obscure 2019 book contained natural cures for cancer and other ailments.
The influencers, and TikTok itself, made money from sales of this misleading book. After I brought all of this to TikTok's attention, the videos I flagged to a company spokesperson were reviewed and removed for violating TikTok's policy banning health misinformation.
But the book remained for sale, new influencers emerged, and I have kept seeing TikTok Shop promotions for The Lost Book of Herbal Remedies ever since.
“This right here is the reason they are trying to ban this book,” said a TikTok Shop seller, pointing to the list of herbal cancer treatments in the book. He later urged his viewers to click through a link to the store listing and make a purchase right away, because “this one probably won’t last forever because of what’s in it.”
The video was viewed more than 2 million times in two days. Follow the link and you will see that sales of the book have doubled since my article came out: The Lost Book of Herbal Remedies has now sold more than 125,000 copies through the TikTok Shop e-commerce platform alone. And the book’s popularity doesn’t stop there: as of June 5, it is the sixth best-selling book on Amazon and has spent seven weeks on Amazon’s bestseller list.
The ‘invisible rulers’ of online attention
I thought about my experience digging into The Lost Book of Herbal Remedies while reading the upcoming book Invisible Rulers by Stanford Internet Observatory researcher Renee DiResta. The book explores and contextualizes how bad information and “tailor-made realities” became so powerful and prominent online. She charts how the “collision of the rumor mill and the propaganda machine” on social media has helped create a trinity of influencers, algorithms, and crowds that work symbiotically to produce the pseudo-events, Twitter main characters, and conspiracy theories that capture attention and have shattered consensus and trust.
DiResta’s book is part history, part analysis, and part memoir, ranging from pre-internet research into the psychology of rumor and propaganda to the social media era’s biggest moments of online conspiracy and intimidation. Ultimately, DiResta applies what she has learned in a decade of painstaking research into online disinformation, manipulation, and abuse to her personal experience as the target of a series of baseless accusations that, despite the lack of evidence, led Rep. Jim Jordan, chairman of the House Select Subcommittee on the Weaponization of the Federal Government, to open an investigation into her work.
I think many people have a very understandable instinct when they read about online misinformation or disinformation: they want to know why it’s happening and who is to blame, and they want the answer to be simple. Hence the meme-like arguments about “Russian bots” that helped Trump win the 2016 presidential election. Or the hope that deplatforming the one person who went viral saying something wrong and harmful will fix the problem. Or the belief that we can simply moderate online harm out of existence.
DiResta’s book explains why these approaches will always fall short. It may be satisfying to blame “the algorithm” for a dangerous viral trend, but the algorithm has never worked without human choices. As DiResta writes, “virality is collective behavior.” Algorithms can surface, nudge, and amplify content, but to do that effectively they need user data.
Parables, panic, and prevention
Writing about individual viral rumors, conspiracy theories, and products can sometimes feel like telling parables: The Lost Book of Herbal Remedies becomes a lesson in how anything can become a TikTok Shop bestseller, as long as the influencers pushing the product are good enough at their job.
Most of these parables in the disinformation space do not have a tidy or happy ending. Disinformation reporter Ali Breland wrote about how QAnon became “everything” in his latest piece for Mother Jones. Breland begins with the parable of Wayfair, the budget furniture retailer that became the center of a moral panic over pedophiles.
This moment in the history of online panics, which also features heavily in DiResta’s book, occurred in the summer of 2020, after many QAnon influencers and hubs of QAnon activity were banned from mainstream social media (a move I, incidentally, interviewed DiResta about at the time, for a piece questioning whether it had come too late to have any meaningful effect on QAnon’s influence).
Here’s what happened: someone online noticed that Wayfair was selling expensive cabinets. The cabinets had feminine names. The person drew some mental dots and connected them: these listings must be coded evidence of a child trafficking ring. The idea caught fire in QAnon spaces and quickly spread beyond those paranoid enclaves. The wild, debunked idea co-opted a real hashtag used to raise awareness of actual human trafficking, disrupting real investigations.
Breland describes in his Mother Jones piece how the central tenets of the QAnon conspiracy theory reached far beyond its believers and stayed there. Now, “[w]e are in an age of obsessive, strange, and widespread fear of pedophilia — an age where the paranoid thinking of QAnon is no longer confined to the political margins of middle-aged posters and boomers terminally lost in the cyber world,” he wrote.
Wayfair’s moral panic didn’t become a trend purely because of bad algorithms; it was evidence that the attention QAnon had previously attracted had worked. The hashtags and influencers were banned, but the crowd persisted, and to some extent we were all part of it.
The Lost Book of Herbal Remedies became a bestseller by flowing through some well-worn grooves. The influencers who promoted it knew what they could and couldn’t say from a moderation standpoint, and as those who broke the rules were removed, new influencers showed up to earn those commissions. My article, and my efforts to bring this trend to TikTok’s attention, haven’t done much to slow demand for this inaccurate book. So what would work?
DiResta’s ideas here reflect conversations that have been happening among disinformation experts for some time. There are a number of things platforms should absolutely do from a moderation perspective, such as removing automated trending topics, introducing friction when users interact with certain kinds of content, and generally giving users more control over what they see in their feeds and communities. DiResta also points to the importance of education and “prebunking,” a more preventive approach to tackling false information that focuses on the tactics and tropes of online manipulation. And she calls for transparency.
Would people be less likely to believe in a vast conspiracy to censor conservatives on social media if there were a public database of platforms’ moderation actions? Would people be less eager to buy a book of questionable natural cures if they knew more about the commissions influencers earn for promoting it? I don’t know. Maybe!
But I do know this: after a decade of covering online culture and information manipulation, I don’t think I’ve ever seen things as bad as they are right now. It’s worth trying at least something.