Daily Shaarli

May 31, 2024

The 'Dead-Internet Theory' Is Wrong but Feels True - The Atlantic

Maybe You Missed It, but the Internet ‘Died’ Five Years Ago

A conspiracy theory spreading online says the whole internet is now fake. It’s ridiculous, but possibly not that ridiculous?

By Kaitlyn Tiffany

If you search the phrase “i hate texting” on Twitter and scroll down, you will start to notice a pattern. An account with the handle @pixyIuvr and a glowing heart as a profile picture tweets, “i hate texting i just want to hold ur hand,” receiving 16,000 likes. An account with the handle @f41rygf and a pink orb as a profile picture tweets, “i hate texting just come live with me,” receiving nearly 33,000 likes. An account with the handle @itspureluv and a pink orb as a profile picture tweets, “i hate texting i just wanna kiss u,” receiving more than 48,000 likes.

There are slight changes to the verb choice and girlish username and color scheme, but the idea is the same each time: I’m a person with a crush in the age of smartphones, and isn’t that relatable? Yes, it sure is! But some people on Twitter have wondered whether these are really, truly, just people with crushes in the age of smartphones saying something relatable. They’ve pointed at them as possible evidence validating a wild idea called “dead-internet theory.”

Let me explain. Dead-internet theory suggests that the internet has been almost entirely taken over by artificial intelligence. Like lots of other online conspiracy theories, the audience for this one is growing because of discussion led by a mix of true believers, sarcastic trolls, and idly curious lovers of chitchat. One might, for example, point to @_capr1corn, a Twitter account with what looks like a blue orb with a pink spot in the middle as a profile picture. In the spring, the account tweeted “i hate texting come over and cuddle me,” and then “i hate texting i just wanna hug you,” and then “i hate texting just come live with me,” and then “i hate texting i just wanna kiss u,” which got 1,300 likes but didn’t perform as well as it did for @itspureluv. But unlike lots of other online conspiracy theories, this one has a morsel of truth to it. Person or bot: Does it really matter?

Dead-internet theory. It’s terrifying, but I love it. I read about it on Agora Road’s Macintosh Cafe, an online forum with a pixelated-Margaritaville vibe and the self-awarded honor “Best Kept Secret of the Internet!” Right now, the background is a repeated image of palm trees, a hot-pink sunset, and some kind of liquor pouring into a rocks glass. The site is largely for discussing lo-fi hip-hop, which I don’t listen to, but it is also for discussing conspiracy theories, which I do.

In January, I stumbled across a new thread there titled “Dead Internet Theory: Most of the Internet is Fake,” shared by a user named IlluminatiPirate. Over the next few months, this would become the ur-text for those interested in the theory. The post is very long, and some of it is too confusing to bother with; the author claims to have pieced together the theory from ideas shared by anonymous users of 4chan’s paranormal section and another forum called Wizardchan, an online community premised on earning wisdom and magic through celibacy. (In an email, IlluminatiPirate, who is an operations supervisor for a logistics company in California, told me that he “truly believes” in the theory. I agreed not to identify him by name because he said he fears harassment.)

Peppered with casually offensive language, the post suggests that the internet died in 2016 or early 2017, and that now it is “empty and devoid of people,” as well as “entirely sterile.” Much of the “supposedly human-produced content” you see online was actually created using AI, IlluminatiPirate claims, and was propagated by bots, possibly aided by a group of “influencers” on the payroll of various corporations that are in cahoots with the government. The conspiring group’s intention is, of course, to control our thoughts and get us to purchase stuff.

As evidence, IlluminatiPirate offers, “I’ve seen the same threads, the same pics, and the same replies reposted over and over across the years.” He argues that all modern entertainment is generated and recommended by an algorithm; gestures at the existence of deepfakes, which suggest that anything at all may be an illusion; and links to a New York story from 2018 titled “How Much of the Internet Is Fake? Turns Out, a Lot of It, Actually.” “I think it’s entirely obvious what I’m subtly suggesting here given this setup,” the post continues. “The U.S. government is engaging in an artificial intelligence powered gaslighting of the entire world population.” So far, the original post has been viewed more than 73,000 times.

Obviously, the internet is not a government psyop, even though the Department of Defense had a role in its invention. But if it were, the most compelling evidence to me would be the dead-internet theory’s observation that the same news items about unusual moon-related events seem to repeat year after year. I swear I’ve been saying this for years. What is a super flower blood moon? What is a pink supermoon? A quick search of headlines from just this month brings up: “There’s Something Special About This Weekend’s Moon,” “Don’t Miss: Rare, Seasonal ‘Blue Moon’ Rises Tonight,” and “Why This Weekend’s Blue Moon Is Extra Rare.” I just don’t understand why everyone is so invested in making me look at the moon all the time? Leave me alone about the moon!

Dead-internet theory is a niche idea because it’s patently ridiculous, but it has been spreading. Caroline Busta, the Berlin-based founder of the media platform New Models, recently referenced it in her contribution to an online group show organized by the KW Institute for Contemporary Art. “Of course a lot of that post is paranoid fantasy,” she told me. But the “overarching idea” seems right to her. The theory has become fodder for dramatic YouTube explainers, including one that summarizes the original post in Spanish and has been viewed nearly 260,000 times. Speculation about the theory’s validity has started appearing in the widely read Hacker News forum and among fans of the massively popular YouTube channel Linus Tech Tips. In a Reddit forum about the paranormal, the theory is discussed as a possible explanation for why threads about UFOs seem to be “hijacked” by bots so often.

The theory’s spread hasn’t been entirely organic. IlluminatiPirate has posted a link to his manifesto in several Reddit forums that discuss conspiracy theories, including the Joe Rogan subreddit, which has 709,000 subscribers. In the r/JoeRogan comments, users argue sarcastically—or sincerely?—about who among them is a bot. “I’m absolutely the type of loser who would get swindled into living among bots and never realize it,” a member of the 4chan-adjacent Something Awful forum commented when the theory was shared there in February. “Seems like something a bot would post,” someone replied. Even the playful arguments about how everything is the same are the same.

That particular conversation continued down the bleakest path imaginable, to the point of this comment: “If I was real I’m pretty sure I’d be out there living each day to the fullest and experiencing everything I possibly could with every given moment of the relatively infinitesimal amount of time I’ll exist for instead of posting on the internet about nonsense.”

Anyway … dead-internet theory is pretty far out-there. But unlike the internet’s many other conspiracy theorists, who are boring or really gullible or motivated by odd politics, the dead-internet people kind of have a point. In the New York story that IlluminatiPirate invokes, the writer Max Read plays with paranoia. “Everything that once seemed definitively and unquestionably real now seems slightly fake,” he writes. But he makes a solid argument: He notes that a majority of web traffic probably comes from bots, and that YouTube, for a time, had such high bot traffic that some employees feared “the Inversion”—the point when its systems would start to see bots as authentic and humans as inauthentic. He also points out that even engagement metrics on sites as big and powerful as Facebook have been grossly inflated or easily gamed, and that human presence can be mimicked with click farms or cheap bots.

Some of this may be improving now, for better or for worse. Social-media companies have gotten a lot better at preventing the purchase of fake views and fake likes, while some bot farmers have, in response, become all the more sophisticated. Major platforms still play whack-a-mole with inauthentic activity, so the average internet user has no way of knowing how much of what they see is “real.”

But more than that, the theory feels true: Most weeks, Twitter is taken over by an argument about how best to practice personal hygiene, or which cities have the worst food and air quality, which somehow devolves into allegations of classism and accusations of murder, which for whatever reason is actually not as offensive as classism anymore. A celebrity is sorry. A music video has broken the internet. A meme has gotten popular and then boring. “Bennifer Might Be Back On, and No One’s More Excited Than Twitter.” At this point, you could even say that the point of the theory is so obvious, it’s cliché—people talk about longing for the days of weird web design and personal sites and listservs all the time. Even Facebook employees say they miss the “old” internet. The big platforms do encourage their users to make the same conversations and arcs of feeling and cycles of outrage happen over and over, so much so that people may find themselves acting like bots, responding on impulse in predictable ways to things that were created, in all likelihood, to elicit that very response.

Thankfully, if all of this starts to bother you, you don’t have to rely on a wacky conspiracy theory for mental comfort. You can just look for evidence of life: The best proof I have that the internet isn’t dead is that I wandered onto some weird website and found an absurd rant about how the internet is so, so dead.

Disrupting deceptive uses of AI by covert influence operations | OpenAI

We’ve terminated accounts linked to covert influence operations; no significant audience increase due to our services.

OpenAI is committed to enforcing policies that prevent abuse and to improving transparency around AI-generated content. That is especially true with respect to detecting and disrupting covert influence operations (IO), which attempt to manipulate public opinion or influence political outcomes without revealing the true identity or intentions of the actors behind them.

In the last three months, we have disrupted five covert IO that sought to use our models in support of deceptive activity across the internet. As of May 2024, these campaigns do not appear to have meaningfully increased their audience engagement or reach as a result of our services.

This blog describes the threat actors we disrupted, attacker trends we identified, and important defensive trends - including how designing AI models with safety in mind in many cases prevented the threat actors from generating the content they desired, and how AI tools have made our own investigations more efficient. Alongside this blog, we are publishing a trend analysis that describes the behavior of these malicious actors in detail.

Read the full report

Threat actors work across the internet. So do we. By collaborating with industry, civil society, and government, we tackle the creation, distribution, and impact of IO content. Our investigations and disruptions were made possible in part because there’s been so much detailed threat reporting over the years by distribution platforms and the open-source community. OpenAI is publishing these findings, as other tech companies do, to promote information sharing and best practices amongst the broader community of stakeholders.

Disruption of covert influence operations

Over the last three months, our work against IO actors has disrupted covert influence operations that sought to use AI models for a range of tasks, such as generating short comments and longer articles in a variety of languages, making up names and bios for social media accounts, conducting open-source research, debugging simple code, and translating and proofreading texts.

Specifically, we disrupted:

  • A previously unreported operation from Russia, which we dubbed Bad Grammar, operating mainly on Telegram and targeting Ukraine, Moldova, the Baltic States and the United States. The people behind Bad Grammar used our models to debug code for running a Telegram bot and to create short, political comments in Russian and English that were then posted on Telegram.
  • An operation originating in Russia known as Doppelganger. People acting on behalf of Doppelganger used our models to generate comments in English, French, German, Italian and Polish that were posted on X and 9GAG; translate and edit articles in English and French that were posted on websites linked to this operation; generate headlines; and convert news articles into Facebook posts.
  • A Chinese network known as Spamouflage, which used our models to research public social media activity, generate texts in languages including Chinese, English, Japanese and Korean that were then posted across platforms including X, Medium and Blogspot, and debug code for managing databases and websites, including a previously unreported domain, revealscum[.]com.
  • An Iranian operation known as the International Union of Virtual Media (IUVM), which used our models to generate and translate long-form articles, headlines and website tags that were then published on a website linked to this Iranian threat actor, iuvmpress[.]co.
  • Activity by a commercial company in Israel called STOIC (we describe this as activity rather than a company’s operation because, technically, we disrupted the activity, not the company itself). We nicknamed this operation Zero Zeno, after Zeno, the founder of the Stoic school of philosophy. The people behind Zero Zeno used our models to generate articles and comments that were then posted across multiple platforms, notably Instagram, Facebook, X, and websites associated with this operation.

The content posted by these various operations focused on a wide range of issues, including Russia’s invasion of Ukraine, the conflict in Gaza, the Indian elections, politics in Europe and the United States, and criticisms of the Chinese government by Chinese dissidents and foreign governments.

So far, these operations do not appear to have benefited from meaningfully increased audience engagement or reach as a result of our services. Using Brookings’ Breakout Scale, which assesses the impact of covert IO on a scale from 1 (lowest) to 6 (highest), none of the five operations included in our case studies scored higher than a 2 (activity on multiple platforms, but no breakout into authentic communities).
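To make the rubric concrete, here is a minimal sketch encoding only the two lowest Breakout Scale categories as described above (1: a single platform, no breakout; 2: multiple platforms, no breakout into authentic communities). The Operation data model and the scoring function are illustrative assumptions, not OpenAI or Brookings tooling, and the higher categories are left out because this post does not define them:

```python
# Sketch of the low end of the Breakout Scale as described above.
# The Operation dataclass and scoring function are hypothetical.
from dataclasses import dataclass


@dataclass
class Operation:
    name: str
    platforms: set[str]        # platforms where the operation posted content
    authentic_breakout: bool   # did content spread into authentic communities?


def breakout_category(op: Operation) -> int:
    """Score 1 (single platform) or 2 (multiple platforms), absent breakout."""
    if op.authentic_breakout:
        # Categories 3-6 depend on criteria the post does not spell out.
        raise NotImplementedError("breakout cases are outside this sketch")
    return 2 if len(op.platforms) > 1 else 1


op = Operation("example_network", {"Instagram", "Facebook", "X"}, False)
print(breakout_category(op))  # -> 2 (multiple platforms, no breakout)
```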

Attacker trends

Based on the investigations into influence operations detailed in our report, and the work of the open-source community, we have identified the following trends in how covert influence operations have recently used artificial intelligence models like ours.

  • Content generation: All these threat actors used our services to generate text (and occasionally images) in greater volumes, and with fewer language errors than would have been possible for the human operators alone.
  • Mixing old and new: All of these operations used AI to some degree, but none used it exclusively. Instead, AI-generated material was just one of many types of content they posted, alongside more traditional formats, such as manually written texts or memes copied from across the internet.
  • Faking engagement: Some of the networks we disrupted used our services to help create the appearance of engagement across social media - for example, by generating replies to their own posts (a minimal detection sketch follows this list). This is distinct from attracting authentic engagement, which none of the networks we describe here managed to do to a meaningful degree.
  • Productivity gains: Many of the threat actors that we identified and disrupted used our services in an attempt to enhance productivity, such as summarizing social media posts or debugging code.
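On the faking-engagement point, one simple way to surface the pattern is to measure how often a suspected network’s replies point back at its own accounts rather than at outsiders. The sketch below is an illustrative heuristic, not OpenAI’s detection method, and the post schema (author, reply_to) is an assumption:

```python
# Hypothetical heuristic for manufactured engagement: what fraction of a
# suspected network's replies target accounts inside the same network?
def internal_reply_ratio(posts: list[dict], network: set[str]) -> float:
    replies = [p for p in posts
               if p["author"] in network and p.get("reply_to")]
    if not replies:
        return 0.0
    internal = sum(1 for p in replies if p["reply_to"] in network)
    return internal / len(replies)


posts = [
    {"author": "acct_a", "reply_to": "acct_b", "text": "Great point!"},
    {"author": "acct_b", "reply_to": "acct_a", "text": "Exactly right."},
    {"author": "acct_a", "reply_to": "organic_user", "text": "Agreed."},
]
print(internal_reply_ratio(posts, {"acct_a", "acct_b"}))  # ~0.67
```

A ratio near 1.0 proves nothing on its own, but combined with other signals (shared creation dates, identical phrasing) it is the kind of feature an analyst might weigh.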

Defensive trends

While much of the public debate so far has focused on the potential or actual use of AI by attackers, it is important to remember the advantages that AI offers to defenders. Our investigations also benefit from industry sharing and open-source research.

  • Defensive design: We impose friction on threat actors through our safety systems, which reflect our approach to responsibly deploying AI. For example, we repeatedly observed cases where our models refused to generate the text or images that the actors asked for.
  • AI-enhanced investigation: Similar to our approach to using GPT-4 for content moderation and cyber defense, we have built our own AI-powered tools to make our detection and analysis more effective. The investigations described in the accompanying report took days, rather than weeks or months, thanks to our tooling. As our models improve, we’ll continue leveraging their capabilities to improve our investigations too.
  • Distribution matters: Like traditional forms of content, AI-generated material must be distributed if it is to reach an audience. These operations posted content across a wide range of platforms, including X, Telegram, Facebook, Medium, Blogspot, and smaller forums, but none managed to engage a substantial audience.
  • Importance of industry sharing: To increase the impact of our disruptions on these actors, we have shared detailed threat indicators with industry peers. Our own investigations benefited from years of open-source analysis conducted by the wider research community.
  • The human element: AI can change the toolkit that human operators use, but it does not change the operators themselves. Our investigations showed that these actors were as prone to human error as previous generations have been - for example, publishing refusal messages from our models on social media and their websites (see the sketch below). While it is important to be aware of the changing tools that threat actors use, we should not lose sight of the human limitations that can affect their operations and decision making.
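That kind of slip is also easy to scan for. Here is a minimal sketch that flags posts containing a model’s refusal message verbatim; the marker phrases are illustrative assumptions, since real refusal wording varies by model and version:

```python
# Illustrative check for the operator error described above: a social-media
# post that pastes in a model's refusal message verbatim.
REFUSAL_MARKERS = (
    "as an ai language model",
    "i cannot fulfill this request",
    "i'm sorry, but i can't",
)


def looks_like_pasted_refusal(post_text: str) -> bool:
    text = post_text.lower()
    return any(marker in text for marker in REFUSAL_MARKERS)


print(looks_like_pasted_refusal(
    "As an AI language model, I cannot create content that promotes..."
))  # -> True
```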

We are committed to developing safe and responsible AI, which involves designing our models with safety in mind and proactively intervening against malicious use. Detecting and disrupting multi-platform abuses such as covert influence operations can be challenging because we do not always know how content generated by our products is distributed. But we are dedicated to finding and mitigating this abuse at scale by harnessing the power of generative AI.