AI Doomer narratives
Pessimistic and often contradictory narratives about AI, such as the idea that AI is simultaneously a massive bubble and on the verge of creating superintelligence that will replace humanity. Critics characterize these narratives as funded and astroturfed.
First Mentioned
11/8/2025, 6:51:41 AM
Last Updated
11/8/2025, 6:52:30 AM
Research Retrieved
11/8/2025, 6:52:30 AM
Summary
AI Doomer narratives are pessimistic, often contradictory claims about AI, characterized by critics as astroturfed, that circulate within Silicon Valley and beyond. These narratives were notably critiqued on the All-In Podcast by hosts David Sacks, Jason Calacanis, and Chamath Palihapitiya, with guest Brad Gerstner, in the context of OpenAI's substantial spending commitments and the resulting fears of an AI bubble. The discussions also encompassed the geopolitical AI race between the United States and China, with concerns raised by Nvidia CEO Jensen Huang about China's potential advantage due to differing regulatory approaches. Economically, these narratives are debated in relation to consumer spending pullbacks, inflation, and youth unemployment, with some attributing job losses to AI's impact.
Referenced in 1 Document
Research Data
Extracted Attributes
Nature
Contradictory and often astroturfed discussions
Critics
Mark Zuckerberg, Yann LeCun, Marc Andreessen, David Shapiro
Proponents
AI safetyists, decelerationists, AI doomers
Core Belief
AI represents an imminent, catastrophic risk to humanity; AI could wipe us out
Opposing Viewpoint
Techno-optimism
Associated Viewpoint
Pessimistic, apocalyptic-leaning
Circulation Location
Silicon Valley and beyond
Timeline
2024-09-12
- David Shapiro publishes a Substack post titled 'Deconstructing Doomer Arguments, One By One,' which examines and argues against the postulates of the AI Doomer movement. (Source: web_search_results)
2025-09-24
- NPR reports on AI Doomers warning that the superintelligence apocalypse is nigh, highlighting tensions in Silicon Valley over AI safety as AI rapidly advances. (Source: web_search_results)
Web Search Results
- Technology expert tells us why the AI “doomer” narrative is all wrong
Doomerism is the pessimistic, apocalyptic-leaning, evil twin of techno-optimism. The two schools agree that AI will accelerate exponentially.
- Deconstructing Doomer Arguments, One By One
Don’t Cry Wolf on AI Safety: A rhetorical analysis of the logical fallacies and word games that Doomers play to support their arguments. ASI won’t just fall out of the sky: An examination of the feedback loops already shaping the trajectory of AI, a fact that no Doomer I’ve spoken to takes into account. The Shills and Charlatans of AI Safety: Many Doomers are profiting (in fortune and fame) from pushing the AI doom narrative, but it is a textbook doomsday prophecy. [...] I don't take Eliezer Yudkowsky seriously. Here are my arguments against his top 8 postulates, and five counter-postulates of my own. (David Shapiro, Sep 12, 2024) I’ve recently been writing about the AI “safety” community, specifically the Doomer movement, which believes that AI represents an imminent, catastrophic risk to humanity. Here’s my complete series deconstructing the Doomer narratives: [...] Before we get started, I need to provide some context about why “ASI won’t just fall out of the sky one day,” because this is generally the predicate that Doomers are working from. They have generally used their imagination to conjure up spooky images of AI on their forums (notably LessWrong) and sort of envisage a Lovecraftian entity arriving on the scene one day, as though humans will have no agency and no part in shaping how this as-yet uninvented technology will emerge.
- As AI advances, doomers warn the superintelligence apocalypse is nigh
Now that AI is rapidly advancing, some "AI Doomers" say it's time to hit the brakes. They say the machine learning revolution that led to everyday AI models such as ChatGPT has also made it harder to figure out how to "align" artificial intelligence with our interests – namely, keeping AI from outsmarting humans. Researchers into AI safety say there's a chance that such a superhuman intelligence would act quickly to wipe us out. [...] NPR's Martin Kaste reports on the tensions in Silicon Valley over AI safety, heard on All Things Considered. For a more detailed discussion of the arguments for and against AI doom, NPR Explains has a special episode, "AI and the Probability of Doom."
- Techno-Optimist or AI Doomer? Consequentialism and the Ethics of AI
As noted in our earlier post, many notable experts believe that, given the current state of AI development, doomers who worry that AI might bring about the end of mankind as we know it are being "pretty irresponsible" (Mark Zuckerberg), "face palming[ly]" irrational (Yann LeCun), and manifesting "a full-blown moral panic" (Marc Andreessen). Consider these arguments made by techno-optimists: [...] This blog post does not undertake to settle this controversy, which, with available information, is, we believe, likely unresolvable with any certainty. Rather, we write simply to frame the controversy and bring to the reader's attention some key bits of information relevant to the debate between "techno-optimists," who are mostly unconcerned about negative consequences flowing from future AI developments, on the one hand, and "techno-safetyists," more commonly known as "AI doomers," on the other. [...] In one experiment, researchers at the Alignment Research Center tasked OpenAI's GPT-4 with defeating a CAPTCHA test designed to distinguish humans from robots. The system was unable to defeat the test by itself, but it accessed TaskRabbit and hired a human worker to defeat the CAPTCHA for it. When contacted, the human was initially suspicious and asked GPT-4 if it was a robot that couldn't solve the CAPTCHA quiz itself. GPT-4 lied in order to carry out its assignment, saying that it was a human.
- Among the A.I. Doomsayers | The New Yorker
Most doomers started out as left-libertarians, deeply skeptical of government intervention. For more than a decade, they tried to guide the industry from within. Yudkowsky helped encourage Peter Thiel, a doomer-curious billionaire, to make an early investment in the A.I. lab DeepMind. Then Google acquired it, and Thiel and Elon Musk, distrustful of Google, both funded OpenAI, which promised to build A.G.I. more safely. (Yudkowsky now mocks companies for following the "disaster monkey" strategy, [...] with entrepreneurs "racing to be first to grab the poison banana.") Christiano worked at OpenAI for a few years, then left to start another safety nonprofit, which did red teaming for the company. To this day, some doomers work on the inside, nudging the big A.I. labs toward caution, and some work on the outside, arguing that the big A.I. labs should not exist. "Imagine if oil companies and environmental activists were both considered part of the broader 'fossil fuel community,'" Scott [...] It was understood that "the scene" meant a few intertwined subcultures known for their exhaustive debates about recondite issues (secure DNA synthesis, shrimp welfare) that members consider essential, but that most normal people know nothing about. For two decades or so, one of these issues has been whether artificial intelligence will elevate or exterminate humanity. Pessimists are called A.I. safetyists, or decelerationists, or, when they're feeling especially panicky, A.I. doomers.