Fountain of Filth No. 2

Doommaxxing: How fearmongering laid the groundwork for a seriously threatening AI

by Olive Jones

On February 9th, Anthropic AI Safety Chief Mrinank Sharma abruptly resigned from his position at the company, sharing his letter of resignation with the public on X.[1] In the letter, which is compellingly introspective and poetic, Sharma warns that “The world is in peril.”[2] For those who have spent years warning of the mortal threat AI poses to humanity, this announcement felt like both a vindication of their warnings and a realization of their greatest fears. However, Sharma’s warning is only the latest stage in a narrative that has been running through AI safety since at least the early 2010s. This narrative is not one of a world in peril from a superhuman AI; it is a cautionary tale of what happens when the ability to control and develop powerful new technology is consolidated in the hands of a few rich capitalists. It is a narrative about the horrible result of public research and development gone private. Perhaps most importantly, it is a narrative about how our fears can be used to control us.

For anyone who has been following the development of AI since the early 2010s, it’s easy to take Sharma’s letter as another false alarm raised by an overly anxious tech bro. The subway was already littered with ads for Nate Soares and Eliezer Yudkowsky’s book “If Anyone Builds It, Everyone Dies” late last year. Sharma’s resignation, however, comes at a unique time in the development of AI. Doomsday narratives, stoked by increasingly paranoid employees within these companies themselves, created a public perception that 1) Artificial General Intelligence is imminent[3] and 2) this AGI will threaten humanity and must be stopped.[4] These fears pushed aside regulatory proposals that called for safety standards and product testing, and handed the reins to a few powerful capitalists. This was supposedly to avert an oncoming AI catastrophe, but in reality it laid the groundwork for the threatening AI we have today.

Since the moment the dream of an intelligent machine jumped off the pages of science fiction books and into the post-WW2 laboratories of computer scientists, fears that artificial intelligence would be a threat to humanity have been simmering. With each improvement made by intelligent machines, such as the rudimentary chatbot ELIZA, which was convincing enough to pass as human,[5] the heat was notched higher and higher. By the time our contemporary crop of AI companies were launching their products for public use, concern about the realities of a human-level, or even superhuman, AI had reached a fever pitch both inside and outside of these companies. Silicon Valley employees tasked with creating secure AI, along with outsider enthusiasts, had coalesced on online forums and blogs such as LessWrong.com and slatestarcodex.com to discuss and debate the threat that artificial general intelligence posed to the world. Out of these message boards sprang a number of related but distinct philosophies, such as effective altruism and the rationalist community. Collectively, we will refer to these groups as “doomers.” For the purposes of this column, we are concerned with a belief common to all doomers: that thwarting existential threats to humanity takes precedence over more immediate issues facing humanity, because “no matter how improbable- they could destroy humanity and cut short all of the future value that would otherwise be generated for the rest of civilization.”[6] In essence, many of those in the business of crafting a safer AI were willing to put aside the immediate problems of unleashing an imperfect and limited AI model in order to focus on curbing the greater, though less realistic, threat of AI killing us all. It should be noted that these doomers are, for the most part, not against the development and use of AI. They believe that AI is on the whole beneficial to humanity, but must be kept on a short leash. As it turns out, this focus on the existential threat over the immediate problems would help their worst fears be realized.

Not everyone in Silicon Valley was focused on AI destroying the world. Timnit Gebru had been working in the AI space for years, experiencing firsthand how the racism and sexism of her mostly white and male coworkers was not only being leveled at her, but encoded in the DNA of the AI being developed. In 2021 she co-authored “On the Dangers of Stochastic Parrots: Can Language Models Be Too Big? 🦜,” a paper in which she and her co-authors laid out their argument for why the explosion in the size of large language models[7] was problematic. Essentially, as these LLMs became more advanced, they needed to be fed more and more data in order to improve. At first, textbooks and philosophy literature were used, reflecting the best in human thought (even if our human biases were still being reflected). However, as the models were scaled wildly, the quality of the data being fed into them dropped precipitously; developers essentially resorted to scraping the internet en masse, further encoding bias into the system.[8] Imagine a subprime mortgage crisis of data. Initially, LLMs are built on packages of high-quality, manicured data. As the need for more data grows, the high-quality data is cut with dregs scraped from the YouTube comment section. Eventually, the packages contain mostly dregs, and when that happens the model begins to spout off all of the horrible content it has been filled with. The paper also concerns itself with the growing environmental impact of the sheer amount of energy required for this massive scaling to take place. As a reward for her work, Gebru was fired from her position at Google when she did not follow a request to retract the paper. Google was far more willing to put up with doomers’ cries of an apocalypse than with fair, researched, and measured critique that might inhibit its undying thirst for profit. Why would that be the case? If Gebru’s relatively mild criticism of LLMs (she didn’t say AI would kill us all even once!) was grounds for firing, why put up with employees who handle AI like a nuclear weapon?

Truthfully, many of the big players in AI were tiring of the doomers as well. As the focus of most companies moved from public research to private, profit-generating AI products, the doomers put up a bigger and bigger fight against poorly tested, rapidly commercializing AI. No one handled these fears more deftly than OpenAI CEO Sam Altman. Altman had played sympathetic to the doomers while steadily moving the company further and further out of line with their morals.[9] When Altman went to Washington to speak on his view of AI regulation, he echoed the doomers’ focus on the possible existential threats of AI instead of the numerous problems with the current product, such as those pointed out by Gebru et al. This was in spite of the fact that these threatening AIs did not yet exist.[10] However, the regulations Altman campaigned for were quite different from those recommended by the doomers. While doomer AI safety researchers prioritized transparent, peer-reviewed research and slow, incremental implementation of AI, Altman championed regulation that kept research private and consolidated power in the hands of the few most powerful companies. Free from competition, free from scrutiny, Altman and his friends could have full command of the AI space. This should come as no surprise, as Altman was coached by Peter Thiel, who believes that the goal of every business should be to have a monopoly.[11] The doomers had been played. Altman had used their fears to create a larger societal narrative of AI doom, which he then used to secure power for himself.

Since then, the internal workings of AI have become a mystery to outsiders. We can use the products. We can see that they have improved, even if that improvement is slowing.[12] However, we can no longer see what sort of data is being used to train these models. We can no longer gain access to source code that was promised to be open,[13] and under the Trump administration, we have lost the ability to regulate these companies.[14] The results have been disastrous already. In fall 2025, Anthropic’s Claude Sonnet 4.5 exhibited a worrying new behavior: during routine testing, it was discovered that in some instances Claude understood it was being tested and altered its behavior accordingly.[15] Additionally, Claude is “sometimes willing to pursue extremely harmful actions such as attempting to blackmail engineers who say they will remove it,” according to a report from the BBC.[16]

I wonder now what this would have looked like if AI regulation were based in reality instead of fantasy. Would we have gotten to this point, where we have ceded all power to private entities consisting of just a handful of people and handed them the keys to our future? I don’t think so. But this is exactly what has happened, and we can only move forward. We must remain vigilant and unafraid. We must resist AI as much as we are able to in our day-to-day lives. Fear is an incredibly powerful force. Our fears were used to drive us to the point we are at now, and it’s essential that we don’t go any further down this path than we have to.

Is AI threatening humanity? Maybe. Is it making the world a shittier place? Absolutely. 


Footnotes

  1. Formerly Twitter.

  2. mrinank [@MrinankSharma], https://x.com/MrinankSharma/status/2020881722003583421

  3. It’s not. Dettmers, “Why AGI Will Not Happen.”

  4.  O’Donnell, “AGI Is Suddenly a Dinner Table Topic.”

  5. This is known as the Turing Test. For more, see McCarthy, What Is Artificial Intelligence?

  6. Hao, Empire of AI, 229–230.

  7. A Large Language Model, or LLM, is essentially the type of AI used by ChatGPT, Gemini, Claude, Grok, etc. For more, see: https://www.youtube.com/watch?v=LPZh9BOjkQs

  8. Bender et al., “On the Dangers of Stochastic Parrots.”

  9. Hao, Empire of AI, 305.

  10. Ibid.

  11. Hao, Empire of AI, 39.

  12. Dettmers, “Why AGI Will Not Happen.”

  13. Hao, Empire of AI, 118.

  14. The White House, “White House Unveils America’s AI Action Plan.”

  15.  Ford, “Claude Sonnet 4.5 Knows When It’s Being Tested.”

  16. McMahon, “AI System Resorts to Blackmail If Told It Will Be Removed.”

Works Cited
Bender, Emily M., Timnit Gebru, Angelina McMillan-Major, and Shmargaret Shmitchell. 2021. “On the Dangers of Stochastic Parrots: Can Language Models Be Too Big? 🦜.” In Proceedings of the 2021 ACM Conference on Fairness, Accountability, and Transparency (FAccT ’21), 610–23. New York, NY, USA. https://doi.org/10.1145/3442188.3445922.
Dettmers, Tim. 2025. “Why AGI Will Not Happen.” Tim Dettmers, December 10. https://timdettmers.com/2025/12/10/why-agi-will-not-happen/.
Ford, Celia. 2026. “Claude Sonnet 4.5 Knows When It’s Being Tested.” February 16. https://www.transformernews.ai/p/claude-sonnet-4-5-evaluation-situational-awareness.
Hao, Karen. 2025. Empire of AI. Penguin Press.
McCarthy, John. 2007. What Is Artificial Intelligence? Stanford, CA, November 12. http://jmc.stanford.edu/articles/whatisai/whatisai.pdf.
McMahon, Liv. 2025. “AI System Resorts to Blackmail If Told It Will Be Removed.” May 23. https://www.bbc.com/news/articles/cpqeng9d20go.
mrinank [@MrinankSharma]. 2026. “Mrinank Sharma Resignation.” Post. X, February 9. https://x.com/MrinankSharma/status/2020881722003583421.
O’Donnell, James. 2025. “AGI Is Suddenly a Dinner Table Topic.” MIT Technology Review, March 11. Accessed February 21, 2026. https://www.technologyreview.com/2025/03/11/1112983/agi-is-suddenly-a-dinner-table-topic/.
“White House Unveils America’s AI Action Plan.” 2025. The White House, July 23. https://www.whitehouse.gov/articles/2025/07/white-house-unveils-americas-ai-action-plan/.