AI's negative perception
The growing public concern and negative sentiment towards artificial intelligence, fueled by fears of job displacement, environmental impact, and wealth inequality.
First Mentioned
12/20/2025, 4:59:17 AM
Last Updated
12/20/2025, 4:59:48 AM
Research Retrieved
12/20/2025, 4:59:48 AM
Summary
The negative perception of artificial intelligence (AI) is a multifaceted issue driven by public anxiety, political scrutiny, and ethical concerns. While the technology has advanced rapidly since the 2020s 'AI boom', it faces significant pushback over job displacement, existential risks, and the erosion of human cognitive skills. High-profile political figures like Bernie Sanders have advocated for moratoriums on AI infrastructure, such as data centers, reflecting broader societal apprehension. This perception is further complicated by the 'AI race' with China, where proponents of acceleration clash with those who emphasize the legitimate fears of ordinary citizens. Critics like David Sacks suggest that a 'doomer industrial complex', funded by figures such as Vitalik Buterin and Dustin Moskovitz, may be intentionally shaping negative narratives. Meanwhile, academic research indicates that AI usage can harm professional reputation, as shown in studies of how physicians perceive colleagues who use AI, and that a majority of Americans believe AI will worsen human creativity and social relationships. To counter this, industry leaders like Chamath Palihapitiya argue that the tech sector must earn a 'social license to operate' by demonstrating clear, broad-based societal benefits.
Referenced in 1 Document
Research Data
Extracted Attributes
Narrative Theory
Existence of a 'doomer industrial complex' or 'anti-AI astroturfing' allegedly shaping public fear.
Professional Stigma
Doctors who use AI are often viewed by their peers as less competent than doctors who do not.
Political Resistance
Calls for AI data center moratoriums by figures like Bernie Sanders.
Public Sentiment (US)
53% of adults believe AI will worsen creativity; 50% believe it will worsen meaningful relationships.
Primary Drivers of Negativity
Job displacement, existential risk, loss of creativity, and erosion of meaningful human relationships.
Proposed Industry Requirement
Obtaining a 'social license to operate' through tangible philanthropic or societal contributions.
Timeline
- 1956-01-01: Artificial intelligence is founded as an academic discipline, beginning cycles of optimism and 'AI winters'. (Source: Wikipedia)
- 2012-01-01: Interest and funding surge as GPUs accelerate neural networks and deep learning. (Source: Wikipedia)
- 2017-01-01: The introduction of the transformer architecture further accelerates AI growth. (Source: Wikipedia)
- 2020-01-01: The 'AI boom' begins with rapid progress in generative AI, leading to increased ethical and existential concerns. (Source: Wikipedia)
- 2025-09-17: Pew Research reports that U.S. adults are generally pessimistic about AI's effect on creativity and relationships. (Source: Web Search (Pew Research))
- 2025-10-27: Johns Hopkins study finds that doctors who use AI are viewed more negatively by their peers. (Source: Web Search (JHU Hub))
Wikipedia
Artificial intelligence
Artificial intelligence (AI) is the capability of computational systems to perform tasks typically associated with human intelligence, such as learning, reasoning, problem-solving, perception, and decision-making. It is a field of research in computer science that develops and studies methods and software that enable machines to perceive their environment and use learning and intelligence to take actions that maximize their chances of achieving defined goals. High-profile applications of AI include advanced web search engines (e.g., Google Search); recommendation systems (used by YouTube, Amazon, and Netflix); virtual assistants (e.g., Google Assistant, Siri, and Alexa); autonomous vehicles (e.g., Waymo); generative and creative tools (e.g., language models and AI art); and superhuman play and analysis in strategy games (e.g., chess and Go). However, many AI applications are not perceived as AI: "A lot of cutting edge AI has filtered into general applications, often without being called AI because once something becomes useful enough and common enough it's not labeled AI anymore." Various subfields of AI research are centered around particular goals and the use of particular tools. The traditional goals of AI research include learning, reasoning, knowledge representation, planning, natural language processing, perception, and support for robotics. To reach these goals, AI researchers have adapted and integrated a wide range of techniques, including search and mathematical optimization, formal logic, artificial neural networks, and methods based on statistics, operations research, and economics. AI also draws upon psychology, linguistics, philosophy, neuroscience, and other fields. Some companies, such as OpenAI, Google DeepMind and Meta, aim to create artificial general intelligence (AGI) – AI that can complete virtually any cognitive task at least as well as a human. 
Artificial intelligence was founded as an academic discipline in 1956, and the field went through multiple cycles of optimism throughout its history, followed by periods of disappointment and loss of funding, known as AI winters. Funding and interest vastly increased after 2012 when graphics processing units started being used to accelerate neural networks, and deep learning outperformed previous AI techniques. This growth accelerated further after 2017 with the transformer architecture. In the 2020s, an ongoing period of rapid progress in advanced generative AI became known as the AI boom. Generative AI's ability to create and modify content has led to several unintended consequences and harms. Ethical concerns have been raised about AI's long-term effects and potential existential risks, prompting discussions about regulatory policies to ensure the safety and benefits of the technology.
Web Search Results
- Doctors who use AI viewed negatively by their peers, study ...
According to the study, peer perception suffers for doctors who rely on AI. Framing generative AI as a "second opinion" or a verification tool partially improved negative perceptions from peers, but it did not fully eliminate them. Not using GenAI, however, resulted in the most favorable peer perceptions. The findings align with theories that suggest perceived dependence on an external source like AI can be seen as a weakness by clinicians. [...] "In the age of AI, human psychology remains the ultimate variable," says Haiyang Yang, first author of the study and academic program director of the Master of Science in Management program at the Carey Business School. "The way people perceive AI use can matter just as much as, or even more than, the performance of the technology itself." [...] Doctors who use artificial intelligence at work risk having their colleagues deem them less competent for it, according to a recent Johns Hopkins University study.
- AI—The good, the bad, and the scary
"We are already facing the negative outcomes of AI. For example, take recommendation algorithms for streaming services: the types of shows you see are influenced by the shows recommended to you by an artificial agent. More generally, today's AI systems influence human decision making at multiple levels: from viewing habits to purchasing decisions, from political opinions to social values. To say that the consequences of AI is a problem for future generations ignores the reality in front of us — [...]
- How Americans View AI and Its Impact on People and ...
U.S. adults are generally pessimistic about AI’s effect on people’s ability to think creatively and form meaningful relationships: 53% say AI will _worsen_ people’s ability to think creatively, compared with 16% who say it will _improve_ this. An identical share (16%) says AI will make this skill neither better nor worse. [...] Notably, sizable shares of U.S. adults are uncertain about these questions. Between 16% and 20% say they aren’t sure about whether AI will have a positive or negative impact on these human skills. [...] Far more say AI will _worsen_ rather than _improve_ people’s ability to form meaningful relationships (50% vs. 5%). One-quarter say AI won’t make this better or worse. Americans are relatively more optimistic about AI improving problem-solving: 29% of U.S. adults say it will make people _better_ at this skill. Still, a larger share (38%) says AI will make this _worse_.
- Is AI dulling our minds?
I think the key to having the owl be a positive force instead of a negative one is not to let it do your thinking for you. We know that generative AI doesn’t understand the human context, so it’s not going to provide wisdom about social, emotional, and contextual events, because those are not part of its repertoire. However, GenAI is very good at absorbing large amounts of data and making calculative predictions in ways that can augment your thinking. [...] The contrast for me is between doing things better and doing better things. Ninety-five percent of what I read about AI in education is that it can help us do things better, but we should also be doing better things. One of the traps of GenAI, even when you’re using it well, is that if you’re using it just to do the same old stuff better and quicker, you have a faster way of doing the wrong thing.
- 15 Risks and Dangers of Artificial Intelligence (AI)
Various forms of AI bias are detrimental too. Speaking to the New York Times, Princeton computer science professor Olga Russakovsky said AI bias goes well beyond gender and race. In addition to data and algorithmic bias (the latter of which can “amplify” the former), AI is developed by humans — and humans are inherently biased. [...] Outside of the classroom, the impact of AI on everyday life is also beginning to show up. For example, brain rot, a term coined to describe the mental and emotional deterioration a person feels when spending excessive time online, is being exacerbated by generative AI. The nonstop stream of recommended and generated content can overwhelm individuals and distort their reality. [...] As AI tools become more integrated with daily life, concerns are growing about their long-term effects on our psychological health and mental abilities. The very features that make AI so powerful — automation, instant access to information and task optimization — also introduce risks when used without critical oversight.