
Superintelligence
A hypothetical future stage of AI development where an agent possesses intelligence far surpassing that of the most gifted human minds.
Created At
7/26/2025, 7:10:44 AM
Last Updated
7/26/2025, 7:12:46 AM
Research Retrieved
7/26/2025, 7:12:46 AM
Summary
Superintelligence is a hypothetical concept referring to an agent or system possessing intelligence that vastly exceeds that of the brightest human minds across virtually all domains of interest, following philosopher Nick Bostrom's definition. Unlike specialized AI, a superintelligence would demonstrate superior cognitive performance across a broad range of tasks. Researchers hold diverse views on how it might emerge: some anticipate that advances in Artificial Intelligence (AI) will yield general reasoning systems free of human cognitive limitations, while others foresee human intelligence enhancement through biological means or human-computer interfaces. The concept is closely linked to the development of Artificial General Intelligence (AGI), with some predictions suggesting that superintelligence could follow swiftly after AGI, leveraging advantages such as perfect recall, extensive knowledge bases, and superior multitasking. Given its potential to become significantly more powerful than humans and to reshape society, many experts, including industry figures such as AMD CEO Lisa Su, advocate for proactive research into the benefits and risks of such advanced cognitive enhancement.
Referenced in 1 Document
Research Data
Extracted Attributes
Field: Artificial Intelligence, Philosophy, Futures Studies
Nature: Hypothetical agent or property of advanced problem-solving systems
Definition: Any intellect that greatly exceeds the cognitive performance of humans in virtually all domains of interest
Associated Concepts: Artificial General Intelligence (AGI), Intelligence Explosion, Technological Singularity
Potential Characteristics: Perfect recall, vast knowledge base, superior multitasking abilities
Timeline
- 2014-01-01: Nick Bostrom publishes 'Superintelligence: Paths, Dangers, Strategies', a highly influential book exploring its creation, features, dangers, and control strategies. (Source: Web Search (Wikipedia: Superintelligence: Paths, Dangers, Strategies))
- Future (speculative): Superintelligence is believed by some researchers to emerge shortly after the development of Artificial General Intelligence (AGI). (Source: Wikipedia, DBpedia)
- Ongoing: Scientists and forecasters advocate for prioritizing research into the potential benefits and risks of human and machine cognitive enhancement due to profound societal impact. (Source: Wikipedia, DBpedia)
Wikipedia
View on Wikipedia: Superintelligence
A superintelligence is a hypothetical agent that possesses intelligence surpassing that of the brightest and most gifted human minds. "Superintelligence" may also refer to a property of advanced problem-solving systems that excel in specific areas (e.g., superintelligent language translators or engineering assistants). Nevertheless, a general-purpose superintelligence remains hypothetical and its creation may or may not be triggered by an intelligence explosion or a technological singularity. University of Oxford philosopher Nick Bostrom defines superintelligence as "any intellect that greatly exceeds the cognitive performance of humans in virtually all domains of interest". The program Fritz falls short of this conception of superintelligence—even though it is much better than humans at chess—because Fritz cannot outperform humans in other tasks. Technological researchers disagree about how likely present-day human intelligence is to be surpassed. Some argue that advances in artificial intelligence (AI) will probably result in general reasoning systems that lack human cognitive limitations. Others believe that humans will evolve or directly modify their biology to achieve radically greater intelligence. Several futures studies scenarios combine elements from both of these possibilities, suggesting that humans are likely to interface with computers, or upload their minds to computers, in a way that enables substantial intelligence amplification. Some researchers believe that superintelligence will likely follow shortly after the development of artificial general intelligence. The first generally intelligent machines are likely to immediately hold an enormous advantage in at least some forms of mental capability, including the capacity of perfect recall, a vastly superior knowledge base, and the ability to multitask in ways not possible to biological entities.
This may allow them to — either as a single being or as a new species — become much more powerful than humans, and displace them. Several scientists and forecasters have been arguing for prioritizing early research into the possible benefits and risks of human and machine cognitive enhancement, because of the potential social impact of such technologies.
Web Search Results
- Superintelligence - Wikipedia
A superintelligence is a hypothetical agent that possesses intelligence surpassing that of the brightest and most gifted human minds. "Superintelligence" may also refer to a property of advanced problem-solving systems that excel in specific areas (e.g., superintelligent language translators or engineering assistants). Nevertheless, a general purpose superintelligence remains hypothetical and its creation may or may not be triggered by an intelligence explosion or a technological singularity. [...] University of Oxford philosopher Nick Bostrom defines _superintelligence_ as "any intellect that greatly exceeds the cognitive performance of humans in virtually all domains of interest". The program Fritz falls short of this conception of superintelligence—even though it is much better than humans at chess—because Fritz cannot outperform humans in other tasks. [...] Some researchers believe that superintelligence will likely follow shortly after the development of artificial general intelligence. The first generally intelligent machines are likely to immediately hold an enormous advantage in at least some forms of mental capability, including the capacity of perfect recall, a vastly superior knowledge base, and the ability to multitask in ways not possible to biological entities. This may allow them to — either as a single being or as a new species —
- What Is Artificial Superintelligence? - IBM
Artificial superintelligence (ASI) is a hypothetical software-based artificial intelligence (AI) system with an intellectual scope beyond human intelligence. At the most fundamental level, this superintelligent AI has cutting-edge cognitive functions and highly developed thinking skills more advanced than any human. [...] Learning algorithms, inspired by how the human brain learns, enable AI to improve its performance over time. This continuous learning is crucial for achieving human-level intelligence, allowing AI to acquire knowledge and adapt to new situations without explicit programming. [...] match the human ability to learn and adapt to new situations.
- Superintelligence: Paths, Dangers, Strategies - Wikipedia
Superintelligence: Paths, Dangers, Strategies is a 2014 book by the philosopher Nick Bostrom. It explores how superintelligence could be created and what its features and motivations might be. It argues that superintelligence, if created, would be difficult to control, and that it could take over the world in order to accomplish its goals. The book also presents strategies to help make superintelligences whose goals benefit humanity. It was particularly influential for raising concerns about [...] ## Synopsis It is unknown whether human-level artificial intelligence will arrive in a matter of years, later this century, or not until future centuries. Regardless of the initial timescale, once human-level machine intelligence is developed, a "superintelligent" system that "greatly exceeds the cognitive performance of humans in virtually all domains of interest" would most likely follow surprisingly quickly. Such a superintelligence would be very difficult to control. [...] (hypothetical material optimized for computation) to assist in the calculation. The superintelligence would proactively resist any outside attempts to turn the superintelligence off or otherwise prevent its subgoal completion. In order to prevent such an existential catastrophe, it is necessary to successfully solve the "AI control problem" for the first superintelligence. The solution might involve instilling the superintelligence with goals that are compatible with human survival and
- Superintelligence 5: Forms of Superintelligence - LessWrong
So I think his definitions of 'superintelligence' are rough, and Bostrom is primarily interested in the invincible inhuman singleton scenario: the possibility of humans building something other than humanity itself that can vastly outperform the entire human race in arbitrary tasks. He's also mainly interested in sudden, short-term singletons (the prototype being seed AI). Things like AGI and ems mainly interest him because they might produce an invincible singleton of that sort. [...] I'm confused about Bostrom's definition of superintelligence for collectives. The following quotes suggest that it is not the same as the usual definition of superintelligence (greatly outperforming a human in virtually all domains), but instead means something like 'greatly outperforming current collective intelligences', which have been improving for a long time: [...] This is part of a weekly reading group on Nick Bostrom's book, Superintelligence. For more information about the group, and an index of posts so far see the announcement post. For the schedule of future topics, see MIRI's reading guide. Welcome. This week we discuss the fifth section in the reading guide: Forms of superintelligence. This corresponds to Chapter 3, on different ways in which an intelligence can be super.
- Artificial Super Intelligence: Exploring the Frontier of AI - Viso Suite
While AGI would, at the very least, match human intelligence, Artificial superintelligence (ASI) represents a system that surpasses human intelligence. Compared to us, an ASI would exhibit virtually unfathomable cognitive abilities in all areas, including creativity, problem-solving, and decision-making. Both AGI and ASI should have the potential for emergence or development capabilities that have not been explicitly programmed. [...] Estimates for when we would achieve AGI range from the next 5 to 30 years. As such, most papers on the subject are highly speculative, focusing on the current state of AI. Nick Bostrom’s book, “Superintelligence: Paths, Dangers, Strategies,” is one of the most influential on the subject. It explores the most likely pathways to achieving AGI as well as its potential risks, economic impact, and concerns regarding ethics and morality. [...] However, super-intelligent AI may represent a higher form of thinking than the human mind is capable of. Because of this, it’s hard even fully to imagine what ultimate form this machine intelligence will take. Any speculation borders on science fiction, which is why it’s called a technological “singularity,” representing a point where all our existing knowledge ceases to help us extrapolate into the future. That being said, speculative examples of what ASI might be capable of are:
Wikidata
View on Wikidata
Instance Of
DBPedia
View on DBPedia
A superintelligence is a hypothetical agent that possesses intelligence far surpassing that of the brightest and most gifted human minds. "Superintelligence" may also refer to a property of problem-solving systems (e.g., superintelligent language translators or engineering assistants) whether or not these high-level intellectual competencies are embodied in agents that act in the world. A superintelligence may or may not be created by an intelligence explosion and associated with a technological singularity. University of Oxford philosopher Nick Bostrom defines superintelligence as "any intellect that greatly exceeds the cognitive performance of humans in virtually all domains of interest". The program Fritz falls short of superintelligence—even though it is much better than humans at chess—because Fritz cannot outperform humans in other tasks. Following Hutter and Legg, Bostrom treats superintelligence as general dominance at goal-oriented behavior, leaving open whether an artificial or human superintelligence would possess capacities such as intentionality (cf. the Chinese room argument) or first-person consciousness (cf. the hard problem of consciousness). Technological researchers disagree about how likely present-day human intelligence is to be surpassed. Some argue that advances in artificial intelligence (AI) will probably result in general reasoning systems that lack human cognitive limitations. Others believe that humans will evolve or directly modify their biology so as to achieve radically greater intelligence. A number of futures studies scenarios combine elements from both of these possibilities, suggesting that humans are likely to interface with computers, or upload their minds to computers, in a way that enables substantial intelligence amplification. Some researchers believe that superintelligence will likely follow shortly after the development of artificial general intelligence.
The first generally intelligent machines are likely to immediately hold an enormous advantage in at least some forms of mental capability, including the capacity of perfect recall, a vastly superior knowledge base, and the ability to multitask in ways not possible to biological entities. This may give them the opportunity to—either as a single being or as a new species—become much more powerful than humans, and to displace them. A number of scientists and forecasters argue for prioritizing early research into the possible benefits and risks of human and machine cognitive enhancement, because of the potential social impact of such technologies.
