The Singularity May Be Near

Roman V. Yampolskiy
In “The Singularity May Never Be Near,” Toby Walsh gives six arguments supporting his view that a technological singularity may happen but is unlikely. In this paper, we analyze each of his arguments and arrive at similar conclusions, but give more weight to the “likely to happen” prediction.

Pareidolic and Uncomplex Technological Singularity

A short presentation of research-focused social networks working to solve complex problems reveals the superiority of networked human minds over hardware–software systems and suggests the opportunity for a network-based study of TS (and AGI) from a complexity perspective.

Open Universities in the Future with Technological Singularity Integrated Social Media

This research examined how mega open universities can benefit from social media in preparing for the technological singularity, concluding that social media platforms will feature a high level of interpersonal communication and interaction, and that institutions will play a role in determining the limits of social media's place in human life.

Intelligence in cyberspace: the road to cyber singularity

An extensive survey of past research related to the field is performed, and concepts from set theory are used to reinforce the possibility of Cyber Singularity in the coming years.

Transhumanism as a Derailed Anthropology

  • K. Kornwachs
  • Philosophy
    Transhumanism: The Proper Guide to a Posthuman Condition or a Dangerous Idea?
  • 2020
According to some proponents, artificial intelligence seems to be a presupposition for machine autonomy, whereas autonomy and conscious machines are the presupposition for singularity (Cf. Logan,

Human ≠ AGI

This paper proves that humans are not general intelligences, and that the widespread implicit assumption of equivalence between the capabilities of AGI and HLAI is unjustified.

Unexplainability and Incomprehensibility of Artificial Intelligence

This paper describes two complementary impossibility results (Unexplainability and Incomprehensibility) showing that advanced AIs would not be able to accurately explain some of their decisions and that, for the decisions they could explain, people would not understand some of those explanations.

AGI Safety Literature Review

The intention of this paper is to provide an easily accessible and up-to-date collection of references for the emerging field of AGI safety, and to review the current public policy on AGI.

Unexplainability and Incomprehensibility of AI

This research presents a meta-modelling framework that automates the labor-intensive, time-consuming, and expensive process of manually cataloging the actions taken by artificial intelligence systems to solve real-world problems.

The Reflections of Technological Singularity on Open and Distance Learning Management

The technological singularity is discussed within the context of the super-human and Human 2.0 concepts, the definition of the “new human” this phenomenon will shape, its reflections on education, especially open and distance learning (namely open universities), and how these systems will transform.

Towards Singularity: Implications to Intelligent UI with Explainable AI approach to HCI

The convergence of engineering and the life sciences, and its role in improving the outlines of future UIs through a process of gaining collective intelligence, is explained.

The Singularity: a Philosophical Analysis

What happens when machines become more intelligent than humans? One view is that this event will be followed by an explosion to ever-greater levels of intelligence, as each generation of machines creates more intelligent machines in turn.

The Singularity May Never Be Near

There is both much optimism and pessimism around artificial intelligence (AI) today, and it is therefore very worthwhile spending some time deciding if either of them might be right.

Singularity Hypotheses: A Scientific and Philosophical Assessment

Singularity Hypotheses: A Scientific and Philosophical Assessment offers authoritative, jargon-free essays and critical commentaries on accelerating technological progress and the notion of technological singularity.

Ultimate physical limits to computation

The physical limits of computation as determined by the speed of light c, the quantum scale ℏ and the gravitational constant G are explored.
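As a rough sketch of the kind of bound explored there: building on the Margolus–Levitin theorem, Lloyd limits the rate of elementary logical operations of a system with average energy $E$ to

$$\frac{\text{operations}}{\text{second}} \;\le\; \frac{2E}{\pi\hbar}.$$

For a hypothetical 1 kg “ultimate laptop” with $E = mc^2 \approx 9\times10^{16}\,\mathrm{J}$, this gives roughly $5\times10^{50}$ operations per second, with memory similarly bounded by thermodynamic entropy at about $10^{31}$ bits.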

Universal Limits on Computation

It is demonstrated here that the observed acceleration of the Universe can produce a universal limit on the total amount of information that can be stored and processed in the future, putting an ultimate limit on future technology for any civilization, including a time-limit on Moore's Law.

Towards a Theory of AI Completeness

  • Dafna Shahaf, Eyal Amir
  • Computer Science, Mathematics
    AAAI Spring Symposium: Logical Formalizations of Commonsense Reasoning
  • 2007
This work serves as a formal basis for investigating problems that researchers treat as hard AI problems, and allows progress in AI as a field to be measured directly rather than with respect to problem-specific quantities.

An Introduction to Gödel's Theorems

An unusual variety of proofs for the First Theorem is presented, it is shown how to prove the Second Theorem, and a family of related results is explored, including some not easily available elsewhere.

Why an Intelligence Explosion is Probable

The hypothesis is considered that an “intelligence explosion” involving the relatively rapid creation of increasingly more generally intelligent AI systems will very likely ensue, resulting in the rapid emergence of dramatically superhuman intelligences.

Superintelligence: Paths, Dangers, Strategies

The human brain has some capabilities that the brains of other animals lack. It is to these distinctive capabilities that our species owes its dominant position. Other animals have stronger muscles or sharper claws, but we have cleverer brains.

Intelligence Explosion: Evidence and Import

This chapter reviews the evidence for and against claims that there is a substantial chance human-level AI will be created before 2100, and offers recommendations for increasing the odds of a controlled intelligence explosion relative to an uncontrolled one.