• Corpus ID: 16293218

The Nature of Self-Improving Artificial Intelligence

  • Stephen M. Omohundro
Towards Safe Artificial General Intelligence
The central conclusion is that while reinforcement learning systems as designed today are inherently unsafe to scale to human-level intelligence, many of these issues can potentially be addressed without straying far from the currently successful reinforcement learning paradigm.
Risks of the Journey to the Singularity
This paper argues that humanity will create artificial general intelligence within the next twenty to one hundred years, and that individual AGIs would be capable of learning to operate in a wide variety of domains, including ones they had not been specifically designed for.
Semi-supervised Deep Continuous Learning
This research demonstrates the potential of a CNNL system, which achieves impressive results with little tuning on standardized datasets even when initialized with as few as 150 images.
Fully Autonomous AI
It is argued that a general AI may very well come to modify its final goal in the course of developing its understanding of the world, which has important implications for how to assess the long-term prospects and risks of artificial intelligence.
Intelligence in cyberspace: the road to cyber singularity
An extensive survey of the past research works related to the field is performed and the concepts of set theory are used to reinforce the possibility of Cyber Singularity in the coming years.
A Review of Fundamentals and Influential Factors of Artificial Intelligence
The drivers, advantages, disadvantages and challenges for the use of AI applications based on a literature search and historical developments, common definitions, types and functionalities of AI are presented.
Arms Control and Intelligence Explosions
Key considerations that distinguish the case of sapient software programs from the historical experience with nuclear weapons technology are discussed, suggesting that regulatory jurisdictions may find cooperative control of the development of software entities more desirable and more practically feasible than historical nuclear arms control efforts.
Dynamic Models Applied to Value Learning in Artificial Intelligence
It is of utmost importance that artificially intelligent agents have their values aligned with human values, since, as the Orthogonality Thesis argues, we cannot expect an AI to develop human moral values simply by virtue of its intelligence.
Rational Artificial Intelligence for the Greater Good
The modern theory of rational systems is summarized and it is shown that rational systems are subject to a variety of “drives” including self-protection, resource acquisition, replication, goal preservation, efficiency, and self-improvement.
Efficiency Theory : a Unifying Theory for Information, Computation and Intelligence
By defining such diverse terms as randomness, knowledge, intelligence and computability in terms of a common denominator the paper is able to bring together contributions from Shannon, Levin, Kolmogorov, Solomonoff, Chaitin, Yao and many others under a common umbrella of the efficiency theory.


Logical reversibility of computation
This result makes plausible the existence of thermodynamically reversible computers which could perform useful computations at useful speed while dissipating considerably less than kT of energy per logical step.
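The "kT of energy per logical step" above refers to Landauer's bound on the cost of erasing information, which for context can be stated as (the room-temperature figure is an illustrative evaluation, not from the source):

```latex
% Landauer's principle: erasing one bit of information dissipates at least
E_{\min} = k_B T \ln 2
% At room temperature, T = 300\,\mathrm{K}, with k_B = 1.38\times10^{-23}\,\mathrm{J/K}:
E_{\min} \approx 1.38\times10^{-23}\,\mathrm{J/K} \times 300\,\mathrm{K} \times 0.693 \approx 2.9\times10^{-21}\,\mathrm{J}
```

Logically reversible computation avoids erasure, which is why it can in principle dissipate less than this per step.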
The thermodynamics of computation—a review
Computers may be thought of as engines for transforming free energy into waste heat and mathematical work. Existing electronic computers dissipate energy vastly in excess of the thermal energy kT per logical step.
Reinforcement Learning: An Introduction
This book provides a clear and simple account of the key ideas and algorithms of reinforcement learning, which ranges from the history of the field's intellectual foundations to the most recent developments and applications.
Temporal Difference Learning and TD-Gammon
  • G. Tesauro
  • Computer Science
    J. Int. Comput. Games Assoc.
  • 1995
The domain of complex board games such as Go, chess, checkers, Othello, and backgammon has been widely regarded as an ideal testing ground for exploring a variety of concepts and approaches in artificial intelligence and machine learning.
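Temporal-difference learning, the method behind TD-Gammon, updates a value estimate toward a bootstrapped one-step target. A minimal tabular TD(0) sketch (the state names, reward, and step sizes are illustrative, not from the paper):

```python
# Tabular TD(0): nudge V(s) toward the bootstrapped target r + gamma * V(s').
def td0_update(V, s, r, s_next, alpha=0.1, gamma=0.9):
    """One TD(0) step on the value table V (a dict mapping state -> value)."""
    target = r + gamma * V[s_next]           # bootstrapped one-step target
    V[s] = V[s] + alpha * (target - V[s])    # move V(s) toward the target
    return V

# Toy example: two states, reward 1.0 on the transition A -> B.
V = {"A": 0.0, "B": 0.0}
td0_update(V, "A", 1.0, "B")  # V["A"] moves from 0.0 to 0.1
```

TD-Gammon replaced the table with a neural network trained by self-play, but the update rule has the same shape.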
Universal Artificial Intelligence - Sequential Decisions Based on Algorithmic Probability
  • Marcus Hutter
  • Education
    Texts in Theoretical Computer Science. An EATCS Series
  • 2005
This book develops a formal theory of universal artificial intelligence, combining Solomonoff's theory of algorithmic probability with sequential decision theory.
Reinforcement Learning: A Survey
Central issues of reinforcement learning are discussed, including trading off exploration and exploitation, establishing the foundations of the field via Markov decision theory, learning from delayed reinforcement, constructing empirical models to accelerate learning, making use of generalization and hierarchy, and coping with hidden state.
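The exploration–exploitation trade-off mentioned above is commonly handled with epsilon-greedy action selection; a minimal sketch (the action values and epsilon are illustrative):

```python
import random

def epsilon_greedy(q_values, epsilon=0.1, rng=random):
    """Pick a random action with probability epsilon, else the greedy one."""
    if rng.random() < epsilon:
        return rng.randrange(len(q_values))                          # explore
    return max(range(len(q_values)), key=q_values.__getitem__)       # exploit

# With epsilon = 0 the choice is purely greedy (index of the largest value):
epsilon_greedy([0.2, 0.9, 0.5], epsilon=0.0)  # returns 1
```

Decaying epsilon over time shifts the agent from exploration toward exploitation as its value estimates improve.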
Vector quantization and signal compression
  • A. Gersho, R. Gray
  • Computer Science
    The Kluwer international series in engineering and computer science
  • 1991
The authors cover the theory, design, and implementation of quantizers for signal compression, including algorithms such as Levinson-Durbin for linear predictive coding.
Artificial Intelligence: A Modern Approach
The long-anticipated revision of this #1 selling book offers the most comprehensive, state of the art introduction to the theory and practice of artificial intelligence for modern applications.
Goedel Machines: Self-Referential Universal Problem Solvers Making Provably Optimal Self-Improvements
The first class of mathematically rigorous, general, fully self-referential, self-improving, optimally efficient problem solvers is presented, which not only boasts an optimal order of complexity but can optimally reduce any slowdowns hidden by the O()-notation, provided the utility of such speed-ups is provable at all.