Developmental Network Two, Its Optimality, and Emergent Turing Machines

@article{Weng2022DevelopmentalNT,
  title={Developmental Network Two, Its Optimality, and Emergent Turing Machines},
  author={Juyang Weng and Zejia Zheng and Xiang Wu},
  journal={ArXiv},
  year={2022},
  volume={abs/2208.06279}
}
This invention includes a new type of neural network that automatically and incrementally generates an internal hierarchy, without the need to handcraft a static hierarchy of network areas, a static number of levels, or a static number of neurons in each area or level. This capability is achieved by enabling each neuron to have its own dynamic inhibitory zone through neuron-specific inhibitory connections.
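
The mechanism lends itself to a short illustration. Below is a minimal sketch, not the authors' implementation: each neuron's dynamic inhibitory zone is approximated as the k neurons currently nearest to it in weight space, and a neuron fires only if it wins the competition inside its own zone. The zone size k and the cosine-similarity measure are illustrative assumptions.

```python
import numpy as np

def responses(W, x):
    """Pre-competition responses: cosine similarity of input x to each weight row."""
    Wn = W / (np.linalg.norm(W, axis=1, keepdims=True) + 1e-12)
    xn = x / (np.linalg.norm(x) + 1e-12)
    return Wn @ xn

def fire_with_dynamic_inhibition(W, x, k=5):
    """Each neuron i is inhibited only by the k neurons nearest to it in
    weight space (its neuron-specific inhibitory zone); it fires iff it has
    the largest response within that zone."""
    r = responses(W, x)
    Wn = W / (np.linalg.norm(W, axis=1, keepdims=True) + 1e-12)
    sim = Wn @ Wn.T                      # neuron-to-neuron similarity
    winners = []
    for i in range(len(W)):
        zone = np.argsort(-sim[i])[:k]   # i's dynamic inhibitory zone (includes i)
        if r[i] >= r[zone].max():        # i wins inside its own zone
            winners.append(i)
    return winners

rng = np.random.default_rng(0)
W = rng.normal(size=(20, 8))             # 20 neurons, 8-D inputs
print(fire_with_dynamic_inhibition(W, rng.normal(size=8)))
```

Because each zone is recomputed from the current weights, the competition structure changes as the network learns, which is what removes the need for a handcrafted, static layer layout.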

Citations

An Algorithmic Theory for Conscious Learning

The analysis here establishes that autonomous imitation, as presented, is a general mechanism for learning universal Turing machines and drastically reduces teaching complexity compared to pre-collected "big data," especially because no annotations of training data are needed.

Neural Networks Post-Selections in AI and How to Avoid Them

A new AI metric, called developmental errors for all networks trained, is proposed under Three Learning Conditions: an incremental learning architecture (due to a "big data" flaw), a training experience, and a limited amount of computational resources.

Post Selections Using Test Sets (PSUTS) and How Developmental Networks Avoid Them

A new standard for performance evaluation of AI is proposed, called developmental errors for all networks trained, along with Three Learning Conditions: an incremental learning architecture, a training experience, and a limited amount of computational resources.

Post-Selections in AI and How to Avoid Them

A new AI metric, called developmental errors for all networks trained, is proposed under Three Learning Conditions: an incremental learning architecture (due to a "big data" flaw), a training experience, and a limited amount of computational resources.

3D-to-2D-to-3D Conscious Learning

  • J. Weng
  • Computer Science
    2022 IEEE International Conference on Consumer Electronics (ICCE)
  • 2022
This conscious learning is 3D-to-2D-to-3D (end-to-end), without motor impositions or computing "inverse kinematics," a major departure from traditional AI handcrafting of symbolic labels, which tend to be brittle.

Fast Developmental Stereo-Disparity Detectors

This work presents two novel mechanisms to deal with degeneracies, volume dimension and subwindow voting, in developmental stereo-disparity detection, with an application on a Sony G8142 mobile phone.

20 Million-Dollar Problems for Any Brain Models and a Holistic Solution: Conscious Learning

  • J. Weng
  • Computer Science
    2022 International Joint Conference on Neural Networks (IJCNN)
  • 2022
The paper also discusses why the proposed holistic solution of conscious learning solves each of the 20 open problems.

A Developmental Method that Computes Optimal Networks without Post-Selections

  • J. Weng
  • Computer Science
    2021 IEEE International Conference on Development and Learning (ICDL)
  • 2021
A developmental methodology is presented that trains only a single but optimal network for each application lifetime, using a new standard for performance evaluation in machine learning, called developmental errors, reported for all the networks trained in a project on which the selection of the luckiest network depends, along with Three Learning Conditions.
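
For a concrete reading of this protocol, here is a minimal sketch, under the assumption that "developmental errors" means reporting the error of every network trained in a project rather than only the post-selected luckiest one; the random error values below are placeholders, not results.

```python
import numpy as np

rng = np.random.default_rng(0)
errors = rng.uniform(0.05, 0.30, size=20)   # stand-ins for the errors of all 20 trained networks

post_selected = errors.min()                 # reporting only the luckiest network (the flaw)
developmental = {                            # reporting all networks trained (the proposal)
    "mean": errors.mean(),
    "std": errors.std(),
    "worst": errors.max(),
    "all": np.sort(errors),
}
print(f"post-selected (luckiest) error: {post_selected:.3f}")
print(f"developmental error, mean +/- std: {developmental['mean']:.3f} +/- {developmental['std']:.3f}")
```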

References

Showing 1-10 of 48 references.

Brain as an Emergent Finite Automaton: A Theory and Three Theorems

An overview of the FA-in-DN brain theory is given, and the three major theorems and their proofs are presented.
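
The paper's "emergent Turing machine" thread builds on this result. As a minimal illustration (not Weng's neuron-level construction), the sketch below uses a dictionary as a stand-in for the DN's learned association of motor (state) context with sensory input, showing how transitions (state, input) -> next state can be acquired one at a time and then replayed error-free.

```python
class EmergentFA:
    def __init__(self):
        self.delta = {}                      # learned transition function

    def teach(self, q, sigma, q_next):
        self.delta[(q, sigma)] = q_next      # one-shot, incremental learning

    def run(self, q0, inputs):
        q = q0
        for sigma in inputs:
            q = self.delta[(q, sigma)]       # error-free once taught
        return q

fa = EmergentFA()
# Teach a toy parity automaton: state = parity of 1s seen so far.
for q in ("even", "odd"):
    fa.teach(q, 0, q)
fa.teach("even", 1, "odd")
fa.teach("odd", 1, "even")
print(fa.run("even", [1, 0, 1, 1]))          # -> "odd"
```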

Dually Optimal Neuronal Layers: Lobe Component Analysis

  • J. Weng, M. Luciw
  • Computer Science
    IEEE Transactions on Autonomous Mental Development
  • 2009
It is argued that in-place learning algorithms will be crucial for real-world large-size developmental applications due to their simplicity, low computational complexity, and generality.
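
As an illustration of the in-place idea, the sketch below uses an amnesic-mean style winner update in the spirit of LCA; the amnesic schedule, the top-1 winner rule, and all constants are illustrative assumptions rather than the published algorithm. Each neuron updates only its own weights from its own input and response, with no extra networks or global solvers, which is what makes the learning "in-place" and cheap.

```python
import numpy as np

def amnesic_mu(n, t1=20, t2=200, c=2.0, r=1000.0):
    """Amnesic parameter: 0 early on, then growing, so recent inputs
    gradually get more weight than old ones."""
    if n < t1:
        return 0.0
    if n < t2:
        return c * (n - t1) / (t2 - t1)
    return c + (n - t2) / r

def lca_step(V, ages, x):
    """One in-place update: the best-matching neuron moves toward x
    using amnesic-mean averaging of response-weighted inputs."""
    y = V @ x / (np.linalg.norm(V, axis=1) * np.linalg.norm(x) + 1e-12)
    j = int(np.argmax(y))                 # winner (top-1 competition)
    ages[j] += 1
    n, mu = ages[j], amnesic_mu(ages[j])
    w1, w2 = (n - 1 - mu) / n, (1 + mu) / n   # retention and learning weights sum to 1
    V[j] = w1 * V[j] + w2 * y[j] * x          # retained memory + response-weighted input
    return j

rng = np.random.default_rng(0)
V = rng.normal(size=(10, 16))             # 10 neurons, 16-D inputs
ages = np.zeros(10, dtype=int)
for _ in range(500):
    lca_step(V, ages, rng.normal(size=16))
```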

Novelty estimation in developmental networks: Acetylcholine and norepinephrine

This work models how a Developmental Network uses ACh and NE to allow neurons to collectively decide acceptance or rejection by estimated novelty based on past experience, instead of using a single threshold value.

Where-What Network 3: Developmental Top-Down Attention with Multiple Meaningful Foregrounds

The Where-What Network (WWN-3) is an artificial developmental network modeled after visual cortical pathways, for the purpose of attention and recognition in the presence of complex natural backgrounds.

Deep learning in neural networks: An overview

Where-What Network 5: Dealing with scales for objects in complex backgrounds

This paper focuses on Where-What Network-5 (WWN-5), the extension for multiple scales, which can learn three concepts of an object: type, location and scale.

Distributed hierarchical processing in the primate cerebral cortex.

This paper reports a summary of the layout of cortical areas associated with vision and with other modalities, a computerized database for storing and representing large amounts of information on connectivity patterns, and the application of these data to the analysis of the hierarchical organization of the cerebral cortex.

Where-what network-4: The effect of multiple internal areas

  • M. Luciw, J. Weng
  • Computer Science
    2010 IEEE 9th International Conference on Development and Learning
  • 2010
It is shown how local detectors reduce the number of neurons exponentially and deal with the complex-background problem in developmental general AR, in the context of the latest version of the biologically inspired developmental Where-What Network.

Where-what network 1: “Where” and “what” assist each other through top-down connections

The results of the experiments showed how one type of information assists the network in suppressing irrelevant information, from the background or from irrelevant objects, so as to supply the required missing information at the motor output.

Skull-Closed Autonomous Development: WWN-7 Dealing with Scales

This paper focuses on the "skull-closed" WWN-7 in dealing with different object scales, and shows how motor-initiated expectations, delivered through top-down connections as temporal context, assist perception in a continuously changing physical world with which the network interacts.