Corpus ID: 7137628

INsight: A Neuromorphic Computing System for Evaluation of Large Neural Networks

@article{Chung2015INsightAN,
  title={INsight: A Neuromorphic Computing System for Evaluation of Large Neural Networks},
  author={Jaeyong Chung and T. Shin and Yongshin Kang},
  journal={ArXiv},
  year={2015},
  volume={abs/1508.01008}
}
Deep neural networks have demonstrated impressive results in various cognitive tasks such as object detection and image classification. In order to execute large networks, von Neumann computers store the large number of weight parameters in external memories, and processing elements are time-shared, which leads to power-hungry I/O operations and processing bottlenecks. This paper describes a neuromorphic computing system that is designed from the ground up for the energy-efficient…
Simplifying Deep Neural Networks for FPGA-Like Neuromorphic Systems
TLDR: This paper presents two techniques, factorization and pruning, that not only compress the models but also maintain their form for execution on neuromorphic architectures, and proposes a novel method to combine the two techniques.
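As a rough illustration of the two simplification steps named above, the NumPy sketch below applies truncated-SVD factorization and magnitude pruning to a single weight matrix. The function names, rank, and keep ratio are illustrative assumptions, not the procedure from the paper.

```python
import numpy as np

def factorize(W, rank):
    """Low-rank factorization: approximate W (m x n) by U_r (m x rank) @ V_r (rank x n)."""
    U, s, Vt = np.linalg.svd(W, full_matrices=False)
    U_r = U[:, :rank] * s[:rank]          # fold singular values into the left factor
    V_r = Vt[:rank, :]
    return U_r, V_r

def prune(W, keep_ratio=0.5):
    """Magnitude pruning: zero out the smallest-magnitude weights."""
    threshold = np.quantile(np.abs(W), 1.0 - keep_ratio)
    return np.where(np.abs(W) >= threshold, W, 0.0)

# Example: a 256 x 256 fully connected layer compressed two ways.
W = np.random.randn(256, 256)
U_r, V_r = factorize(W, rank=32)          # 256*32 + 32*256 weights instead of 256*256
W_sparse = prune(W, keep_ratio=0.25)      # keep only the largest 25% of the weights
```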
Synthesis of activation-parallel convolution structures for neuromorphic architectures
  • S. Kim, Jaeyong Chung
  • Computer Science
  • Design, Automation & Test in Europe Conference & Exhibition (DATE), 2017
  • 2017
TLDR: An unrolling method is presented that generates parallel structures for the convolutional layers depending on the required level of parallel processing; it can improve performance or reduce power consumption significantly even without area penalty.
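The sketch below is a loose software analogue of such unrolling, assuming the parallelism level simply controls how many output positions of a convolution are evaluated per step; the paper's actual hardware synthesis flow is not reproduced here.

```python
import numpy as np

def conv2d_unrolled(x, k, parallelism):
    """Valid 2-D convolution evaluated `parallelism` output positions per step,
    mimicking an activation-parallel structure."""
    H, W = x.shape
    kh, kw = k.shape
    out_h, out_w = H - kh + 1, W - kw + 1
    # im2col: one flattened patch (row) per output position
    patches = np.array([x[i:i + kh, j:j + kw].ravel()
                        for i in range(out_h) for j in range(out_w)])
    out = np.empty(out_h * out_w)
    for start in range(0, len(patches), parallelism):  # one "cycle" per chunk
        chunk = patches[start:start + parallelism]
        out[start:start + len(chunk)] = chunk @ k.ravel()
    return out.reshape(out_h, out_w)

y = conv2d_unrolled(np.random.randn(8, 8), np.random.randn(3, 3), parallelism=4)
```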
Simplifying deep neural networks for neuromorphic architectures
  • Jaeyong Chung, T. Shin
  • Computer Science
  • 2016 53rd ACM/EDAC/IEEE Design Automation Conference (DAC)
  • 2016
TLDR: This paper presents two techniques, factorization and pruning, that not only compress the models but also maintain their form for execution on neuromorphic architectures, and proposes a novel method to combine the two techniques.
Live demonstration: Real-time image classification on a neuromorphic computing system with zero off-chip memory access
This demo shows a neuromorphic computing system called INsight that classifies images fed by an OV7670 camera in real time into 10 categories such as dogs, cats, and trucks. A 6-layer deep…
An Improved K-Spare Decomposing Algorithm for Mapping Neural Networks onto Crossbar-Based Neuromorphic Computing Systems
TLDR: An improved version of the K-spare neuron method is proposed that uses a decomposition algorithm to minimize the neuron-number overhead while maintaining the accuracy of the DNN model, using a mean squared quantization error (MSQE) to evaluate which crossbar units are more important and to give them larger scaling factors than others.
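A minimal sketch of how MSQE could rank crossbar units is given below, assuming symmetric fixed-point quantization with one scale per unit; the bit width and scale choice are illustrative assumptions, not taken from the paper.

```python
import numpy as np

def msqe(weights, scale, bits=4):
    """Mean squared quantization error of a crossbar unit's weights
    under symmetric fixed-point quantization with the given scale."""
    qmax = 2 ** (bits - 1) - 1
    q = np.clip(np.round(weights / scale), -qmax - 1, qmax) * scale
    return float(np.mean((weights - q) ** 2))

# Rank crossbar units by MSQE so the most error-prone (important) ones
# can be assigned finer scaling factors.
units = [np.random.randn(128, 128) * s for s in (0.1, 0.5, 1.0)]
scores = [msqe(w, scale=np.abs(w).max() / 7) for w in units]
order = np.argsort(scores)[::-1]   # highest-error units first
```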
A Neural Network Decomposition Algorithm for Mapping on Crossbar-Based Computing Systems
TLDR: The k-spare decomposition algorithm is proposed, which can trade off predictive performance against neuron usage during the mapping to improve the accuracy of the partially mapped network in the subsequent local decomposition.
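The sketch below shows the basic idea of decomposing a weight matrix into crossbar-sized blocks whose partial sums are accumulated (the role played by spare neurons in hardware); the k-spare accuracy/neuron-usage trade-off itself is not modeled.

```python
import numpy as np

def decompose_for_crossbars(W, xbar_size):
    """Split an (n_in x n_out) weight matrix into crossbar-sized blocks.
    Returns a list of (row_slice, col_slice, block) entries."""
    n_in, n_out = W.shape
    blocks = []
    for i in range(0, n_in, xbar_size):
        for j in range(0, n_out, xbar_size):
            blocks.append((slice(i, i + xbar_size), slice(j, j + xbar_size),
                           W[i:i + xbar_size, j:j + xbar_size]))
    return blocks

def evaluate(x, W, xbar_size):
    """Evaluate x @ W by summing partial results from each crossbar block,
    as accumulator (spare) neurons would in hardware."""
    y = np.zeros(W.shape[1])
    for rows, cols, block in decompose_for_crossbars(W, xbar_size):
        y[cols] += x[rows] @ block
    return y

x = np.random.randn(300)
W = np.random.randn(300, 200)
assert np.allclose(evaluate(x, W, xbar_size=128), x @ W)
```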
A dynamic fixed-point representation for neuromorphic computing systems
TLDR: This paper explores a design space for dynamic fixed-point neuromorphic computing systems and shows that a small group size is indispensable in neuromorphic architectures, because it is natural to group the weights associated with a neuron into one group.
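A hedged sketch of group-wise dynamic fixed-point quantization follows, assuming each group of neurons' weights shares one power-of-two scaling factor; the group sizes and bit width are illustrative, not the paper's configuration.

```python
import numpy as np

def dynamic_fixed_point(W, group_size, bits=8):
    """Quantize W with one shared power-of-two scale per group of columns,
    i.e. per group of neurons' incoming weights."""
    qmax = 2 ** (bits - 1) - 1
    Wq = np.empty_like(W)
    for j in range(0, W.shape[1], group_size):
        group = W[:, j:j + group_size]
        # power-of-two scale chosen from the group's dynamic range
        scale = 2.0 ** np.ceil(np.log2(np.abs(group).max() / qmax + 1e-12))
        Wq[:, j:j + group_size] = np.clip(np.round(group / scale), -qmax - 1, qmax) * scale
    return Wq

W = np.random.randn(64, 64)
err_small = np.mean((W - dynamic_fixed_point(W, group_size=1)) ** 2)
err_large = np.mean((W - dynamic_fixed_point(W, group_size=64)) ** 2)
# Smaller groups track the local dynamic range better, so err_small is typically lower.
```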
Recent trends in neuromorphic engineering
TLDR: Recent trends in neuromorphic engineering and its sub-domains are reviewed, with an attempt to identify key research directions that will assume significance in the future.
Designing Efficient Shortcut Architecture for Improving the Accuracy of Fully Quantized Neural Networks Accelerator
TLDR: An efficient shortcut architecture is proposed to enhance the representational capability of DNNs between different convolution layers, and the shortcut hardware architecture is implemented to effectively improve the accuracy of a fully quantized neural network accelerator.
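As a loose illustration, the sketch below adds an identity shortcut around a fully quantized layer and re-quantizes the sum; the actual shortcut hardware architecture in the paper is not reproduced, and all parameters here are assumptions.

```python
import numpy as np

def quantize(x, bits=8, scale=0.1):
    """Symmetric fixed-point quantization with a fixed scale."""
    qmax = 2 ** (bits - 1) - 1
    return np.clip(np.round(x / scale), -qmax - 1, qmax) * scale

def block_with_shortcut(x, W, scale=0.1):
    """One fully quantized layer with an identity shortcut: the input is
    added back onto the quantized layer output, then re-quantized."""
    y = quantize(np.maximum(x @ W, 0.0), scale=scale)   # quantized layer + ReLU
    return quantize(y + x, scale=scale)                 # shortcut add, re-quantize

x = quantize(np.random.randn(64), scale=0.1)
W = np.random.randn(64, 64) * 0.1
out = block_with_shortcut(x, W)
```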

References

Showing 1-10 of 32 references
Building block of a programmable neuromorphic substrate: A digital neurosynaptic core
TLDR: A building block of a modular neuromorphic architecture is presented: a neurosynaptic core that is fully configurable in terms of neuron parameters, axon types, and synapse states, and whose fully digital implementation achieves one-to-one correspondence with software simulation models.
A 45nm CMOS neuromorphic chip with a scalable architecture for learning in networks of spiking neurons
TLDR: A new architecture is proposed to enable scalable learning algorithms for networks of spiking neurons in silicon, combining innovations in computation, memory, and communication to leverage robust digital neuron circuits and novel transposable SRAM arrays.
A million spiking-neuron integrated circuit with a scalable communication network and interface
TLDR: Inspired by the brain's structure, an efficient, scalable, and flexible non-von Neumann architecture is developed that leverages contemporary silicon technology and is well suited to many applications that use complex neural networks in real time, for example, multiobject detection and classification.
Cognitive computing systems: Algorithms and applications for networks of neurosynaptic cores
TLDR: A set of abstractions, algorithms, and applications that are natively efficient for TrueNorth, a non-von Neumann architecture inspired by the brain's function and efficiency, is developed; the seven applications include speaker recognition, music composer recognition, digit recognition, sequence prediction, collision avoidance, optical flow, and eye detection.
A wafer-scale neuromorphic hardware system for large-scale neural modeling
TLDR: An integrated software/hardware framework has been developed which is centered around a unified neural system description language, called PyNN, that allows the scientist to describe a model and execute it in a transparent fashion on either a neuromorphic hardware system or a numerical simulator.
Real-Time Scalable Cortical Computing at 46 Giga-Synaptic OPS/Watt with ~100× Speedup in Time-to-Solution and ~100,000× Reduction in Energy-to-Solution
TLDR: TrueNorth is a 4,096-core, 1-million-neuron, 256-million-synapse brain-inspired neurosynaptic processor that consumes 65 mW of power running in real time and delivers a performance of 46 Giga-Synaptic OPS/Watt.
Cognitive computing building block: A versatile and efficient digital neuron model for neurosynaptic cores
TLDR: A simple, digital, reconfigurable, versatile spiking neuron model that supports one-to-one equivalence between hardware and simulation and is implementable using only 1272 ASIC gates is developed.
Neurogrid: A Mixed-Analog-Digital Multichip System for Large-Scale Neural Simulations
TLDR: The design of Neurogrid, a neuromorphic system for simulating large-scale neural models in real time, is described, for the first time, using 16 Neurocores integrated on a board that consumes three watts.
A high-performance FPGA architecture for restricted boltzmann machines
TLDR: This paper investigates how FPGAs can be used to take advantage of the inherent parallelism in neural networks to provide a better implementation in terms of scalability and performance, and focuses on the Restricted Boltzmann machine, a popular type of neural network.
A dynamically configurable coprocessor for convolutional neural networks
TLDR: This is the first CNN architecture to achieve real-time video stream processing (25 to 30 frames per second) on a wide range of object detection and recognition tasks.