This paper presents an efficient DNN design based on stochastic computing. Observing that directly adopting stochastic computing in DNNs raises several challenges, including random error fluctuation, range limitation, and accumulation overhead, we address these problems by removing near-zero weights, applying weight scaling, and integrating the activation function …
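The two weight-side remedies mentioned above can be sketched in a few lines. This is a minimal illustration, not the paper's exact method: the pruning threshold and the L1-preserving rescale rule are assumptions made for the example.

```python
import numpy as np

def prune_and_scale(weights, threshold=0.05):
    """Zero out near-zero weights, then rescale the survivors so the
    layer's overall weight magnitude is preserved.
    NOTE: the threshold value and the L1-norm scaling rule are
    illustrative assumptions, not taken from the paper."""
    pruned = np.where(np.abs(weights) < threshold, 0.0, weights)
    orig_norm = np.abs(weights).sum()
    new_norm = np.abs(pruned).sum()
    scale = orig_norm / new_norm if new_norm > 0 else 1.0
    return pruned * scale

w = np.array([0.02, -0.4, 0.01, 0.7, -0.03])
print(prune_and_scale(w))  # near-zero entries dropped, rest slightly scaled up
```

Removing near-zero weights matters in stochastic computing because values close to zero are exactly where the relative random error of a bitstream representation is worst.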
As deep neural networks grow larger, they suffer from a huge number of weights, so reducing the overhead of handling those weights has become one of the key challenges. This paper presents a new approach to binarizing neural networks, in which the weights are pruned and forced to take degenerate binary values. Experimental results show that the …
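One plausible reading of "pruned and forced to take degenerate binary values" is sketched below: small weights are zeroed, and the survivors collapse to a single shared magnitude with their original sign. The threshold and the mean-magnitude choice are assumptions for illustration, not the paper's stated procedure.

```python
import numpy as np

def binarize_with_pruning(weights, prune_thresh=0.1):
    """Sketch of pruned binarization: weights below the threshold are
    removed (set to 0); the rest keep only their sign times one shared
    magnitude. Hypothetical parameters, for illustration only."""
    mask = np.abs(weights) >= prune_thresh
    # Shared magnitude: mean absolute value of the surviving weights
    alpha = np.abs(weights[mask]).mean() if mask.any() else 0.0
    return np.sign(weights) * alpha * mask

w = np.array([0.05, -0.5, 0.3, 0.02])
print(binarize_with_pruning(w))  # small weights -> 0, others -> +/- alpha
```

The payoff is storage: each surviving weight needs only a sign bit plus one shared scale per layer, instead of a full-precision value per weight.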
Stochastic computing has been adopted in various fields to improve the power efficiency of systems. Recent work showed that DNNs based on stochastic computing can greatly reduce power consumption. However, stochastic computing suffers from high latency because it computes values only one bit per cycle. This paper proposes a new scheme to …
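The one-bit-per-cycle latency bottleneck follows directly from how stochastic computing represents values. A sketch of standard unipolar SC multiplication (not the paper's proposed scheme): each value in [0, 1] becomes a random bitstream whose ones-density equals the value, and a single AND gate per cycle multiplies them, so accuracy requires long streams.

```python
import random

def to_bitstream(p, length, rng):
    """Encode a value p in [0, 1] as a unipolar stochastic bitstream:
    each bit is 1 with probability p."""
    return [1 if rng.random() < p else 0 for _ in range(length)]

def sc_multiply(a, b, length=10000, seed=0):
    """Multiply two values in [0, 1] one bit per cycle: AND two
    independent bitstreams; the ones-density of the result approximates
    a * b. Illustrative textbook SC, not the paper's specific design."""
    rng = random.Random(seed)
    stream_a = to_bitstream(a, length, rng)
    stream_b = to_bitstream(b, length, rng)
    product_bits = [x & y for x, y in zip(stream_a, stream_b)]
    return sum(product_bits) / length

print(sc_multiply(0.5, 0.6))  # close to 0.30, within stochastic error
```

Note the latency/accuracy trade-off this exposes: halving the random error requires roughly quadrupling the stream length, i.e. 4x more cycles, which is exactly the overhead the abstract targets.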