Publications
DaDianNao: A Machine-Learning Supercomputer
TLDR: We introduce a custom multi-chip machine-learning architecture containing a combination of custom storage and computational units, with industry-grade interconnects.
ShiDianNao: Shifting vision processing closer to the sensor
TLDR: In recent years, neural network accelerators have been shown to achieve both high energy efficiency and high performance for a broad application scope within the important category of recognition and mining applications.
On the estimation of transfer functions, regularizations and Gaussian processes - Revisited
TLDR: We formulate a classical regularization approach, focused on finite impulse response (FIR) models, and find that regularization is necessary to cope with the high variance problem.
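A minimal sketch of the underlying idea, assuming a kernel-regularized FIR estimate with a TC-style exponential-decay prior; the kernel hyperparameters, model order, and toy data below are illustrative assumptions, not values taken from the paper:

import numpy as np

def tc_kernel(n, c=1.0, lam=0.8):
    """TC ('tuned/correlated') kernel: P[i, j] = c * lam**max(i, j).
    Encodes the prior that the impulse response decays exponentially."""
    idx = np.arange(n)
    return c * lam ** np.maximum.outer(idx, idx)

def regularized_fir(u, y, n=50, sigma2=0.1, c=1.0, lam=0.8):
    """Regularized FIR estimate: argmin ||y - Phi g||^2 + sigma2 * g' P^{-1} g,
    i.e. the Gaussian-process posterior mean under the kernel prior P."""
    N = len(y)
    # Regressor matrix of past inputs (zero initial conditions assumed).
    Phi = np.zeros((N, n))
    for k in range(n):
        Phi[k:, k] = u[: N - k]
    P = tc_kernel(n, c, lam)
    # Closed-form regularized least squares via the matrix-inversion identity.
    return P @ Phi.T @ np.linalg.solve(Phi @ P @ Phi.T + sigma2 * np.eye(N), y)

# Toy usage: identify a first-order system from noisy input/output data.
rng = np.random.default_rng(0)
u = rng.standard_normal(200)
g_true = 0.5 * 0.9 ** np.arange(50)
y = np.convolve(u, g_true)[:200] + 0.1 * rng.standard_normal(200)
ghat = regularized_fir(u, y)

Without the penalty term, a 50-tap FIR fit to 200 noisy samples has high variance; the kernel shrinks the estimate toward smooth, decaying responses, which is the point made in the summary above.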
Kernel methods in system identification, machine learning and function estimation: A survey
TLDR: Learning techniques tailored to the specific features of dynamic systems may outperform conventional parametric approaches for identification of stable linear systems.
Cambricon: An Instruction Set Architecture for Neural Networks
TLDR: In this paper, we propose a novel domain-specific Instruction Set Architecture (ISA) for neural networks called Cambricon, which allows NN accelerators to flexibly support a broad range of different NN techniques.
Cambricon-S: Addressing Irregularity in Sparse Neural Networks through A Cooperative Software/Hardware Approach
TLDR: We propose a software-based coarse-grained pruning technique to reduce the irregularity of sparse synapses drastically.
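A minimal sketch of what coarse-grained pruning can look like, assuming magnitude-based removal of whole weight blocks; the block size, sparsity target, and scoring rule are illustrative assumptions rather than the paper's exact scheme:

import numpy as np

def coarse_grained_prune(W, block=(4, 4), sparsity=0.5):
    """Zero out the weight blocks with the smallest L1 norm.
    Dropping whole blocks keeps the surviving sparsity pattern regular,
    which is easier for hardware to exploit than element-wise pruning."""
    rows, cols = W.shape
    br, bc = block
    assert rows % br == 0 and cols % bc == 0
    # View W as a grid of (br x bc) blocks and score each block.
    blocks = W.reshape(rows // br, br, cols // bc, bc)
    scores = np.abs(blocks).sum(axis=(1, 3))           # L1 norm per block
    k = int(sparsity * scores.size)                    # number of blocks to drop
    thresh = np.sort(scores, axis=None)[k]             # score cutoff
    mask = (scores >= thresh)[:, None, :, None]        # broadcast back to elements
    return (blocks * mask).reshape(rows, cols)

# Toy usage: remove half of the 4x4 blocks of a random weight matrix.
W = np.random.default_rng(1).standard_normal((16, 16))
W_pruned = coarse_grained_prune(W, block=(4, 4), sparsity=0.5)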
Statistical Performance Comparisons of Computers
TLDR: We propose a non-parametric hierarchical performance testing framework for performance comparison, which is significantly more practical than standard t-statistics because it does not require collecting a large number of performance observations to achieve a normal distribution of the sample mean.
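A minimal sketch of the non-parametric idea, using a rank-based two-sample test (SciPy's Mann-Whitney U) in place of a t-test so that no normality of the sample mean is required; the benchmark scores below are made up, and this is only a single-level building block, not the paper's full hierarchical framework:

import numpy as np
from scipy.stats import mannwhitneyu, ttest_ind

# Hypothetical benchmark speedups measured on two machines: small samples
# with an outlier, standing in for repeated performance observations.
machine_a = np.array([1.9, 2.1, 2.0, 8.5, 2.2, 1.8])
machine_b = np.array([1.7, 1.6, 1.8, 1.9, 1.5, 1.7])

# A t-test leans on approximate normality of the sample mean; with few,
# skewed observations that assumption is shaky.
t_stat, t_p = ttest_ind(machine_a, machine_b, equal_var=False)

# A rank-based test uses only the ordering of the observations, so it does
# not need a large sample for the central limit theorem to kick in.
u_stat, u_p = mannwhitneyu(machine_a, machine_b, alternative="greater")

print(f"Welch t-test p = {t_p:.3f}, Mann-Whitney p = {u_p:.3f}")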
Regularized system identification using orthonormal basis functions
TLDR: In this paper, we extend the regularization method from impulse response estimation to the more general orthonormal basis functions estimation.
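A minimal sketch of regularized estimation with orthonormal basis functions, assuming discrete-time Laguerre functions and a ridge penalty on the coefficients; the pole location, basis order, and penalty weight are illustrative assumptions, not the paper's formulation:

import numpy as np
from scipy.signal import lfilter

def laguerre_basis(n_basis, n_taps, a=0.8):
    """Impulse responses of discrete-time Laguerre filters with pole `a`;
    the columns are approximately orthonormal once truncated at enough taps."""
    delta = np.zeros(n_taps)
    delta[0] = 1.0
    # First Laguerre function: sqrt(1 - a^2) * z^{-1} / (1 - a z^{-1}).
    g = lfilter([0.0, np.sqrt(1 - a**2)], [1.0, -a], delta)
    basis = [g]
    for _ in range(1, n_basis):
        # Each next function passes the previous one through the all-pass
        # (z^{-1} - a) / (1 - a z^{-1}).
        g = lfilter([-a, 1.0], [1.0, -a], g)
        basis.append(g)
    return np.column_stack(basis)           # shape (n_taps, n_basis)

def regularized_basis_fit(u, y, n_basis=8, n_taps=100, a=0.8, gamma=1e-2):
    """Fit y ~ Phi @ B @ theta with a ridge penalty on the basis coefficients."""
    N = len(y)
    B = laguerre_basis(n_basis, n_taps, a)
    # FIR regressor matrix of past inputs, then project it onto the basis.
    Phi = np.zeros((N, n_taps))
    for k in range(n_taps):
        Phi[k:, k] = u[: N - k]
    X = Phi @ B
    theta = np.linalg.solve(X.T @ X + gamma * np.eye(n_basis), X.T @ y)
    return B @ theta                         # estimated impulse response

Compared with the plain FIR case, the unknowns are a handful of basis coefficients rather than every impulse-response tap, and the penalty acts on those coefficients.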
Scaling Up Estimation of Distribution Algorithms for Continuous Optimization
TLDR: We propose a novel EDA framework with model complexity control (EDA-MCC) to scale up continuous EDAs for large-scale optimization.
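A minimal sketch of a continuous estimation-of-distribution algorithm; complexity control is stood in for here by a diagonal Gaussian model (O(n) parameters instead of O(n^2) for a full covariance), which is an illustrative simplification rather than the actual EDA-MCC mechanism:

import numpy as np

def sphere(x):
    """Toy objective: minimize the sphere function."""
    return np.sum(x**2, axis=1)

def simple_eda(f, dim=20, pop=200, elite=0.3, iters=100, seed=0):
    """Continuous EDA with a diagonal (axis-aligned) Gaussian model.
    Restricting the covariance to its diagonal keeps the probabilistic
    model cheap to estimate as the dimension grows."""
    rng = np.random.default_rng(seed)
    mean = rng.uniform(-5, 5, dim)
    std = np.full(dim, 2.0)
    n_elite = int(elite * pop)
    for _ in range(iters):
        # Sample a population from the current model.
        X = rng.normal(mean, std, size=(pop, dim))
        # Select the best individuals...
        idx = np.argsort(f(X))[:n_elite]
        elite_X = X[idx]
        # ...and refit the distribution to them.
        mean = elite_X.mean(axis=0)
        std = elite_X.std(axis=0) + 1e-12
    return mean, f(mean[None, :])[0]

best_x, best_f = simple_eda(sphere)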
DaDianNao: A Neural Network Supercomputer
TLDR: We introduce a custom multi-chip machine-learning architecture and evaluate its performance when integrating electrical and optical inter-chip interconnects separately.