Publications
Densely Connected Convolutional Networks
TLDR
The Dense Convolutional Network (DenseNet) connects each layer to every other layer in a feed-forward fashion and has several compelling advantages: it alleviates the vanishing-gradient problem, strengthens feature propagation, encourages feature reuse, and substantially reduces the number of parameters.
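A minimal PyTorch sketch of the dense connectivity described above, assuming a standard BN-ReLU-Conv layer; names and hyperparameters are illustrative, not the paper's reference implementation:

```python
import torch
import torch.nn as nn

class DenseBlock(nn.Module):
    """Sketch of DenseNet-style connectivity: each layer receives the
    concatenation of all preceding feature maps as its input."""
    def __init__(self, in_channels, growth_rate, num_layers):
        super().__init__()
        self.layers = nn.ModuleList()
        for i in range(num_layers):
            self.layers.append(nn.Sequential(
                nn.BatchNorm2d(in_channels + i * growth_rate),
                nn.ReLU(inplace=True),
                nn.Conv2d(in_channels + i * growth_rate, growth_rate,
                          kernel_size=3, padding=1, bias=False),
            ))

    def forward(self, x):
        features = [x]
        for layer in self.layers:
            # Each layer sees every earlier feature map, encouraging reuse.
            out = layer(torch.cat(features, dim=1))
            features.append(out)
        return torch.cat(features, dim=1)
```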
Learning Efficient Convolutional Networks through Network Slimming
TLDR
The approach, called network slimming, takes wide and large networks as input models; during training, insignificant channels are automatically identified and then pruned, yielding thin and compact models with comparable accuracy.
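A hedged sketch of the two core steps, assuming PyTorch and BatchNorm-based channel scaling as in the paper; the helper names and the keep-ratio criterion here are illustrative:

```python
import torch
import torch.nn as nn

def add_bn_sparsity_grad(model, lam=1e-4):
    """Network-slimming sparsity step: after loss.backward(), add the
    subgradient of an L1 penalty on every BatchNorm scale factor (gamma),
    pushing unimportant channels' gammas toward zero."""
    for m in model.modules():
        if isinstance(m, nn.BatchNorm2d):
            m.weight.grad.add_(lam * torch.sign(m.weight.data))

def select_channels(bn, keep_ratio=0.5):
    """After training, keep the channels with the largest |gamma|;
    the rest are pruned to obtain a thinner, compact model."""
    gammas = bn.weight.data.abs()
    k = max(1, int(keep_ratio * gammas.numel()))
    return torch.topk(gammas, k).indices
```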
Deep Networks with Stochastic Depth
TLDR
Stochastic depth is proposed: a training procedure that enables the seemingly contradictory setup of training short networks while using deep networks at test time; it substantially reduces training time and significantly improves test error on almost all data sets used for evaluation.
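A minimal sketch of the idea in PyTorch, assuming a generic residual block; the wrapper class is illustrative:

```python
import torch
import torch.nn as nn

class StochasticDepthBlock(nn.Module):
    """During training, each residual block is skipped (identity only)
    with probability 1 - survival_prob, so the expected network is short;
    at test time all blocks are active, scaled by their survival
    probability to match the training-time expectation."""
    def __init__(self, block, survival_prob=0.8):
        super().__init__()
        self.block = block
        self.survival_prob = survival_prob

    def forward(self, x):
        if self.training:
            if torch.rand(1).item() < self.survival_prob:
                return x + self.block(x)
            return x  # block dropped: identity shortcut only
        return x + self.survival_prob * self.block(x)
```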
Rethinking the Value of Network Pruning
TLDR
It is found that, with an optimal learning rate, the "winning ticket" initialization used in Frankle & Carbin (2019) brings no improvement over random initialization, suggesting the need for more careful baseline evaluations in future research on structured pruning methods.
Snapshot Ensembles: Train 1, get M for free
TLDR
This paper proposes a method that achieves the seemingly contradictory goal of ensembling multiple neural networks at no additional training cost: a single neural network is trained so that it converges to several local minima along its optimization path, and the model parameters are saved at each of them.
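A sketch of the training loop under a cyclic cosine-annealed learning rate, which drives the model into successive local minima; `train_one_epoch` is an assumed user-supplied function, and the schedule constants are illustrative:

```python
import copy
import math

def snapshot_train(model, optimizer, train_one_epoch,
                   epochs=300, cycles=6, lr_max=0.1):
    """Train one network, saving a snapshot at the end of each cosine
    cycle; the M snapshots are ensembled (e.g., averaged logits) at
    test time for no extra training cost."""
    snapshots = []
    epochs_per_cycle = epochs // cycles
    for epoch in range(epochs):
        # Cosine annealing from lr_max down to ~0 within each cycle.
        t = (epoch % epochs_per_cycle) / epochs_per_cycle
        lr = 0.5 * lr_max * (1 + math.cos(math.pi * t))
        for group in optimizer.param_groups:
            group['lr'] = lr
        train_one_epoch(model, optimizer)
        if (epoch + 1) % epochs_per_cycle == 0:
            snapshots.append(copy.deepcopy(model.state_dict()))
    return snapshots
```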
Multi-Scale Dense Networks for Resource Efficient Image Classification
TLDR
Experiments demonstrate that the proposed framework substantially improves on the existing state of the art in both image classification under test-time computational resource limits and budgeted batch classification.
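A hedged sketch of the budgeted-inference idea behind this framework: classifiers attached at intermediate depths allow early exits for easy inputs. The pair list, threshold, and single-sample assumption are all illustrative:

```python
import torch

def budgeted_predict(stages, x, threshold=0.9):
    """Early-exit inference in the spirit of MSDNet: `stages` is an
    assumed list of (feature_block, classifier) module pairs; evaluation
    stops at the first classifier confident enough about the input
    (assumes batch size 1 for the .item() call)."""
    for feature_block, classifier in stages:
        x = feature_block(x)
        probs = torch.softmax(classifier(x), dim=1)
        conf, pred = probs.max(dim=1)
        if conf.item() >= threshold:
            return pred, conf  # early exit saves computation
    return pred, conf  # fell through to the final classifier
```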
Semi-Supervised and Unsupervised Extreme Learning Machines
TLDR
It is shown in this paper that supervised, semi-supervised, and unsupervised ELMs can all be put into a unified framework, offering new perspectives on the mechanism of random feature mapping, the key concept in ELM theory.
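For context, a minimal NumPy sketch of the basic supervised ELM that the unified framework generalizes: a fixed random feature mapping followed by a closed-form, regularized least-squares solve for the output weights. Function names and defaults are illustrative:

```python
import numpy as np

def elm_train(X, Y, n_hidden=200, reg=1e-3, seed=0):
    """Hidden weights W, b are random and never trained (the random
    feature mapping); only the output weights beta are solved for."""
    rng = np.random.default_rng(seed)
    W = rng.standard_normal((X.shape[1], n_hidden))
    b = rng.standard_normal(n_hidden)
    H = np.tanh(X @ W + b)  # random feature mapping
    beta = np.linalg.solve(H.T @ H + reg * np.eye(n_hidden), H.T @ Y)
    return W, b, beta

def elm_predict(X, W, b, beta):
    return np.tanh(X @ W + b) @ beta
```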
CondenseNet: An Efficient DenseNet Using Learned Group Convolutions
TLDR
CondenseNet is developed: a network architecture with unprecedented efficiency that combines dense connectivity with learned group convolutions, a novel module that allows for efficient computation in practice.
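A brief illustration of why group convolution is the efficient end point here: parameters are cut by the number of groups. CondenseNet learns the grouping during training via a condensation procedure, which this sketch omits; the layer sizes are arbitrary:

```python
import torch.nn as nn

# A dense 1x1 conv versus a 4-group 1x1 conv on the same channel counts:
dense_1x1 = nn.Conv2d(128, 128, kernel_size=1, bias=False)            # 128*128 weights
group_1x1 = nn.Conv2d(128, 128, kernel_size=1, groups=4, bias=False)  # 128*128/4 weights
```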
Horizontal Pyramid Matching for Person Re-identification
TLDR
A simple yet effective Horizontal Pyramid Matching (HPM) approach to fully exploit various partial information of a given person, so that correct person candidates can still be identified even when some key parts are missing.
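A minimal sketch of the horizontal pyramid pooling that underlies this approach, assuming a backbone feature map in PyTorch; the strip scales follow the pyramid idea but are illustrative:

```python
import torch
import torch.nn.functional as F

def horizontal_pyramid_pool(feat, strips=(1, 2, 4, 8)):
    """Split the feature map into horizontal strips at several scales and
    pool each strip independently (avg + max), so a missing body part
    corrupts only a few of the resulting part-level descriptors."""
    n, c, h, w = feat.shape
    parts = []
    for s in strips:
        pooled = F.adaptive_avg_pool2d(feat, (s, 1)) + \
                 F.adaptive_max_pool2d(feat, (s, 1))
        parts.append(pooled.view(n, c, s))
    return torch.cat(parts, dim=2)  # (n, c, 1+2+4+8) part descriptors
```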