Do Wide and Deep Networks Learn the Same Things? Uncovering How Neural Network Representations Vary with Width and Depth
@article{Nguyen2020DoWA,
  title   = {Do Wide and Deep Networks Learn the Same Things? Uncovering How Neural Network Representations Vary with Width and Depth},
  author  = {Thao Nguyen and M. Raghu and Simon Kornblith},
  journal = {ArXiv},
  year    = {2020},
  volume  = {abs/2010.15327}
}
A key factor in the success of deep neural networks is the ability to scale models to improve performance by varying the architecture depth and width. This simple property of neural network design has resulted in highly effective architectures for a variety of tasks. Nevertheless, there is limited understanding of the effects of depth and width on the learned representations. In this paper, we study this fundamental question. We begin by investigating how varying depth and width affects model…