Feedback-prop: Convolutional Neural Network Inference under Partial Evidence

Abstract

In this paper, we propose an inference procedure for deep convolutional neural networks (CNNs) where partial evidence may be available at inference time. We introduce a general feedback-based propagation approach (feedback-prop) that boosts the prediction accuracy of an existing CNN model on an arbitrary set of unknown image labels when a non-overlapping arbitrary set of labels is known. We show that existing models trained in a multi-label or multi-task setting can readily take advantage of feedback-prop without any retraining or fine-tuning. This inference procedure also enables us to empirically evaluate which intermediate layers of various CNN architectures share the most information with respect to target outputs. Our feedback-prop inference procedure is general, simple, reliable, and works on different challenging visual recognition tasks. We present two variants of feedback-prop based on layer-wise and residual iterative updates. We perform evaluations on several tasks involving multiple simultaneous predictions and show that feedback-prop is effective in all of them. In summary, our experiments reveal a previously unreported and interesting dynamic property of deep CNNs, and we present a technical approach that takes advantage of this property for inference under partial evidence in general visual recognition tasks.
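The core idea behind feedback-prop can be illustrated with a small sketch: keep the network weights fixed, treat an intermediate activation as a free variable, and run gradient descent on the loss over the *known* labels only, then re-read the predictions for the unknown labels. The snippet below is a minimal, hypothetical NumPy illustration of the layer-wise variant on a single linear output layer with sigmoid multi-label outputs; all names (`W2`, `h0`, the toy dimensions) are illustrative and not taken from the paper's actual models or code.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def feedback_prop(W2, h0, y_known, known_idx, lr=0.1, steps=100):
    """Layer-wise feedback-prop sketch: keep the weights W2 fixed,
    treat the intermediate activation h as a free variable, and
    descend the binary cross-entropy over the known labels only."""
    h = h0.copy()
    for _ in range(steps):
        p = sigmoid(W2 @ h)
        # BCE gradient w.r.t. the logits, restricted to the known outputs
        g_logits = np.zeros_like(p)
        g_logits[known_idx] = p[known_idx] - y_known
        # back-propagate one layer to the activation and update it
        h -= lr * (W2.T @ g_logits)
    return sigmoid(W2 @ h)

# Toy demo: 4 binary labels predicted from a 3-d intermediate activation.
rng = np.random.default_rng(0)
W2 = rng.standard_normal((4, 3))   # fixed output-layer weights (hypothetical)
h0 = rng.standard_normal(3)        # activation produced by the forward pass
known_idx = np.array([0, 1])       # labels 0 and 1 are the observed evidence
y_known = np.array([1.0, 0.0])

p_before = sigmoid(W2 @ h0)
p_after = feedback_prop(W2, h0, y_known, known_idx)
```

Because the updated activation is shared by all outputs, fitting the known labels also moves the predictions for the unknown labels `p_after[2:]`, which is the information-sharing effect the paper exploits.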

10 Figures and Tables

Cite this paper

@article{Wang2017FeedbackpropCN,
  title   = {Feedback-prop: Convolutional Neural Network Inference under Partial Evidence},
  author  = {Tianlu Wang and Kota Yamaguchi and Vicente Ordonez},
  journal = {CoRR},
  year    = {2017},
  volume  = {abs/1710.08049}
}