Learning Low-Level Vision

Abstract

We describe a learning-based method for low-level vision problems—estimating scenes from images. We generate a synthetic world of scenes and their corresponding rendered images, modeling their relationships with a Markov network. Bayesian belief propagation allows us to efficiently find a local maximum of the posterior probability for the scene, given an image. We call this approach VISTA—Vision by Image/Scene TrAining. We apply VISTA to the “super-resolution” problem (estimating high frequency details from a low-resolution image), showing good results. To illustrate the potential breadth of the technique, we also apply it in two other problem domains, both simplified. We learn to distinguish shading from reflectance variations in a single image under particular lighting conditions. For the motion estimation problem in a “blobs world”, we show figure/ground discrimination, solution of the aperture problem, and filling-in arising from application of the same probabilistic machinery.
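
For reference, the inference step described above can be written in a standard max-product belief-propagation form for a pairwise Markov network. This is only a sketch of the machinery: the compatibility functions \(\Phi\) (scene-to-image) and \(\Psi\) (scene-to-scene), the messages \(m\), and the neighborhoods \(N(\cdot)\) follow common usage and are not necessarily the paper's exact notation.

\[
m_{k \to j}(x_j) \;\leftarrow\; \max_{x_k} \; \Psi(x_j, x_k)\, \Phi(x_k, y_k) \prod_{l \in N(k) \setminus \{j\}} m_{l \to k}(x_k)
\]
\[
\hat{x}_j \;=\; \arg\max_{x_j} \; \Phi(x_j, y_j) \prod_{k \in N(j)} m_{k \to j}(x_j)
\]

Here \(x_j\) is the candidate scene patch at node \(j\) and \(y_j\) the corresponding observed image patch; iterating the message updates and reading out \(\hat{x}_j\) at each node yields a local maximum of the posterior probability of the scene given the image.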

DOI: 10.1023/A:1026501619075
