Image upsampling via texture hallucination


Image upsampling is a common yet challenging task, since it is severely underconstrained. While considerable progress has been made in preserving the sharpness of salient edges, current methods fail to reproduce the fine detail typically present in the textured regions bounded by these edges, resulting in an unrealistic appearance. In this paper we address this fundamental shortcoming by integrating higher-level image analysis with custom low-level image synthesis. Our approach extends and refines the patch-based image model of Freeman et al. [10] and interprets the image as a tiling of distinct textures, each of which is matched to an example in a database of relevant textures. The matching is not done at the patch level, but rather collectively, over entire segments. Following this model-fitting stage, which requires some user guidance, a higher-resolution image is synthesized using a hybrid approach that incorporates principles from example-based texture synthesis. We show that for images that comply with our model, our method is able to reintroduce consistent fine-scale detail, resulting in the enhanced appearance of textured regions.
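The core idea of example-based detail hallucination can be illustrated with a toy sketch: smoothly upsample the input, then, for each patch, look up the exemplar patch whose low-frequency content matches best and transfer that patch's high-frequency residual. This is only a minimal stand-in for the paper's method (which matches collectively over entire segments, not independent patches); all function names and parameters below are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def extract_patches(img, p):
    """Collect all p x p patches (flattened) from a 2-D array."""
    h, w = img.shape
    return np.array([img[i:i + p, j:j + p].ravel()
                     for i in range(h - p + 1)
                     for j in range(w - p + 1)])

def box_blur(img, k=3):
    """Crude low-pass filter: k x k box kernel with edge padding."""
    pad = k // 2
    padded = np.pad(img, pad, mode='edge')
    out = np.zeros(img.shape, dtype=float)
    for di in range(k):
        for dj in range(k):
            out += padded[di:di + img.shape[0], dj:dj + img.shape[1]]
    return out / (k * k)

def hallucinate_detail(upsampled, exemplar, p=4):
    """Toy patch-level detail transfer (hypothetical sketch, not the
    paper's segment-level algorithm): for each non-overlapping p x p
    patch of the smooth upsampled image, find the exemplar patch whose
    low-pass content is closest in L2 distance, and add that exemplar
    patch's high-frequency residual."""
    ex_low = box_blur(exemplar)
    ex_high = exemplar - ex_low
    low_keys = extract_patches(ex_low, p)    # search keys
    high_vals = extract_patches(ex_high, p)  # detail to transfer
    out = upsampled.astype(float).copy()
    h, w = out.shape
    for i in range(0, h - p + 1, p):
        for j in range(0, w - p + 1, p):
            q = out[i:i + p, j:j + p].ravel()
            idx = np.argmin(((low_keys - q) ** 2).sum(axis=1))
            out[i:i + p, j:j + p] += high_vals[idx].reshape(p, p)
    return out
```

In the actual method, matching a whole segment to one texture exemplar (with user guidance) avoids the patch-level inconsistencies this naive per-patch lookup would produce across a textured region.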

Cite this paper

@inproceedings{HaCohen2010ImageUV,
  title     = {Image upsampling via texture hallucination},
  author    = {Yoav HaCohen and Raanan Fattal and Dani Lischinski},
  booktitle = {2010 IEEE International Conference on Computational Photography (ICCP)},
  year      = {2010},
  pages     = {1--8}
}