The contrast in real-world scenes is often beyond what consumer cameras can capture. For these situations, High Dynamic Range (HDR) images can be generated by taking multiple exposures of the same scene. When fusing information from different images, however, the slightest change in the scene can generate artifacts that dramatically limit the potential of …
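The multi-exposure fusion described above can be sketched as a weighted average of the exposures in linear radiance space, where clipped pixels receive zero weight. This is a minimal illustration under my own assumptions (linear sensor response, a simple tent weighting function), not the method of the abstracted paper:

```python
import numpy as np

def merge_exposures(images, times):
    """Merge a stack of linear-intensity exposures into an HDR radiance map.

    images: list of float arrays with values in [0, 1]
    times:  matching exposure times in seconds
    """
    acc = np.zeros_like(images[0], dtype=np.float64)
    wsum = np.zeros_like(acc)
    for img, t in zip(images, times):
        img = img.astype(np.float64)
        # Tent weight: trust mid-tones; clipped pixels (0 or 1) get weight 0
        w = 1.0 - np.abs(2.0 * img - 1.0)
        acc += w * img / t          # img / t estimates scene radiance
        wsum += w
    return acc / np.maximum(wsum, 1e-8)
```

Because a pixel saturated in the long exposure gets zero weight, its radiance is taken entirely from the shorter exposure, which is the basic mechanism that extends dynamic range.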
Although there has been much interest in computational photography within the research and photography communities, progress has been hampered by the lack of a portable, programmable camera with sufficient image quality and computing power. To address this problem, we have designed and implemented an open architecture and API for such cameras: the …
In this paper we investigate the problem of recovering the motion blur point spread function (PSF) by fusing the information available in two differently exposed image frames of the same scene. The proposed method exploits the difference between the degradations that affect the two images due to their different exposure times. One of the images is mainly …
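A bare-bones version of the two-frame idea: if the long exposure B is the latent sharp image blurred by kernel k, and the short exposure S approximates that sharp image, then B ≈ k ⊛ S and k can be recovered by regularized division in the Fourier domain. This Wiener-style estimate under a circular-convolution assumption is a generic sketch, not the paper's proposed method:

```python
import numpy as np

def estimate_psf(blurred, sharp, eps=1e-3):
    """Estimate the blur kernel k from blurred ≈ k ⊛ sharp (circular conv.)."""
    B = np.fft.fft2(blurred)
    S = np.fft.fft2(sharp)
    # Regularized (Wiener-style) division avoids blow-up where |S| is small
    K = B * np.conj(S) / (np.abs(S) ** 2 + eps)
    return np.fft.ifft2(K).real   # kernel with its center at the origin
```

In practice the short exposure is noisy, so real methods denoise it and regularize the kernel estimate far more carefully than this sketch does.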
All-in-focus imaging is a computational photography technique that produces images free of defocus blur by capturing a stack of images focused at different distances and merging them into a single sharp result. Current approaches assume that images have been captured offline, and that a reasonably powerful computer is available to process them. In contrast, …
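As a rough illustration of the merge step (my own minimal sketch, not the system described in the abstract): measure per-pixel sharpness with a discrete Laplacian and keep, at each pixel, the value from the stack slice where that response is strongest.

```python
import numpy as np

def focus_stack(images):
    """Merge a focal stack by picking, per pixel, the sharpest slice."""
    stack = np.stack(images).astype(np.float64)
    # |Laplacian| as a sharpness measure (wrap-around boundaries for brevity)
    lap = np.abs(
        -4.0 * stack
        + np.roll(stack, 1, axis=1) + np.roll(stack, -1, axis=1)
        + np.roll(stack, 1, axis=2) + np.roll(stack, -1, axis=2)
    )
    idx = np.argmax(lap, axis=0)                 # sharpest slice per pixel
    return np.take_along_axis(stack, idx[None], axis=0)[0]
```

Production systems typically smooth the selection map and blend across slice boundaries; the hard per-pixel argmax here is the simplest possible choice.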
Figure 1: Uniformly sampling the space of exposure times until every pixel is correctly recorded at least once (i.e., it is not always clipped) can result in an unnecessarily large image stack with sub-optimal Signal-to-Noise ratio. For the scene shown in the tonemapped image on the left, this results in a 5-image stack. Our method determines that for this …
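The caption's contrast between uniform sampling and a scene-adaptive stack can be illustrated with a simple coverage argument. Under a hypothetical sensor model of my own (not the paper's algorithm), an exposure time t records a radiance E usably whenever lo ≤ E·t ≤ hi, so each exposure covers a fixed radiance ratio hi/lo, and a greedy sweep from the brightest scene radiance downward gives a minimal stack:

```python
def min_exposures(e_min, e_max, lo, hi):
    """Greedily choose exposure times so every radiance in [e_min, e_max]
    lands in the usable sensor range [lo, hi] for at least one exposure.
    """
    times = []
    e = e_max
    while e > e_min:
        t = hi / e        # longest exposure that keeps radiance e unclipped
        times.append(t)
        e = lo / t        # dimmest radiance this exposure still records
    return times
```

For example, with a usable per-exposure ratio of 16:1, a scene spanning a 256:1 radiance range needs only two exposures, whereas uniform sampling of exposure times would typically take more.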