Computational Cameras: Convergence of
Optics and Processing
Changyin Zhou, Student Member, IEEE, and Shree K. Nayar, Member, IEEE
Abstract—A computational camera uses a combination of optics
and processing to produce images that cannot be captured with
traditional cameras. In the last decade, computational imaging has
emerged as a vibrant field of research. A wide variety of computational
cameras have been demonstrated to encode more useful visual
information in the captured images than conventional
cameras. In this paper, we survey computational cameras
from two perspectives. First, we present a taxonomy of computational
camera designs according to the coding approaches, including
object side coding, pupil plane coding, sensor side coding,
illumination coding, camera arrays and clusters, and unconventional
imaging systems. Second, we use the abstract notion of light
field representation as a general tool to describe computational
camera designs, where each camera can be formulated as a projection
of a high-dimensional light field to a 2-D image sensor. We
show how individual optical devices transform light fields and use
these transforms to illustrate how different computational camera
designs (collections of optical devices) capture and encode useful
visual information.
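The light-field formulation in the abstract can be illustrated with a toy sketch (not from the paper): a 4-D light field L(u, v, x, y), with angular aperture coordinates (u, v) and sensor coordinates (x, y), is projected to a 2-D image by integrating over the aperture. The binary pupil mask below is a hypothetical stand-in for the pupil-plane coding designs the survey classifies.

```python
import numpy as np

# Hypothetical 4-D light field L(u, v, x, y): (u, v) sample the lens
# aperture, (x, y) index sensor pixels. Random data stands in for a scene.
rng = np.random.default_rng(0)
U, V, X, Y = 4, 4, 32, 32
light_field = rng.random((U, V, X, Y))

# A conventional camera integrates over the full aperture:
#   I(x, y) = sum over (u, v) of L(u, v, x, y)
image = light_field.sum(axis=(0, 1))

# A computational camera instead applies a coded projection first,
# here a toy binary pupil-plane mask M(u, v) before integration.
mask = rng.random((U, V)) > 0.5
coded_image = (light_field * mask[:, :, None, None]).sum(axis=(0, 1))

print(image.shape, coded_image.shape)  # both 2-D sensor images
```

Both projections collapse the two angular dimensions, matching the abstract's view of a camera as a projection of a high-dimensional light field onto a 2-D sensor; the mask merely changes which rays contribute.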
Index Terms—Computer vision, imaging, image processing,
optics.
http://www1.cs.columbia.edu/CAVE/publications/pdfs/Zhou_TIP11.pdf