Capturing images at multiple focal lengths within a single shot of a digital camera, then analytically combining them into one composite, is among the latest research developments from members of the Computer Graphics Group in MIT's Computer Science and Artificial Intelligence Laboratory.
EECS faculty members Frédo Durand, associate professor of computer science and engineering, and William Freeman, professor of electrical engineering and computer science, together with MIT postdoctoral associate Sam Hasinoff, earlier group member Anat Levin, and Kiriakos Kutulakos of the University of Toronto, presented the new work at the IEEE International Conference on Computer Vision in Kyoto, Japan, on Oct. 2. As reported by the MIT News Office (Sept. 30, 2009), the team demonstrated a mathematical model that determines how many exposures will yield the sharpest image given a time limit, a focal distance, and a light-meter reading, and showed how today's digital cameras could benefit from a new adaptation the group calls a "lattice-focal lens." The device consists of an ordinary lens filter with what look like 12 tiny boxes of different heights clustered at its center. Each box is actually a lens with a different focal length, and each projects an image onto a different part of the camera's sensor. The same algorithm used to combine multiple exposures can also recover a regular photo from this raw image.
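The trade-off behind the exposure-count question can be illustrated with a toy model: splitting a fixed time budget among more focus-bracketed shots covers the depth range more densely (less worst-case defocus blur), but gives each shot less light (more photon noise). This is only a minimal sketch under assumed error models; the function names, parameters, and the quadrature error combination are illustrative assumptions, not the team's actual optimization.

```python
import math

def best_exposure_count(total_time, depth_range, max_shots=12,
                        aperture_blur=1.0, noise_scale=1.0):
    """Toy model: split a time budget `total_time` among n focused shots.

    Photon noise per shot scales like 1/sqrt(exposure time); worst-case
    defocus blur shrinks as the focus settings are spaced more densely.
    Return the shot count n that minimizes a combined error score.
    (All models and constants here are illustrative assumptions.)
    """
    best_n, best_err = 1, float("inf")
    for n in range(1, max_shots + 1):
        t = total_time / n                      # time per exposure
        noise = noise_scale / math.sqrt(t)      # photon-noise std ~ 1/sqrt(t)
        # worst-case defocus: half the spacing between focus settings
        blur = aperture_blur * depth_range / (2 * n)
        err = math.hypot(noise, blur)           # combine errors in quadrature
        if err < best_err:
            best_n, best_err = n, err
    return best_n

# A tight time budget favors a single shot; a generous one favors bracketing.
print(best_exposure_count(total_time=0.05, depth_range=4.0))  # → 1
print(best_exposure_count(total_time=2.0, depth_range=4.0))   # → 3
```

Under this toy model, noise dominates when time is scarce, so one well-exposed shot wins; with more time available, spreading exposures over several focus settings pays off.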
"Time-Constrained Photography," Samuel W. Hasinoff, Kiriakos N. Kutulakos, Frédo Durand, William T. Freeman.
"4D Frequency Analysis of Computational Cameras for Depth of Field Extension," Anat Levin, Samuel W. Hasinoff, Paul Green, Frédo Durand, William T. Freeman.