Monday, 4 January 2016

sensor - What are the basic workings of the Lytro light-field camera?


lytro.com describes their new light field camera as being able to capture the entire light field, rather than just one plane of light, thereby allowing for a whole new set of post-processing possibilities, including focus and perspective adjustment.


What sort of sensor could "capture every beam of light in every direction at every point in time"? How would an essentially infinite amount of information be encoded and manipulated? Would there be a lens up front in any traditional sense?


Here is the inventor's dissertation: http://www.lytro.com/renng-thesis.pdf


Can somebody boil this down for those of us who are familiar with traditional technologies?



Answer



Here is my nutshell after reading through Ren Ng's very approachable paper.


In a traditional digital camera the incoming light is focused onto a plane, the sensor, which measures brightness at each photosensitive cell (pixel). This produces a final image in the sense that the resulting raster of values can be plotted directly as a coherent image.



A light-field (plenoptic) camera uses the same kind of sensor, but places an array of microlenses in front of it. The microlens array becomes the imaging plane, and it, rather than the sensor as a whole, defines the pixel resolution of post-processed images. Each microlens captures light rays arriving from varying directions, producing a "sub-aperture image" that is recorded onto a group of cells on the sensor. A helpful diagram from the paper:


[Diagram from the thesis: the main lens focuses onto the microlens array, and each microlens spreads rays by direction across a patch of sensor pixels.]
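To make the geometry concrete, here is a minimal NumPy sketch of how a raw sensor readout maps to a 4D light field. The grid sizes are hypothetical and assume ideal, axis-aligned square microlenses (a real Lytro sensor uses a hexagonal array and needs calibration):

```python
import numpy as np

# Hypothetical geometry: a 6x6 grid of microlenses, each covering
# a 4x4 patch of sensor pixels.
N_LENS, N_PIX = 6, 4
raw = np.random.rand(N_LENS * N_PIX, N_LENS * N_PIX)  # raw sensor readout

# Reshape the flat readout into a 4D light field L[s, t, u, v]:
# (s, t) indexes the microlens (spatial position on the imaging plane),
# (u, v) indexes the pixel under that lens (ray direction).
lf = raw.reshape(N_LENS, N_PIX, N_LENS, N_PIX).transpose(0, 2, 1, 3)

# One "sub-aperture image": fix a direction (u, v), vary position (s, t).
sub_aperture = lf[:, :, 1, 2]          # shape (6, 6)

# The conventional photograph: integrate over all directions per lens.
conventional = lf.sum(axis=(2, 3))     # shape (6, 6)
```

Note the trade: the output image here is only 6x6 even though the sensor has 24x24 pixels; the other two dimensions went into recording direction.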


The conventional photograph that would have formed can be derived by summing the array of sub-aperture values for each pixel. But the point is that other derivations become possible through ray-tracing computations. (Leonardo da Vinci would be envious.) In particular, depth of field can be manipulated after the fact, thereby decoupling the traditional aperture/depth-of-field shackles. Lens aberration correction may be feasible as well.
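Synthetic refocusing can be sketched with the common "shift-and-add" approximation: shift each sub-aperture image in proportion to its direction, then sum. This is a simplified stand-in for the thesis's refocusing integral (integer shifts only; real implementations interpolate):

```python
import numpy as np

def refocus(lf, alpha):
    """Shift-and-add refocusing of a 4D light field lf[s, t, u, v].

    alpha selects the virtual focal plane: alpha = 0 reproduces the
    conventional photograph; other values shift each sub-aperture
    image by an amount proportional to its direction (u, v) before
    summing, bringing a different depth into focus.
    """
    S, T, U, V = lf.shape
    out = np.zeros((S, T))
    for u in range(U):
        for v in range(V):
            # Shift relative to the central direction.
            du = int(round(alpha * (u - (U - 1) / 2)))
            dv = int(round(alpha * (v - (V - 1) / 2)))
            out += np.roll(lf[:, :, u, v], shift=(du, dv), axis=(0, 1))
    return out
```

With `alpha = 0` this degenerates to plain summation over directions, i.e. the photo the camera would have taken conventionally.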


The paper describes the "total" light field, and "all" directions of light, as being captured, when in reality that is limited by the number of microlenses, the sensor real estate under each one, and so on. Of course, as with anything else, if enough resolution can be thrown at it, one could say "virtually all". So I suppose plenoptic cameras would advertise pixel count AND ray count.
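The pixel-count/ray-count trade-off is simple arithmetic. The numbers below are hypothetical, chosen only to show the bookkeeping:

```python
# Hypothetical sensor: 40 MP, with each microlens covering a
# 10x10 patch of pixels.
sensor_pixels = 40_000_000
pixels_per_lens = 10 * 10

# Spatial resolution of the refocusable image = number of microlenses.
image_pixels = sensor_pixels // pixels_per_lens   # 0.4 MP output image

# Directional resolution = rays measured per output pixel.
rays_per_pixel = pixels_per_lens                  # 100 directions

# Total rays captured equals total sensor pixels ("megarays").
total_rays = image_pixels * rays_per_pixel
```

So a camera marketed by ray count ("40 megarays") is simultaneously conceding a much lower output pixel count, which is exactly the trade-off the marketing language glosses over.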


