Saturday, 27 January 2018

lighting - Camera with good linear light response for photometric accuracy?


I want to photograph rooms and spaces indoors, as well as covered areas outdoors, and get good measurements of the illumination. Light sources will be sun, sky, and artificial lighting. Another use is to photograph materials of various reflectivities side by side, to get accurate measures of those reflectivities.


I can handle the physics - watts per steradian per square meter and all that. I just need a camera where I can be sure pixel values are proportional to physical illumination - no built-in gamma correction, tone curves, or other enhancements.


I could use RAW, but I'd prefer ordinary formats for their smaller size. Of course, 8-bit-per-channel formats will give me only 256 distinct values; I can live with that, since I can bracket exposures widely. There is no motion to be concerned about.


Which off-the-shelf cameras are most suitable for this use? Alternatively, how can I test a given camera for linearity and accuracy?



Answer




It sounds like you need a scientific imaging device. When I worked with these things, I was told that scientific-grade CCD imagers are the most linear devices known to man, in contrast to the imagers discussed by @Guffa. I'm talking about cameras made by Photometrics or PCO (the SensiCam), or devices made for astrophotography or microscopy.


These imagers are distinct from commercial grade imaging devices in that:



  • No lens. You have to supply that; this is a pure detector. The mount is typically C or F mount.

  • There are no hot pixels or cold pixels (at least in the $20k-per-chip range). If there are, you return the camera to the manufacturer for a replacement.

  • A few years back, 1280×1024 at 8 fps was considered very good. Maybe they've gotten larger since then, I don't know.

  • You can bin: combine adjacent pixels on-chip to increase the sensitivity of the device at the cost of spatial resolution (see the sketch after this list).

  • The logic for reading pixels off the device is very good. On older devices (more than ten years old), there was a slight error as charge was shifted from pixel to pixel toward the analog-to-digital converter at the edge of the chip. That error is essentially zero in modern devices. Contrast this with CMOS imagers, where readout happens at each pixel (so the A/D conversion may not be identical from pixel to pixel).

  • The chip is cooled, usually to -20 to -40 °C, to minimize thermal noise.

  • Part of the manufacturer's specification is the quantum efficiency (QE): the probability that an incident photon will be converted to an electron and recorded. A back-thinned CCD might have a QE of around 70-90% for a green (~530 nm) photon, whereas others might be more in the 25-45% range.


  • These imagers are purely monochrome, recording over a spectral range specified by the manufacturer, which can extend into the IR and UV. Most glass will cut UV (you have to get special glass or quartz to pass it), but blocking IR will probably need some additional filtering.
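
To make the binning trade-off concrete, here is a minimal software sketch in Python/NumPy. Real binning happens on-chip, where charge is summed before readout (which is why it improves sensitivity without a per-pixel readout-noise penalty), but the arithmetic is the same: each output pixel collects factor² as much signal at 1/factor the resolution.

```python
import numpy as np

def bin_pixels(image, factor=2):
    """Software NxN binning: sum adjacent pixel blocks.

    Hardware binning sums charge on-chip before readout; this
    software version illustrates the same trade-off: factor^2
    more signal per output pixel, 1/factor the resolution.
    """
    h, w = image.shape
    h -= h % factor  # crop to a multiple of the bin factor
    w -= w % factor
    blocks = image[:h, :w].reshape(h // factor, factor, w // factor, factor)
    return blocks.sum(axis=(1, 3))

# Example: a simulated 1280x1024 12-bit frame, binned 2x2 -> 640x512
frame = np.random.randint(0, 4096, size=(1024, 1280), dtype=np.uint32)
print(bin_pixels(frame).shape)  # (512, 640)
```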


The sum of these distinctions is that the value of each pixel correlates very highly with the number of photons that struck that pixel's physical location. With a commercial camera, you have no guarantee that pixels behave the same as one another (in fact, it's a good bet that they don't), or that they behave the same way from image to image.


With this class of device, you'll know the photon flux at any given pixel, within the limits of noise. Image averaging then becomes the best way to handle that noise.
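
As a rough sketch of that averaging step (assuming your frames are already loaded as NumPy arrays): averaging N identically exposed frames reduces random noise by roughly √N while leaving the linear mean pixel value untouched.

```python
import numpy as np

def average_frames(frames):
    """Average a stack of identically exposed frames.

    Random noise in the mean falls as 1/sqrt(N), so averaging N
    frames improves the signal-to-noise ratio by sqrt(N) without
    changing the (linear) mean pixel value.
    """
    stack = np.stack([np.asarray(f, dtype=np.float64) for f in frames])
    return stack.mean(axis=0)
```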


That level of precision may be more than you need. If you want to go commercial grade, here's a way to do it:



  • Get a camera with a Sigma (Foveon) imaging chip. These were originally made for the scientific imaging market. The advantage of this chip is that each pixel senses red, green, and blue at the same location (the layers are stacked), rather than using a Bayer sensor, where the color sites do not overlap.

  • Use this camera only at ISO 100. Don't use the other ISO settings.

  • Place the camera in front of a light source of known output at a known distance. The flatter this illumination (i.e., the more uniform it is from edge to edge of the frame), the better.

  • Record images at a given exposure time, then either vary the exposure time to change the apparent flux at the sensor, or vary your light source.


  • From this set of images, build a curve of the average pixel value in red, green, and blue against the known flux. That way, you can translate pixel intensity to flux (see the fitting sketch after this list).

  • If you have a completely flat illumination profile, you can also characterize your lens's edge falloff (vignetting).
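
As a minimal sketch of that fitting step, with made-up numbers standing in for your measured channel averages: fit the mean pixel value linearly against exposure time. The intercept captures the black level, and the residuals expose any nonlinearity.

```python
import numpy as np

# Hypothetical measurements: exposure times (s) and the mean pixel
# value per channel from the flat-field series described above.
exposure_s = np.array([0.01, 0.02, 0.04, 0.08, 0.16])
mean_value = {
    "red":   np.array([10.1, 20.3, 40.2, 80.9, 161.5]),
    "green": np.array([12.0, 24.1, 48.3, 96.2, 192.8]),
    "blue":  np.array([8.2, 16.5, 32.9, 66.1, 131.7]),
}

# Fit value = gain * exposure + offset for each channel.  The gain
# converts pixel intensity back to flux; large residuals would mean
# the response is not linear over this range.
for channel, values in mean_value.items():
    gain, offset = np.polyfit(exposure_s, values, 1)
    max_dev = np.max(np.abs(values - (gain * exposure_s + offset)))
    print(f"{channel}: gain={gain:.1f} DN/s, offset={offset:.2f} DN, "
          f"max deviation={max_dev:.2f} DN")
```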


From here, you can take a picture of a room (or something else) under controlled conditions where you already know the answer, and validate your curves.


