I have a scientific application involving a very bright image - basically a laser beam cross-section - in which I want to detect very small variations in brightness (around 0.01% or less).
Under normal conditions with a linear sensor, if I set the exposure so that maximum brightness sat just below the saturation level and 'zero' represented total darkness, I would need at least a 14-bit sensor to even see the sorts of variations I'm looking for (for example, with a 14-bit sensor the theoretical maximum brightness resolution is one part in 16,384, or about 0.006%). To make a meaningful measurement of the variations I'd need at least a 16-bit sensor, with a resolution of one part in 65,536, or about 0.0015%. In practice, however, I can't find a 16-bit sensor on the market at reasonable cost (I think there may be medium-format digital backs with this bit depth, but they have far more spatial resolution than I need and are prohibitively expensive).
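For concreteness, the bit-depth arithmetic above is just 1/2^N expressed as a percentage of full scale; here's a quick Python sketch of it (nothing here beyond the numbers already quoted):

    # Fractional brightness resolution of an ideal linear N-bit sensor,
    # assuming full scale runs from total darkness to just below saturation.
    for bits in (8, 10, 12, 14, 16):
        levels = 2 ** bits
        print(f"{bits}-bit: one part in {levels:,} = {100.0 / levels:.4f}% of full scale")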
In reality, however, I'm not at all interested in the region below 99.99% of maximum brightness, so what I'd really like to do is use a standard camera's 8- or 10-bit range to cover just the region from 99.99% to 100% of maximum brightness - in other words, reset the baseline so that anything below 99.99% of maximum appears dark. Maybe I'm being thick-headed, but I can't think of a way of achieving this. Simply reducing the exposure time might make 99.99% 'dark' and 100% 'barely visible', but I'm not sure how I would then expand the range so that 100% became 'near saturation' again: just increasing the gain might work, but I'm sceptical because of the extra noise that would entail.
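To spell out what I mean by 'resetting the baseline', here's a sketch of the remapping I have in mind, written in Python purely for illustration (the function name and the 99.99% offset fraction are just placeholders for whatever the hardware would actually do - and the whole point is that this only helps if it happens before digitisation):

    import numpy as np

    def stretch_top_of_range(signal, full_scale, offset_frac=0.9999):
        """Map the band [offset_frac * full_scale, full_scale] onto [0, 1].

        Applied to the analogue signal (an offset followed by gain), this
        would spread the top 0.01% of the range across the sensor's full
        digital range; applied to already-quantised 8-bit data it cannot
        recover detail finer than the original LSB.
        """
        baseline = offset_frac * full_scale
        stretched = (signal - baseline) / (full_scale - baseline)
        return np.clip(stretched, 0.0, 1.0)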
With this many photons in play there must be a way of doing it, but I can't seem to figure it out. Does anyone have any suggestions?