Wednesday, 28 November 2018

raw - How many bits of data are typically actually captured by a digital camera sensor?


In a comment on this question someone suggested that camera sensors typically only output 12-14 bits of data. I was surprised, because that would mean that 24-bit color is only useful for photo manipulation (where the added bits reduce the rounding noise that accumulates when intermediate values are interpolated over multiple successive manipulations).


Does anyone know enough about camera sensors to authoritatively address the 12-14 bit claim? If so, what are typical encodings?



Answer



The photosites of a digital sensor are actually analog devices. They don't really have a bit depth at all. However, in order to form a digital image, an analog-to-digital converter (A/D converter) samples the analog signal at a given bit depth. This is normally advertised in the specs of a camera — for example, the Nikon D300 has a 14-bit A/D converter.
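To make that quantization step concrete, here is a minimal Python sketch of an idealized, noise-free A/D conversion; the quantize helper and the normalized 0.0-1.0 signal level are illustrative assumptions, not how any particular camera's converter actually works.

    def quantize(signal_fraction, bits):
        """Map a normalized analog level (0.0-1.0) to an integer code at the given bit depth."""
        levels = 2 ** bits
        return min(int(signal_fraction * levels), levels - 1)

    # A 14-bit converter distinguishes 16384 levels per photosite; an 8-bit value only 256.
    print(2 ** 14, 2 ** 8)      # 16384 256
    print(quantize(0.5, 14))    # 8192
    print(quantize(0.5, 8))     # 128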



But keep in mind that this is per channel, whereas 24-bit color usually means 8 bits per channel. Some file formats — and working spaces — use 16 bits per channel instead (for 48 bits total), and some use even more than that.
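As a quick bit of bookkeeping (purely illustrative, assuming three color channels after demosaicing):

    channels = 3
    print(channels * 8)     # 24-bit color:  8 bits per channel
    print(channels * 16)    # 48-bit color: 16 bits per channel
    print(channels * 14)    # 14-bit A/D output across three channels: 42 bits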


This is partly so the extra precision can reduce accumulated rounding errors (as you note in your question), but it's also because human vision isn't linear, and so the color spaces we use tend not to be either. Switching from a linear to a "gamma compressed" curve is a lossy operation (see one of the several questions about raw files), so having more bits simply means less loss, which is better if you change your mind about exposure or curves and no longer have access to the RAW file.
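As a rough illustration of that loss, here is a small Python sketch. It assumes a simple power-law gamma of 2.2 rather than a full sRGB transfer curve, and idealized 14-bit linear sensor values; the helper names are made up for the example.

    GAMMA = 2.2

    def encode(linear, out_bits):
        """Gamma-compress a normalized linear value and quantize it to out_bits."""
        max_code = 2 ** out_bits - 1
        return round((linear ** (1 / GAMMA)) * max_code)

    def distinct_after_encoding(sensor_bits, out_bits):
        """Count how many linear sensor levels remain distinguishable after encoding."""
        max_level = 2 ** sensor_bits - 1
        return len({encode(i / max_level, out_bits) for i in range(max_level + 1)})

    print(distinct_after_encoding(14, 8))     # ~254: an 8-bit file collapses most of the 16384 input levels
    print(distinct_after_encoding(14, 16))    # 16384: a 16-bit file keeps them all distinct

The point of the sketch is just that the same gamma curve discards far less information when the result is stored at a higher bit depth.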

