Thursday, 23 August 2018

Why does an X megapixel sensor produce X MB of data (in image files)?



  • Suppose I have a 1-megapixel sensor; that means I have 1*10^6 (1 mega) pixels.

  • If each pixel records the intensity of its color at 8-bit depth, then since 8 bits = 1 byte, each pixel takes 1 byte.

  • Then we have number_of_pixels * 1 byte = 1*10^6 bytes = 1 megabyte of data (a quick sketch of this arithmetic follows below).
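
As a quick sanity check of that arithmetic, here is a minimal Python sketch; the resolutions and bit depths are just illustrative numbers, not values from any particular camera:

    # Naive, uncompressed size of raw sensor data: one brightness value per pixel.
    def raw_size_mb(megapixels, bits_per_pixel):
        pixels = megapixels * 10**6
        return pixels * bits_per_pixel / 8 / 10**6   # bytes -> decimal MB

    for mp in (1, 24):
        for bits in (8, 12, 14):
            print(f"{mp} MP at {bits}-bit: {raw_size_mb(mp, bits):.1f} MB uncompressed")
    # 1 MP at 8-bit comes to 1.0 MB, matching the estimate above;
    # 24 MP at 14-bit comes to 42.0 MB before any compression.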



So why, when most sensors go far beyond 8-bit depth, do we still get image files whose size is very close to the camera's megapixel count?



Answer



To start with, the sensor doesn't output any color. Each pixel only records a single value: how much light struck that pixel. The number of bits determines how fine the steps between brightness levels can be. That's why a 12-bit or 14-bit file can record much finer gradations of lightness than an 8-bit file.
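
To put numbers on those bit depths, a trivial sketch of the arithmetic:

    # Distinct brightness levels a single pixel value can take at each bit depth.
    for bits in (8, 12, 14):
        print(f"{bits}-bit: {2**bits:,} levels")
    # 8-bit: 256 levels; 12-bit: 4,096 levels; 14-bit: 16,384 levels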


But raw files are also compressed, normally in a lossless manner. If the sensor's pixel wells produce fewer unique values, the data can be compressed to a smaller size than if more of the 2^12 or 2^14 possible tonal values are actually present. Raw files from my 24MP camera generally run anywhere from 22MB to 29MB each depending on the content. Some cameras even use lossy compression to store raw files.
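
As an illustration of why content matters for lossless compression, here is a rough sketch using Python's zlib (which is not the compression scheme any particular camera actually uses): data with only a few distinct values squeezes down much further than data spread across the full 14-bit range.

    import zlib
    import numpy as np

    rng = np.random.default_rng(0)
    n = 1_000_000  # one million 14-bit samples stored in 16-bit words

    # "Flat" scene: values confined to a narrow range (e.g. an even grey wall).
    flat = rng.integers(8000, 8016, n, dtype=np.uint16)
    # "Busy" scene: values spread across the whole 14-bit range.
    busy = rng.integers(0, 2**14, n, dtype=np.uint16)

    for name, data in (("flat", flat), ("busy", busy)):
        raw = data.tobytes()
        print(f"{name}: {len(raw)/1e6:.1f} MB -> {len(zlib.compress(raw, 9))/1e6:.1f} MB")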


The way color is derived is by filtering each pixel for one of three colors: red, green, or blue. All that the pixel well on the other side of the filter measures is how much (i.e. how bright) light was allowed to pass through. Each filter still lets through some light of colors other than its own; the further a color is from the filter's color, though, the less of it makes it through to be recorded by the pixel well. Some green gets past both the red and blue filters, and some red and blue get past the green filter.

By comparing the differences in brightness of adjacent and surrounding pixels filtered for different colors, the process known as debayering or demosaicing can interpolate an R, G, and B value for every pixel. Only after the color has been interpolated is the value of each color for each pixel stated, using 8 bits per color for 24 bits per pixel.

In the case of JPEG this data is also compressed. Basically, JPEG records which pixels share the same exact combination of R, G, and B out of all the different combinations contained in the image. That is why images that are mostly uniform colors can be compressed smaller than images that contain almost every possible combination of colors.
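
Here is a very rough sketch of the idea behind bilinear demosaicing, assuming an RGGB Bayer layout; real raw converters use far more sophisticated interpolation than this:

    import numpy as np

    def bilinear_demosaic(bayer):
        # bayer: H x W array of brightness values behind an RGGB filter mosaic.
        # Each pixel's two missing colors are filled in from neighbouring pixels
        # that were filtered for those colors (edges wrap around, which is fine
        # for a sketch but not for real use).
        h, w = bayer.shape
        rgb = np.zeros((h, w, 3))
        known = np.zeros((h, w, 3))

        # Scatter the measured samples into their color planes.
        layout = [((0, 0), 0),  # red:   even rows, even cols
                  ((0, 1), 1),  # green: even rows, odd cols
                  ((1, 0), 1),  # green: odd rows,  even cols
                  ((1, 1), 2)]  # blue:  odd rows,  odd cols
        for (r0, c0), c in layout:
            rgb[r0::2, c0::2, c] = bayer[r0::2, c0::2]
            known[r0::2, c0::2, c] = 1.0

        # Fill each missing value with the average of the known 3x3 neighbours.
        out = rgb.copy()
        for c in range(3):
            acc = np.zeros((h, w))
            cnt = np.zeros((h, w))
            for dy in (-1, 0, 1):
                for dx in (-1, 0, 1):
                    acc += np.roll(rgb[:, :, c], (dy, dx), axis=(0, 1))
                    cnt += np.roll(known[:, :, c], (dy, dx), axis=(0, 1))
            filled = acc / np.maximum(cnt, 1)
            out[:, :, c] = np.where(known[:, :, c] > 0, rgb[:, :, c], filled)
        return out

Note the size change this implies: the mosaic holds one value per pixel, while the demosaiced output holds three, which is exactly where the jump in the TIFF example below comes from.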


If you output a 28-30MB raw file from a 24MP camera after debayering it into a 16-bit TIFF, the file will very likely be over 100MB in size, because it records 16 bits for each of three colors for every pixel.
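
The size jump follows directly from the arithmetic (decimal megabytes; the actual file will differ a little because of TIFF headers and any optional compression):

    megapixels = 24
    channels = 3           # R, G, B after demosaicing
    bytes_per_channel = 2  # 16-bit
    tiff_mb = megapixels * 10**6 * channels * bytes_per_channel / 10**6
    print(f"Uncompressed 16-bit TIFF from {megapixels} MP: about {tiff_mb:.0f} MB")
    # -> about 144 MB, comfortably over 100MB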


