Monday 8 May 2017

sensor - RAW files store 3 colors per pixel, or only one?


Ken Rockwell says that camera makers consider the individual R/G/B sensors when they talk about megapixels. So the image below would be a 6x6 pixel camera, not 3x3 as you would imagine.



[Image: grid of individual R, G and B sensor elements]


If that's true, a RAW file would contain only one color value per pixel (R, G or B), stored as a 10-, 12- or 14-bit number.


My confusion comes from reading, in some places, things like:



  • RAW files store an average of the two green sensors per pixel.

  • RAW files use 12 bits per pixel, but there are 3 colors so that's actually 36 bits per pixel.


Which would obviously be false if Ken's claim is correct.


So what's the truth?



Answer




Raw files don't really store any colors per pixel. They only store a single brightness value per pixel.


It is true that with a Bayer mask, the light reaching each pixel well is filtered by either a Red, Green, or Blue filter¹. But there's no hard cutoff where only green light gets through to a green-filtered pixel or only red light gets through to a red-filtered pixel. There's a lot of overlap. A lot of red light and some blue light gets through the green filter. A lot of green light and even a bit of blue light makes it through the red filter, and some red and green light is recorded by the pixels that are filtered with blue.


[Image: spectral response curves of the Bayer filter colors]


Since a raw file is a set of single luminance values for each pixel on the sensor, there is no actual per-pixel color information in a raw file. Color is derived by comparing adjoining pixels that are each filtered for one of three colors by the Bayer mask. But just as putting a red filter in front of the lens when shooting black and white film didn't result in a monochromatic red photo (or a B&W photo where only red objects have any brightness at all), the Bayer mask in front of monochromatic pixels doesn't create color either. What it does is change the tonal value (how bright or how dark the luminance of a particular color is recorded) of various colors by differing amounts. When the tonal values (gray intensities) of adjoining pixels filtered with the three different colors of the Bayer mask are compared, colors can be interpolated from that information. This is the process we refer to as demosaicing.
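
To make "comparing adjoining pixels" concrete, here is a minimal sketch of bilinear demosaicing of an RGGB mosaic in plain NumPy/SciPy. It is a toy illustration of the idea, not the algorithm any particular camera or raw converter actually uses.

    import numpy as np
    from scipy.ndimage import convolve

    def bilinear_demosaic(mosaic):
        """Toy bilinear demosaic of an RGGB Bayer mosaic (2-D array, one luminance value per pixel)."""
        h, w = mosaic.shape
        r_mask = np.zeros((h, w)); r_mask[0::2, 0::2] = 1   # red photosites: even rows, even columns
        b_mask = np.zeros((h, w)); b_mask[1::2, 1::2] = 1   # blue photosites: odd rows, odd columns
        g_mask = 1 - r_mask - b_mask                        # green photosites: everywhere else

        # Kernels that fill in each missing value by averaging the nearest same-color neighbours.
        k_rb = np.array([[1, 2, 1], [2, 4, 2], [1, 2, 1]]) / 4.0
        k_g  = np.array([[0, 1, 0], [1, 4, 1], [0, 1, 0]]) / 4.0

        r = convolve(mosaic * r_mask, k_rb)
        g = convolve(mosaic * g_mask, k_g)
        b = convolve(mosaic * b_mask, k_rb)
        return np.dstack([r, g, b])

    # The 6x6 example from the question: 36 single values in, 36 R-G-B triplets out.
    mosaic = np.random.default_rng(0).uniform(size=(6, 6))
    print(bilinear_demosaic(mosaic).shape)                  # (6, 6, 3)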


A lot of math is done to assign an R, G, and B value to each pixel, and there are many different models for doing this interpolation. How much bias is given to red, green, and blue in the demosaicing process is what sets white/color balance. The gamma correction and any additional shaping of the light response curves is what sets contrast. But in the end an R, G, and B value is assigned to every pixel. In the 6x6 pixel example from the question, the result of demosaicing would be a 36-pixel image in which every pixel has a Red, a Green, and a Blue value.
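
A minimal sketch of the white balance and gamma steps described above, with made-up per-channel multipliers and a generic gamma of 2.2 (real converters use per-camera profiles and tone curves):

    import numpy as np

    def develop(rgb_linear, wb=(2.0, 1.0, 1.5), gamma=2.2, white_level=16383):
        """Toy 'development' of demosaiced linear data: white balance, then gamma; 14-bit in, 8-bit out.

        wb          -- illustrative per-channel multipliers, not real camera data
        white_level -- 2**14 - 1 for a 14-bit raw file
        """
        out = rgb_linear.astype(np.float64) / white_level    # normalise to 0..1
        out = out * np.array(wb)                             # per-channel bias sets white/color balance
        out = np.clip(out, 0.0, 1.0) ** (1.0 / gamma)        # gamma shapes the tonal response (contrast)
        return (out * 255).round().astype(np.uint8)          # 8 bits per channel for display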


A little bit of resolution is lost in translation. In terms of the number of alternating black and white lines per inch or mm that can be resolved, the absolute limit of a sensor with an RGGB Bayer mask and well-done demosaicing is about 1/√2 of that of a monochromatic sensor that has no Bayer mask and thus needs no demosaicing (but can only see in black and white).
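
A quick back-of-the-envelope check of that 1/√2 figure, using a hypothetical 6000-pixel-wide sensor:

    from math import sqrt

    sensor_width_px = 6000                   # hypothetical sensor width
    mono_limit = sensor_width_px / 2         # Nyquist: line pairs a mask-less mono sensor can resolve
    bayer_limit = mono_limit / sqrt(2)       # approximate limit after Bayer filtering and demosaicing
    print(round(mono_limit), round(bayer_limit))   # 3000 vs 2121 line pairs across the frame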


Even when your camera is set to save raw files, the image you see on the LCD screen on the back of your camera just after you take the picture is not the unprocessed raw data. It is a JPEG preview generated by the camera by applying the in-camera settings to the raw data. This preview image is appended to the raw file along with the data from the sensor and the EXIF information that contains the in-camera settings at the time the photo was shot.
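
You can pull that embedded preview out yourself. A minimal sketch using the third-party rawpy library (an assumption; the file name is hypothetical):

    import rawpy

    with rawpy.imread("IMG_0001.CR2") as raw:
        thumb = raw.extract_thumb()                  # the camera-generated preview, not the sensor data
        if thumb.format == rawpy.ThumbFormat.JPEG:   # most cameras embed a finished JPEG
            with open("preview.jpg", "wb") as f:
                f.write(thumb.data)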


The in-camera development settings for things like white balance, contrast, shadows, highlights, etc. do not affect the actual data from the sensor that is recorded in a raw file. Rather, all of those settings are listed in another part of the raw file.


When you open a "raw" file on your computer you see one of two different things:





  • The preview JPEG image created by the camera at the time you took the photo. The camera used the settings in effect when you took the picture and appended that preview to the raw data in the .cr2 file. If you're looking at the image on the back of the camera, it is this JPEG preview you are seeing.




  • A conversion of the raw data by the application you used to open the "raw" file. When you open a 12-bit or 14-bit 'raw' file in your photo application on the computer, what you see on the screen is an 8-bit rendering of the demosaiced raw file that is a lot like a jpeg, not the actual monochromatic Bayer-filtered 14-bit file. As you change the settings and sliders the 'raw' data is remapped and rendered again in 8 bits per color channel.




Which you see will depend on the settings you have selected for the application with which you open the raw file.
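
For the second case, a minimal sketch (again assuming the third-party rawpy library and a hypothetical file name) shows the difference between the single-channel mosaic stored in the file and the demosaiced 8-bit rendering an application puts on screen:

    import rawpy

    with rawpy.imread("IMG_0001.CR2") as raw:
        print(raw.raw_image.shape)                   # e.g. (4000, 6000): one value per photosite
        rgb = raw.postprocess(use_camera_wb=True)    # demosaic and render with the as-shot white balance
        print(rgb.shape, rgb.dtype)                  # e.g. (4000, 6000, 3) uint8: 8 bits per channel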


If you are saving your pictures in raw format, when you do post processing you'll have the exact same information to work with no matter what development settings were selected in camera at the time you shot. Some applications may initially open the file either by showing the JPEG preview or by applying the in-camera settings active at the time the image was shot to the raw data, but you are free to change those settings, without any destructive data loss, to whatever else you want in post.


Canon's Digital Photo Professional will open a .cr2 raw file with the same Picture Style as was selected in camera when it was shot. All you have to do to change it is use the drop-down menu and select another Picture Style. You can even create a "recipe" for one image and then batch apply it to all of the images before beginning to work with them. Other manufacturers' raw processing software is similar, and there's usually an option to have the application open an image with the in-camera development settings applied.



With third party raw processing applications such as Adobe's Lightroom or Camera Raw, Apple's Aperture or Photos, Phase One's Capture One Pro, DxO Labs' OpticsPro, etc., getting images to display according to the in-camera settings can be a bit trickier. Adobe products, for instance, ignore almost all of the maker notes section of a raw file's EXIF data, where many manufacturers include at least some of the information about in-camera settings.


¹ The actual colors of the Bayer mask in front of the sensors of most color digital cameras are: Blue - a slightly violet version of blue centered at 450 nanometers; Green - a slightly bluish version of green centered on about 540 nanometers; and Red - a slightly orange version of yellow. What we call "red" is the color we perceive for light at about 640 nanometers in wavelength. The "red" filters on most Bayer arrays allow the most light through at somewhere around 590-600 nanometers. The overlap between the "green" and "red" cones in the human retina is even closer than that, with "red" centered at about 565 nanometers, which is what we perceive as yellow-green.

