Monday 7 January 2019

night - Why is it that when the green channel clips, it turns into blue?


I was taking photos of snow sculptures at night, and I noticed something strange on my Olympus OM-D E-M10 Mark II.


This green-lit sculpture

[photo: sculpture lit by green light]

turns blue when the green channel gets blown,

[photo: the same sculpture, now rendered blue]

and then eventually turns white.


Why would the green channel clip this way?



Answer




The light you describe as "green" also contains components of "red" and "blue" light. They are much weaker than the green component, but they are there.


Once the exposure is bright enough for the green channel to be fully saturated, increasing the exposure further cannot raise the value recorded in the green channel above 100%. If green is fully saturated at 1/100 second, it will show the green channel at 100%. If we double the exposure time to 1/50 second, green will still be recorded at 100%. That is the maximum value that can be recorded for each channel.
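A minimal sketch of this clipping behavior, assuming a perfectly linear sensor that saturates at 100% (represented as 1.0); the intensity and exposure numbers are illustrative, not measurements:

```python
def recorded_value(channel_intensity, exposure):
    """Recorded channel value: linear in exposure until full saturation at 1.0."""
    return min(channel_intensity * exposure, 1.0)

# Green just saturates at a baseline exposure (say 1/100 s -> exposure = 1.0).
print(recorded_value(1.0, 1.0))  # 1.0 (100%)

# Doubling the exposure time (1/50 s) cannot push green past 100%.
print(recorded_value(1.0, 2.0))  # still 1.0
```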


Increasing the exposure further does increase the value recorded in the red and blue channels until exposure reaches a point for each where they too are fully saturated. Look at it this way: if there is 10X as much green as blue reflected by your sculpture, exposing ten times brighter than needed to fully saturate the green channel will result in fully saturating both the green and the blue channels. The camera will have no way of showing that green is 10X brighter than blue. It will show both channels at the same value: 100%.


When all three channels are fully saturated we get pure white. It matters not that there is much more green than red or blue light striking the sensor. As long as there is at least just enough of each color to fully saturate each color channel we will see that area rendered as white.
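The whole green-to-blue-to-white progression can be sketched the same way. The scene intensities below are made-up numbers for a green-lit subject with a little blue leakage and very little red; they are not measured from the photos:

```python
def recorded(intensity, exposure):
    # Linear response, clipped at full saturation (1.0).
    return min(intensity * exposure, 1.0)

# Hypothetical relative intensities reaching the sensor: mostly green,
# some blue, very little red.
red, green, blue = 0.04, 1.0, 0.15

for exposure in (1, 4, 10, 30):
    rgb = tuple(recorded(c, exposure) for c in (red, green, blue))
    print(exposure, rgb)

# At exposure 1, the result looks green: (0.04, 1.0, 0.15).
# By exposure 10, green AND blue are both clipped at 1.0 while red is not,
# so the rendered color shifts toward cyan/blue.
# By exposure 30, all three channels are clipped: pure white.
```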


Also, the Bayer masks on digital sensors do not have hard cutoff points between colors: some green light gets through the red and blue filters, some red and blue light gets through the green filter, and so on.


[chart: spectral response curves of the Sony IMX249 sensor]
The blue line shows what percentage of light along the entire visible spectrum is counted by the blue-filtered sensels of the Sony IMX249 sensor. The green and red lines show the same for green and red filtered sensels. Notice that above about 820nm all three are more or less equally sensitive. That is why digital sensors have an IR filter in the sensor stack. Also notice that the response of the red and green filtered sensels begins to increase as the wavelength moves below 420nm, which is why a UV filter is also included in the sensor stack.


It's much like when we use a color filter on the lens for shooting black and white film. If we use a red filter some of the light from green and blue objects still makes it through the filter. Those green and blue objects just appear darker than they otherwise would. But they do not become totally black.
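A rough sketch of that red-filter effect, using made-up transmission fractions rather than real filter data:

```python
# Hypothetical fraction of each color of light a red contrast filter passes.
red_filter = {"red": 0.90, "green": 0.25, "blue": 0.10}

# Equally bright red, green, and blue objects in the scene.
scene_brightness = 0.8

for color in ("red", "green", "blue"):
    on_film = scene_brightness * red_filter[color]
    print(f"{color} object records at {on_film:.2f}")

# The green and blue objects come out darker than the red one,
# but neither records as totally black.
```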


So even if the light illuminating your sculpture were pure green, some of that light would get through the red and blue filters on your camera's sensor and be registered by the "red" and "blue" pixel wells. Overexpose enough and you will fully saturate all three channels.


From a comment:




"That we can see blue objects through a red filter doesn't necessarily imply that the filter passes a significant amount of blue. It may just mean that the blue object has significant reflection in the red part of the spectrum. E.g. the color #3f00ff is also blue, but has a non-negligible red component."



Regardless of the wavelength, light that passes through the red filter is included in the single monochromatic luminance value for the red filtered pixels. It matters not if the light is red, green, or blue - the photons allowed to pass into that sensel (pixel well) are all recorded the same. It's just that a higher percentage of the red light that falls on a red filter is allowed through than the percentage of blue light that falls on a red filter. But what gets through is counted as photons, not red photons or blue photons or green photons.
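That photon-counting idea can be sketched as follows; the transmission fractions are illustrative assumptions, not real IMX249 data:

```python
# Fraction of each color of light a red filter passes (made-up numbers).
red_filter_transmission = {"red": 0.90, "green": 0.25, "blue": 0.10}

def sensel_count(photons_by_color, transmission):
    # Every photon that gets through is counted identically -- the result is
    # one monochrome total; the photons' color identity is lost.
    return sum(n * transmission[c] for c, n in photons_by_color.items())

incoming = {"red": 100, "green": 1000, "blue": 50}
print(sensel_count(incoming, red_filter_transmission))  # 345.0
```

Note that most of the count here actually comes from green light, yet the sensel records only a single number.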


Essentially what we have with a raw file from a Bayer masked digital sensor is three monochrome images: one made up of half the sensor's pixel wells filtered for green, one made up of one quarter of the sensor's pixel wells filtered for red, and one made up of one quarter of the sensor's pixel wells filtered for blue. Just as with shooting black and white film with color filters, some light from the entire visible spectrum will make it through each filter. We can take three B&W prints filtered for the three color channels and combine them to produce a color print. Digital works on the same principle. So do the cones in the human retina.
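Combining the three monochrome planes into a color image can be sketched with NumPy; the tiny 2x2 planes and their values are toy data:

```python
import numpy as np

# Three hypothetical demosaiced monochrome planes (values in 0..1),
# one per Bayer color channel.
red   = np.full((2, 2), 0.2)
green = np.full((2, 2), 1.0)
blue  = np.full((2, 2), 0.3)

# Stack them along a new last axis to form one color image,
# analogous to combining three filtered B&W prints into a color print.
rgb = np.stack([red, green, blue], axis=-1)
print(rgb.shape)  # (2, 2, 3)
```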

