Thursday 21 April 2016

camera basics - Why are the color spaces we have access to incomplete?


The question, then:


If all colors are combinations of red, green and blue, and my monitor's pixels use all three, why is its color space limited to so small a portion of the actual complete color space? What colors are we NOT seeing and why?


Similarly, if a camera captures all three, why can it not capture the entire visible color space?


It's that last bit that may differentiate this question from the one referenced. It's one thing to know that there are a few practically available spaces smaller than and contained by the visible space. But it's perfectly possible to know that and have no idea how to explain what colors are in the technologically accessible spaces and which aren't. And since those spaces are bounded, there has to be a logic to what's in them and what isn't. I'd love to be able to answer that - what colors do I see in the world that I can't ever see on a screen or a printed image (using one of the color spaces smaller than the visible color space)?



Answer




why is its color space limited to so small a portion of the actual complete color space?




Because the "red", "green" and "blue" which your monitor uses are pale. Probably not noticeably so, but still pale. You would probably not be surprised if a monitor that used visibly pale colours were said to have a small colour space.


No matter how pale the "red", "green" and "blue" are (and the same goes for ANY other set of three distinct colours), it is always possible to reproduce any colour with them, provided you are allowed a negative amount of each. Physically, however, negative light is impossible. (The demonstration at the end of this answer makes this concrete.)


And no matter how saturated the "X", "Y" and "Z" you choose are, you cannot practically reproduce every visible colour with them, even if they are monochromatic (fully saturated); see the reasoning below.



Similarly, if a camera captures all three, why can it not capture the entire visible color space?



Because of the Luther-Ives condition (also called the Maxwell-Ives criterion in other places).


It is not entirely correct to say that a digital camera does not capture the entire visible colour space until you define what "capturing the entire visible colour space" means. The problem is not that the camera fails to respond to some colours (virtually all digital cameras produce a positive response to every wavelength between 400 and 700 nm); the problem is that cameras break the rules of human metamerism: a camera maps different sets of input SPDs to the same response. Every camera ever produced responds identically to some pairs of SPDs (many pairs, in fact) that an observer would see as different colours, and vice versa: it responds differently to some pairs of SPDs that an observer would see as identical.
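
Here is a minimal sketch of that failure, assuming invented Gaussian sensitivity curves for both the camera channels and the cones. None of the numbers below are real measurements; the only thing that matters is that the two sets of curves differ, which is exactly the situation with real hardware and real eyes:

    import numpy as np

    wl = np.linspace(400, 700, 301)           # wavelength grid, nm

    def gauss(center, width):
        return np.exp(-0.5 * ((wl - center) / width) ** 2)

    # Invented Gaussian sensitivities: three camera channels vs. three
    # cone types. Real curves differ from these, and from each other.
    camera = np.stack([gauss(610, 30), gauss(540, 35), gauss(460, 30)])
    cones  = np.stack([gauss(565, 35), gauss(545, 30), gauss(445, 25)])

    # Start with a smooth SPD, then perturb it inside the camera's null
    # space: the light changes, but the camera's three numbers cannot.
    spd_a = gauss(520, 100)
    null_vec = np.linalg.svd(camera)[2][3]    # orthogonal to all camera rows
    spd_b = spd_a + 0.1 * null_vec / np.abs(null_vec).max()  # stays nonnegative

    print(camera @ spd_a - camera @ spd_b)    # ~[0 0 0]: same to the camera
    print(cones  @ spd_a - cones  @ spd_b)    # nonzero: two colours to the eye

The perturbation lives in the camera's null space, so the camera cannot tell the two spectra apart, while the cones, having different curves, register two different colours.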


Here's an example of trying to deduce true colour from Nikon D70 data, taken from http://theory.uchicago.edu/; it shows an optimal camera response transformed into XYZ space:

[figure: Nikon D70 CIE best fit]


This graph shows how well colours can be reproduced. Knowing that CIE XYZ is a space built on imaginary, super-saturated primaries, you can see that the colour reproduction accuracy is a trainwreck. And to top it off, D70 image data gets clipped from the bottom (negative values discarded) when transformed into XYZ space, which is in a sense a gamut limitation, because XYZ is usually the widest colour space used after RAW processing. The negative values are lost forever (if they were ever useful).
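
The clipping step itself is trivial; here is a sketch with a made-up camera-to-XYZ matrix (the numbers are illustrative only, not the D70's actual matrix):

    import numpy as np

    # Hypothetical camera-RGB -> XYZ matrix (illustrative values only)
    cam_to_xyz = np.array([[ 0.7,  0.2,  0.1],
                           [ 0.3,  0.9, -0.2],
                           [-0.1,  0.1,  1.0]])

    cam_rgb = np.array([0.05, 0.10, 0.90])    # a deeply saturated blue response
    xyz = cam_to_xyz @ cam_rgb                # -> [0.145, -0.075, 0.905]
    xyz = np.clip(xyz, 0.0, None)             # the negative Y is lost for good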




I'd love to be able to answer that - what colors do I see in the world that I can't ever see on a screen or a printed image (using one of the color spaces smaller than the visible color space)?



Look at any CD or DVD under bright light and you will see colours which will not be printed or displayed by consumer technology in the near future.


Regarding prediction: if you mark the x and y chromaticities of the primaries of a device or colour space ("primaries" being the exact term for its "red", "green" and "blue") on the CIE xy chromaticity diagram, you will see which parts of the visible colour space that space does not favour. sRGB, the common colour space of modern LCDs, is the usual example. The colours an output device can reproduce lie within the smallest convex polygon containing all of its marked primaries; a numerical version of that test is sketched below.
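
Here is that containment test for sRGB. The primary coordinates are the standard sRGB chromaticities; the test itself is ordinary barycentric geometry, and the second sample point is an assumed near-spectral green:

    import numpy as np

    # Standard sRGB primary chromaticities (CIE xy)
    R, G, B = (0.64, 0.33), (0.30, 0.60), (0.15, 0.06)

    def in_gamut(p):
        """True if chromaticity p lies inside the R-G-B triangle."""
        a, b, c = map(np.array, (R, G, B))
        m = np.column_stack([b - a, c - a])
        u, v = np.linalg.solve(m, np.array(p) - a)
        return u >= 0 and v >= 0 and u + v <= 1

    print(in_gamut((0.3127, 0.3290)))   # D65 white point: True
    print(in_gamut((0.07, 0.83)))       # near-spectral green (~520 nm): False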


This is why you cannot reproduce the whole visible colour space with three colours: the visible colour space is bounded by a convex curved figure (the spectral locus), and no triangle whose vertices lie inside or on that figure can contain all of it. To display all visible colours you would need primaries covering the entire spectrum.


Another demonstration: there are sensitivity graphs in the article about the LMS space (they are approximations of the human eye's cone responses). Take three wavelengths x, y and z, with (x1, x2, x3), ..., (z1, z2, z3) being the LMS responses to x, y and z. Now take any fourth wavelength w = (w1, w2, w3) and try to solve the equation system w = a*x + b*y + c*z. The solution (a, b, c), i.e. the amount of each primary needed to reproduce w, will contain at least one negative number no matter which w you pick. The curved drawing of the visible colour space is just an illustration of that. You may use the colour matching functions of XYZ, CIE 1931 or any other space as well; this yields the same result. Here is an Excel spreadsheet for quick experiments.
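
The same experiment can be run in code instead of a spreadsheet. The cone curves below are invented Gaussian approximations of the LMS responses, not the real functions, but the real colour matching functions give the same qualitative outcome, a negative coefficient:

    import numpy as np

    def lms(nm):
        """Toy LMS cone response to monochromatic light (invented Gaussians)."""
        peaks, widths = (565.0, 545.0, 445.0), (35.0, 30.0, 25.0)
        return np.array([np.exp(-0.5 * ((nm - p) / w) ** 2)
                         for p, w in zip(peaks, widths)])

    # Three monochromatic primaries and a fourth wavelength to reproduce
    primaries = np.column_stack([lms(640), lms(530), lms(450)])
    w = lms(500)                              # a cyan-ish target

    a, b, c = np.linalg.solve(primaries, w)   # solve w = a*x + b*y + c*z
    print(a, b, c)                            # a < 0: "remove" red to match cyan

This is the classic colour matching result: to match a spectral cyan with red, green and blue primaries, you would have to add red light to the target side, i.e. subtract it from the mix, which no physical display can do.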


SPD - spectral power distribution.


P.S. It is also worth mentioning that artificial reproduction limits not only saturation but brightness and darkness too; that is another story entirely, and I have yet to see any non-incremental progress in technology that might solve this problem.


