Friday, 6 January 2017

Can we compare the color reproduction accuracy of two camera sensors just by looking at their spectral sensitivity curves?


I have the camera specifications from the manufacturer. The RGB quantum efficiency curve (I think it is also the spectral sensitivity curve) is provided. How can we compare these two cameras' quality of color reproduction directly from the curves? Or do we need to set up an experiment to do this? Thanks.

[image: RGB quantum efficiency curves for the first camera]


[image: RGB quantum efficiency curves for the second camera]



Answer




How can we compare these two cameras' quality of color reproduction directly from the curves?


"which camera can get the RGB value closer to the true RGB value of the object, less RGB channel overlap"



A comparison of color reproduction potential based on the quantum efficiency of the sensor filters is possible.


As others have mentioned, there are many contributing factors to the final color reproduction of a full color camera system. However, the sensor RGB sensitivities are perhaps the largest contributing factor to color reproduction accuracy, and their color reproduction performance can be measured.



What is "true RGB"?


First we have to answer: what is the "true RGB" of a scene? A good definition of "true RGB" would be the relative responses of a human's three retinal cone types to the scene. These cones are called LMS, for long, medium, and short (the wavelength ranges they are most sensitive to).


[image: LMS cone spectral sensitivity curves]


A spectrum of light integrated over these three sensitivity curves yields three LMS values that can be thought of as human RGB values. These are the target values we want to reproduce with our camera if our goal is accurate color reproduction.
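As a concrete illustration, here is a minimal sketch of that integration, assuming the spectrum and the LMS curves are sampled on a common wavelength grid (all the arrays below are random stand-ins, not real measurements):

```python
import numpy as np

# Hypothetical data: a light spectrum and LMS cone sensitivities, all
# sampled on the same wavelength grid (here 400-700 nm in 10 nm steps).
wavelengths = np.arange(400, 701, 10)             # nm, 31 samples
spectrum = np.random.rand(len(wavelengths))       # stand-in for a measured spectrum
lms_curves = np.random.rand(3, len(wavelengths))  # rows: L, M, S sensitivities

# Discrete approximation of integrating the spectrum against each curve;
# the factor of 10 is the wavelength sample spacing in nm.
lms_response = lms_curves @ spectrum * 10

print(lms_response)  # three numbers: the "human RGB" of this spectrum
```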


More commonly, we would target the sensitivities of the XYZ color matching functions instead. These are linear combinations of the LMS functions, so they are effectively interchangeable with the LMS functions.
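For example, one published XYZ-to-LMS transform is the Hunt-Pointer-Estevez matrix. The exact numbers matter less here than the fact that a fixed, invertible 3x3 matrix is all that separates the two spaces (treat the values below as indicative):

```python
import numpy as np

# The Hunt-Pointer-Estevez XYZ-to-LMS matrix (values quoted from memory,
# so treat them as indicative, not authoritative).
M_HPE = np.array([
    [ 0.4002, 0.7076, -0.0808],
    [-0.2263, 1.1653,  0.0457],
    [ 0.0,    0.0,     0.9182],
])

xyz = np.array([0.5, 0.4, 0.3])          # hypothetical tristimulus values
lms = M_HPE @ xyz                        # same color expressed in LMS
xyz_back = np.linalg.solve(M_HPE, lms)   # invert to recover XYZ exactly
```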


[image: XYZ color matching functions]


Color Correction


In a digital camera, when a spectrum is integrated against the camera sensitivities (like the ones you posted), the resulting RGB values are called "camera RGB".


In most digital cameras there is a processing step where a color correction algorithm (M) is used to convert cameraRGB into humanLMS (or XYZ):


M(cameraRGB) = humanLMS



In this case, humanLMS will be a guess. It will not be perfect, and the difference between the guess and the real LMS value a human would have perceived is your color error.


Designing a good M is difficult because it is an underdetermined problem: some cameraRGB values have multiple potential humanLMS values (this is called metamerism), so it's not always possible to know exactly what the correct LMS is. But we can use natural image statistics and machine learning to make a guess at the most likely correct answer.
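To make the metamerism point concrete, here is a small sketch (with made-up sensitivity curves) that constructs two different spectra the camera cannot distinguish but the eye can:

```python
import numpy as np

# Made-up sensitivity curves sampled at 31 wavelengths (rows = channels).
rng = np.random.default_rng(0)
camera = rng.random((3, 31))  # stand-in camera RGB sensitivities
lms = rng.random((3, 31))     # stand-in LMS sensitivities

spectrum1 = rng.random(31)

# Any direction in the null space of the camera matrix is invisible to
# the camera: adding it to a spectrum leaves cameraRGB unchanged.
_, _, Vt = np.linalg.svd(camera)
null_direction = Vt[-1]
spectrum2 = spectrum1 + 0.5 * null_direction  # (may go negative; a real
                                              # metamer would need clipping)

print(camera @ spectrum1 - camera @ spectrum2)  # ~0: identical cameraRGB
print(lms @ spectrum1 - lms @ spectrum2)        # nonzero: different LMS
```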


The most common implementation of M is a 3x3 linear transformation matrix, but if the camera sensitivities are not linear combinations of LMS, then the transformation will contain errors. If the camera sensitivities happen to be linear combinations of LMS, the color error would be zero; this is called the Luther condition. In practice, digital camera sensitivities never satisfy the Luther condition, so there is always color error.
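A minimal sketch of fitting such a 3x3 matrix by least squares, assuming you have camera RGB and reference XYZ values for a set of test patches (the data below is a random stand-in for, e.g., a 24-patch chart):

```python
import numpy as np

# Hypothetical training data: camera RGB and reference XYZ values for
# N test patches.
N = 24
camera_rgb = np.random.rand(N, 3)
target_xyz = np.random.rand(N, 3)

# Least-squares fit of a 3x3 matrix M with camera_rgb @ M ~= target_xyz.
M, *_ = np.linalg.lstsq(camera_rgb, target_xyz, rcond=None)

# The residual after correction is exactly the color error discussed
# above; it would be zero only if the Luther condition held.
corrected = camera_rgb @ M
errors = np.linalg.norm(corrected - target_xyz, axis=1)
print("mean residual color error:", errors.mean())
```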


Comparing Color Reproduction


There are now two factors that play into how accurate our LMS guesses are.


1) the design of our color correction algorithm M


2) how similar our sensor sensitivity curves are to the LMS sensitivities


This gets at the heart of your question: some sensitivities will result in quantifiably more accurate colors than others, because curves closer to (linear combinations of) the LMS sensitivities make it easier to recover the LMS values, i.e. the "true RGB" we desire.
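Factor 2 can itself be quantified without building a camera. One published measure of how well a set of camera curves spans the same subspace as the LMS (or XYZ) curves is the Vora value; a hedged sketch, with random stand-in curves:

```python
import numpy as np

def vora_value(camera_curves, lms_curves):
    """Fraction of the LMS subspace captured by the camera's subspace.

    Both inputs: (n_wavelengths, 3) arrays of sensitivity curves.
    Returns 1.0 when the Luther condition holds, less otherwise.
    """
    Qc, _ = np.linalg.qr(camera_curves)  # orthonormal basis, camera space
    Ql, _ = np.linalg.qr(lms_curves)     # orthonormal basis, LMS space
    return np.linalg.norm(Qc.T @ Ql) ** 2 / 3.0

# Random stand-in curves on a 31-sample wavelength grid.
camera = np.random.rand(31, 3)
lms = np.random.rand(31, 3)
print(vora_value(camera, lms))
```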



Or do we need to set up an experiment to do this?




What might be helpful is ISO standard 17321's "Sensitivity Metamerism Index" (SMI). This calculates color reproduction accuracy based on spectral responses.


https://www.dxomark.com/About/In-depth-measurements/Measurements/Color-sensitivity
http://www.iso.org/iso/iso_catalogue/catalogue_ics/catalogue_detail_ics.htm?csnumber=35835


This index tells you the average perceptual difference between the known colors of a test scene and the colors recorded by your camera after they have been linearly corrected by an optimized 3x3 matrix.


The only problem is that this procedure is done with a full camera, so it measures the combined color error of the sensor, the color correction matrix, the optics, and so on, not the sensor alone.


If you truly want to quantify the error of only the two different sensors, you could do the SMI procedure with the same camera and change only the sensor. Or, instead of a physical experiment with a real camera, you could simulate your camera in software and exclude any optical or demosaicing contribution from the simulated cameraRGB values.
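A hedged sketch of that software route, assuming you have the two sensors' curves and some test reflectance spectra on a shared wavelength grid. Plain Euclidean error in LMS stands in here for the perceptual delta-E a real SMI computation would use (all data below is random placeholder):

```python
import numpy as np

def simulate(curves, reflectances, illuminant):
    # Responses for each surface: integrate reflectance * illuminant
    # against each channel's curve (no optics, no demosaicing).
    return (reflectances * illuminant) @ curves

def mean_corrected_error(sensor, lms, reflectances, illuminant):
    cam = simulate(sensor, reflectances, illuminant)   # cameraRGB
    ref = simulate(lms, reflectances, illuminant)      # "true" LMS
    M, *_ = np.linalg.lstsq(cam, ref, rcond=None)      # best 3x3 correction
    return np.linalg.norm(cam @ M - ref, axis=1).mean()

# Random placeholders: 31 wavelength samples, 99 test reflectance spectra,
# a flat illuminant, and two candidate sensors' sensitivity curves.
n, patches = 31, 99
illuminant = np.ones(n)
reflectances = np.random.rand(patches, n)
lms = np.random.rand(n, 3)
sensor_a = np.random.rand(n, 3)
sensor_b = np.random.rand(n, 3)

# The sensor with the lower residual has the better color reproduction
# potential under this (simplified, non-perceptual) error metric.
print("sensor A:", mean_corrected_error(sensor_a, lms, reflectances, illuminant))
print("sensor B:", mean_corrected_error(sensor_b, lms, reflectances, illuminant))
```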


There are many papers on camera simulation with more information on that, for example: http://color.psych.upenn.edu/simchapter/simchapter.pdf


"CIE Special Metamerism Index: Change in Observer" is another relevant standard meant for comparing color reproduction in humans with slightly varying spectral responses. I think you could apply this to camera spectra as well.


http://link.springer.com/referenceworkentry/10.1007/978-3-642-27851-8_322-1#page-1


