Saturday 18 May 2019

dslr - How would I go about using my SLR to measure the 'greenness' of a photo?


Background


My digital pictures can be read into a computer program like Matlab or R as an m x n x 3 matrix, where m x n is the number of pixels recorded for each of the three (red, green, and blue) channels, and each cell in the matrix holds a number from 0-255 that reflects the brightness observed by the sensor.


I would like to use this information to obtain an objective measure of greenness in a photograph, because I want to attempt to correlate greenness to plant growth (imagine one picture per day of a corn field).


Previous work in this direction has had some success by calculating an index of green either as



  • green % = green/(blue + red) or

  • green divergence = 2*green - red - blue


from webcam images for each of the m x n pixels, but there was no control over the aperture or incident radiation (solar angle).
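As a minimal sketch of those two indices, assuming numpy and a tiny hypothetical 2x2 "image" standing in for a real photo (in practice you would first load the photo with an image library):

```python
import numpy as np

# Hypothetical 2x2 image (m x n x 3, values 0-255) standing in for a real photo
img = np.array([[[10, 200, 30], [60, 120, 60]],
                [[20, 255, 20], [80, 90, 100]]], dtype=float)

r, g, b = img[..., 0], img[..., 1], img[..., 2]

# (r + b is nonzero in this toy data; a real image may need a divide-by-zero guard)
green_pct = g / (r + b)        # green % = green/(blue + red), per pixel
green_div = 2 * g - r - b      # green divergence = 2*green - red - blue, per pixel

print(green_pct.mean(), green_div.mean())
```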



Note that I am not looking for an 'absolute' measure of greenness; the scale and distribution of the numbers do not matter - it just has to provide a consistent relative measure of greenness.


Question


Can I use my SLR to get a robust measure of greenness that is invariant with any or all of the following:



  • cloud cover?

  • time of day?

  • day of year? (this is the only requirement)

  • proportion of sky / ground in the background?


Current Status



I have come up with the following ideas, but I am not sure which are necessary, or which would have no effect on the ratio green/(red + blue):



  1. Take a picture of a white piece of plastic, and use this image to normalize the other values

  2. Fix the aperture

  3. Fix the shutter speed

  4. Set the white balance using a white piece of paper

  5. Take all photos from the same angle

  6. Take all photos at solar noon



Answer




If you can process the RAW files, you'll have a Bayer pixel array comprised of RGRGRG and GBGBGB rows (or possibly RGBGRGBG rows.) You could ignore the R and B pixels, sum the G pixels, and divide by the number of G pixels; because you are averaging rather than summing, the fact that there are twice as many green pixels as red or blue does not skew the result. That should give you the proper weighted average for "green" in your photo. You could then take the averages of red and blue the same way, and compute your green percentage from all three averages.
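A rough sketch of that per-channel averaging, assuming numpy and a hypothetical 4x4 RGGB mosaic (actual RAW decoding, e.g. via a library like rawpy, is outside the scope of this sketch); averaging each channel over its own site count sidesteps the 2:1 green/red pixel ratio:

```python
import numpy as np

# Hypothetical 4x4 RAW mosaic with an RGGB pattern:
#   even rows: R G R G ...
#   odd rows:  G B G B ...
raw = np.arange(16, dtype=float).reshape(4, 4)

R  = raw[0::2, 0::2]   # red sites
G1 = raw[0::2, 1::2]   # green sites on the red rows
G2 = raw[1::2, 0::2]   # green sites on the blue rows
B  = raw[1::2, 1::2]   # blue sites

avg_r = R.mean()
avg_g = np.concatenate([G1.ravel(), G2.ravel()]).mean()
avg_b = B.mean()

green_fraction = avg_g / (avg_r + avg_b)   # green/(red + blue) from channel averages
```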


To be more accurate, you might want to factor in the proper weighting for red, green, and blue sensor pixels, since CMOS sensors have different sensitivities to each wavelength of light. The weights would depend on the sensor, generally. That would be the simple approach.


To account for color cast due to time of day, various types of artificial lighting, etc., it might be more appropriate to preprocess each photo in a tool like Lightroom to correct white balance first, then perform your computation on standard RGB pixel images. Unlike processing RAW sensor data, you would want to weight your calculation by pixel "green purity" rather than averaging the green component overall: the more purely green a pixel is, the higher its weight vs. pixels that are more red or blue. Normalizing white balance before processing should eliminate any need to complicate an otherwise fairly simple computation with tangents designed to account for umpteen factors like cloud cover, time of day, season, etc.


You might still want to account for large areas of pixels that are not part of the subject, such as sky. I can't really help you much in that area without knowing more about exactly what you are trying to achieve. The greenness of a "photograph" overall would probably be best served by computing the ratio of green to red and blue, which would include "sky" pixels.


As for your procedure, it should go without saying that taking the pictures with the same camera settings, under the same illuminant (same intensity and color temperature), and metered against a common baseline such as an 18% gray card will go a long way towards normalizing your results. With digital, any remaining discrepancies can be corrected with RAW processing software and a basic white-balance picker tool, so be sure to shoot in RAW.




To provide some more insight into calculating the "greenness" of your photos: there are the obvious simple ways, such as calculating the weight of green Bayer pixels vs. blue and red, or calculating green purity in relation to red/blue purity of RGB pixels. You might have more luck if you convert to a more appropriate color space, such as HSV (Hue/Saturation/Value, sometimes called HSB, replacing Value with Brightness), and compute your green amount using a curve in hue space. (NOTE: HSL is a different type of color space and would probably not be ideal for computing how much "green" is in a photo, so I would use HSV.) Pure green (regardless of saturation or value) falls at a hue angle of 120°, and the amount of green falls off from there as you move towards red (at 0°) or towards blue (at 240°). Between 240° and 360°, there is zero green in a pixel, regardless of saturation or value.


Fig 1. Hue Plot - Green Purity in Hue Degrees


You can adjust the actual weighting curve to meet your specific needs, however a simple curve could be similar to the following:



range = 240
period = range * 2 = 240 * 2 = 480
scale = 360/period = 0.75
pureGreen = sin(scale * 120)

The value for pureGreen should be 1.0. Greenness could then be computed as follows:


greenness = sin(scale * hue)   for 0 < hue < 240
greenness = 0                  for 240 <= hue <= 360 or hue == 0


The hue is the degree of color from your HSV color value. The range is the half of the period in which green is present to some degree. The scale adjusts the sine curve to our period, such that sin(scale * hue) peaks (returns 1.0) exactly where you have pure green (ignoring that green's intensity). Since greenness is only present in the first half of our period, the calculation only applies when hue is greater than 0° and less than 240°; it is zero for any other hue.
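As a sketch, the piecewise curve could look like this in Python (the names are mine; hue is in degrees):

```python
import math

SCALE = 0.75  # 360 / period, with period = 480, so the sine peaks at hue = 120

def greenness(hue_deg):
    """Green weight in [0, 1]: 1.0 at pure green (120°), 0 outside 0° < hue < 240°."""
    if 0 < hue_deg < 240:
        return math.sin(math.radians(SCALE * hue_deg))
    return 0.0
```

For example, greenness(120) peaks at 1.0, while a yellow-green hue like 60° gets a partial weight and any hue from 240° to 360° gets zero.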


You can adjust the weighting by adjusting the period, the range within which you define that green might be present (i.e. rather than 0 to 240, you might set a constraint like 40 < hue < 200 instead), and define anything outside of that range to have a greenness of 0. It should be noted that this will be mathematically consistent; however, it may not be entirely perceptually accurate. You can of course tweak the formula to move the point of pure green more towards yellow (which might produce more perceptually accurate results), increase the amplitude of the curve so that it plateaus and the band of pure green covers a range of hues rather than a single hue value, etc. For full human perceptual accuracy, a more complex algorithm processed in CIE XYZ and CIE L*a*b* space might be required. (NOTE: The complexity of working in XYZ and L*a*b* space increases dramatically beyond what I've described here.)


To compute the greenness of a photo, you could compute the greenness of each pixel, then produce an average. You could then take the algorithm from there, and tweak it for your specific needs.
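A sketch of that per-pixel average, using Python's standard colorsys module for the HSV step together with the sine curve described above (the 2x2 image is hypothetical):

```python
import colorsys
import math

# Hypothetical 2x2 image as nested [R, G, B] values, 0-255
image = [[[0, 255, 0], [255, 0, 0]],
         [[0, 255, 0], [0, 0, 255]]]

def pixel_greenness(r, g, b):
    # colorsys returns hue in [0, 1); scale to degrees
    hue = colorsys.rgb_to_hsv(r / 255, g / 255, b / 255)[0] * 360
    if 0 < hue < 240:
        return math.sin(math.radians(0.75 * hue))
    return 0.0

values = [pixel_greenness(*px) for row in image for px in row]
photo_greenness = sum(values) / len(values)  # two pure-green pixels out of four
```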


You can find algorithms for color conversions at EasyRGB, such as the one for RGB to HSV:


var_R = ( R / 255 )                     //Red percentage
var_G = ( G / 255 )                     //Green percentage
var_B = ( B / 255 )                     //Blue percentage

var_Min = min( var_R, var_G, var_B )    //Min. value of RGB
var_Max = max( var_R, var_G, var_B )    //Max. value of RGB

del_Max = var_Max - var_Min             //Delta RGB value

V = var_Max                             //Value (or Brightness)

if ( del_Max == 0 )                     //This is a gray, no chroma...
{
    H = 0                               //Hue (0 - 1.0 means 0° - 360°)
    S = 0                               //Saturation
}
else                                    //Chromatic data...
{
    S = del_Max / var_Max

    del_R = ( ( ( var_Max - var_R ) / 6 ) + ( del_Max / 2 ) ) / del_Max
    del_G = ( ( ( var_Max - var_G ) / 6 ) + ( del_Max / 2 ) ) / del_Max
    del_B = ( ( ( var_Max - var_B ) / 6 ) + ( del_Max / 2 ) ) / del_Max

    if      ( var_R == var_Max ) H = del_B - del_G
    else if ( var_G == var_Max ) H = ( 1 / 3 ) + del_R - del_B
    else if ( var_B == var_Max ) H = ( 2 / 3 ) + del_G - del_R

    if ( H < 0 ) H += 1
    if ( H > 1 ) H -= 1
}
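For sanity checking, here is a direct Python translation of that pseudocode; it should agree with the standard library's colorsys, which implements the same HSV model:

```python
import colorsys

def rgb_to_hsv(R, G, B):
    """EasyRGB's algorithm; returns H, S, V each in 0..1 (H * 360 gives degrees)."""
    var_R, var_G, var_B = R / 255, G / 255, B / 255
    var_Min = min(var_R, var_G, var_B)
    var_Max = max(var_R, var_G, var_B)
    del_Max = var_Max - var_Min

    V = var_Max
    if del_Max == 0:                      # gray, no chroma
        return 0.0, 0.0, V

    S = del_Max / var_Max
    del_R = (((var_Max - var_R) / 6) + (del_Max / 2)) / del_Max
    del_G = (((var_Max - var_G) / 6) + (del_Max / 2)) / del_Max
    del_B = (((var_Max - var_B) / 6) + (del_Max / 2)) / del_Max

    if var_R == var_Max:
        H = del_B - del_G
    elif var_G == var_Max:
        H = (1 / 3) + del_R - del_B
    else:
        H = (2 / 3) + del_G - del_R

    if H < 0:
        H += 1
    if H > 1:
        H -= 1
    return H, S, V

# Cross-check one color against the standard library (same HSV model)
mine = rgb_to_hsv(30, 200, 90)
ref = colorsys.rgb_to_hsv(30 / 255, 200 / 255, 90 / 255)
```

For pure green, rgb_to_hsv(0, 255, 0) gives H = 1/3, i.e. 120°.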
