Tuesday, 27 June 2017

post processing - Why is my camera so forgiving for overblown exposure when shooting in RAW?


I've found that my camera (Sony A99) is very forgiving in terms of overexposure when shooting in RAW. By that I mean, I can overexpose by a stop or so, and when I get home for post-processing I can pull the exposure back down and get all the details back.


Of course this wouldn't work in JPEG. There's no data beyond the rightmost edge of the histogram. But it's not the same with RAW: data magically comes back into the histogram. Why? Does the camera reserve an area of the histogram in case I make a mistake? If so, doesn't this mean some latitude is lost if I shoot correctly (perfectly exposed)?


Also, why is this only available for overexposure and not underexposure? I don't think I'd be able to pull details out of crushed black areas.




Answer



This is one of the benefits you get from shooting raw.


You can't recover highlight or shadow detail from a JPEG because it has 8 bits of color depth per color component,1 and it's mapped so that the lowest pixel value is interpreted as "black," and the highest is "white." There simply is nothing below black or above white. The creators of JPEG did this because 8 bpc is adequate for humans to perceive a properly-exposed full-color image.2 The human eye has greater dynamic range than JPEG allows, but it can't see that full range all the time.3
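
To make that concrete, here's a minimal sketch in Python with NumPy. The scene values, the bit depths, and where "display white" lands are all assumptions made up for illustration, not taken from any particular camera or file format:

    import numpy as np

    # A synthetic linear "scene", where 1.0 is display white. The two values
    # above 1.0 stand in for the blown highlights in the question; all of
    # these numbers are invented for illustration.
    scene = np.array([0.05, 0.25, 0.70, 1.00, 1.40, 2.00])

    # JPEG-style 8-bit encoding: everything brighter than white clamps to 255,
    # so 1.40 and 2.00 become the same code and can never be told apart again.
    jpeg = np.clip(np.round(scene * 255), 0, 255).astype(np.uint8)
    print(jpeg)              # -> [ 13  64 178 255 255 255]

    # A 12-bit raw-style encoding that parks display white at 1024 keeps two
    # stops of headroom above it (1024 * 4 = 4096), so the highlights stay
    # distinct and can be pulled back down in post.
    raw12 = np.clip(np.round(scene * 1024), 0, 4095).astype(np.uint16)
    print(raw12)             # -> [  51  256  717 1024 1434 2048]

Once two scene brightnesses collapse into the same 8-bit code, no post-processing can separate them again; with the extra raw bits they never collapsed in the first place.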


Most raw-capable cameras can capture at least 10 bpc. 12 bpc is very common, and 14 bpc or more is possible with the best sensors. The trick is how to make use of this additional dynamic range. There are several design spaces in which to find a solution:




  • Full range capture and display


    The camera's exposure meter could try to capture as much dynamic range as is physically possible, and it could attempt to display it all on the little screen on the back of the camera. Your raw processing software could likewise attempt to show you all of the dynamic range in the image file on screen. When saving a JPEG, the camera could just map this full dynamic range in the obvious way, effectively discarding the least significant bits from the sensor.


    No one does this.4


    If you take a picture of a backlit bush at sunset, the camera could attempt to capture the black ants in the dark gray shadow under the dense dark green foliage while at the same time capturing sun spot detail in the sun's disc.



    Cameras don't do this because the resulting image would look like striped mud. Human eyes don't have the dynamic range to see the ants and the sun spots at the same time, so human brains don't expect to see such things.5 We don't have display technology good enough to reproduce a physically correct image, either.6




  • Slice from the middle


    Instead, the camera could simply put its notion of "correct" exposure right in the middle of the range, and extract the 8-bit JPEG and the screen preview from the middle of the range. If your camera has a 12-bit sensor, it could effectively give you a ±2 stop exposure adjustment range, since each additional bit per channel translates into one stop, in photographic terms.


    I don't think that this is an entirely bad way to go, but it wouldn't give the most pleasing imagery. Camera companies that did this wouldn't be selling many cameras.




  • Black point and gamma curve


    A much better plan is to pick a brightness level in the image to call black7 and then choose a gamma curve to remap the raw sensor data into that 8 bpc range.



    With this method, the camera and raw processing software can choose to leave some of the raw data outside the mapped range, so that the raw image file encodes blacker-than-black and brighter-than-white. This is the region you're pulling from when your raw processing software recovers highlight or shadow detail. (There's a rough sketch of this mapping in code just after this list.)
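
As a rough sketch of this third approach, here's a toy Python "raw developer" (again with NumPy). The 12-bit depth, the raw level chosen as white, the simple power-law gamma, and the develop() function are all illustrative assumptions rather than any real camera's or converter's pipeline, but they show where highlight recovery comes from:

    import numpy as np

    RAW_MAX = 4095        # 12-bit sensor ceiling
    WHITE_RAW = 1024      # raw level this toy "camera" calls display white;
                          # 4096 / 1024 = 4x headroom above white, about 2 stops

    def develop(raw, exposure_stops=0.0, gamma=1 / 2.2):
        # Map linear raw values to an 8-bit output with a simple power-law curve.
        linear = np.clip(raw, 0, RAW_MAX).astype(float) / WHITE_RAW   # 1.0 means display white
        linear *= 2.0 ** exposure_stops              # exposure adjustment in the linear raw domain
        mapped = np.clip(linear, 0.0, 1.0) ** gamma  # tone curve, clipped to the displayable range
        return np.round(mapped * 255).astype(np.uint8)

    # A shadow, a midtone, display white, and two "blown" highlights that live
    # in the headroom between WHITE_RAW and RAW_MAX.
    raw = np.array([64, 512, 1024, 2048, 4000])

    print(develop(raw))                     # both highlights come out as 255
    print(develop(raw, exposure_stops=-2))  # pulled back 2 stops, they separate again

The key detail is that the exposure adjustment happens in the linear raw domain, before the clip and the tone curve, so data sitting above the chosen white point is still there to be pulled down.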




There is no universal authority mandating which method to use, and even if there were, there is plenty of variation in existing technology and still plenty more room for further variation. For example, Lossy DNGs use an 8 bpc color space, but because of the nonlinear way the input image data is mapped to output values, you still have a bit of dynamic range to work with outside the normally visible display range.
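
For a feel of how a nonlinear 8-bit mapping keeps some range beyond display white, here's a toy encode/decode pair; the square-root curve and the constants are invented for the example and are not the real Lossy DNG encoding:

    import numpy as np

    # Display white is deliberately mapped below the top code, so the codes
    # above it still carry brighter-than-white data.
    WHITE_CODE = 200
    HEADROOM = (255 / WHITE_CODE) ** 2      # linear range that still fits, about 1.63x white

    def encode(linear):
        return np.round(np.sqrt(np.clip(linear, 0, HEADROOM)) * WHITE_CODE).astype(np.uint8)

    def decode(code):
        return (code.astype(float) / WHITE_CODE) ** 2

    linear = np.array([0.1, 0.5, 1.0, 1.3, 1.6])   # the last two are brighter than white
    codes = encode(linear)
    print(codes)           # white lands at 200; the brighter values use 201..255
    print(decode(codes))   # decoding still gives back values above 1.0

Because white is parked below the top code, the codes from 201 to 255 still decode to brighter-than-white values, which is the extra latitude described above.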




Footnotes:




  1. 8 bpc is also called "24-bit" by those who prefer to consider all three channels needed for color imaging together.





  2. At any single moment, the human eye has less dynamic range than you get from 8 bpc. The only reason we use even that many bits per channel is that computers like dealing with data in 8-bit chunks, as do digital displays. Any value a 7 bpc or 9 bpc variant of JPEG might have is wiped out by decades of historical inertia pushing us to stick with 8.




  3. If your eyes could use their full dynamic range all the time, you wouldn't have to squint for a while when walking outside from a dimly lit house at noon, or when turning on the bedside light when waking up in the dark.




  4. I have no doubt this has been tried several times in research labs. I'd even be unsurprised to learn that software has been made publicly available that does this. If I wanted to be precisely correct, I'd have to rewrite that sentence to something less punchy like, "No one has been commercially successful producing software or hardware that presents images using this method."





  5. This is part of the reason it's hard to make a good HDR.




  6. And if we did have such technology, you wouldn't be able to look at the sun in the reproduced image, any more than you could while taking the picture.




  7. Or white, if you prefer. It really doesn't matter. You can work the math either way.



