Wikipedia says that the dynamic range is the "ratio between the largest and smallest possible values of a changeable quantity". Ok, I get that. I suppose that's why HDR photos have a "high dynamic range" with respect to light.
But what else is there to it? What's the dynamic range of a camera? Just tell me everything that's important about it :-).
Answer
Okay, this may be very out of scale, but it's my best guess at a simple demonstration of light intensities. The actual capabilities of the sensors might be a little more or less, but you'll get the idea.
The reason dynamic range is so important is that it defines precisely how much of a scene can actually be represented within the bounds of the image's "black" and "white". The image above represents a very rough scale of how bright typical items in a scene are, whilst the 'brackets' on the right give a rough indication of how much of those intensities can be seen in detail at a given exposure. The shorter the exposure, the higher your bracket sits (a short exposure for bright clouds); the longer the exposure, the lower it sits (a long exposure for shadowed/night scenes).
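If it helps to make the "bracket" idea concrete, here's a toy Python sketch of it. The scene EV values and the 12-stop sensor range are made-up illustrative numbers, not measurements; changing exposure just slides a fixed-size window up and down the intensity scale.

```python
# Toy model: a sensor with a fixed number of stops of dynamic range
# slides up and down the scene's intensity scale as exposure changes.
scene_ev = {            # rough scene luminances on a log2 (EV) scale
    "bright clouds": 17,
    "sunlit wall":   14,
    "open shade":    10,
    "indoor shadow":  4,
}

SENSOR_STOPS = 12       # assumed dynamic range of the camera

def visible(white_point_ev):
    """Report what lands inside the sensor's range when exposure
    places the clipping ("white") point at white_point_ev."""
    black_point_ev = white_point_ev - SENSOR_STOPS
    for name, ev in scene_ev.items():
        if ev > white_point_ev:
            status = "blown to white"
        elif ev < black_point_ev:
            status = "crushed to black"
        else:
            status = "visible detail"
        print(f"{name:14s} ({ev:2d} EV): {status}")

visible(17)  # short exposure: clouds keep detail, indoor shadow goes black
visible(12)  # longer exposure: shadows open up, clouds and wall blow out
```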
Of course, in real life there really isn't a black and white. Black would be the complete absence of light and white would be an infinitely intense light across all frequencies. But when it comes to photography and vision, you're not working with such a high dynamic range.
So what's the difference between, say, a point and shoot and a digital SLR? If you expose both so that the white clipping point falls at the same intensity within the scene, the point where black occurs in the point and shoot's image may be brighter than the blacks in the digital SLR's image. This is because the much larger sensor is able to capture a greater variation in light intensity: its white point is brighter and its black point is darker than the point and shoot's. It sounds as if you understand this part.
Why is it important? What happens when you wish to see both the bright clouds in a scene and the dark shadow areas inside the house through the back door? In most cases either the clouds will turn out bright white with no visible detail, or the inside of the house will simply be black (or very close to it). Either way, part of the scene falls outside the range of intensities the camera is exposing for.
This is one of the shortcomings of photography in relation to the performance of the eye. The human eye can typically see a far greater range of intensities than a camera, around 18 to 20 stops of variation. We can see both into the house and the bright clouds outside, but the camera can only expose for one or the other. Most DSLR sensors capture around 10-13 stops of dynamic range.
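Since each stop is a doubling of light, those stop counts translate into enormous contrast ratios. A quick sketch, using the rough figures quoted above:

```python
# Each stop is a doubling of light, so the contrast ratio is 2**stops.
# Stop counts are the rough figures from the text, not lab measurements.
for label, stops in [("human eye", 20), ("typical DSLR sensor", 12)]:
    print(f"{label}: {stops} stops = {2 ** stops:,}:1 contrast ratio")
# human eye: 20 stops = 1,048,576:1 contrast ratio
# typical DSLR sensor: 12 stops = 4,096:1 contrast ratio
```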
Furthermore: in digital photography, the format the image is captured in determines how much of that dynamic range is retained before the image is converted into a usable JPEG, the most common "final" format a photo ends up in.
With a JPEG, the format a point and shoot will typically generate for you, each component of red, green and blue can only store 8 bits of accuracy. Black is 0, white is 255, so there are 256 "steps" between black and white. Conversely, high-accuracy raw capture typically records 12 to 14 bits of information. For 12-bit raw, black is still 0, but white is 4,095; in 14-bit capture, the white point is 16,383. What this means is that the variations in intensity are captured far more finely: up to 16,384 "steps" between the image's black and white points instead of 256.
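The level counts follow directly from the bit depths; a quick check in Python (the brightest code value is 2**bits - 1 because counting starts at zero):

```python
# Tonal levels per capture bit depth; white is 2**bits - 1 since
# code values start counting at 0.
for bits in (8, 12, 14):
    levels = 2 ** bits
    print(f"{bits}-bit: black = 0, white = {levels - 1:,}, {levels:,} levels")

# A 14-bit raw file has 2**(14 - 8) = 64 raw levels per JPEG level:
print(2 ** 14 // 2 ** 8)  # 64
```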
Even though you typically end up exporting to that 8-bit JPEG format, this extra precision allows the photographer to adjust exposure, fill light and recover blown highlights beforehand far more accurately than if it were attempted on the final JPEG image. Not only can this allow you to "save" photos from the bin, it can also vastly improve the result you get out of well-captured photos. One technique exploiting this is Expose to the Right.
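Here's a tiny simulation of why that precision matters when pushing shadows in post: it quantizes the same dim gradient to 8 and 14 bits, then brightens both by +3 EV. The data is synthetic and gamma encoding is ignored for simplicity.

```python
import numpy as np

# Quantize the same dim gradient to 8-bit and 14-bit, then push the
# shadows +3 EV (multiply by 8). Synthetic data; gamma is ignored.
shadow = np.linspace(0.001, 0.01, 5)      # dim scene values on a 0..1 scale

as_jpeg = np.round(shadow * 255)          # 8-bit quantization
as_raw  = np.round(shadow * 16383)        # 14-bit quantization

print(as_jpeg * 8 / 255)    # pushed JPEG: only a few distinct values (banding)
print(as_raw  * 8 / 16383)  # pushed raw: the gradient survives smoothly
```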
Furthermore #2: I think the biggest thing to note in relation to digital dynamic range is that for a given ISO setting, the signal-to-noise ratio (SNR) of a full frame sensor will be far greater than that of a point and shoot. At the same exposure, the "big bucket" photosites in a full frame sensor allow more light to still fit into the range of the sensor, so a highlight at, say, +13 EV above the black point will still be registered, whereas on a point and shoot it would simply be pure white.
It's like having a 1L tin to capture water instead of the 500mL tin of a point and shoot.
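The bucket analogy can be pushed a little further with shot-noise arithmetic: photon arrivals follow Poisson statistics, so SNR scales with the square root of the photons collected, and at the same exposure a bigger photosite collects proportionally more. The pixel sizes and photon density in this sketch are illustrative assumptions, not real specs.

```python
import math

# SNR from photon shot noise alone: SNR = sqrt(N) for N photons.
# Photon density and pixel areas are hypothetical round numbers.
photons_per_um2 = 100   # same exposure hitting both sensors

sensors = [
    ("point and shoot, ~1.5 um pixel", 1.5 ** 2),
    ("full frame, ~6 um pixel",        6.0 ** 2),
]
for label, area_um2 in sensors:
    n = photons_per_um2 * area_um2
    snr_db = 20 * math.log10(math.sqrt(n))
    print(f"{label}: {n:,.0f} photons, shot-noise SNR ~ {snr_db:.1f} dB")
```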
Furthermore #3 (with added photos): Here is an example of just how limited some sensors can be.
This is what my iPhone produced. The first image is exposed for the dark area down on the street, the second for the bright buildings, and the third is an "HDR" image produced by the iPhone. With some tweaking, the shadow area can be made to approximate the dynamic range of what I actually saw, though it's still limited.
Clearly the iPhone's dynamic range is too limited to capture all the information you need at once: at one end the whites completely blow out, and at the other the shadows are almost completely black.