Friday 11 October 2019

dynamic range - Is 14-bit RAW better than 12-bit RAW?


A regular JPEG image has only 8 bits per color channel to store the tone of each pixel. When storing the image in a RAW format (for example, DNG), we can record tone using more bits per pixel, which gives us a wider range and more latitude for processing on the computer.
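(As a rough sketch of what those extra bits buy: the number of distinct tonal levels per channel doubles with every added bit.)

    # Back-of-the-envelope: distinct tonal levels per channel at each bit depth
    for bits in (8, 12, 14, 16):
        print(f"{bits}-bit: {2 ** bits:>6} levels")
    # 8-bit:     256 levels
    # 12-bit:   4096 levels
    # 14-bit:  16384 levels
    # 16-bit:  65536 levels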


My current camera can record pictures as 12-bit DNG, and I normally use RAW. I've noticed that newer DSLR models are able to store 14 bits per pixel. Those 2 extra bits look like a huge advantage on paper, but is it a big difference in practice? Would I see the difference in post-processing? Would the difference show up more in the dark (underexposed) or the bright (overexposed) parts of the image?



Answer



It makes some measurable difference, but that doesn't tell the whole story. DxOMark's portrait score is a technical assessment of the output of various cameras specifically in terms of color depth, which they carefully describe as having a "correlation" with color sensitivity, that is, the actual nuance in color a camera can distinguish.


If you look at the results of that metric, you can see that the top-scoring cameras have 16 bits per pixel, followed by those with 14 bits per pixel. The expensive medium-format digital backs get DxOMark scores of around 24-26, followed by the very top SLRs in the 23-25 range. Then the 12-bit cameras come in next; I think the top one scores 22-point-something.



But note that DxOMark describes a difference of 1 in this score as "barely noticeable". And that's when you're comparing very carefully. For most people, much larger differences in score aren't noticeable in real-world results either.


Real-world impact on final perception is one reason this isn't a huge deal. But there's more: if you go further down the list, you'll find older cameras with 14-bit depth and lower scores than newer 12-bit cameras, so that number alone doesn't tell the whole technical story either. Newer sensor and processing technology improves real results in other ways. If you're comparing current generations, more depth is better, but don't assume that it's everything.


As for whether this gives you more room in the shadows or in the highlights: it's not really that the bits are added at either end; instead, there's just more gradation. Imagine one newspaper gives movies one to four stars, while another uses a 1-10 scale. A "10" from the second newspaper isn't necessarily a lot better than a four-star review from the first, but the additional "bits" allow for more nuance. This is the same idea.
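To put that analogy in numbers, here's a small Python sketch (an idealized quantizer, ignoring sensor noise): both depths digitize the same 0-to-1 brightness range, so the endpoints don't move; only the number of steps between them changes.

    import numpy as np

    # Idealized sketch: quantize the same smooth brightness ramp at two depths.
    # The range (0.0 to 1.0) is identical; only the step count differs.
    scene = np.linspace(0.0, 1.0, 1_000_000)
    for bits in (12, 14):
        codes = np.round(scene * (2 ** bits - 1))
        print(f"{bits}-bit: {len(np.unique(codes))} steps over the same range")
    # 12-bit:  4096 steps over the same range
    # 14-bit: 16384 steps over the same range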


These sensors still suffer from a harsh cut-off in the highlights, so as always with digital it's best to expose so those are retained and pull detail from the shadows. And yes, better depth will help with that to some degree: if you post-process to brighten dark areas, there will (in theory) be more nuance to stretch out.
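As a back-of-the-envelope illustration (assuming an idealized, noise-free linear sensor; in practice, noise usually limits a shadow push before bit depth does), count the distinct tones recorded in the deepest shadows, since those are the raw material a brightening adjustment has to stretch apart:

    import numpy as np

    # Idealized, noise-free sketch: how many distinct tones land in the
    # darkest 2% of the sensor's range? These are what get stretched apart
    # when you brighten shadows in post.
    shadow = np.linspace(0.0, 0.02, 500_000)
    for bits in (12, 14):
        codes = np.unique(np.round(shadow * (2 ** bits - 1)))
        print(f"{bits}-bit: {len(codes)} distinct tones in the darkest 2%")
    # 12-bit:  83 distinct tones
    # 14-bit: 329 distinct tones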


An important thing to realize is that the 12 or 14 bits from the sensor record light linearly, while JPEGs use a gamma curve that fits human perception. That's not just a way for JPEG to compress data; a curve has to be applied in order for the image to look right. Since this curve does "squish" the bits, that's part of the reason there's less of a perceptual difference than one might expect. (But having that linear data in un-curved form is part of what gives RAW its flexibility: it's easy to choose a different curve.)
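Here's a minimal sketch of that squish, using a plain power-law gamma of 1/2.2 as a stand-in for a real tone curve (actual JPEG pipelines use more elaborate curves):

    import numpy as np

    # Map linear 12-bit sensor codes to gamma-encoded 8-bit output and see
    # where the curve spends its codes.
    linear = np.arange(4096) / 4095.0            # 12-bit linear values, 0..1
    out8 = np.round(linear ** (1 / 2.2) * 255)   # 8-bit gamma-encoded values

    print("linear codes in 8-bit output 0-31:  ", int(np.sum(out8 < 32)))
    print("linear codes in 8-bit output 224-255:", int(np.sum(out8 >= 224)))
    # Only a few dozen linear codes cover the darkest 32 output values,
    # while roughly a thousand linear codes are squished into the brightest 32.

That mismatch between linear coding and perception is why RAW bit depth and perceived tonal depth don't map one-to-one.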


My overall point, though, is that I wouldn't look at the underlying number to make a decision between two cameras. Instead, look at the final results.




Another external reference presenting the same point of view, from the sensors section of the American Society of Media Photographers' "Digital Photography Best Practices and Workflow" website:



At the time of this writing [n.b. 2009 or earlier], no 35 mm DSLR cameras that have 14-bit capture ability clearly show an image quality advantage over 12-bit capture.



Some medium-format sensor makers claim an advantage with 16-bit capture. However, we have never seen a study (other than the manufacturer’s) that shows higher bit depth translates into higher image quality based on 16-bit capture alone. In general, the difference between 14-bit and 16-bit capture would not be visible (to humans anyway) unless a severely steep tone curve was applied to the image (on the order of 6-7 stops).



(Emphasis added. Thanks to an earlier answer from Aaron Hockley for the pointer.)

