Sunday 18 March 2018

terminology - What does it mean when a TV "supports HDR"?


I am a bit confused about how HDR works on a TV. The way I understand it, multiple exposures of the same frame are combined into one image that shows better detail in both the shadow (low-light) and highlight (bright) areas of the frame.



Why does the TV have to do anything about this? Shouldn't every HDR movie be preprocessed in this way, so that the resulting movie looks exactly as it should? Or does HDR on a TV really mean that these multiple images per frame are delivered to the TV, and the TV figures out how to combine them?



Answer



The answer by James Snell is simply incorrect in saying that photo HDR and display HDR are unrelated. They both refer to the same thing: higher bit depth. When you create an HDR photo, the multiple exposures are merged into a true HDR image whose bit depth is higher than a normal display can reproduce. Because of that limitation, the HDR image is normally tonemapped back down into the low dynamic range of a standard display.
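As a concrete sketch of that photo-side workflow (my own illustration, not part of the original answer): the snippet below uses OpenCV's Debevec merge to build a floating-point HDR radiance map from bracketed exposures, then tonemaps it for an ordinary 8-bit display. The file names and exposure times are placeholders.

    import cv2
    import numpy as np

    # Hypothetical bracketed exposures of one scene and their shutter times.
    files = ["under.jpg", "normal.jpg", "over.jpg"]
    images = [cv2.imread(f) for f in files]
    times = np.array([1/250, 1/60, 1/15], dtype=np.float32)

    # Merge into a single 32-bit floating-point HDR radiance map: this is the
    # "true HDR image", with more dynamic range than an 8-bit display can show.
    hdr = cv2.createMergeDebevec().process(images, times)

    # To view it on a standard display, compress (tonemap) it back to 8 bits.
    ldr = cv2.createTonemap(gamma=2.2).process(hdr)
    cv2.imwrite("tonemapped.jpg", np.clip(ldr * 255, 0, 255).astype(np.uint8))

    # An HDR display can instead be fed high-bit-depth data (e.g. 10-bit video),
    # so the tonemapping step is no longer forced on you.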


What an HDR display allows is showing the HDR image natively, without the tonemapping step (assuming the display's bit depth is greater than or equal to that of the image), whereas on a normal display you can only ever see a simulated, tonemapped version of the HDR image.
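To make "higher bit depth" a bit more tangible (again my own illustration, not from the original answer), here is the number of distinct brightness levels each colour channel can take at common panel bit depths:

    # Distinct code values (brightness steps) per colour channel.
    for bits in (8, 10, 12):
        print(f"{bits}-bit panel: {2 ** bits} levels per channel")
    # 8-bit panel: 256 levels per channel
    # 10-bit panel: 1024 levels per channel
    # 12-bit panel: 4096 levels per channel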


tl;dr HDR = higher bit depth in both contexts. It's not a different meaning at all.

