Wednesday 27 June 2018

How exactly is the deeper bit-depth of RAW mapped onto JPEG and the display?


I am trying to understand RAW better.


I have a Canon EOS 20D and shoot in RAW+Jpeg mode. According to the specs in the manual, the RAWs of the 20D are 12-bit. I understand this to mean that each pixel contains 36 bits of information. A Jpeg only has 3*8 = 24 bits of information.



In RAW+Jpeg mode, the 20D actually generates two Jpegs: one at full resolution (3504x2336), and one at a downscaled resolution (1536x1024) that is embedded in the RAW file for preview purposes.


Sorry, I have to post a whole battery of questions; I don't know how to summarize them into a single one, so here goes:


How exactly are the 36 bits of the RAW mapped to the 24 bits of the full-resolution Jpeg? Does it just take the 24 bits in the middle of the 36, or at the beginning, or at the end? Or is there a more sophisticated mapping going on?


Is the mapping the same for the separate full-resolution Jpeg and the embedded preview Jpeg?


When I open a RAW in RawTherapee, it again needs to be mapped down to 24 bits to be displayed on screen. Is this the same mapping again, or a different one?


Also, the RAW images always look very flat and drab, with very dim colors. (Only with RawTherapee can I bring out the pop and vibrancy I love from film.) Is the fact that the RAWs and derived Jpegs look so drab without post-processing related to the bit-reduction mapping, or does it have other causes?



Answer



First, you are making a common mistake in thinking it is 36 bits; I made the same mistake for a while. In reality, RAW data is monochrome, and thus only 12 bits per pixel in your case, since a single pixel carries no color information on its own without looking at neighboring pixels.


Beyond that, it depends on the software being used. Color, as mentioned, is derived from the color of the filter over that pixel and the values of neighboring pixels under filters of the other colors, but the pattern and interpolation used can vary.
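To make that concrete, here is a rough, hypothetical sketch of the idea, not the 20D's or any converter's actual pipeline: a 12-bit mosaic holds only one sample per pixel, and a naive bilinear demosaic derives the missing colors from neighbors. The RGGB layout, the random data, and the 3x3 averaging are all simplifying assumptions.

import numpy as np

# Hypothetical 12-bit Bayer mosaic: one value per pixel in 0..4095,
# assumed to be laid out in an RGGB pattern (the real CFA layout and
# demosaic algorithm are camera- and software-specific).
rng = np.random.default_rng(0)
mosaic = rng.integers(0, 4096, size=(4, 6), dtype=np.uint16)

def demosaic_bilinear(mosaic):
    # Crude bilinear demosaic: each pixel's missing colors are averaged
    # from the nearest neighbors that actually sampled that color.
    # (Edges wrap around here, which a real implementation would avoid.)
    h, w = mosaic.shape
    y, x = np.mgrid[0:h, 0:w]
    r_mask = (y % 2 == 0) & (x % 2 == 0)
    b_mask = (y % 2 == 1) & (x % 2 == 1)
    g_mask = ~(r_mask | b_mask)

    rgb = np.zeros((h, w, 3), dtype=np.float32)
    for ch, mask in enumerate([r_mask, g_mask, b_mask]):
        plane = np.where(mask, mosaic, 0).astype(np.float32)
        count = mask.astype(np.float32)
        total = np.zeros_like(plane)
        hits = np.zeros_like(plane)
        for dy in (-1, 0, 1):
            for dx in (-1, 0, 1):
                total += np.roll(np.roll(plane, dy, axis=0), dx, axis=1)
                hits += np.roll(np.roll(count, dy, axis=0), dx, axis=1)
        rgb[..., ch] = total / np.maximum(hits, 1)
    return rgb

print(mosaic.shape)                      # (4, 6)    -> one 12-bit sample per pixel
print(demosaic_bilinear(mosaic).shape)   # (4, 6, 3) -> color only after interpolation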


Similarly, the reduction in bit depth varies even more. It could be a linear map that takes the darkest value to darkest and the brightest to brightest. It could just grab the middle bits. It could make judgements about where the black point and the white point should sit and scale accordingly. It really depends on how the software decides to do it, and then on how you adjust the mapping during development.
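As a toy illustration of those options (the pixel values and the black/white point are made up, not what any camera or converter actually uses), here are three possible ways to squeeze the same 12-bit values into 8 bits:

import numpy as np

# Hypothetical 12-bit values (0..4095) for a handful of pixels.
raw = np.array([60, 512, 1024, 2048, 4000], dtype=np.uint16)

# Option 1: plain linear map, darkest to darkest, brightest to brightest.
linear = np.round(raw / 4095 * 255).astype(np.uint8)

# Option 2: grab the "middle" 8 of the 12 bits: drop the two lowest bits
# and clip whatever exceeds 8 bits, so bright values blow out.
middle = np.clip(raw >> 2, 0, 255).astype(np.uint8)

# Option 3: pick a black point and a white point first, then stretch
# everything in between over the full 8-bit range.
black, white = 60, 4000                  # made-up values for illustration
clipped = np.clip(raw, black, white)
levels = np.round((clipped - black) / (white - black) * 255).astype(np.uint8)

print(linear)   # [  4  32  64 128 249]
print(middle)   # [ 15 128 255 255 255]
print(levels)   # [  0  29  62 129 255]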



And that's really the point of RAW. It's designed to let you, the photographer, make the choices about how that mapping is done. If you just want an automatic process to produce an 8-bit file for you, simply shoot JPEG; using RAW is then a waste of space. The point of RAW is that it lets you control the conversion to an 8-bit space by hand, and thus ensures you get the information you want out of it.
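As a sketch of what "by hand" can mean in practice (the exposure push and the gamma of 2.2 are arbitrary example choices, not any converter's real defaults), the same 12-bit data can be developed to quite different 8-bit results depending on the curve you pick:

import numpy as np

# The same hypothetical 12-bit values; this time the photographer, not the
# camera, picks the curve that maps them into 8 bits.
raw = np.array([60, 512, 1024, 2048, 4000], dtype=np.uint16)

def develop(raw, exposure_ev=0.0, gamma=2.2):
    # Made-up development step: an exposure push in stops, then a gamma
    # curve, then quantization down to 8 bits.
    linear = np.clip(raw / 4095.0 * 2.0 ** exposure_ev, 0.0, 1.0)
    return np.round(linear ** (1.0 / gamma) * 255).astype(np.uint8)

print(develop(raw))                   # a "default" rendering
print(develop(raw, exposure_ev=1.0))  # pushed one stop brighter
print(develop(raw, gamma=1.0))        # no gamma curve: dark, flat-looking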


As for why it looks drab initially, that is mostly a stylistic choice in how the default conversion works. Lightroom, for example, tries to make choices so the initial rendering looks much more like a JPEG would, but adjustments are still needed in either case. That initial rendering will vary from software to software, from camera to camera, and potentially even from photo to photo.

