In another answer user Ilmari writes, in the context of preventing overexposure:
In general, I would also recommend always shooting RAW, both to better capture the full dynamic range of your camera, and also to avoid the ugly digital clipping of overexposed areas. For the latter, it helps to underexpose your shots a little (say, ...) and then pull the exposure up on your computer, ...
How does RAW prevent clipping of overexposed areas?
In general, I would also recommend always shooting RAW, both to better capture the full dynamic range of your camera, and also to avoid the ugly digital clipping of overexposed areas. For the latter, it helps to underexpose your shots a little [...] and then pull the exposure up on your computer.
OK, yeah, I was being a bit terse when I wrote that. Let me try to unpack it a bit.
Obviously, just switching from JPEG to RAW won't do anything to fix clipping on its own. What I was trying to suggest, when I wrote the paragraph above, is:
Deliberately underexposing your photos enough that the highlights won't clip.
Shooting in RAW, which has a higher dynamic range than JPEG, in order to preserve shadow detail for the next step.
Correcting the underexposure in post-processing, using an algorithm that simulates soft "film-like" highlights instead of hard digital clipping. (I believe any decent RAW processor should have this feature built in; I know UFRaw does, and that's free software.)
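To make the idea concrete, here's a minimal numpy sketch of that workflow. This is not UFRaw's actual algorithm: the exponential rolloff is just a simple stand-in for a film-like highlight curve, and the test "scene" is synthetic.

    import numpy as np

    def develop(raw_linear, exposure_stops=1.0, soft_highlights=True):
        """Turn linear sensor data (0..1) into an 8-bit output image."""
        pushed = raw_linear * 2.0 ** exposure_stops        # fix the underexposure in post
        if soft_highlights:
            # Film-like rolloff: approaches white smoothly instead of cutting off.
            # (Stand-in curve; assumes the raw values go up to 1.0.)
            tone = (1.0 - np.exp(-pushed)) / (1.0 - np.exp(-2.0 ** exposure_stops))
        else:
            # Hard digital clipping, as a straight-to-JPEG pipeline would give you.
            tone = np.clip(pushed, 0.0, 1.0)
        srgb = np.clip(tone, 0.0, 1.0) ** (1.0 / 2.2)       # crude gamma for display
        return (srgb * 255.0 + 0.5).astype(np.uint8)

    # A synthetic scene whose brightness runs up to twice the "correct" white point,
    # shot one stop dark so that nothing clips on the sensor.
    scene = np.linspace(0.0, 2.0, 256)
    raw = np.clip(scene / 2.0, 0.0, 1.0)

    soft = develop(raw, exposure_stops=1.0, soft_highlights=True)
    hard = develop(raw, exposure_stops=1.0, soft_highlights=False)
    print((soft == 255).sum(), (hard == 255).sum())   # pixels pinned at pure white

With the hard clip, about half of the test gradient ends up pinned at pure white; with the soft rolloff, only the last few values do, so the brightest tones still stay distinguishable from each other.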
Why go to all that trouble, instead of just shooting JPEG directly at default exposure? Well, basically (besides all the other reasons to shoot RAW), so that you can get this:
instead of this:
Of course, I cheated a bit by making both of these example image pairs from the same RAW files — the only difference is that I used "soft film-like highlights" mode for the first pair, and "hard digital clipping" mode for the second pair, simulating what I would've got if I'd shot them directly in JPEG with a longer exposure.
Note particularly the characteristic cyan sky on the top right in the clipped version of the first image, the unnatural flatness of the clipped highlights, and the general color distortions around them. (Pictures with bright white background elements, such as snow or clouds, tend to show this effect particularly prominently, but I didn't happen to find any good examples on this laptop. I may try to look for some better illustrations later.)
The reason for this flatness and color distortion is that, unlike the smoothly saturating light response curve of film, digital image sensors have an (approximately) linear response up to their saturation point, and then a sharp cutoff:
(Actually, the film response curve drawn above is somewhat misleading, in that turning the film negative into an actual positive image introduces another layer of nonlinearity at the low end of the response curve, typically resulting in a somewhat sigmoid combined response curve. But at least at the highlight end of the dynamic range, the curves above do resemble the actual light responses of film and digital cameras in a general way.)
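If a graph isn't handy, the same difference shows up in plain numbers. In this tiny sketch, 1 - exp(-x) is just a generic saturating function standing in for a real film characteristic curve:

    import numpy as np

    # Relative scene brightness; 1.0 is the sensor's saturation point.
    exposure = np.array([0.25, 0.5, 1.0, 2.0, 4.0])

    # Digital sensor: linear up to the saturation point, then flat.
    sensor = np.clip(exposure, 0.0, 1.0)

    # Film-like response: keeps compressing the highlights instead of cutting off.
    film = 1.0 - np.exp(-exposure)

    for x, d, f_ in zip(exposure, sensor, film):
        print(f"exposure {x:4.2f}   digital sensor {d:4.2f}   film-like {f_:4.2f}")

Past the saturation point the sensor column reads 1.00 for everything, so a 2x and a 4x overexposure become indistinguishable, while the film-like column still tells them apart.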
In particular, in color photography, each color channel (red, green and blue) has its own response curve. With a digital sensor, this means that, as the brightness of the incoming light increases, one of the R/G/B channels will typically clip before the others, distorting the color of such partially clipped pixels.
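Here's roughly how that produces the cyan sky in my example above (the linear RGB numbers are made up, but the pattern is the point):

    import numpy as np

    sky = np.array([0.35, 0.6, 0.9])      # linear R, G, B of a bright blue sky patch

    for stops in [0.0, 0.5, 1.0, 1.5, 2.0]:
        clipped = np.clip(sky * 2.0 ** stops, 0.0, 1.0)   # each channel clips on its own
        print(f"+{stops:.1f} EV   R={clipped[0]:.2f}  G={clipped[1]:.2f}  B={clipped[2]:.2f}")

The blue channel hits the ceiling first, so for a stop or so the patch reads as cyan rather than blue, and only after that does it blow out to plain white.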
Also, the flatness of the digital response curve above the saturation point means that, whereas overexposing film just compresses the highlights, any clipped highlights in a digital photo (whether RAW or JPEG) are just gone, and no detail can be recovered from them. Thus, the rule of thumb for digital photography is that, if you're not sure what the optimal exposure will be (or if you know that the scene you're shooting includes highlights that you don't want clipped), it's always safer to err on the low side. Sure, pushing up the brightness of an underexposed photo in post-processing will also amplify the noise in the image — but underexposing a little, and losing some shadow detail to noise, is still usually better than overexposing and losing highlights completely.
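As a toy model of that trade-off (a simple linear sensor with a made-up read-noise level, so purely an illustration):

    import numpy as np

    rng = np.random.default_rng(0)

    def shoot(scene, stops, read_noise=0.002):
        """Toy linear sensor: scale by the exposure, add read noise, clip at full well (1.0)."""
        signal = scene * 2.0 ** stops + rng.normal(0.0, read_noise, scene.shape)
        return np.clip(signal, 0.0, 1.0)

    highlight = np.linspace(1.0, 1.5, 6)   # a gradient sitting just above "correct" white
    shadow = np.full(50_000, 0.02)         # a flat, very dark patch

    # At normal exposure the highlight gradient clips to a featureless white...
    print(shoot(highlight, stops=0.0).round(3))
    # ...one stop under plus a 2x push in post keeps the gradient
    # (values above 1.0 can then be rolled off softly, as in the sketch further up)...
    print((shoot(highlight, stops=-1.0) * 2.0).round(3))
    # ...but the push also doubles the read noise in the shadows.
    print(f"shadow noise: {shoot(shadow, 0.0).std():.4f} (normal) "
          f"vs {(shoot(shadow, -1.0) * 2.0).std():.4f} (pushed)")

The clipped gradient is unrecoverable, while the pushed version keeps it; the price is roughly twice the noise in the dark patch.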
Of course, none of the above requires you to shoot RAW — you can push up the brightness of JPEG images e.g. in Photoshop just as easily. But compared to RAW, the JPEG format has two issues here:
JPEG uses only 8 bits per color channel; that is, the smallest difference between two brightness levels it can store is about 1/256 of the difference between pure black and pure white. JPEG actually uses a non-linear color encoding, which helps somewhat, but the effective dynamic range of a JPEG image is still only about 11 stops (as opposed to the 8 stops one would get with a linear encoding). This is enough for displaying images on screen, but it's still less than the effective dynamic range of even low-end camera sensors, and it doesn't leave much room for adjusting the exposure to recover detail from the shadows.
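To put rough numbers on that (assuming an idealized gamma-2.2 encoding for the JPEG and a 12-bit linear RAW file; real cameras and encoders differ in the details), you can count how many distinct code values are left to cover the visible tonal range after an exposure push. When that count gets small, you see it as banding and posterization:

    import numpy as np

    def jpeg_codes_surviving(push_stops, gamma=2.2):
        """How many of the 256 8-bit codes still land inside the visible range
        after the image is brightened by push_stops stops."""
        codes = np.arange(256)
        linear = (codes / 255.0) ** gamma          # undo the (roughly) gamma-2.2 encoding
        return int((linear * 2.0 ** push_stops <= 1.0).sum())

    def raw_codes_surviving(push_stops, bits=12):
        """The same count for a linear RAW file with the given bit depth."""
        return int(2 ** bits / 2.0 ** push_stops)

    for stops in [1, 2, 3, 4]:
        print(f"+{stops} EV push: {jpeg_codes_surviving(stops):3d} usable 8-bit JPEG levels"
              f" vs {raw_codes_surviving(stops):4d} 12-bit RAW levels")

After a three or four stop push, the 8-bit file is painting the whole tonal range with only about 75 to 100 levels per channel, while the 12-bit RAW still has several hundred.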
Also, JPEG uses a lossy compression scheme designed to reduce image file size by discarding detail that the human eye cannot easily see. Alas, this compression tends to also throw away shadow details pretty aggressively — increase the brightness of a JPEG image too far, and you'll likely end up with an image full of color distortions and blocky compression artifacts.
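As a rough illustration (this one needs Pillow, and the quality setting and exact numbers are arbitrary), here's what a JPEG round trip does to faint shadow texture of the kind an exposure push is supposed to bring out:

    import io
    import numpy as np
    from PIL import Image

    rng = np.random.default_rng(0)

    # A very dark image with faint texture in it: 8-bit values between 4 and 12,
    # i.e. the kind of shadow detail you'd hope to recover by pushing the exposure.
    dark = rng.integers(4, 13, size=(256, 256), dtype=np.uint8)

    # Round-trip it through JPEG compression.
    buf = io.BytesIO()
    Image.fromarray(dark).save(buf, format="JPEG", quality=75)
    buf.seek(0)
    jpeg = np.asarray(Image.open(buf))

    # Brightening by a few stops just multiplies both versions by the same factor,
    # so any texture the compression threw away here stays missing after the push.
    print(f"shadow texture std: {dark.std():.2f} original, {jpeg.std():.2f} after JPEG")
    print(f"correlation with the original texture: "
          f"{np.corrcoef(dark.ravel(), jpeg.ravel())[0, 1]:.2f}")

Whatever fraction of the texture the compression discarded here is gone for good; brightening the image afterwards only magnifies the loss, along with the 8x8 block structure the encoder leaves behind.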
A RAW file, in comparison, preserves the full dynamic range of your camera's sensor with no lossy compression, allowing you to post-process the image to the full extent possible (in this case, mainly limited by the sensor's noise floor).