Saturday, 23 February 2019

workflow - What is the optimal order of post-processing steps?


I know it's best to do as much post-processing as possible before converting from RAW, but in cases where that isn't possible - what is the optimal order of post-processing steps (like noise removal, dust-spot removal, color correction, brightness/contrast correction, straightening, distortion/aberration removal, selective edits, sharpening, resizing, color space and bit-depth changes, etc.)?


When I say optimal order, I mean the order that will result in the least banding, clipping, halos and other digital artefacts. I'd also like to understand the reasons behind any particular ordering. Is it different for print and web output?



Answer



Several of the operations you're describing manipulate the data in the image such that information is lost or transformed. For the most part, I don't think this matters with traditional photography (i.e., prints and the like), but it definitely matters when each pixel is considered a measurement of the number of photons.


What I think about when I do operations is the propagation of error. Error can exist at the single pixel level, the spatial level, and the color level.


Noise is single-pixel sensor error introduced during the detection process, whether by errant photons, quantum effects (translating a photon into an electron for counting is a probabilistic event at the quantum level), or analog-to-digital conversion. If subsequent operations will do things such as stretch contrast (histogram equalization) or emphasize darker regions (fill light), then you want to reduce noise before doing those.


For a completely reduced example of what I mean, take a dark-field image (a picture taken with the lens cap on). The result is pure noise. You can contrast-enhance it, or whatever you want, but it's still noise. A perfect noise-reduction algorithm would remove all of it, so that no false contrast is left for later steps to enhance.
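Here's a minimal numpy sketch of that dark-field argument - my own toy example, with a box filter standing in for a real noise-reduction algorithm and a clipped 10x shadow gain standing in for a fill-light adjustment:

    import numpy as np

    rng = np.random.default_rng(0)

    # Simulated dark-field frame: the true scene is black, so everything
    # recorded here is sensor noise.
    dark = np.clip(rng.normal(0.02, 0.01, (200, 200)), 0.0, 1.0)

    def box5(img):
        """Crude 5x5 box-filter denoise (a stand-in, not a real algorithm)."""
        h, w = img.shape
        pad = np.pad(img, 2, mode="edge")
        return np.mean([pad[dy:dy + h, dx:dx + w]
                        for dy in range(5) for dx in range(5)], axis=0)

    def fill_light(img, gain=10.0):
        """Emphasize dark regions: a clipped 10x gain on the shadows."""
        return np.clip(img * gain, 0.0, 1.0)

    # Enhance first: the noise itself gets stretched into full-range
    # false "detail" that later steps will happily sharpen and compress.
    print("enhance first:", fill_light(dark).std())        # ~0.1
    # Denoise first: far less spurious contrast is left to amplify.
    print("denoise first:", fill_light(box5(dark)).std())  # ~0.02

The exact numbers depend on the noise level, but in this setup the denoise-first order leaves only about a fifth of the spurious contrast.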



Spatial error can be introduced in a number of ways. When you rotate an image, you introduce spatial errors. If you think of there being a 'true' image (in the platonic-ideal sense), the camera records a digital sampling of it. Even with film, the grains/crystals are of finite size, and some sampling of the 'true' image happens. When you rotate a digital image, you introduce aliasing effects: the very sharpest edges are dulled slightly (unless you rotate by a multiple of 90 degrees, in which case the grid sampling still holds). To see what I mean, take an image and rotate it repeatedly in 1-degree increments. Sharp edges will end up (slightly) blurred because of the resampling needed for small rotations.
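If you want to try this yourself, here's a small Pillow sketch (the filenames are placeholders): ninety 1-degree rotations end up at the same orientation as a single 90-degree turn, but accumulate resampling blur along the way.

    from PIL import Image

    img = Image.open("test.png")  # any image with hard, high-contrast edges

    # Ninety 1-degree rotations: every step resamples the pixel grid,
    # so sharp edges soften a little each time (and the corners get
    # cropped, since expand defaults to False).
    step = img
    for _ in range(90):
        step = step.rotate(1, resample=Image.Resampling.BICUBIC)
    step.save("rotated_90x1.png")  # visibly softer

    # One 90-degree transpose: a pure grid remapping, no resampling.
    img.transpose(Image.Transpose.ROTATE_90).save("rotated_once.png")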


Bayer sampling may just be a spatial sampling error that we have to live with. It's one of the big draws (perhaps the only real draw) of the Foveon sensor: each pixel measures the full color at its location, rather than borrowing the other colors from neighboring pixels. I have a Sigma DP2, and I must say, the colors are pretty stunning compared to my Nikon D300. The usability, not so much.
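To make the Bayer point concrete, here's a toy sketch of the simplest (bilinear) interpolation at a single red-filtered photosite on an RGGB mosaic - real demosaicing algorithms are much smarter, but the estimate-from-neighbors structure is the same:

    import numpy as np

    # A 3x3 patch of raw sensor values around one red-filtered site.
    # Mosaic layout assumed here:   B G B
    #                               G R G
    #                               B G B
    raw = np.array([
        [0.20, 0.50, 0.20],
        [0.50, 0.80, 0.50],
        [0.20, 0.50, 0.20],
    ])

    red   = raw[1, 1]                                            # measured directly
    green = (raw[0, 1] + raw[1, 0] + raw[1, 2] + raw[2, 1]) / 4  # estimated
    blue  = (raw[0, 0] + raw[0, 2] + raw[2, 0] + raw[2, 2]) / 4  # estimated

    # Only one of the three channels at this pixel is a real measurement;
    # a Foveon-style sensor would have measured all three.
    print(red, green, blue)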


Compression artifacts are another example of spatial error. Compress an image multiple times (open a JPEG, save it to a different location, close it, reopen, rinse, repeat) and you'll see what I mean here, especially at a quality setting of around 75.
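A quick Pillow sketch of that experiment (filenames are placeholders; the degradation is most dramatic if the image is also resized or cropped between generations):

    from PIL import Image

    # Re-encode the same JPEG fifty times at quality 75 and watch the
    # block artefacts accumulate generation by generation.
    img = Image.open("original.png").convert("RGB")
    for _ in range(50):
        img.save("generation.jpg", quality=75)
        img = Image.open("generation.jpg")
    img.save("after_50_generations.jpg", quality=75)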


Color space errors are introduced when you move from one color space to another. Take a PNG (lossless), convert it from one color space to another, and save it. Then convert back to the original color space: you'll see subtle differences where colors in one space didn't map cleanly onto the other.
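Here's a small sketch of that round trip, using Pillow's built-in RGB-to-YCbCr conversion as a stand-in for a real ICC-profile conversion - both are 8-bit spaces, so rounding alone changes some colors, and a real profile conversion adds gamut clipping on top (the filename is a placeholder):

    import numpy as np
    from PIL import Image

    img = Image.open("photo.png").convert("RGB")
    round_trip = img.convert("YCbCr").convert("RGB")

    a = np.asarray(img, dtype=np.int16)
    b = np.asarray(round_trip, dtype=np.int16)
    changed = np.any(a != b, axis=-1)   # True where any channel moved
    print(f"{changed.mean():.1%} of pixels changed after one round trip")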


When I'm processing photos, then, my order is generally this (sketched in code after the list):



  • noise reduction

  • contrast enhancement, exposures, etc.

  • rotations

  • color space


  • final compression to output image.
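In code shape, that pipeline looks something like the following. Every helper here (denoise, adjust_tones, rotate, to_srgb, save_jpeg) is a hypothetical placeholder, not a real library call; the point is only the ordering.

    def process(raw_image):
        img = denoise(raw_image)       # 1. noise reduction, on the linear data
        img = adjust_tones(img)        # 2. contrast enhancement, exposure, etc.
        img = rotate(img, angle=1.3)   # 3. rotations and other geometry fixes
        img = to_srgb(img)             # 4. color space conversion
        save_jpeg(img, "out.jpg", quality=90)  # 5. final compression to output
        return img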


And I always save the raw.


