Wednesday, 4 April 2018

Real-time vs post-production blurs — what's the difference?


I've always been a huge fan of blurry photos — I guess in part because almost any camera can take blurry photos, though some do it better than others.


What's the difference between real-time and post-production blurs?



Answer



In general: "real" blur, whether due to optical characteristics (including depth of field, chromatic aberration, spherical aberration, and more) or due to movement, is based on more information. It includes the three-dimensional and time aspects of the scene, and the different reflection and refraction of different wavelengths of light.


In post-processing, there's only a flat, projected rendering to work with. Smart algorithms can try to figure out what was going on and simulate the effect, but they're always at a disadvantage. It's hard to know if something is small because it's far away or because it's just tiny to start with, or if something was moving or just naturally fuzzy — or which direction and how quickly. If you're directing the blur process by hand as an artistic work, you'll get better results because you can apply your own knowledge and scene recognition engine (in, you know, your brain), but even then, it's a lot of work and you'll need to approximate distance and differing motion for different objects in the scene — or intentionally start with a photograph where these things are simple.
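To make the disadvantage concrete, here's a minimal sketch (using NumPy, with a made-up scene and a hypothetical per-pixel depth map) of what a depth-aware blur could look like. The point is that the varying blur radius comes entirely from the depth map — information a flat photograph simply doesn't contain, which is exactly what a post-processing algorithm has to guess at.

```python
import numpy as np

def depth_aware_blur(image, depth, max_radius=3):
    """Blur each pixel with a box kernel whose radius grows with depth.

    A crude stand-in for depth of field: distant pixels (large depth)
    get a wide blur, near pixels stay sharp. `depth` is a hypothetical
    per-pixel depth map, normalized to 0..1 — not something an ordinary
    2D photograph provides.
    """
    h, w = image.shape
    out = np.empty_like(image, dtype=float)
    for y in range(h):
        for x in range(w):
            r = int(round(depth[y, x] * max_radius))  # radius from depth
            y0, y1 = max(0, y - r), min(h, y + r + 1)
            x0, x1 = max(0, x - r), min(w, x + r + 1)
            out[y, x] = image[y0:y1, x0:x1].mean()
    return out

# A tiny scene: one bright point on a near object (depth 0), one on the
# far background (depth 1).
image = np.zeros((9, 9))
image[4, 4] = 1.0   # near point
image[0, 0] = 1.0   # far point
depth = np.ones((9, 9))
depth[3:6, 3:6] = 0.0  # the near object stays in focus

blurred = depth_aware_blur(image, depth)
```

With the depth map, the near point survives untouched while the far point smears out. Without it, a post-processing tool sees two identical bright pixels and has no way to know they should be treated differently.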


In the World of Tomorrow, cameras will gather much more information in both time and space. The current Lytro camera is a toy preview of this. With a better 3D model, the effects of different optical configurations can be better simulated — and of course motion blur can be constructed from a recording over time.
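As a toy illustration of that last point — again a sketch with made-up data, not any particular camera's pipeline — motion blur really is just the time-average of a sequence of sharp frames, so a camera that records over time can reconstruct it directly:

```python
import numpy as np

# Hypothetical recording: a bright dot moving one pixel right per frame.
frames = []
for t in range(5):
    f = np.zeros((5, 9))
    f[2, 2 + t] = 1.0
    frames.append(f)

# Averaging the stack over time produces motion blur along the true path,
# with no guessing about direction or speed.
motion_blurred = np.mean(frames, axis=0)
```

A post-processing tool working from a single frame would have to infer that path; the recording makes it exact.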


