Wednesday 21 June 2017

depth of field - Technically, why is the out-of-focus area blurred more when using a bigger aperture?


I'm wondering, technically, why and how the out-of-focus areas blur more when using a bigger aperture. I think it'd help a lot if I presented a problem that's been driving me nuts for a long time:



I've read that the f-number of the human eye varies from about f/8.3 in very bright light to about f/2.1 in the dark. But from what I've tested, I always see out-of-focus areas with the same amount of blur.


Which leads me to ask: how does this aperture thing work, why does it create a blur from the technical point of view, and does it also apply to eyes, or is it just a "failure" in the camera lenses we've come to like and never wanted to "fix"?



Answer



I'm going to crib from my answer to an earlier question on aperture:



When the aperture is very small, the admitted light is highly "collimated", which is a fancy way of saying "all the rays are nicely parallel to each other". This results in a sharp focus for all the light that comes in. When the aperture is more open, only the rays which closely match the focus point are collimated — which means that whatever you've focused on is sharp, but farther or closer parts of the scene will be increasingly blurry.



Basically, the smaller the aperture, the more restricted-to-exactly-in-focus the light is. A bigger aperture lets in more light, but the "price" is that it's less controlled.


The following diagram from Wikimedia may help:


Image by Wikipedia user Chabacano, licensed CC BY-SA 3.0



On the left, the wide aperture results in only the central, in-focus ♡ card being rendered sharply. The narrower aperture on the right excludes the less-collimated light from the out-of-focus ♠ and ♣ cards, resulting in a sharper image overall.


Remember, the red/green/blue dotted lines in the diagram trace the outside of a cone of light rays. The well-focused central rays are also included in the image made with the wider aperture on the left, but the image sensor (or film) can't tell which rays were which, so the result is more blur for everything except the light that converges precisely at the focal point.
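To put rough numbers on this, here's a small Python sketch using the standard thin-lens blur-circle formula. The particular lens and distances (a 50mm lens focused at 2m, with a background point at 10m) are just numbers I've picked for illustration, not anything taken from the diagram:

# Blur-disc diameter for an out-of-focus point, thin-lens approximation.

def blur_circle_mm(f_mm, f_number, focus_dist_mm, subject_dist_mm):
    """Diameter (mm) of the blur disc on the sensor for a point at
    subject_dist_mm when the lens is focused at focus_dist_mm."""
    aperture_mm = f_mm / f_number                  # physical aperture diameter
    magnification = f_mm / (focus_dist_mm - f_mm)  # magnification at the focus distance
    return aperture_mm * magnification * abs(subject_dist_mm - focus_dist_mm) / subject_dist_mm

for n in (2.0, 8.0):
    c = blur_circle_mm(f_mm=50, f_number=n, focus_dist_mm=2000, subject_dist_mm=10000)
    print(f"f/{n:g}: blur disc ~ {c:.3f} mm")

# f/2 produces a blur disc four times wider than f/8 for the same scene:
# the geometry is identical, only the cone of admitted rays is bigger.

The wider cone means each out-of-focus point smears into a bigger disc on the sensor, and that disc is exactly the blur you see.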


This surely happens with the human eye as well. I think it's just really hard to control your experiment, since you can't actually snap a picture to compare side by side. In the time between evening and midday, or even in the half hour it takes your eyes to acclimate to a dark room, you lose any precise memory of how much blur there was. This is further complicated by the fact that your brain works very hard to correct all the defects of the eye and present a mental model of the entire world in perfect focus. (That's what the brain part of the human vision system does.)


It's very hard to look at just one spot; your eye flicks around subconsciously and builds a seemingly perfect image from one which is really only sharp in the center. This adds another huge complication: not only is the lens of the eye a relatively simple system with a lot of aberrations, but the sensor is also irregular. Or rather, it's highly specialized. The central area is called the fovea, and that's only about 1mm in diameter; the sharpest part, the foveola, is only about 0.2mm. That's where really sharp vision comes from. But this area doesn't contain any rods (the cells sensitive to dim light), so the sharp area isn't involved at all when you're in dim light. This makes a simple comparison with camera systems basically impossible.


On top of that, there's another flaw in your basic assumptions: the idea that the human eye sees the same amount of motion blur no matter the amount of light. In fact, the input is integrated over time, and that integration time increases at lower light levels. And "exposure" is controlled in another way as well: sensitivity is boosted in the dark, the effective equivalent of auto-ISO.
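As a rough back-of-envelope illustration, using nothing but the f-numbers quoted in the question, here's how big that pupil swing is in photographic stops:

from math import log2

# f-numbers quoted in the question: about f/8.3 in very bright light,
# about f/2.1 in the dark.
bright, dark = 8.3, 2.1
stops = log2((bright / dark) ** 2)   # light gathered scales with 1/N^2
print(f"Opening from f/{bright} to f/{dark} admits about {stops:.1f} stops more light")

# Roughly 4 stops; the eye covers the rest of a dark scene by integrating
# longer and boosting sensitivity, much like a camera stretching shutter
# time and ISO.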


So, to get to the direct question: it's the nature of optics, and so it also applies to our eyes. But our eyes are a different kind of system than a camera and lens. The human vision system features a simple lens, a complicated sensor, very complicated instantaneous post-processing, and an incredibly complicated storage and retrieval system. A camera generally uses a sophisticated lens, a comparatively straightforward sensor matrix, and comparatively straightforward post-processing (until computational photography comes into its own — whether Lytro succeeds this year or someone else five years from now). And the memory system is bit-for-bit perfect — not like human memory in the least.


Whether this difference is something we "like" and don't want to fix is a matter of interpretation. Certainly the idea of depth of field is in our artistic/visual vocabulary as a society; whether it will stay that way in a hundred years is a matter of speculation. My guess is yes, even as technology changes.


A camera with a different type of sensor, like that used in the Lytro, can actually record the direction of the incoming rays of light. This additional data allows these cameras to create an entirely-sharp image even with a very large aperture. But that's not how the Lytro company is selling it: instead, their gimmick is images where you can click to change the calculated point of focus on the fly. That they chose this route rather than the all-in-focus approach probably says something about what we, as viewers, currently find appealing.
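For the curious, the basic idea behind that kind of synthetic refocusing can be sketched as a simple shift-and-add over the recorded ray directions. This toy Python/NumPy function is my own illustration of the general technique, not Lytro's actual algorithm, and the names and parameters are made up for the example:

import numpy as np

def refocus(light_field, shift):
    """Shift-and-add refocusing of a 4D light field.

    light_field: array of shape (U, V, H, W), one small image per
                 sub-aperture position (u, v) across the lens opening.
    shift:       pixels of shift per unit of (u, v); different values
                 pick different synthetic focal planes.
    """
    U, V, H, W = light_field.shape
    out = np.zeros((H, W))
    for u in range(U):
        for v in range(V):
            # Shift each sub-aperture view in proportion to its position
            # on the aperture, then accumulate. Points on the chosen
            # focal plane line up and stay sharp; everything else blurs.
            du = int(round(shift * (u - U // 2)))
            dv = int(round(shift * (v - V // 2)))
            out += np.roll(light_field[u, v], (du, dv), axis=(0, 1))
    return out / (U * V)

# Example: refocus(lf, 0.0) keeps the plane the lens was focused on;
# other shift values move the synthetic focus nearer or farther.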

