Monday 23 February 2015

Does IBIS reduce image resolution? How does it compare to lens based IS?


I've heard a lot about in-body image stabilization recently, and I wonder how this technology works. I don't mean the mechanical part (I understand how the sensor moves along different axes to compensate), but the technique in principle:


When I use my 5D III with a lens like the 70-200 2.8 II IS USM with stabilization activated, the IS constantly moves one glass element within the lens so that it directs the cone of light exactly onto the sensor, and the full sensor area can always be used.


When the cone of light goes through an unstabilized lens and the sensor has to move instead, how can it capture the same amount of light at all? The light enters the lens along a fixed axis, right through the center of the lens. When the sensor moves down, for example, to compensate for a shake, I can't see how it is still able to capture the full resolution. If part of the sensor sits below its default resting position, then the same amount must be missing on the opposite side, right?


There's only one way I could imagine IBIS working without a reduction in resolution: if the sensor area were significantly bigger than the area exposed to light, so that there's still sensor available when it moves out of its initial position. But wouldn't this (partly) be the same as digital stabilization, because the final image gets calculated from the full sensor area minus the unexposed area?


I really would like to understand this and it would be great if someone could "shed some light on this".



Answer





There's only one way I could imagine IBIS working without a reduction in resolution: if the sensor area were significantly bigger than the area exposed to light, so that there's still sensor available when it moves out of its initial position.



It's actually exactly the opposite. The image circle (the result of that cone of light hitting the imaging plane) is larger than the sensor. It has to be, first because that's the only way to cut a full rectangle out of a circle, but also because the edges of the circle aren't clear-cut: it's kind of an ugly fade-out with a mess of artifacts. So, the circle covers more than just the minimum.


It's true that for IBIS to be effective, this minimum needs to be a bit larger. To give a concrete example: a full frame sensor is 36×24mm, which means a diagonal of about 43.3mm. That means the very minimum circle without moving the sensor needs to be at least 43.3mm in diameter. The Pentax K-1 has sensor-shift image stabilization, allowing movements of up to 1.5mm in any direction — so, the sensor can be within a space of 36+1.5+1.5 by 24+1.5+1.5, or 39×27mm. That means the minimum image circle diameter to avoid problems is 47.4mm — a little bigger, but not dramatically so.
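For anyone who wants to check the arithmetic, here's that calculation as a small sketch; the only inputs are the sensor dimensions and the 1.5mm travel figure quoted above:

```python
import math

# Full-frame sensor dimensions in millimetres
width, height = 36.0, 24.0

# Minimum image circle with a fixed sensor: the sensor's own diagonal
fixed_diameter = math.hypot(width, height)            # about 43.3 mm

# With up to 1.5 mm of shift in any direction, the sensor can end up
# anywhere inside a 39 x 27 mm box, so the circle must cover that
# box's diagonal instead.
shift = 1.5
shifted_diameter = math.hypot(width + 2 * shift, height + 2 * shift)  # about 47.4 mm

print(f"Fixed sensor needs a {fixed_diameter:.1f} mm circle")
print(f"With 1.5 mm of shift, it needs a {shifted_diameter:.1f} mm circle")
```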


But then, the rectangle the sensor cuts from the circle still has the same resolution. It's just shifted by a bit.


It's actually pretty easy to find some examples which demonstrate the image circle concept, because sometimes people use lenses designed for smaller sensors on cameras with larger sensors, which results in less-than-entire-frame coverage. Here's an example from this site... don't pay too much attention to image quality, as this is clearly a test shot taken through a glass window (with a window screen, even). But it illustrates the concept:


Image by Raj, from https://photo.stackexchange.com/questions/24755/why-does-my-nikkor-12-24mm-lens-vignette-on-my-nikon-d800#


You can see the round circle projected by the lens. It's cut off at the top because the sensor is wider than it is tall. This sensor measures (about) 36×24mm, but the lens is designed for a smaller 24×16mm sensor, so we get this effect.


If we take the original and draw a red box outlining the size of the smaller "correct" sensor, we see:


[Image: the original with a red frame outlining the smaller sensor area]



So, if the lens were used on the "correct" camera, the whole image would have been the area inside the box:


Image by Raj, cropped


You've probably heard of "crop factor". This is literally that.
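If you want the actual number, crop factor is just the ratio of the sensor diagonals. Here's a quick sketch with the 36×24mm and 24×16mm figures used above:

```python
import math

# Crop factor: ratio of the two sensor diagonals
full_frame = math.hypot(36.0, 24.0)   # about 43.3 mm
smaller    = math.hypot(24.0, 16.0)   # about 28.8 mm

print(f"Crop factor: {full_frame / smaller:.2f}")   # about 1.5
```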


Now, if IBIS needs to move the sensor quite a lot (here, the same relative amount as that 1.5mm travel limit on Pentax full frame), you might see this, with the lighter red line representing the original position and the new line the shifted position. You can see that although the corner is getting close, it's still within the circle:


[Image: the frame shifted within the image circle]


resulting in this image:


[Image: the shifted frame, cropped]


Actually, if you look at the very extreme bottom right corner, there's a little bit of shading that shouldn't be there — this contrived example goes a bit too far. In the extreme case of a lens which is designed to push the edges of the minimum (to save cost, weight, size, etc.), when the IBIS system needs to do the most extreme shift, it's actually possible to see increased artifacts like this in the affected corners of the image. But, that's a rare edge case in real life.
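If you want to sanity-check whether a given amount of shift pushes a corner outside the image circle, it's the same diagonal geometry as before. Here's a minimal sketch; the 46mm image circle is a made-up value for illustration, not the spec of any real lens:

```python
import math

def corner_outside_circle(width, height, shift_x, shift_y, circle_diameter):
    """Return True if any corner of a width x height sensor, shifted by
    (shift_x, shift_y) from the lens axis, falls outside the image circle."""
    radius = circle_diameter / 2
    # The corner farthest from the axis is the one the shift pushes outward.
    farthest_corner = math.hypot(width / 2 + abs(shift_x), height / 2 + abs(shift_y))
    return farthest_corner > radius

# Full-frame sensor, 1.5 mm shift in both directions, hypothetical 46 mm circle:
print(corner_outside_circle(36.0, 24.0, 1.5, 1.5, 46.0))   # True: that corner just clips the edge
```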


As Michael Clark notes, it's generally true that image quality falls off toward the edges of a lens's image circle, and if you're going for maximum resolution (in the sense of captured detail), shifting off center can impact that. But in terms of pixels captured, the count is identical.


In addition to the centering issue, this can also affect composition: if you are trying to be very careful about including or excluding something at one edge of the frame, but aren't holding still, you could be something like 5% off from where you thought you were. But of course, if you're not holding still, you'd get that error from the movement itself anyway.
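To put a rough number on that, using the same 1.5mm travel and full-frame dimensions as the Pentax example above:

```python
# Maximum shift relative to the frame dimensions of a 36 x 24 mm sensor
print(f"{1.5 / 36 * 100:.1f}% of the width")    # about 4%
print(f"{1.5 / 24 * 100:.1f}% of the height")   # about 6%
```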



In fact, Pentax (at least) actually uses this to offer a novel feature: you can use a setting to intentionally shift the sensor, allowing different composition (the same as a small amount of shift from a bellows camera or a tilt-shift lens). This can be particularly useful with architectural photography. (See this in action in this video.)


Also: it's worth thinking about what's going on over the course of the exposure. The goal is to reduce blur, right? If the camera is perfectly still (and assuming perfect focus, of course), every light source in the image lands in one place, resulting in a perfectly sharp rendering of that source in your image. But let's examine a fairly long half-second shutter speed during which the camera moves. Then you get something like this:



… where the movement of the camera during the exposure has made it draw lines instead of points. To compensate for this, image stabilization, whether in-lens or sensor-shift, doesn't just jump to a new location. It (as best it can) follows that possibly-erratic movement while the shutter is open. For video, you can do software-based correction for this by comparing differences from frame to frame. For a single photographic exposure, there's no such thing, so it can't work like the digital approach described in your quote at the top of this answer. Instead, you need a sophisticated mechanical solution.
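To make the "follows that movement" part concrete, here's a toy sketch of the general idea, not any manufacturer's actual algorithm: angular shake measured by a gyro is integrated over time, converted into an image-plane displacement via the focal length, and the sensor is driven to track that displacement continuously while the shutter is open. All the numbers here (sample rate, shake profile, focal length) are made up for illustration.

```python
import math

# Toy illustration of sensor-shift stabilization following motion during
# an exposure (my own sketch, not a real camera's algorithm).

focal_length_mm = 200.0   # e.g. the long end of a 70-200
dt = 0.001                # pretend 1 kHz gyro sampling, in seconds

# Made-up gyro readings: pitch rate in radians/second over a 0.5 s exposure.
gyro_pitch_rate = [0.02 * math.sin(2 * math.pi * 2 * i * dt) for i in range(500)]

angle = 0.0
max_shift = 0.0
for rate in gyro_pitch_rate:
    angle += rate * dt                                  # integrate angular velocity
    image_drift_mm = focal_length_mm * math.tan(angle)  # how far the image has moved
    sensor_target_mm = image_drift_mm                   # drive the sensor to follow it
    # A real system would command the actuator here, every sample, and clamp
    # the target to the mechanical travel limit (e.g. +/- 1.5 mm).
    max_shift = max(max_shift, abs(sensor_target_mm))

print(f"Largest sensor shift needed during this exposure: {max_shift:.2f} mm")
```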

