Saturday, 30 December 2017

lens - Does a wide angle equivalent on a crop sensor skew the image?


I see many posts, both on this forum and elsewhere, that discuss the use of a wide angle lens on a crop sensor. For example, they discuss how a 14mm lens is 14mm on 35mm film or full frame, but on a 1.6x crop sensor it is an effective 28mm. My question is: is a 14mm on a crop sensor the same image as a 28mm on a full frame? Or does "effective" carry some other connotation? I read that a 14mm can create distortion on a full frame and some vignetting, so people seem to say that using it on a crop sensor takes out some of those edge problems. My confusion is how the lens is actually taking in the extra image. Reading Ken Rockwell's blog, I learned that a wide angle lens keeps lines straight, while a fisheye actually skews the lines.


So with this as my understanding, I am still a bit confused how a lens that is curved in order to get a greater angle of view will produce the same image as a cropped version of a less curved lens. Or does "effective" only refer to the objects at the sides, but not to the distortion of the image? Or, if it is different, is the difference so small that for photography it does not matter? A technical description would be welcome, as well as a basic explanation of the situation.



Thanks!


Edit: To be more clear after the comments:


I am specifically talking about the end resulting 2D projection from 3D space. I have read both of those previous answers; the first one is closer to what I am talking about, but it is still confusing to me. I have experience with 3D modeling and projection matrices, which might be confusing me here.

For example, I don't understand the images of a 50mm lens being drawn starting from the sensor, so that it shows a different field of view for crop and full frame. If the lens is the same, it takes in the same amount of information from the world, which makes a ray at a certain angle off the last element project into a smaller space; there is no projection from the sensor, so drawing the line from the sensor doesn't make sense to me. The distance from the sensor to the rear of the lens must have some impact on the projected light, but it is not represented in the images.

Further, that first answer says cropping is the same as zooming, but from what I understand of how perspective works, it is different: a wide angle lens will project lines differently than a narrow field of view rendered at the same size, so cropping the center of a wide angle image and zooming into something are very different. This is easily recreated in an OpenGL application by varying the FOV of the projection matrix; I imagine people learn this in painting class as well. A fisheye would be a good example: if cropping were the same as zooming, a fisheye lens would keep the center exactly like a normal lens, and a gradient weighted towards the outside would rapidly create a warped perspective, but from what I see, it is even. To me those images just look like they are comparing crop with full frame in terms of orthogonal projections.



Answer



Here's the short answer: a wide angle lens on a crop sensor skews the image exactly in the way it does in the center of the frame on a full-frame sensor. In turn, this means that using a wide angle lens (small focal length) on a crop sensor gives the same perspective distortion as using a narrower lens (larger focal length) on a full frame sensor, with the increase in focal length directly corresponding to the reduction in frame size.


But you don't think this is right, so let's go into more depth. :)


I think this is well covered by the existing answers, but you seem to have some basic misconceptions that are too big to cover in the comments, so I'll try here. Note to other readers: if you are confused about this topic but don't necessarily follow the exact thought process this question goes through, I really suggest starting with one of the links I give below (or that are given in comments to the question above). But if you do feel like you are confused by the exact same things, read on.


First, Does my crop sensor camera actually turn my lenses into a longer focal length? is a good place to start. I know you don't believe it yet, but start from the premise that my answer there is correct, and then we'll work out why.


Next, let's consider wide angle distortion and what causes it. Check out the question What does it really mean that telephoto lenses "flatten" scenes? and particularly the lovely teakettle animation from Wikipedia:


[Animation from Wikipedia: a teakettle photographed at increasing focal lengths, with the camera moving back to keep the framing the same]



This is "wide angle distortion" — literally just a matter of perspective. Don't miss that in this example, the camera is moving back to keep the framing the same.


But, lenses often show other kinds of distortion. This is an unrelated problem due to the construction of the lens, where the projected image isn't the rectilinear ideal. See What are Barrel and Pincushion distortion and how are they corrected? This often happens to be particularly prominent in wide angle lenses, because wide angle lenses are physically hard to design. That's part of why fisheye lenses exist: they basically give up the ideal of a rectilinear projection and use other projections with names like "equisolid angle". But the important thing is that this is different from "wide angle distortion". The center of that projection might indeed look more natural than the edges (see Why doesn't my fisheye adapter give fisheye distortion on my APS-C DSLR?), but overall it is a red herring.
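To make the difference concrete, here is a quick Python comparison of where a ray at angle theta off the optical axis lands on the sensor, using the textbook rectilinear mapping r = f*tan(theta) and the equisolid-angle mapping r = 2f*sin(theta/2) (standard formulas, not something taken from the linked answers):

    from math import tan, sin, radians

    f = 14.0  # focal length in mm
    for theta_deg in (5, 20, 40, 60):
        theta = radians(theta_deg)
        r_rect = f * tan(theta)                # rectilinear: straight lines stay straight
        r_equisolid = 2 * f * sin(theta / 2)   # a common fisheye mapping
        print(f"{theta_deg:2d} deg off-axis: rectilinear {r_rect:5.1f} mm, "
              f"equisolid {r_equisolid:5.1f} mm")

Near the axis the two mappings agree to within a fraction of a percent, which is why the center of a fisheye frame looks almost normal; by 60 degrees off-axis they differ wildly.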


So, now, it is time to go into What is "angle of view" in photography?. I see that you are concerned that the 2D model I show doesn't represent 3D reality. That is a fair enough worry, but the key point is that this isn't a mapping from 3D to 2D (as a photograph is, or the top part of the animation above). It is simply taking the top view, as in the second part of the animation. This directly corresponds to the 3D situation. (If that doesn't make sense to you, tell me why not and we'll clear it up.)
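Given your projection-matrix background, here is one way to convince yourself of that, as a tiny Python check (pinhole model, illustrative points of my own): the horizontal image coordinate of a point depends only on its X and Z, never on its height Y, so the top view in the X-Z plane captures the horizontal framing exactly.

    f = 14.0  # focal length in mm
    # Three points at the same X and Z but different heights Y:
    for X, Y, Z in [(1.0, 0.0, 5.0), (1.0, 2.0, 5.0), (1.0, -3.0, 5.0)]:
        u = f * X / Z   # horizontal image coordinate in mm
        v = f * Y / Z   # vertical image coordinate in mm
        print(f"({X}, {Y}, {Z}) -> u = {u:.2f} mm, v = {v:.2f} mm")

All three points land at the same u, so nothing about the horizontal angle of view is lost by flattening the diagram to a top view.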


Another concern you have with the angle of view answer is with the way I've ignored the distance from the back of the lens to the sensor. Modern camera lenses are complicated, composed of many different lens elements, but they do reduce mathematically to the single point model (at least for this question). More at What exactly is focal length when there is also flange focal distance? and What is the reference point that the focal length of a lens is calculated from?
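As a hedged illustration of that reduction (standard thin-lens arithmetic, with example element values of my own, not something from those answers), even two separated elements collapse to a single effective focal length:

    def effective_focal_length(f1, f2, d):
        """Combined focal length of two thin lenses separated by d (all in mm)."""
        return 1 / (1 / f1 + 1 / f2 - d / (f1 * f2))

    # A 50 mm positive element plus a -100 mm element, 20 mm apart,
    # behave like a single lens of about 71.4 mm:
    print(effective_focal_length(50.0, -100.0, 20.0))

However complicated the real lens group, for field-of-view purposes it acts like a single pinhole at one effective focal length.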


You also say



If the lens is the same, it takes in the same amount of information from the world, which makes a ray at a certain angle off the last element project into a smaller space; there is no projection from the sensor, so drawing the line from the sensor doesn't make sense to me



I tried to cover this in the angle of view answer, and I'm not sure what wasn't clear, so I will repeat. Every lens has an image circle which is bigger than the sensor (or, more exactly, every lens that doesn't show black at the corners and edges does; see How does a circular lens produce rectangular shots?). That image circle is the "amount of information" the lens takes from the world. However, the camera only "takes" the part that actually hits the sensor, so the field of view of your actual photo only considers that part. That's why we draw the cone from the edges of the sensor. If it helps, you can draw some of the other rays as well and consider what happens to them. (In this model, they stay straight lines as they go through the lens.)


You also say, "My confusion is how the lens is actually taking in the extra image." Well, that's how: the lens is always taking in the same amount, and projecting the same amount, but we are recording a larger or smaller rectangle from it.



Finally, you certainly could set up a demo of this in OpenGL which would show exactly what I'm saying. If you're showing something different, it's because your model is changing something which doesn't correspond to what happens when we change focal lengths or sensor size in a camera. Figure out what that is, and correct it.
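Here is a sketch of that demo in plain Python rather than OpenGL (a pinhole model, with assumed sensor half-widths of 18 mm for full frame and 18/1.6 = 11.25 mm for the crop body): for every point in the scene, the normalized frame position with a 14 mm lens on the crop sensor matches a 22.4 mm lens on full frame exactly.

    def normalized_u(f_mm, half_width_mm, point):
        """Horizontal frame position, where -1..1 spans the frame width."""
        X, Y, Z = point
        return (f_mm * X / Z) / half_width_mm

    points = [(1.0, 0.5, 3.0), (-2.0, 1.0, 10.0), (0.3, -0.2, 1.5)]
    for p in points:
        crop = normalized_u(14.0, 18.0 / 1.6, p)   # 14 mm on a 1.6x crop
        full = normalized_u(22.4, 18.0, p)         # 22.4 mm on full frame
        print(f"{p}: crop {crop:+.4f}, full frame {full:+.4f}")

Every pair prints identical values, because 14/11.25 = 22.4/18. If your OpenGL version shows something different, compare it line by line against this model to find what else is changing.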




Oh, and an addendum: your initial example math has a mistake. A 14mm lens on a 1.6× sensor has a field of view equivalent to a 22.4mm lens (usually just rounded, because real focal lengths aren't that precise) on a full-frame sensor camera. (That's because 14mm × 1.6 = 22.4mm, to spell it out.) That means that for the same framing, you stand in the same place with a 14mm lens on APS-C or a 22mm lens on full-frame, so the perspective is the same. Alternately, if you have a 14mm lens on both cameras, you can stand in the same place and later crop the full-frame result by 1.6× (linearly) and get effectively the same photo.
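A quick numeric check of that, using the standard horizontal angle-of-view formula 2*atan(w / (2f)) and assumed sensor widths of 36 mm (full frame) and 36/1.6 = 22.5 mm (crop):

    from math import atan, degrees

    def angle_of_view(focal_mm, sensor_width_mm):
        """Horizontal angle of view of a rectilinear lens, in degrees."""
        return degrees(2 * atan(sensor_width_mm / (2 * focal_mm)))

    print(angle_of_view(14.0, 36.0 / 1.6))  # 14 mm on a 1.6x crop sensor
    print(angle_of_view(22.4, 36.0))        # 22.4 mm on full frame

Both lines print about 77.6 degrees, confirming the equivalence.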


The 14mm → 28mm example you give would, of course, match a 2× crop sensor, like Micro Four Thirds.

