Sunday 22 April 2018

theory - How can a high resolution camera matter when the output is low resolution?


The question is inspired by this question showing these pictures.

The accepted answer suggests that these pictures were taken by an 8x10 view camera, and the use of an 8x10 camera was confirmed in the comments.


My question is: How can you tell?




When viewed on the webpage these images are 496x620 = 0.37 megapixels (or 720x900 = 0.65 megapixels if you click for "full view").
So any camera with a resolution higher than 0.37 Mpx should be able to capture these pictures, meaning pretty much every smartphone and webcam on the market.


I know about Bayer sensors. But the worst-case effect of a Bayer sensor should be to reduce resolution by a factor of four: If you downscale the picture by a factor of two in each direction, each output pixel will contain data from at least one input sensel for each of the R/G/B channels.
Downscaling by a factor of four still means that any camera with more than 1.5 Mpx resolution (rather than the 0.37 Mpx of the output) should be able to capture these pictures. We're still talking about pretty much every smartphone and most webcams on the market.
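
To make that reasoning concrete, here is a toy numpy sketch of the worst-case Bayer argument; it assumes an RGGB layout, and the sensor size is just an illustrative value:

```python
import numpy as np

# Toy illustration of the worst-case Bayer argument above, assuming an
# RGGB layout:
#   R G
#   G B
# Every 2x2 block of sensels contains one R, two G and one B sample, so
# binning 2x2 blocks yields one full-colour output pixel per block.
def bayer_bin_2x2(mosaic):
    r = mosaic[0::2, 0::2]                               # one red sensel per block
    g = (mosaic[0::2, 1::2] + mosaic[1::2, 0::2]) / 2.0  # two greens, averaged
    b = mosaic[1::2, 1::2]                               # one blue sensel per block
    return np.stack([r, g, b], axis=-1)

# A hypothetical 1.5 Mpx mosaic (1000x1500 sensels) binned this way still
# yields a ~0.37 Mpx full-colour image, i.e. the output size in question.
raw = np.random.rand(1000, 1500)
print(bayer_bin_2x2(raw).shape)   # (500, 750, 3)
```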


I know about color depth. But JPG, the format we are using to view these pictures, is 8x3=24 bits. And according to the DxOMark scores there are several cameras, including the Sony NEX 7 and Nikon D3200, that are capable of capturing 24 bits of color.
So even if a $10 webcam can't quite capture the nuances in these pictures, a NEX 7 or D3200 should be able to do so.


I know that most lenses have lower resolution than what most sensors are capable of. As an example, the Nikkor 85mm f/1.4G is Nikon's sharpest lens according to DxOMark, and gives a best-case equivalent of 19Mpx resolution on a 24Mpx camera (the full-frame Nikon D3X), while the least-sharp lens has a best-case equivalent of 8Mpx on the same camera.

But the worst lens in their database still gives an order of magnitude more resolution than the output format of these examples.


I know about dynamic range. But these images control the lighting so they neither blow the highlights nor lose the shadows. As long as you are in a position to do that, dynamic range doesn't matter; it will be mapped to the 0-255 output range of JPG anyhow.
In any case, DxOMark says that several cameras with full frame or smaller sensors have a better dynamic range than the best of the medium format cameras.




That's what I know, and there is nothing in these fragments of theory that can tell me how it is possible to tell an 8x10 view camera from a Sony NEX 7 when you view the result as a 0.37 Mpx JPG.


Essentially, as far as I understand, it should be irrelevant how many megapixels and how much color depth the sensor can capture, as long as it's at least as much as the output format can represent.


Still, I don't doubt the judgement of the answer from Stan Rogers. And I've never seen anything similar, in terms of perceived sharpness, from small-sensor cameras.


Have I misunderstood what resolution means?


I guess I'm primarily asking about theory: How can a difference between two resolutions (measured in pixels, lp/mm, color depth or whatever) be visible in a display format that has less resolution than either of the originals?


Or to phrase it differently: Is there anything to stop me, in principle, from replicating these pictures down to the pixel by using a Sony NEX 7 and $10,000 worth of lighting?




Answer



It's all about the micro contrast. Look at the posts about APS-C versus full frame and then extend that difference to medium and large format sensors.


When do the differences between APS-C and full frame sensors matter, and why?


Following the theory of oversampling, it is better to sample at a higher rate and then downsample than to sample at the Nyquist limit from the start; i.e. if your end goal is 640x480, it is still better to use a 1280x960 sensor than a 640x480 sensor.
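
As a rough illustration of that oversampling point, here is a toy numpy sketch; the noise model and numbers are made up for illustration, not taken from any real sensor:

```python
import numpy as np

rng = np.random.default_rng(0)

# Compare sampling straight at the target resolution with oversampling
# and then averaging down. Model each capture as a flat signal plus
# per-pixel noise of the same strength.
signal, noise_sigma = 0.5, 0.05

# Capture directly at 640x480.
native = signal + rng.normal(0, noise_sigma, (480, 640))

# Capture at 1280x960, then average each 2x2 block down to 640x480.
oversampled = signal + rng.normal(0, noise_sigma, (960, 1280))
downsampled = oversampled.reshape(480, 2, 640, 2).mean(axis=(1, 3))

# Averaging four samples per output pixel roughly halves the noise,
# which is part of why the oversampled capture looks cleaner.
print(native.std())       # ~0.050
print(downsampled.std())  # ~0.025
```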


It doesn't matter how many megapixels you have when neighboring pixels depend on each other anyway, because the circle of confusion is larger than your pixels on the sensor plane. The lenses have a limited ability to resolve, too. Furthermore, you have to consider the lens "sharpness" versus its aperture, and a larger sensor allows you to get closer and get narrower DOF while stopped down, which means you can capture even more detail: the circle of confusion is larger, the lens is working with less diffusion, etc.


And then you have the "depth compression" done by the focal length of the lens, which is pretty aggressive in those shots and points to a telephoto. The FOV on a small sensor would require you to step back a long way and open up the aperture a lot to get that narrow DOF. However, running the numbers, with a full frame camera you could achieve it: 210 mm at 2 meters distance and f/8 would give a 4 cm DOF and a FOV that takes in just the face, like those shots.
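
For reference, a small sketch of the depth-of-field arithmetic behind those numbers; it assumes the conventional 0.03 mm circle of confusion for full frame, and the exact result depends on that assumption:

```python
# Thin-lens depth-of-field estimate for the 210 mm, 2 m, f/8 full-frame
# example above, assuming a 0.03 mm circle of confusion.
def dof_mm(focal_mm, f_number, distance_mm, coc_mm=0.03):
    hyperfocal = focal_mm ** 2 / (f_number * coc_mm) + focal_mm
    near = distance_mm * (hyperfocal - focal_mm) / (hyperfocal + distance_mm - 2 * focal_mm)
    far = distance_mm * (hyperfocal - focal_mm) / (hyperfocal - distance_mm)
    return far - near

print(round(dof_mm(210, 8, 2000), 1))   # ~39 mm, i.e. roughly the 4 cm quoted above
```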


Put another way: the larger the sensor relative to the subject, the less the lens has to work on the light rays to compress them into a tight spot. This increases the clarity of the shot, and it shows no matter the viewing distance (which is what is simulated by resizing the image to a lower resolution).
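
To put rough numbers on that idea (the 100 lp/mm lens figure and the 250 mm subject size below are made-up illustrative values, not measurements of the photos in question):

```python
# Detail delivered on the subject by a lens of fixed resolving power,
# for different sensor sizes framing the same subject.
# Illustrative values only: a lens resolving 100 lp/mm at the sensor,
# and a 250 mm head-and-shoulders subject filling the sensor's short side.
lens_lp_per_mm = 100.0
subject_mm = 250.0

for name, sensor_mm in [("APS-C (15.6 mm)", 15.6),
                        ("full frame (24 mm)", 24.0),
                        ("8x10 plate (203 mm)", 203.0)]:
    magnification = sensor_mm / subject_mm              # image size / subject size
    line_pairs_on_subject = lens_lp_per_mm * magnification * subject_mm
    print(f"{name}: magnification {magnification:.3f}, "
          f"{line_pairs_on_subject:.0f} line pairs across the subject")
```

The same lens resolution, projected at a higher magnification, simply puts more resolved line pairs on the subject, which is what the larger format buys you.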


Following discussions about detail enhancement and retention through resizing, here's a comparison of similar subjects: large format vs. full frame, and large format vs. APS-C:


Top: male faces with beard stubble. At the resolution of the site you link to, the beard is rendered with pixel-wide hairs, but all of that is lost at the same size as Matt's example, and the beards become diffuse. If we viewed Matt's image at the same size as the 8x10 photos on that site, we might see a big difference if the head isn't in focus. Even an APS-C system or a smaller sensor could produce this result (in terms of detail).


Bottom: if we compare the female face's eyelashes, at a similar size to how they appear on the webpage you showed, with an in-focus eye from an APS-C camera, sharpening is not going to bring the pores in the skin back. We might enhance the perception of the eyelashes, but at the cost of a bright halo around them.



We now see a huge "overall system" resolution difference: the APS-C camera + the lens used + the given low-res viewing resolution cannot render the same detail that the 8x10 camera + its lens + the same viewing resolution could. I hope my point is clearer now.


[image: details]


Another comparison to APS-C: beard stubble, after sharpening. Even though Stack Exchange resizes the images, we can still perceive a difference in clarity.


[image: APS-C]


In conclusion, the factors you are asking about, other than pixel resolution, are:



  • Total system resolution (lp/mm)

  • SNR

  • Magnification along the whole chain: from the subject to the sensor, to the screen you view it on at a given distance, to the projection onto your retina. The smaller (further below 1:1) the magnification at any part of the chain, the higher the demands on the two factors above, which in turn are negatively affected by the smaller projection area.



You'll get more details in a downscaled macro shot than you do without shooting macro in the first place.


A final proof that resolution before downscaling matters. Top: 21 MP full frame. Bottom: 15 MP APS-C with the same lens, aperture and focal length.


[image: two different sensor resolutions/sizes]


Now rescaled to equal resolution:


[images: Small / Smaller / Smallest / Tiny]


and applied a bit of sharpening to bring back some detail. What do you see? A bit more detail from the 21 MP FF camera, viewed at the same size/resolution, which would be equivalent down to a 3 MP camera. You can't count the lines in the rescaled image, but the perception that they are lines is real. Whether you want this or not is your creative choice, but if you start with the higher resolution (given by the total system) you get the choice. If you don't want them, you can blur the image before rescaling.
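
If you want to try the rescale-and-sharpen step yourself, here is a minimal Pillow sketch; the file name, output size and filter settings are placeholders rather than the values used for the examples above:

```python
from PIL import Image, ImageFilter

# Downscale with a high-quality resampling filter, then apply a light
# unsharp mask to bring back some perceived detail, as described above.
img = Image.open("large_format_scan.tif")          # placeholder file name
small = img.resize((720, 900), Image.LANCZOS)
sharpened = small.filter(ImageFilter.UnsharpMask(radius=1.5, percent=80, threshold=2))
sharpened.save("web_version.jpg", quality=92)

# If you'd rather not keep the aliased "perception of lines", blur a
# little before rescaling instead:
softened = img.filter(ImageFilter.GaussianBlur(radius=2)).resize((720, 900), Image.LANCZOS)
```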


One final experiment to show the difference between a small, low-res sensor and a larger, higher-resolution sensor, both rescaled and sharpened to the same resolution and shown at the SAME SIZE in the end, with ALL ELSE EQUAL. Cool, eh? How did I do that? I take my APS-C camera and simulate a "crop sensor" (smaller than my APS-C) by cropping a smaller image out of the frame. Then I move closer to the subject to fill the 4x larger sensor with the same subject; a portrait on a large-format sensor is basically a macro shot, taken much closer than you would with an APS-C camera. Same electronics quality, same lens, same settings, same light.
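
A minimal sketch of that experiment's processing steps, again with Pillow; the file names, crop and output size are placeholders:

```python
from PIL import Image, ImageFilter

# "Mini APS-C": the central quarter-area crop of a shot taken from
# further away stands in for a sensor with 1/4 the area.
far_shot = Image.open("far_shot.jpg")               # placeholder file name
w, h = far_shot.size
mini_sensor = far_shot.crop((w // 4, h // 4, 3 * w // 4, 3 * h // 4))

# "Large format": a shot taken about 2x closer, so the same subject
# fills the whole (4x larger) frame.
large_sensor = Image.open("close_shot.jpg")         # placeholder file name

# Rescale both to the same tiny output (~0.016 MP) and sharpen equally.
target = (160, 100)
for name, img in [("mini", mini_sensor), ("large", large_sensor)]:
    out = img.resize(target, Image.LANCZOS)
    out = out.filter(ImageFilter.UnsharpMask(radius=1.0, percent=100, threshold=1))
    out.save(f"{name}_rescaled.png")
```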



This is how it looks on the small sensor; let's call it "mini APS-C":


[image: small sensor]


Here we see the "large format" (the full APS-C frame):


[image: APS-C]


Here we see loads of detail, right? But that doesn't matter after we rescale it down to a 0.016 MP image and sharpen for the same overall contrast, does it?


[image: comparison at equal size]


But indeed it does! If you still don't believe me, I give up :)

