Monday 23 January 2017

optics - Can the method from the paper "High-Quality Computational Imaging Through Simple Lenses" compete with conventional lenses?


This answer to another question of mine linked to an interesting article: High-Quality Computational Imaging Through Simple Lenses. It proposes replacing the complex multi-element lens systems we are used to today with simple optics, using computational photography techniques to compensate for the artefacts that arise.
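The core operation the paper builds on is non-blind deconvolution: if the lens's point spread function (PSF) is known, the blur it causes can be partially inverted in software. The paper's actual algorithm is more sophisticated (as I understand it, it handles spatially varying PSFs and uses a cross-channel prior), but a minimal Wiener-filter sketch conveys the basic idea; the function name, the assumed uniform PSF, and the SNR parameter here are my own illustration, not the paper's method:

    # Minimal sketch of non-blind deconvolution with a known, spatially
    # uniform PSF. This is NOT the paper's algorithm; it only shows the
    # idea of inverting a measured blur in the frequency domain.
    import numpy as np

    def wiener_deconvolve(blurred, psf, snr=100.0):
        """Estimate the sharp image from `blurred`, given the lens PSF
        and an assumed signal-to-noise ratio."""
        # Pad the PSF to image size and centre it at the origin so its
        # FFT carries no phase offset.
        padded = np.zeros_like(blurred, dtype=float)
        h, w = psf.shape
        padded[:h, :w] = psf
        padded = np.roll(padded, (-(h // 2), -(w // 2)), axis=(0, 1))

        H = np.fft.fft2(padded)
        B = np.fft.fft2(blurred)
        # Wiener filter: conj(H) / (|H|^2 + 1/snr). The 1/snr term
        # keeps noise from exploding where |H| is close to zero.
        W = np.conj(H) / (np.abs(H) ** 2 + 1.0 / snr)
        return np.real(np.fft.ifft2(W * B))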


I'm an engineer and understand the mathematics behind the paper, but I seriously doubt that the designers behind today's far more complex commercial lenses haven't given it a thought; presumably they have, and they have a strong reason not to implement it. I understand that the nature of the PSF (point spread function) introduces problems at wider apertures, but there are cheaper, slower lenses for DSLRs today that could use this technology. If it were a viable alternative, it would already exist.
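To make the wide-aperture problem concrete (this is my own toy illustration, not an analysis from the paper): a wider aperture on a simple lens produces a larger blur spot, and a larger PSF suppresses more spatial frequencies outright, so deconvolution can restore them only by amplifying noise. Comparing disk-shaped PSFs of two radii, a crude stand-in for real lens blur, shows how much of the spectrum is nearly erased:

    # Fraction of spatial frequencies that a disk-shaped blur of a
    # given radius attenuates below 1% -- frequencies a deconvolution
    # step would have to boost 100x or more, noise included.
    import numpy as np

    def disk_psf(radius, size=256):
        y, x = np.mgrid[:size, :size] - size // 2
        psf = (x**2 + y**2 <= radius**2).astype(float)
        return psf / psf.sum()

    for radius in (2, 8):  # stand-ins for stopped-down vs wide-open blur
        mtf = np.abs(np.fft.fft2(np.fft.ifftshift(disk_psf(radius))))
        lost = np.mean(mtf < 0.01)
        print(f"blur radius {radius}px: {lost:.1%} of frequencies below 1%")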


Of course, if these lenses could compete with conventional lens systems, introducing them would cannibalize the manufacturers' own market for cheaper conventional lenses, but it would also give them an edge over the competition. There is also the (in my view very slim) chance that the designers simply haven't thought of it. Or it could be as simple as the method not delivering the quality that complex lens systems do.



Does this method have real substance and a real-world application, or is it just wishful thinking from a very academic point of view?


Note that I'm not picking on the scientists behind the paper in any way. New ideas are great and great discoveries are made all the time, but a lot of research never makes it into industry.



