Friday 20 March 2015

sensor - Why do cameras use a single exposure rather than integrating across many very quick reads?


I have never understood why cameras need a shutter with one specific speed, and why this cannot be adjusted in post-processing. As I understand it, current sensors work in an integral way: they accumulate the light reaching them during the whole time the shutter is open. But why can't they work in a differential way?


In my mind I have this idea: set the shutter to stay open for a long time, longer than the exposure you actually need. For example, in daylight set it to 1 second, press the button, the shutter opens, and the sensor starts to record, but differentially: it saves the amount of light reaching it every 0.001 second for the whole second. This way I have more information: in fact I have 1000 frames recorded in 1 second, and in post-processing I can choose to integrate only the first ten, to simulate a 0.01 second shot, or the first hundred, to simulate a 0.1 second exposure.
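To make the idea concrete, here is a minimal Python sketch. The subframes array, its dimensions, and the 0.001 second read interval are all illustrative assumptions (random data stands in for real reads); the point is only that a chosen exposure becomes a sum over the first n sub-frames:

    import numpy as np

    # Illustrative stand-in for 1000 raw reads of 0.001 s each:
    # shape (reads, height, width), values in linear sensor units.
    rng = np.random.default_rng(0)
    subframes = rng.poisson(lam=2.0, size=(1000, 120, 160)).astype(np.float64)

    def simulate_exposure(frames, n):
        # Summing (not averaging) the first n reads mimics the charge
        # a sensor would accumulate over n * 0.001 s.
        return frames[:n].sum(axis=0)

    shot_10ms = simulate_exposure(subframes, 10)    # ~0.01 s exposure
    shot_100ms = simulate_exposure(subframes, 100)  # ~0.1 s exposure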


Using either sophisticated processing or manually selected areas, I could even decide to use a different exposure for different parts of the final image, for example an exposure of 0.1 second for the ground and 0.03 second for the sky, using 100 frames for the ground and 30 frames for the sky.
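Continuing the sketch above, the per-region idea could look like the following; the sky_mask (here just the top rows of the frame) is a made-up placeholder for whatever selection or processing picks out the sky, and the shorter sky sum is rescaled so both regions end up on the same brightness scale:

    # Hypothetical mask marking sky pixels; here simply the top third.
    sky_mask = np.zeros((120, 160), dtype=bool)
    sky_mask[:40, :] = True

    ground = simulate_exposure(subframes, 100)          # 0.1 s worth of reads
    sky = simulate_exposure(subframes, 30) * (100 / 30) # 0.03 s, rescaled

    combined = np.where(sky_mask, sky, ground)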


Does this make sense? Why don't cameras work this way?




