Sunday 16 April 2017

image processing - Are deconvolution filters better than unsharp mask for correcting out-of-focus photographs?


What is the best method for enhancing image quality if a photograph happens to be out of focus?


I'm currently using unsharp mask, but I'm looking for new methods — perhaps there is one that gives better results than unsharp mask. I've heard about deconvolution filters, but I haven't used them in photography.



Answer



Deconvolution can in principle reverse the unsharpness, but it works best when you have low-noise images and you can extract the so-called point spread function accurately. Your camera settings caused a blur because a point in the scene affects not just one pixel but a group of pixels; the profile of the gray values in that group is called the point spread function. Given the point spread function, the problem of reversing the unsharpness is mathematically well defined, but it no longer has a unique solution once noise is added to the image.
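To make this concrete, here is a minimal sketch of one common deconvolution algorithm, Richardson–Lucy, implemented directly with NumPy and SciPy (the answer below uses ImageJ plugins instead; this is just an illustration of the principle, with a made-up synthetic scene and an assumed known Gaussian point spread function):

```python
import numpy as np
from scipy.signal import fftconvolve

def richardson_lucy(blurred, psf, iterations=30):
    """Richardson-Lucy deconvolution: iteratively refine an estimate so
    that re-blurring it with the PSF reproduces the observed image."""
    psf_flip = psf[::-1, ::-1]              # mirrored PSF for the correction step
    estimate = np.full_like(blurred, 0.5)   # flat initial guess
    for _ in range(iterations):
        reblurred = fftconvolve(estimate, psf, mode='same')
        ratio = blurred / np.maximum(reblurred, 1e-12)  # avoid division by zero
        estimate *= fftconvolve(ratio, psf_flip, mode='same')
    return estimate

# Toy demo: blur a synthetic scene with a known Gaussian PSF, then invert.
y, x = np.mgrid[-7:8, -7:8]
psf = np.exp(-(x**2 + y**2) / (2 * 2.0**2))
psf /= psf.sum()                            # normalise the PSF to unit mass
scene = np.zeros((64, 64))
scene[20:44, 20:44] = 1.0                   # a bright square on black
blurred = fftconvolve(scene, psf, mode='same')
restored = richardson_lucy(blurred, psf, iterations=50)
```

With a known PSF and little noise the restored image is much closer to the original scene than the blurred one; with real photographs, noise limits how far the iteration can be pushed.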


There are many algorithms for reconstructing the most likely image. I use the free ImageJ program with the Parallel Iterative Deconvolution plugin and the DeconvolutionLab plugin.


With these programs installed, you must have the point spread function as an image file. You must do all the work in a linear colorspace and decompose the color channels into separate image files.


But all that is quite routine, straightforward work; obtaining the point spread function is the non-trivial part that requires the most effort. If your image happens to contain what should be a point-like object (e.g. a star), then you can use that object as your point spread function. But suppose that this isn't the case. Then the best you can do is look for high-contrast edges in your image that, by your judgment, should be abrupt transitions but are smeared out over several pixels purely due to unsharpness. If there are several such edges running in different directions, then the point spread function becomes well defined. But if we make the simple assumption of an isotropic point spread function, then a single edge suffices and the math simplifies considerably.


Suppose then you take your image, transform it to a linear colorspace, and zoom in on some sharp edge. By zooming in sufficiently, any curvature of the edge will almost disappear. If you select an edge where the brightness doesn't change rapidly on either side, then when zooming in near the edge the brightness on either side will tend to a constant value away from the edge, apart from statistical fluctuations due to the noise.


So, what you have is one region where the average brightness some distance away from the edge is v1, and another region where it is v2. Near the edge there is a gradual transition from v1 to v2, which in the case of an isotropic point spread function P(r) that depends only on the distance r to the central pixel is given by:



v(d) = v1 + 2(v2-v1) Integral from d to infinity of arccos(d/r) r P(r) dr


where d is the distance from the edge into region 1; we take d to be negative to indicate moving into region 2.
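As a numerical sanity check of this formula, one can assume a Gaussian point spread function, for which the smeared edge profile is known in closed form (the one-dimensional Gaussian tail), and verify that the arccos integral reproduces it (sigma and d below are made-up test values):

```python
from math import acos, erfc, exp, pi, sqrt
from scipy.integrate import quad

sigma = 2.0  # width of the assumed Gaussian PSF

def P(r):
    # Isotropic Gaussian PSF, normalised so that 2*pi * Integral of r*P(r) dr = 1
    return exp(-r**2 / (2*sigma**2)) / (2*pi*sigma**2)

def mass2(d):
    # 2 * Integral from d to infinity of arccos(d/r) r P(r) dr: the fraction of
    # the PSF mass lying in region 2, so that v(d) = v1 + (v2 - v1) * mass2(d)
    val, _ = quad(lambda r: acos(d/r) * r * P(r), d, 50*sigma)
    return 2*val

# For a Gaussian PSF this must equal the 1-D tail 0.5*erfc(d/(sigma*sqrt(2)))
d = 1.5
print(mass2(d), 0.5*erfc(d/(sigma*sqrt(2))))
```

The two numbers agree to high precision, confirming the geometric picture behind the formula: arccos(d/r)/pi is the angular fraction of a circle of radius r around the pixel that lies beyond the edge. (As written, the formula covers d >= 0; the other side follows by symmetry.)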


Then it's convenient to use ImageJ's math macro to transform the gray values to g(d) defined as:


g(d) = (v(d) -v1)/(2*(v2-v1))


So, if v1 = 100 and v2 = 30, you just write the macro as


v1 = 100; v2 = 30; v = (v -v1)/(2*(v2-v1))


Then the relation between g(d) and the point spread function can be expressed as:


g'(d) = -Integral from d to infinity of r/sqrt(r^2 - d^2) P(r) dr


This integral equation for P(r) can be inverted to yield:


P(r) = 2/pi Integral from r to infinity of g''(s)/sqrt(s^2-r^2)ds



Which can be rewritten by substituting s = r cosh(t) as:


P(r) = 2/pi Integral from 0 to infinity of g''[r cosh(t)]dt
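As a numerical check of this inversion, one can again assume a Gaussian point spread function, for which g''(d) can be written down analytically, and verify that the cosh-substituted integral reproduces the PSF (sigma and r below are made-up test values):

```python
from math import cosh, exp, pi, sqrt
from scipy.integrate import quad

sigma = 2.0  # width of the assumed Gaussian PSF

def g2(d):
    # g''(d) for a Gaussian PSF, for which g(d) = 0.25*erfc(d/(sigma*sqrt(2)))
    return 0.5 * d * exp(-d**2 / (2*sigma**2)) / (sqrt(2*pi) * sigma**3)

def P_reconstructed(r):
    # P(r) = 2/pi * Integral from 0 to infinity of g''(r*cosh(t)) dt
    # (the integrand dies off extremely fast, so t up to 20 is ample)
    val, _ = quad(lambda t: g2(r * cosh(t)), 0, 20)
    return 2 * val / pi

def P_exact(r):
    return exp(-r**2 / (2*sigma**2)) / (2*pi*sigma**2)

r = 1.0
print(P_reconstructed(r), P_exact(r))
```

The reconstructed and exact values agree to high precision, which is a useful way to validate an implementation before applying it to a measured edge profile.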


What I find convenient to do is to transform the image displaying the function g by writing it as:


g(d) = 1/{2*[1 + exp(f(d))]}


So, I put f = log[1/(2*g) - 1]


using the ImageJ math macro.
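A small sketch (in Python rather than the ImageJ macro language) of why this reparametrisation is convenient: for an edge profile of exactly this logistic form, the transformed image f is exactly linear in d (p below is a made-up slope):

```python
import numpy as np

p = 0.8  # hypothetical slope of f(d) = p*d
d = np.linspace(-3.0, 3.0, 13)

# Edge profile of the assumed form g(d) = 1/(2*(1 + exp(f(d)))) with f = p*d
g = 1.0 / (2.0 * (1.0 + np.exp(p * d)))

# The transform f = log(1/(2*g) - 1) recovers the linear ramp exactly
f = np.log(1.0/(2.0*g) - 1.0)
print(np.allclose(f, p * d))  # True
```

Note the correct limiting behaviour is built in: g tends to 1/2 deep in region 2, to 0 deep in region 1, and equals 1/4 on the edge itself.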


Then a linear fit of the form f(d) = p d will already work quite well, as it correctly captures the value at zero and the asymptotic behavior at plus and minus infinity. But doing this is only the first step toward better approximations. The problem is then how to make such a fit, given that we have some image displaying f as gray values and an edge running across the image approximately along a straight line.


You don't want to waste a lot of time fiddling with measuring distances to a line that isn't very sharply defined in the first place. Instead, you make a linear fit in terms of the image coordinates x and y:


f(x,y) = a + b x + c y


Calculating a, b and c requires you to calculate the summations of f, x, y, f*x, f*y, x^2, y^2 and x*y, which you can easily do with ImageJ's measurement facilities. The fit of f as a function of the distance d to the line, of the form f(d) = p d, then follows: you have p = sqrt(b^2 + c^2). Moreover, you can make an image that has as its gray values the distance to the line, which is simply f/p. That "distance map image" is useful for making a higher-order fit of the form f(d) = p1*d + p3*d^3 + p5*d^5 + ...
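A sketch of that plane fit on synthetic data, assuming the f-image is available as a NumPy array (the sums are exactly the ones listed above, assembled into the normal equations; a_true, b_true, c_true are made-up test values):

```python
import numpy as np

rng = np.random.default_rng(0)
ny, nx = 50, 60
y, x = np.mgrid[0:ny, 0:nx]

# Synthetic f-image: a plane plus a little noise
a_true, b_true, c_true = 0.3, 0.02, -0.05
f = a_true + b_true*x + c_true*y + 0.01 * rng.standard_normal((ny, nx))

# Normal equations for f(x,y) = a + b*x + c*y, built from the summations
n = f.size
A = np.array([[n,        x.sum(),     y.sum()],
              [x.sum(),  (x*x).sum(), (x*y).sum()],
              [y.sum(),  (x*y).sum(), (y*y).sum()]], dtype=float)
rhs = np.array([f.sum(), (f*x).sum(), (f*y).sum()])
a, b, c = np.linalg.solve(A, rhs)

p = np.hypot(b, c)        # f(d) = p*d, so p = sqrt(b^2 + c^2)
dist_map = (f - a) / p    # gray values = signed distance to the edge line
                          # (offset a subtracted so that d = 0 on the edge)
print(a, b, c, p)
```

The recovered coefficients match the true plane closely despite the noise, and dist_map is the "distance map image" used for the higher-order fit.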



Having obtained an accurate functional representation of f(d), it's straightforward to numerically evaluate the integral that yields the point spread function.

