Tuesday, September 11, 2012

SOCIS - part 5

Continuing from part 4, we discuss smoothing and the methods employed to achieve it.

Last time we saw that certain anomalies appeared in the transformed image after the final smoothing attempt. The anomalies disappear when we do not take into account -1 to +1 (and vice versa) transitions of the t parameter and just use the simple smoothing formula tNew = 0.98 * tPrev + 0.02 * tCurrent.
The problem with this attempt is that the resulting image, while being free of discontinuities, becomes very blurry.
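The simple formula above is a standard exponential smoothing; a minimal sketch in Python (the names t_prev and t_current are illustrative, not taken from the actual POT code):

```python
def smooth_t(t_prev, t_current, alpha=0.98):
    """Simple exponential smoothing of the t parameter:
    blend the previous line's value with the current one."""
    return alpha * t_prev + (1.0 - alpha) * t_current
```

With alpha = 0.98 the previous line dominates, which is exactly why the result is smooth but blurry.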

For my next attempt I've been testing a smoothing formula that is a weighted sum of previous t values (the same is done for the means). The weight of each previous t value grows exponentially as you approach the current line, so the previous line has the largest weight. For example: tNew = (tCurrent + 2 * t(j - 3) + 4 * t(j - 2) + 8 * t(j - 1)) / 15
This method yields slightly better results than the simple smoothing formula that uses only the previous t value, when the number of previous values used is 4. By 'slightly' I mean barely noticeable: the bifr images for 0.2 bpppb look identical, but at 0.15 bpppb this method gives a slightly better smoothing.
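The weighted-sum formula can be sketched as follows; `history` is a hypothetical list holding the previous t values, oldest first, and is not a name from the actual POT code:

```python
def weighted_smooth(history, t_current):
    """Weighted sum of previous t values and the current one.
    The weight doubles with each step toward the current line
    (oldest value gets weight 2, most recent gets the largest),
    matching tNew = (tCurrent + 2*t(j-3) + 4*t(j-2) + 8*t(j-1)) / 15
    when history holds three previous values."""
    weights = [2 ** (i + 1) for i in range(len(history))]  # 2, 4, 8, ...
    total = t_current + sum(w * t for w, t in zip(weights, history))
    return total / (1 + sum(weights))
```

Keeping the history around for every component is also where the extra memory cost mentioned below comes from.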
You can see a comparison of the 2 cases below:

[comparison images: simple smoothing vs. weighted-sum smoothing]
I've compared the metrics of this case with those from the simple smoothing formula. The variances and the number of -1 to +1 jumps are roughly the same, while metric 3 is much larger in the weighted-sum implementation, so for lossy compression it doesn't have much influence. I've also tested with 2, 3 and 4 previous values: metrics 1 and 2 decrease and the discontinuities disappear. However, the image becomes blurry, so it's clear that this method is not good enough (not to mention the memory cost of retaining the previous values).


When considering how to eliminate discontinuities, reducing the variance of the differences is clearly important, as is reducing the number of -1 to +1 (and vice versa) jumps. To that end, I have incorporated an online variance computation into the POT code for use in smoothing. The implementation relies on the algorithm found here [1].
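An online variance computation of this kind is typically Welford's algorithm, which updates the mean and variance one sample at a time without storing the data. A sketch of the idea (not the actual POT code):

```python
class OnlineVariance:
    """Running mean and variance updated incrementally
    (Welford's algorithm): no need to keep past samples."""

    def __init__(self):
        self.n = 0
        self.mean = 0.0
        self.m2 = 0.0  # sum of squared deviations from the current mean

    def update(self, x):
        self.n += 1
        delta = x - self.mean
        self.mean += delta / self.n
        self.m2 += delta * (x - self.mean)

    def variance(self):
        """Population variance of the samples seen so far."""
        return self.m2 / self.n if self.n > 0 else 0.0
```

This fits the smoothing use case well: each new line's t difference can be fed to `update`, and the current variance queried at any point.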

The next thing worth exploring is perhaps a selective smoothing, which might solve both the discontinuity and the blurring problems. The idea is to smooth only the components for which the t differences exceed a given threshold or change the variance significantly, while leaving the other values the way they are.
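The threshold variant of this idea could look something like the sketch below; the threshold value and the reuse of the simple smoothing formula are assumptions for illustration, not decisions made in the post:

```python
def selective_smooth(t_prev, t_current, threshold=0.5, alpha=0.98):
    """Smooth a component only when the jump between consecutive
    t values exceeds the threshold; otherwise pass the current
    value through unchanged, avoiding unnecessary blurring."""
    if abs(t_current - t_prev) > threshold:
        return alpha * t_prev + (1.0 - alpha) * t_current
    return t_current
```

Components with small, well-behaved differences keep their exact values, so only the lines responsible for visible discontinuities pay the smoothing (and blurring) cost.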

