Last time we saw that certain anomalies appeared in the transformed image after the final smoothing attempt. The anomalies disappear when we ignore the -1 to +1 (and vice versa) transitions of the t parameter and just use the simple smoothing formula tNew = 0.98 * tPrev + 0.02 * tCurrent.
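For reference, here is a minimal sketch of that simple recursive formula. The function and variable names are mine, not from the POT code, and I assume t is available as one value per line:

```python
def smooth_t(t_values, alpha=0.98):
    """Recursive smoothing: tNew = alpha * tPrev + (1 - alpha) * tCurrent."""
    smoothed = [t_values[0]]  # the first line has no predecessor, keep it as-is
    for t_current in t_values[1:]:
        t_prev = smoothed[-1]
        smoothed.append(alpha * t_prev + (1 - alpha) * t_current)
    return smoothed
```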
The problem with this attempt is that the resulting image, while being free of discontinuities, becomes very blurry.
For my next attempt I've been testing a smoothing formula that is a weighted sum of previous t values (and likewise for the means). The weight of each previous t value grows exponentially as it approaches the current line, so the previous line has the largest weight. For example: tNew = (tCurrent + 2 * t(j - 3) + 4 * t(j - 2) + 8 * t(j - 1)) / 15
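A sketch of this weighted-sum variant follows. I assume the previously smoothed values feed back into the window (as in the recursive formula above); feeding in the raw previous values is the other plausible reading. With depth=3 and enough history, the weights are 1, 2, 4, 8 and the divisor is 15, matching the example:

```python
def smooth_t_weighted(t_values, depth=3):
    """tNew = (tCurrent + 2*t(j-3) + 4*t(j-2) + 8*t(j-1)) / 15 for depth=3."""
    smoothed = []
    for j, t_current in enumerate(t_values):
        total = t_current  # the current value has weight 1
        norm = 1.0
        for i in range(1, min(depth, j) + 1):
            w = 2 ** (depth + 1 - i)  # t(j-1) -> 8, t(j-2) -> 4, t(j-3) -> 2
            total += w * smoothed[j - i]
            norm += w
        smoothed.append(total / norm)
    return smoothed
```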
This method yields slightly better results than the simple smoothing formula that uses just the previous t value, when the number of previous values used is 4. By 'slightly' I mean barely noticeable: the bifr images for 0.2 bpppb look identical, but at 0.15 bpppb this method gives slightly better smoothing.
You can see a comparison of the two cases below:
When considering how to eliminate discontinuities, the variance of the differences is clearly a factor, and reducing it is important. Reducing the number of -1 to +1 (and vice versa) jumps also matters. To that end, I have incorporated an online variance computation into the POT code for use in smoothing. The implementation relies on the algorithm found here.
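The linked algorithm is presumably the online (Welford) method from the well-known "Algorithms for calculating variance" article. A sketch of how such a running tracker looks, with class and method names of my own choosing:

```python
class OnlineVariance:
    """Welford's online algorithm: updates the mean and variance one
    sample at a time, without storing the whole sequence."""

    def __init__(self):
        self.n = 0
        self.mean = 0.0
        self.m2 = 0.0  # running sum of squared deviations from the mean

    def push(self, x):
        self.n += 1
        delta = x - self.mean
        self.mean += delta / self.n
        self.m2 += delta * (x - self.mean)

    def variance(self):
        return self.m2 / (self.n - 1) if self.n > 1 else 0.0
```

In the smoothing context, each new t difference between consecutive lines would be fed to push(), so the variance estimate is always up to date as the lines are processed.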
The next thing worth exploring is perhaps selective smoothing, which might solve both the discontinuity and the blurring problems. The idea is to smooth only the components for which the t differences exceed a given threshold or change the variance significantly, while leaving the other values the way they are.
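A sketch of what that selective rule could look like, combining the simple formula from above with a threshold on the t difference. The threshold value is a placeholder rather than a tuned parameter, and the variance-based test (reusing the OnlineVariance class from the previous sketch) is only indicated in a comment:

```python
def smooth_t_selective(t_values, alpha=0.98, threshold=0.1):
    """Smooth a t value only when it jumps by more than `threshold`
    from the previous line; leave the other values as they are."""
    stats = OnlineVariance()  # tracks the running variance of the differences
    smoothed = [t_values[0]]
    for t_current in t_values[1:]:
        t_prev = smoothed[-1]
        diff = t_current - t_prev
        stats.push(diff)
        # A variance-based test could be added here as well, e.g. comparing
        # diff**2 against k * stats.variance() for some factor k.
        if abs(diff) > threshold:  # large jump: smooth it away
            smoothed.append(alpha * t_prev + (1 - alpha) * t_current)
        else:  # small change: keep the original value
            smoothed.append(t_current)
    return smoothed
```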