I've tried repeatedly to combine average smoothing and simple smoothing, but I either end up with discontinuities or reduce it to just simple smoothing. The problem is that after a reset, all information about the previous lines is gone, and the average smoothing has to start over as if it were a new image. I tried replacing the tCurrent values from the average computation with 0.984 * tPrev + 0.016 * tCurrent, but the results are pretty much the same. I also tried the following: take tNew from average smoothing, and if tNew differs from tPrev by more than 10%, smooth this tNew using simple smoothing (with resets after N lines, of course). This has shown slightly better results, but discontinuities still appear, and the result is blurry, like the simple smoothing approach. We see this here:
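To make the hybrid idea concrete, here is a minimal sketch of how I understand the combination: use the value from average smoothing directly, and only fall back to the simple exponential filter when it jumps more than 10% from the previous value. The function name, the exact deviation test, and the 0.984/0.016 weights applied in this spot are my assumptions for illustration, not the actual code.

```python
def smooth_t(t_avg, t_prev, alpha=0.984, rel_threshold=0.10):
    """Hybrid smoothing sketch (assumed structure, not the project's code).

    t_avg  -- value produced by average smoothing for the current line
    t_prev -- previous smoothed value
    If t_avg deviates from t_prev by more than rel_threshold (relative),
    blend it toward t_prev with the simple exponential filter;
    otherwise accept the averaged value as-is.
    """
    if t_prev != 0 and abs(t_avg - t_prev) / abs(t_prev) > rel_threshold:
        return alpha * t_prev + (1 - alpha) * t_avg
    return t_avg


# Small deviation (5%): the averaged value passes through unchanged.
print(smooth_t(1.05, 1.0))   # 1.05
# Large jump: heavily damped toward the previous value.
print(smooth_t(2.0, 1.0))    # 0.984 * 1.0 + 0.016 * 2.0 = 1.016
```

The damping is what produces the blur I mentioned: after a reset, large legitimate changes in t are suppressed almost as strongly as noise.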
I'm still exploring ways of getting a better result after taking resets into account. My mentor suggested a "reset" on the covariance that "deprecates" the old values so they weigh less than the new ones; I shall try this approach. I was also thinking about saving the averages to disk each time. It doesn't add to the overall complexity, since it's a simple matter of writing a line of averages after each line in the image, but it does add to the running time, since IO operations are time consuming.
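One common way to "deprecate" old covariance values, which I take to be the spirit of the suggestion, is to multiply the running accumulators by a decay factor before adding each new sample, so older lines contribute exponentially less. The accumulator layout and the decay value here are assumptions of mine, sketched for illustration:

```python
def update_cov_stats(stats, x, y, decay=0.98):
    """Sketch of decayed covariance accumulation (assumed scheme).

    stats = [n, sum_x, sum_y, sum_xy] -- running (decayed) accumulators.
    Multiplying them by decay before each update makes old samples
    weigh less than new ones; decay=1.0 recovers the ordinary stats.
    """
    n, sx, sy, sxy = (decay * v for v in stats)
    return [n + 1, sx + x, sy + y, sxy + x * y]


def covariance(stats):
    """Covariance estimate from the (decayed) accumulators."""
    n, sx, sy, sxy = stats
    return sxy / n - (sx / n) * (sy / n)


# With decay=1.0 this reduces to the plain sample covariance.
stats = [0.0, 0.0, 0.0, 0.0]
for x, y in [(1, 1), (2, 2), (3, 3)]:
    stats = update_cov_stats(stats, x, y, decay=1.0)
print(covariance(stats))  # 2/3 for these perfectly correlated samples
```

The appeal over a hard reset is continuity: the estimate drifts toward the new statistics instead of restarting from nothing.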
My mentor has updated the POT so that the discontinuity in the t variable is moved away from the initial -1/1 point to -sqrt(2)/2. This, in combination with average smoothing, should yield better results, though probably not noticeable on this image.
Other than that, there have been some implementation issues with the lossless case. The idea was to add a multiplication by s (which is either -1 or 1) to the lifting network for the second component (see the article on the POT mentioned in my first post). There have been some problems with this, but hopefully I will solve them and explain in more detail in my next post.
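For readers unfamiliar with the lifting trick, here is a generic sketch of a reversible rotation built from three lifting steps (shears), with the sign s = ±1 applied to the second component. This is the textbook rotation-by-lifting construction, not the project's actual lifting network; the function names and step order are my own illustration.

```python
import math


def rotate_lifting(x, y, theta, s=1):
    """Integer-reversible rotation by theta via three lifting steps,
    with a sign flip s = +/-1 on the second component (sketch only)."""
    t = math.tan(theta / 2.0)
    x = x + round(t * y)                  # shear 1
    y = y - round(math.sin(theta) * x)    # shear 2
    x = x + round(t * y)                  # shear 3
    return x, s * y


def unrotate_lifting(x, y, theta, s=1):
    """Exact inverse: undo the sign flip, then undo the shears in reverse.
    Each step is invertible despite the rounding, which is what makes
    the transform usable for lossless coding."""
    y = s * y                              # s * s == 1
    t = math.tan(theta / 2.0)
    x = x - round(t * y)
    y = y + round(math.sin(theta) * x)
    x = x - round(t * y)
    return x, y


# Round trip on integer samples, with and without the sign flip.
print(unrotate_lifting(*rotate_lifting(10, 7, 0.3), 0.3))          # (10, 7)
print(unrotate_lifting(*rotate_lifting(10, 7, 0.3, s=-1), 0.3, s=-1))  # (10, 7)
```

Because s only changes the sign of an integer, it preserves the reversibility of the lifting chain, which is why it looked like a safe place to hook the lossless modification in.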