Further Cinepak experiments

Having nothing better to do, I kept experimenting with the Cinepak encoder.

I considered implementing some variant of the codebook decomposition scheme suggested by Tomas in the comments to the previous post, but I’m still not sure whether I should bother even though it looks promising. So I tried the old thresholds-based scheme instead.

And what do you know, it speeds things up considerably: my usual test sample gets encoded in 27-35 seconds (depending on the thresholds) instead of 44 seconds in the usual mode. But since I don’t know what good thresholds would be, I went the other way and added a refinement mode: after deciding which codebook to use for each block, I re-generate the codebook using only the blocks that belong to it. Of course this increases processing time: that file takes 75 seconds to encode with refinement, about 70% more time but still less than double (for comparison, in full ELBG mode the time goes from about 160 seconds to 270 seconds).
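The refinement pass described above can be sketched roughly like this (a toy Python illustration, not the actual encoder code: `regen` stands in for rerunning the quantiser such as ELBG, and all names are mine):

```python
def sq_err(a, b):
    """Squared error between two equal-length vectors."""
    return sum((x - y) ** 2 for x, y in zip(a, b))

def best_error(block, codebook):
    """Distortion of the best-matching entry in a codebook for this block."""
    return min(sq_err(block, entry) for entry in codebook)

def refine_codebooks(blocks, codebooks, regen):
    """Partition blocks by which codebook fits each one best, then
    rebuild every codebook from only the blocks assigned to it."""
    parts = [[] for _ in codebooks]
    for blk in blocks:
        errs = [best_error(blk, cb) for cb in codebooks]
        parts[errs.index(min(errs))].append(blk)
    # keep the old codebook if no blocks were assigned to it
    return [regen(part, cb) if part else cb
            for part, cb in zip(parts, codebooks)]
```

With `regen` set to a simple averaging callback, blocks clustered around different values end up regenerating separate, tighter codebooks.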

So by a rough estimate, selecting only the relevant blocks for codebook generation shaves 20-40% off the encoding time, and splitting the data into partitions and generating the codebook by parts made the process about three times faster. I suspect that with a proper approach to clustering, vector quantisation can be made two to three times faster still, but I don’t think I want to experiment with that. I should call it a day and move on to something else instead.
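The partitioned generation mentioned above can be sketched like so (a hedged Python sketch under my own assumptions: `gen` is a placeholder for an ELBG-style codebook generator, and the vectors are sorted first so that neighbouring chunks stay coherent):

```python
def chunked(seq, parts):
    """Split a sequence into at most `parts` roughly equal chunks."""
    n = -(-len(seq) // parts)  # ceiling division
    return [seq[i:i + n] for i in range(0, len(seq), n)]

def codebook_by_parts(vectors, parts, entries_per_part, gen):
    """Build a small codebook per partition and concatenate them,
    instead of running the full quantiser on all vectors at once."""
    book = []
    for chunk in chunked(sorted(vectors), parts):
        book.extend(gen(chunk, entries_per_part))
    return book
```

Each `gen` call sees only a fraction of the data, which is where the speed-up comes from; the price is that entries are never shared across partitions.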

2 Responses to “Further Cinepak experiments”

  1. Neato. I had a look at the ELBG code in lavc and a couple of ways to speed it up come to mind:

    * pre-cluster points, perhaps using a threshold derived from DELTA_ERR_MAX
    * use a smarter data structure to speed up lookup of nearby points
    * multithreading
    * allow tuning DELTA_ERR_MAX

    Another way to do VQ quickly could be recursive linear discriminant analysis. Unlike median cut it is invariant to affine transformations, so there is no need to be clever with avg(Y) etc.
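The pre-clustering suggestion from the list above could look something like this for scalar points (an illustrative Python sketch; the `threshold` argument is a free parameter standing in for whatever one would derive from DELTA_ERR_MAX):

```python
def precluster(points, threshold):
    """Merge sorted points lying within `threshold` of the running
    cluster centroid into single weighted points, shrinking the
    input that the expensive quantiser has to process."""
    clusters = []  # list of (centroid, weight) pairs
    for p in sorted(points):
        if clusters and abs(p - clusters[-1][0]) <= threshold:
            c, w = clusters[-1]
            clusters[-1] = ((c * w + p) / (w + 1), w + 1)
        else:
            clusters.append((p, 1))
    return clusters
```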

  2. Kostya says:

    Personally I perform RLE (sorting the points first) and operate on (value, count) pairs afterwards. Smoothing should be able to reduce the number of points even further, but I still believe the real gain can be obtained from smarter point classification.
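    The (value, count) representation described above can be sketched as follows (a small Python illustration of the idea, not the actual code):

```python
from itertools import groupby

def rle_points(points):
    """Sort the points and collapse duplicates into (value, count)
    pairs so later passes touch each distinct value only once."""
    return [(v, sum(1 for _ in g)) for v, g in groupby(sorted(points))]

def weighted_centroid(pairs):
    """Centroid over (value, count) pairs: identical to averaging the
    expanded point list, but costs one step per distinct value."""
    total = sum(c for _, c in pairs)
    return sum(v * c for v, c in pairs) / total
```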

    Anyway, I think I’ve played with it enough and there are other things to try.