Having nothing better to do, I kept experimenting with the Cinepak encoder.
I considered implementing some variant of the codebook decomposition scheme suggested by Tomas in the comments to the previous post, but I’m still not sure whether I should bother even though it looks promising. So I tried the old thresholds-based scheme instead.
And what do you know, it speeds things up considerably: my usual test sample gets encoded in 27-35 seconds (depending on the thresholds) instead of 44 seconds in the usual mode. But since I don’t know what good thresholds would be, I did the opposite and added a refinement mode: after deciding which codebook to use for each block, I re-generate the codebook using only the blocks that belong to it. Of course this increases processing time; for example, that file takes 75 seconds to encode with refinement, which is about 70% more time but still less than double (for comparison, in full ELBG mode refinement raises the time from about 160 seconds to 270 seconds).
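To make the idea concrete, here is a minimal sketch of the threshold-plus-refinement flow. It is not the actual encoder code: the `Vector` type, `Mode`, the threshold value and `generate_codebook()` (a stand-in for real ELBG) are all made-up names for illustration, and the real Cinepak V1/V4 decision is more involved than this.

```rust
// Illustrative sketch only; see the caveats above.

/// A 2x2 patch flattened into four samples; Cinepak codes a 4x4 block
/// either as one V1 entry (downsampled to 2x2) or as four V4 entries.
type Vector = [u8; 4];

#[derive(Clone, Copy, PartialEq)]
enum Mode { V1, V4 }

/// Squared Euclidean distance between two vectors.
fn dist(a: &Vector, b: &Vector) -> u32 {
    a.iter().zip(b.iter())
        .map(|(&x, &y)| { let d = i32::from(x) - i32::from(y); (d * d) as u32 })
        .sum()
}

/// Distortion of the best match for `v` in the codebook `cb`.
fn best_dist(v: &Vector, cb: &[Vector]) -> u32 {
    cb.iter().map(|e| dist(v, e)).min().unwrap_or(u32::MAX)
}

/// Placeholder codebook generation; the real encoder runs ELBG here.
fn generate_codebook(src: &[Vector], size: usize) -> Vec<Vector> {
    src.iter().take(size).copied().collect()
}

/// Classify blocks with a threshold, then rebuild each codebook from
/// only the blocks that were assigned to it.
fn encode_frame(blocks: &[Vector], v1_threshold: u32) -> (Vec<Vector>, Vec<Vector>, Vec<Mode>) {
    // initial codebook generated from the whole frame
    let cb_v1 = generate_codebook(blocks, 256);

    // threshold-based decision: take the cheap V1 coding when it is
    // already good enough, fall back to V4 otherwise
    let modes: Vec<Mode> = blocks.iter()
        .map(|b| if best_dist(b, &cb_v1) < v1_threshold { Mode::V1 } else { Mode::V4 })
        .collect();

    // refinement pass: regenerate each codebook from its own blocks only
    let v1_src: Vec<Vector> = blocks.iter().zip(modes.iter())
        .filter(|&(_, &m)| m == Mode::V1).map(|(&b, _)| b).collect();
    let v4_src: Vec<Vector> = blocks.iter().zip(modes.iter())
        .filter(|&(_, &m)| m == Mode::V4).map(|(&b, _)| b).collect();
    let cb_v1 = generate_codebook(&v1_src, 256);
    let cb_v4 = generate_codebook(&v4_src, 256);

    (cb_v1, cb_v4, modes)
}

fn main() {
    let blocks = vec![[16u8, 32, 48, 64]; 1024];
    let (_cb_v1, _cb_v4, modes) = encode_frame(&blocks, 1000);
    println!("{} of {} blocks coded as V1",
             modes.iter().filter(|&&m| m == Mode::V1).count(), modes.len());
}
```

The refinement pass doubles the number of codebook-generation runs, which is where the extra encoding time comes from, but in exchange each codebook only has to describe the blocks that actually use it.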
So by a rough estimate, selecting only the relevant blocks for codebook generation shaves 20-40% off the encoding time. And splitting the data into partitions and generating a codebook by parts (sketched at the end of this post) made the process about three times faster. I suspect that with a proper approach to clustering, vector quantisation can be made two to three times faster, but I don’t think I want to experiment with that. I should call it a day and move on to something else instead.
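For completeness, here is a rough sketch of that partition-based generation as well, under the assumption that “by parts” means splitting the input vectors into chunks, quantising each chunk into its share of the codebook and concatenating the results; `generate_codebook()` is again a made-up stand-in for ELBG.

```rust
// Illustrative sketch only; the equal split of data and codebook
// entries is an assumption, not the encoder's actual scheme.

type Vector = [u8; 4];

/// Placeholder for per-partition codebook generation (ELBG in practice).
fn generate_codebook(src: &[Vector], size: usize) -> Vec<Vector> {
    src.iter().take(size).copied().collect()
}

/// Split the data into `parts` chunks, generate a small codebook from
/// each chunk independently and concatenate the results. Every run sees
/// only a fraction of the input, which is where the speed-up comes from.
fn partitioned_codebook(data: &[Vector], parts: usize, total_size: usize) -> Vec<Vector> {
    let chunk_len = ((data.len() + parts - 1) / parts).max(1);
    data.chunks(chunk_len)
        .flat_map(|chunk| generate_codebook(chunk, total_size / parts))
        .collect()
}

fn main() {
    let data = vec![[0u8; 4]; 4096];
    // e.g. four partitions each contributing 64 entries of a 256-entry codebook
    let cb = partitioned_codebook(&data, 4, 256);
    assert_eq!(cb.len(), 256);
}
```

Since each pass of an LBG-style algorithm costs roughly (number of vectors) × (codebook size), four runs over quarter-sized inputs with a quarter of the entries each come to about a quarter of one full run, which is in the ballpark of the observed three-fold speed-up; the trade-off is that entries are never shared across partitions.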