Archive for August, 2025

When video DPCM codecs roamed the Earth

Saturday, August 16th, 2025

Back in the mid-1990s, video on computers was slowly getting in demand while processing power was still very limited. Old paletted video formats were slowly going away (though Smacker was still holding strong) and for hi-colour video RLE was not a good choice in terms of compression, while DCT was not a good choice in terms of CPU cycles required. Thus the niche was partially filled by block-based vector quantisation codecs (like Cinepak) and various codecs that compressed the difference between the previous and current pixel in some way (hence the DPCM name, even if the term is more commonly associated with audio compression).
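The general pixel-DPCM idea these codecs shared can be sketched in a few lines: each pixel is stored as a difference from its neighbour, so decoding is just a running sum. This is a toy illustration of the principle, not the bitstream of any actual codec, and the names and the starting value are made up.

```python
# Toy sketch of pixel DPCM: each pixel is a delta from the
# previous one, so decoding is a running sum with 8-bit wraparound.
def decode_dpcm_line(deltas, left=128):
    """Reconstruct one line of pixel values from deltas."""
    line = []
    for d in deltas:
        left = (left + d) & 0xFF  # wrap around like 8-bit arithmetic
        line.append(left)
    return line

print(decode_dpcm_line([3, -1, 0, 5]))  # [131, 130, 130, 135]
```

The appeal for the era is obvious: one addition per pixel, no transforms and no multiplications.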

So today I’m going to give a review of these codecs and how Motion Pixels fit in.

MVI2: done

Thursday, August 14th, 2025

I’m almost done with Motion Pixels at last. Of course I skipped implementing some exotic modes, but at least the files I could find play fine and don’t complain about missing modes. I just need to put the finishing touches on it and commit it all, probably on Saturday.

The next post should be dedicated to the intricate details of the codec(s) and a comparison to its better-known competitors with a similar design (TrueMotion 1/RT/2/2X and Indeo 2/3), but for now all I need to say is that frames may be coded in several modes (RGB or YUV with one chroma sample per 2×1, 2×2, 4×2 or 4×4 block); some parts of a frame may use low-resolution delta coding (with its own peculiarities depending on line number and sampling mode); and since that was not enough, they’ve added a smooth delta coding mode (which also has its own peculiarities in low-resolution coding mode). And of course there’s a single-field coding mode. And some features seem to be duplicated using different flags. Since I’ve not found any samples for most of them, I simply implemented the basic modes: the 4×4 YUV mode with lowres, and all YUV modes with optional lowres and smooth delta coding (since MovieCD samples seem to exercise them all).

The best part is that nobody cares. NihAV can’t be interfaced with MPlayer easily, and discmaster.textfiles.com is not likely to change anything (e.g. files there are recognised as aviAudio type despite having a video track that nihav-encoder can decode just fine; or BATH06.MOV—no link since it’s the only one in the database—which can be converted with the same tool but isn’t even recognised as a QT MOV; so I don’t expect that MVI1/2 files will get a video track either). And I’ve never seen anybody caring about the codec much, there being no notable content coded with it for starters.

Anyway, this waste of time is over, so what’s next? While searching for the samples I found a couple of other MVI formats that may be good candidates for na_game_tool. There is a lot of janitorial work for NihAV as well (for example, I want to rewrite the AVI demuxer—it’s one of the first pieces of code I implemented for the project and now I see that some things could’ve been done differently and better). And I’ve finally decided on a name for the new tool: na_eofdec (NihAV exotic/obscure formats decoder). Now all that’s left is to RE and implement enough formats for a release of both of those tools.

Don’t expect any of this to happen soon though, as I am lazy and work on it only when I’m in the mood. For example, this post might’ve been about why wavelet compression for video (and images) inherently sucks—but I still haven’t gotten into the proper mood for writing it.

MVI2: some news

Friday, August 8th, 2025

First of all, here’s some information for context: MVI codecs rely on out-of-band flags to signal what capabilities and subsampling they use (the fact that they decided to store those flags in the FOURCC is a different annoyance); and despite the potential variety, only a couple of flag combinations are used for each codec. For instance, of all the MVI1 files I saw, only one flag has ever been in use (golden frame—and only in one game). MVI2 has two distinct flag combinations, 0x804 and 0x200. The former means bog-standard MVI coding (with one chroma sample set per 4×4 block) plus one extension feature; the latter means MVI2 version 2 (if that makes any sense), where they decided to make subsampling and features selectable per frame (as well as adding more of them) and moved the flags to the frame header while they were at it.
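The split between the two observed flag combinations might be dispatched like this. Only the two values and their rough meanings come from the observations above; the constant names and the idea of an early dispatch are my own illustration, not the actual decoder structure.

```python
# Hypothetical dispatch on the two observed MVI2 flag combinations.
# The values come from observation; the names are made up.
MVI2_LOWRES_VARIANT = 0x804  # standard 4x4 coding plus one extension
MVI2_V2_VARIANT     = 0x200  # per-frame features in the frame header

def pick_decoder(flags):
    if flags == MVI2_LOWRES_VARIANT:
        return "mvi2_lowres"
    if flags == MVI2_V2_VARIANT:
        return "mvi2_v2"
    raise ValueError(f"unsupported MVI2 flags {flags:#x}")
```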

So far I have concentrated my efforts on format 0x804 to see what that extension feature is. It turned out to be low-resolution deltas, just like in TrueMotion 2. In this mode every odd pixel is coded as the previous pixel plus half of the luma delta for the next pixel. I still have to make the reconstruction run properly, but that’s nothing a lot of debugging can’t fix.

This should allow me to decode even some of the MovieCD samples (including the one hidden in samples.mplayerhq.hu/drivers32 for some reason), and I’ve seen quite recognisable frames already.

It’s hard to tell what features the other flavour uses, but it’s reasonable to assume that it uses lowres coding as well. Hopefully I’ll get to it soon.

Update from Saturday: after dealing with the annoyance of a different delta coding scheme per line type, I can now decode the few files I could find (including a couple of MovieCDs from archive.org) just fine. The second flavour seems to use alternative composing/rendering functions and reads the maps differently as well. So while it’ll take more time, at least I’m closer to completion.

MVI1: done

Sunday, August 3rd, 2025

In the last post I wrote about how I’d managed to reconstruct a recognisable picture for the MVI1 codec. After I fixed the prediction code it started to work properly. Surprisingly, the Treasure Quest game proved to be a source of MVI1 files in all formats (RGB, YUV422, YUV420, YUV410 and YUV4½0—the last one has one set of chroma samples per 4×4 block and is the most common MVI format in general). Additionally it has MVI1 samples with the golden frame feature (I named it after a feature in a family of competing codecs that started with a rather similar coding approach): frame 0 consists of two intra frames, with the second one serving as the background for the following frames; and there is an additional map mode which says that certain rectangles should be copied from the golden frame (instead of the previous frame, or being filled with one colour). MVI2 seems to have an extension of that mode but I’ll see about it when I get to it (and if I obtain samples using that mode).
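The golden-frame map described above boils down to “each rectangle names its source”. A rough sketch of that idea, assuming a simple rectangle list; the structure and names are illustrative guesses, not the actual bitstream layout.

```python
# Sketch of a golden-frame map: each rectangle is either copied
# from the previous frame, copied from the golden frame, or filled
# with one colour. Layout and names are hypothetical.
def apply_map(cur, prev, golden, rects):
    """cur/prev/golden are 2-D lists of pixels; rects is a list of
    (x, y, w, h, source) with source in {'prev', 'golden', colour}."""
    for x, y, w, h, src in rects:
        for j in range(y, y + h):
            for i in range(x, x + w):
                if src == 'prev':
                    cur[j][i] = prev[j][i]
                elif src == 'golden':
                    cur[j][i] = golden[j][i]
                else:                 # fill with one colour
                    cur[j][i] = src
```

The nice property is that a static background only has to be coded once (in frame 0) and can then be restored per rectangle at the cost of a map entry.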

So, MVI2 next. Considering the number of extensions they added (and how they interfere with frame reconstruction) it’s probably not going to be easy, but now I have a base to extend instead of blind guesses to make.

Motion Pixels: breakthrough!

Friday, August 1st, 2025

As I mentioned last month, I decided to reverse engineer the Motion Pixels codecs, and after a lot of failed attempts to make the decoder work I’ve finally got something.

First of all, here are two frames from different videos.

MVITEST.AVI:

And a frame from SHOT09.AVI (from Apollo 18 game):

As you can see, the images are not perfect but already recognisable. And just last week it would’ve been a mess of colours with some barely recognisable distorted shapes.

The reason for this is that while MVI1 (and MVI2) is indeed based on the stand-alone MVI format (which I call MVI0 for clarity), there are some nuances. At first glance MVI0 and MVI1 are the same—all the steps are the same—and indeed you can use the same code to decode data from either, but the reconstruction steps differ significantly.

Essentially there are four steps: decode the rectangles defining which parts of the frame will be left intact or filled with one colour, decode the deltas used to reconstruct the rest of the pixels, use some of those deltas to generate predictors for each line (where needed), and use the remaining deltas to reconstruct the remaining pixels. Additionally MVI employs a chroma subsampling mode, so only one pixel in a 1×1 to 4×4 block (depending on the mode) has delta differences applied to its chroma; all other pixels update only the luma component. So if you don’t handle this correctly you may end up applying deltas intended for luma to the chroma components and vice versa. That’s what I got for a long time and could not understand why.
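The subsampling rule is the part that is easy to get subtly wrong, so here is a sketch of just that step: every pixel consumes a luma delta, but only one pixel per block also consumes chroma deltas. Which pixel in the block carries the chroma is an assumption here (I picked the top-left one), as are all the names and the YUV layout.

```python
# Sketch of delta application with chroma subsampling: every pixel
# gets a luma delta, but only one pixel per block_w x block_h block
# (assumed: the top-left one) also gets chroma deltas. Misaligning
# this consumes the delta stream at the wrong rate.
def reconstruct_line(line, y, deltas, block_w, block_h):
    """line is a list of [Y, U, V] pixels updated in place."""
    di = iter(deltas)
    for x, pix in enumerate(line):
        pix[0] = (pix[0] + next(di)) & 0xFF        # luma delta, every pixel
        if x % block_w == 0 and y % block_h == 0:  # chroma carrier pixel
            pix[1] = (pix[1] + next(di)) & 0xFF
            pix[2] = (pix[2] + next(di)) & 0xFF
```

Since the luma and chroma deltas come from one shared stream in this sketch, getting the carrier position wrong shifts every subsequent delta to the wrong component—exactly the “luma applied to chroma” failure mode described above.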

It turns out that vertical prediction has its pixel sampling at a different position—or maybe it’s scrambled in the same way as line prediction. For the most common mode (one set of chroma components per 4×4 block) each group of four lines is decoded in reverse order (i.e. 3, 2, 1, 0, 7, 6, 5, 4, …). For 2×2 blocks only the lines in pairs are reversed. You can see artefacts of the wrong prediction on the Apollo 18 frame.

Anyway, having a recognisable picture means that the hardest part (for MVI1) is done, so all that’s left now is to fix the remaining bugs, refactor the code and move on to MVI2. There are other annoying things there, but now I know how to deal with them.

BTW, if you’re curious why it takes so long, the problem is that the binary specification is obfuscated to the point that Ghidra refuses to decompile most of the MVI1 decoder functions, and it can’t do much about the reconstruction functions since they’re a mess of spaghetti code (probably written directly in assembly language), so it’s more of a state machine than a decoding loop. And they abuse segment registers to access different parts of the context (which must be the reason why it cannot work under OSes from this millennium). I got some progress when I resorted to debugging this mess by running the MVI2 player in OllyDbg under Win95 (emulated in DosBox-X) and constantly referring to Ghidra to see where to put a breakpoint to trace a certain function. That process is definitely not fun for me but it gave results.

Overall, it probably could’ve gone better, but I hope the rest won’t take as long.