Archive for the ‘Various Video Codecs’ Category

A quick glance at another bunch of codecs

Saturday, August 23rd, 2025

Since I can’t do anything but look at various codecs, I did exactly that. So here are some details about codecs nobody cares about.

First, I looked at a video codec used in videos (movies and TV series) for a certain hand-held console. Despite it coming from Majesco, the video data starts with a VXGB magic, reminiscent of a certain other codec for a slightly newer hand-held console whose data starts with VXDS. Structurally it’s very close to it as well, being a simplified H.264 rip-off. My REing efforts were thwarted by the binary specification’s organisation: while the code is supposed to reside in the data segment, it constantly calls functions from the fast RAM area with no apparent place where they are initialised. I suspect some of that code is duplicated there for performance reasons, but I haven’t found the place where that copying is performed. Oh well, nobody cares about the format anyway, why should I be an exception?

Then, there’s a whole family of Pixar codecs. The Toy Story game relied on a bunch of QuickTime codecs made by them. There are decoders provided for the pix0–pix7 and pixA codecs while the game itself seems to have content only in pix0, pix3, pix4, pix5, pix7 and pixA formats. The binary specification causes Ghidra decompilation failures (mostly in the functions responsible for the decoding) so I could figure something out and something is left as an exercise to the reader.

All codecs are paletted and most of them operate on 4×4 tiles. Pixar codecs 0 and 4 are actually raw formats (with data re-arranged into 4×4 tiles). Codecs 3 and 5 are similar: they maintain a list of tiles (transmitted at the beginning of the frame; a frame can update some tiles in the list) and image data is coded as a series of opcodes meaning “draw tile number N”, “leave next N tiles unchanged” or “restore next N tiles from the background image” (that image is stored in some other file, likely compressed with codec 0 or 4). Codec 7 seems to employ static Huffman coding (and I don’t know much besides that fact). Codec A looks like some kind of RLE but I may be wrong.
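The tile-opcode scheme for codecs 3 and 5 can be sketched in a few lines. This is a toy model of my own devising: opcode values, names and data layout are invented for illustration, only the three operations themselves come from the description above.

```python
# A toy sketch of the tile-opcode scheme described for Pixar codecs 3 and 5.
# Opcode values and layout are made up; only the operations are from the post.
DRAW_TILE, SKIP_TILES, RESTORE_TILES = 0, 1, 2

def apply_tile_ops(ops, tiles, frame, background):
    """Apply (opcode, argument) pairs to a frame stored as a flat list of
    tile contents; `background` has the same layout."""
    pos = 0
    for op, arg in ops:
        if op == DRAW_TILE:        # "draw tile number N"
            frame[pos] = tiles[arg]
            pos += 1
        elif op == SKIP_TILES:     # "leave next N tiles unchanged"
            pos += arg
        else:                      # "restore next N tiles from the background"
            frame[pos:pos + arg] = background[pos:pos + arg]
            pos += arg
    return frame
```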

P.S. I also started some code re-organisation and improvement. For example, I finally got rid of the ByteReader/ByteWriter wrappers over I/O objects so there’s less boilerplate code—but unfortunately I’ll need to convert the existing codebase to the new way. I’ve done that for the main NihAV repositories but na_game_tool is not yet updated. And I fear I’ll need to waste some time fixing and extending my MPEG-4 ASP decoder (so it can play all videos from my collection). All this leaves not much time for researching (very) old codecs.

When video DPCM codecs roamed the Earth

Saturday, August 16th, 2025

Back in the mid-1990s there was a situation when video on computers was slowly getting in demand while processing power was still very limited. Old paletted video formats were slowly going away (Smacker still holding strong though) and for hi-colour video RLE was not a good choice in terms of compression and DCT was not a good choice in terms of CPU cycles required. Thus the niche was partially filled by block-based vector quantisation codecs (like Cinepak) and various codecs that compressed the difference between the previous and current pixel in some way (hence the DPCM name even if it’s more related to audio compression).
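The idea itself fits in a few lines. This is a generic DPCM sketch, not any specific codec’s bitstream:

```python
# Generic DPCM: store per-pixel differences instead of the pixels themselves.
def dpcm_encode(pixels, pred=0):
    deltas = []
    for p in pixels:
        deltas.append(p - pred)  # transmit the difference from the prediction
        pred = p                 # the previous pixel becomes the next prediction
    return deltas

def dpcm_decode(deltas, pred=0):
    pixels = []
    for d in deltas:
        pred += d                # add the difference back to the running prediction
        pixels.append(pred)
    return pixels
```

Real codecs of this kind then quantise the deltas and code them with variable-length codes, which is where the compression actually comes from.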

So today I’m going to give a review of these codecs and how Motion Pixels fit in.

MVI2: done

Thursday, August 14th, 2025

I’m almost done with Motion Pixels at last. Of course I skipped implementing some exotic modes but at least the files I could find play fine and don’t complain about missing modes. I just need to put the finishing touches on it and commit it all, probably on Saturday.

The next post should be dedicated to the intricate details of the codec(s) and a comparison to its better-known competitors with similar design (Truemotion 1/RT/2/2X and Indeo 2/3), but for now all I need to say is that frames may be coded in several modes (RGB or YUV with one chroma sample per 2×1, 2×2, 4×2 or 4×4 block), some parts of a frame may use low-resolution delta coding (with its own peculiarities depending on line number and sampling mode); and since that was not enough, they’ve added a smooth delta coding mode (which also has its own peculiarities in low-resolution coding mode). And of course there’s a single-field coding mode. And some features seem to be duplicated using different flags. Since I’ve not found any samples for most of them, I simply implemented the basic modes: 4×4 YUV mode with lowres and all YUV modes with optional lowres and smooth delta coding (since MovieCD samples seem to exercise them all).

The best part is that nobody cares. NihAV can’t be interfaced with MPlayer easily, and discmaster.textfiles.com is not likely to change anything (e.g. files there are recognised as aviAudio type despite having a video track and nihav-encoder being able to decode it just fine. Or BATH06.MOV—no link since it’s the only one in the database—which can be converted with the same tool but it’s not even recognised as QT MOV. So I don’t expect that MVI1/2 files will get a video track either.) And I never cared about the codec much myself, not having any content coded with it for starters.

Anyway, this waste of time is over, so what’s next? While searching for the samples I found a couple of other MVI formats that may be good candidates for na_game_tool. There is a lot of janitorial work for NihAV as well (for example, I want to rewrite the AVI demuxer—it’s one of the first pieces of code I implemented for the project and now I see that some things could’ve been done differently and better). And I’ve finally decided on a name for a new tool: na_eofdec (NihAV exotic/obscure formats decoder). Now all that’s left is to RE and implement enough formats for a release of both of those tools.

Don’t expect any of this to happen soon though, I am lazy and work on it only when I’m in the mood. For example, this post might’ve been about why wavelet compression for video (and images) inherently sucks—but I still haven’t got into a proper mood for writing it.

MVI2: some news

Friday, August 8th, 2025

First of all, here’s some information for context: MVI codecs rely on out-of-band flags to signal what capabilities and subsampling they use (the fact that they decided to store those flags in the FOURCC is a different annoyance); and despite the potential variety, only a couple of flag combinations are used for each codec. For instance, of all the MVI1 files I saw, only one flag has been in use (golden frame—and only in one game). MVI2 has two distinct sets of flag combinations, 0x804 and 0x200. The former means bog-standard MVI coding (with one chroma sample set per 4×4 block) plus one extension feature, the latter means MVI2 version 2 (if that makes any sense) where they decided to make subsampling and features selectable per frame (as well as adding more of them) and moved the flags to the frame header while at it.

So far I have concentrated my efforts on format 0x804 to see what that extension feature is. It turned out to be low-resolution deltas, just like in Truemotion 2. In this mode every odd pixel is coded as the previous pixel plus half of the luma delta for the next pixel. I still have to make the reconstruction run properly, but that’s nothing a lot of debugging can’t fix.
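My reading of that mode, sketched below; the exact pixel order and rounding in the real codec are assumptions on my part:

```python
def lowres_pixels(prev, deltas):
    """One full delta per even pixel; each odd pixel is the previous pixel
    plus half of the next pixel's delta.  A sketch of my reading of the
    mode; rounding and pixel order in the real codec may differ."""
    out = []
    for d in deltas:
        out.append(prev + d // 2)  # odd pixel: interpolated with half the delta
        prev += d                  # even pixel: gets the full delta
        out.append(prev)
    return out
```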

This should allow me to decode even some of the MovieCD samples (including the one hidden in samples.mplayerhq.hu/drivers32 for some reason) and I’ve seen quite recognizable frames already.

It’s hard to tell what features the other flavour uses but it’s reasonable to assume that it uses lowres coding as well. Hopefully I’ll get to it soon.

Update from Saturday: after dealing with the annoyance of a different delta coding scheme for each line type, I can now decode the few files I could find (including a couple of MovieCDs from archive.org) just fine. The other flavour seems to use alternative composing/rendering functions and reads maps differently as well. So while it’ll take more time, at least I’m closer to completion.

MVI1: done

Sunday, August 3rd, 2025

In the last post I wrote about how I’d managed to reconstruct a recognizable picture for the MVI1 codec. After I fixed the prediction code it started to work properly. Surprisingly, the Treasure Quest game proved to be a source of MVI1 files in all formats (RGB, YUV422, YUV420, YUV410 and YUV4½0—the last one has one set of chroma samples per 4×4 block and is the most common MVI format in general). Additionally it has MVI1 samples with the golden frame feature (I named it after a feature in a family of competing codecs that started with a rather similar coding approach): frame 0 is two intra frames, with the second frame serving as the background for the other frames; there is an additional map mode which tells that certain rectangles should be copied from the golden frame (instead of the previous frame or being filled with one colour). MVI2 seems to have an extension of that mode but I’ll see about it when I get to it (and if I obtain samples using that mode).
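The map mode boils down to copying rectangles from a chosen reference frame. A minimal sketch with flat byte-per-pixel frames; all names here are mine, not from the binary:

```python
# Copy one rectangle from a reference (e.g. the golden frame) into the
# current frame; frames are flat byte buffers with a given line stride.
def copy_rect(dst, src, x, y, w, h, stride):
    for row in range(y, y + h):
        off = row * stride + x
        dst[off:off + w] = src[off:off + w]
```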

So, MVI2 next. Considering the number of extensions they added (and how they interfere with frame reconstruction) it’s probably not going to be easy but now I have a base to extend instead of blind guesses to make.

Motion Pixels: breakthrough!

Friday, August 1st, 2025

As I mentioned last month, I decided to reverse engineer the Motion Pixels codecs and after a lot of failed attempts to make the decoder work I’ve finally got something.

First of all, here are two frames from different videos.

MVITEST.AVI:

And a frame from SHOT09.AVI (from the Apollo 18 game):

As you can see, the images are not perfect but recognizable already. And just last week it would’ve been a mess of colours with some barely recognizable distorted shapes.

The reason for this is that while MVI1 (and MVI2) is indeed based on the stand-alone MVI format (I call it MVI0 for clarity), there are some nuances. At first glance MVI0 and MVI1 are the same—all the steps are the same—and indeed you can use the same code to decode data from either, but the reconstruction steps differ significantly.

Essentially there are four steps: decode rectangles defining which parts of the frame will be left intact or filled with one colour, decode the deltas used to reconstruct the rest of the pixels, use some of those deltas to generate predictors for each line (where needed), and use the rest of the deltas to reconstruct the remaining pixels. Additionally MVI employs a chroma subsampling mode so only one pixel in a 1×1 to 4×4 block (depending on mode) has deltas applied to chroma; all other pixels update only the luma component. So if you don’t do it correctly you may end up applying e.g. deltas intended for luma to chroma components and vice versa. That’s what I got for a long time and could not understand why.
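The subsampling rule can be sketched like this. A toy model with (y, u, v) tuples rather than the actual reconstruction code; which pixel of the block carries the chroma sample is an assumption here:

```python
def update_block(block, luma_deltas, chroma_delta):
    """Toy model: only the first pixel of a block gets the chroma deltas
    (an assumption); every other pixel updates luma only."""
    du, dv = chroma_delta
    out = []
    for i, ((y, u, v), dy) in enumerate(zip(block, luma_deltas)):
        if i == 0:               # the one sampled pixel per block
            out.append((y + dy, u + du, v + dv))
        else:                    # the rest update only the luma component
            out.append((y + dy, u, v))
    return out
```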

It turns out that vertical prediction has its pixel sampling at a different position—or maybe it’s scrambled in the same way as line prediction. There, for the most common mode (one set of chroma components per 4×4 block), each group of four lines is decoded in reverse order (i.e. 3, 2, 1, 0, 7, 6, 5, 4, …). For 2×2 blocks only the lines in pairs are reversed. You can see the artefacts of wrong prediction on the Apollo 18 frame.
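Generating that line order is trivial once you know it; a sketch of my understanding:

```python
def line_order(height, group):
    """Decoding order where each group of `group` lines is reversed:
    group=4 gives 3, 2, 1, 0, 7, 6, 5, 4, ...; group=2 reverses pairs."""
    order = []
    for base in range(0, height, group):
        order.extend(range(min(base + group, height) - 1, base - 1, -1))
    return order
```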

Anyway, having a recognisable picture means that the hardest part (for MVI1) is done, so all that’s left now is to fix the remaining bugs, refactor the code and move on to MVI2. There are other annoying things there but now I know how to deal with them.

BTW, if you’re curious why it takes so long, the problem is the binary specification being obfuscated to the point that Ghidra refuses to decompile most of the MVI1 decoder functions and can’t do much about the reconstruction functions since they’re a mess of spaghetti code (probably written in assembly language directly), so it’s more of a state machine than a decoding loop. And they abuse segment registers to access different parts of the context (and this must be the reason why it cannot work under OSes from this millennium). I got some progress when I resorted to debugging this mess by running the MVI2 player under OllyDbg under Win95 (emulated in DosBox-X) and constantly referring to Ghidra to see where to put a breakpoint to trace a certain function. That process is definitely not fun for me but it gave results.

Overall, probably it could’ve gone better but I hope the rest won’t take as long.

Random NihAV news

Thursday, July 24th, 2025

Since I have not tweaked any weights and have not made any releases, I’ll just write about some stuff I’ve been working on but have not released yet. Meanwhile librempeg got support for a bunch of new formats too so its changelog may be a more interesting read. Anyway, this post is about what I have (and haven’t) done.

First of all, I’ve finally fixed an annoying problem with VA-API decoding on one of my laptops. Counterintuitively, it turned out to be faster to ask the hardware to convert the native surface into some other format (NV12 into YUV420) and then use that instead. This made the decoder’s CPU usage drop under 10% at last. Probably it can be optimised further to reduce the load on the graphics subsystem but I’d rather not mess with OpenGL unless it’s really really really needed.

Then I extended support for two formats in na_game_tool. VDX (used in The 7th Guest) had a different format version for the game demo. It still employs two-colour VQ but the data for intra frames is split into separate parts for masks and colours, and inter frames code updates to masks and/or colours for each block instead of independent decoding. Additionally, thanks to Discmaster I’ve located DPEG version 2, which employs a completely different algorithm from version 3 (painting 4×4/2×2/1×1 squares for intra frames and skip/update for inter frames).
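Two-colour VQ itself is simple; a toy sketch (the actual mask bit order in VDX is an assumption):

```python
def decode_two_colour_block(mask, c0, c1, npixels=16):
    """Each block carries two colours and a bitmask selecting between them
    per pixel.  Bit order here is an assumption, not taken from the format."""
    return [c1 if (mask >> i) & 1 else c0 for i in range(npixels)]
```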

I’ve also discovered some new interesting formats like Lantern MOV (which codes DIB changes using 16-bit RLE; there’s also a probably related older version stored in IFF instead of RIFF). I’m considering making a sister project to na_game_tool to decode various formats like this one—formats coming from Amiga, recording formats and such—for all the formats that I’d like to try decoding but don’t want in main NihAV. I’ll write about it when I actually have something to write about (i.e. when I have a name and enough formats for a 0.1.0 release). Another curious find was a fractal video codec—not ClearVideo but something with FOURCC FVF1 from Images Incorporated. Who knows, it may be interesting to RE.

And finally, here’s what I really wasted too much time on: the Motion Pixels decoders. The format has a rather annoying binary specification (like using segment registers to address decoder context variables) that decompilers refuse to translate, and from what I heard it’s impossible to run on anything newer than Windows 95 or NT4. Nevertheless the formats pose some interest.

From what I saw a long time ago, MVI2 is MVI1 with certain extensions, and MVI1 is surprisingly close in structure to MVI in its own format files—and Gregory Montoir reverse-engineered it a long time ago.

So I started by reimplementing that MVI decoder (since I can debug its behaviour against known working implementation) while trying to understand what it does. I got it more or less working (reconstruction is still not perfect but at least it’s recognizable) and my decoder supports other files (found with Discmaster of course) that trigger demuxer bugs or have different subsampling modes.

Then I moved on to implementing an MVI1 decoder, applying the differences found in the binary specification. While it still does not handle decoding properly (both the pictures are garbled and I don’t use all the deltas stored in the frame), at least it proves I’m on the right track. Hopefully it’ll decode properly soon and then I can add the MVI2 features. Of course it’s a useless format nobody cares about, but apparently I do.

NihAV: now with TealMovie support

Wednesday, June 11th, 2025

Back in the day I looked at the format and recently, to distract myself from game formats, I decided that it might be not the worst idea to implement decoding for it.

And in the course of doing that I discovered some things that make it even more peculiar. For starters, it flips every second sample in its ADPCM coding. I don’t know if it improves compression in this particular case or if it was done just to be different. Similarly, split sub- or sub-sub-blocks are coded in コ-order instead of the more traditional zigzag order.
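As I understand it, コ-order for four sub-blocks traces the shape of the katakana: top-left, top-right, bottom-right, bottom-left. This is my interpretation, sketched against the usual zigzag order:

```python
# コ-order as I read it: sub-blocks visited tracing the コ shape
# (top-left, top-right, bottom-right, bottom-left), versus the zigzag
# order (top-left, top-right, bottom-left, bottom-right).
KO_ORDER = [(0, 0), (0, 1), (1, 1), (1, 0)]       # (row, col) of each sub-block
ZIGZAG_ORDER = [(0, 0), (0, 1), (1, 0), (1, 1)]

def sub_block_offsets(block_size, order):
    """Top-left pixel coordinates of the four sub-blocks in a given order."""
    half = block_size // 2
    return [(r * half, c * half) for r, c in order]
```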

But there are more interesting things about it. The file is organised into blocks instead of frames. The first block always contains metadata (stream parameters, title, creator and such), the next blocks contain one or more video frames (which you have to decode one after another; I implemented frame parsing to find frame boundaries, but that’s an inelegant solution), and the last blocks are used to store audio. This means the demuxer either has to demux audio frames after all video frames are sent or jump around in order to maintain synchronisation. Since this is not na_game_tool, I picked the former. The samples are short, so it’s easier to decode them to AVI+WAV and remux properly (or decode both streams to AVI and make the AVI demuxer handle unsynchronised streams better—but that’s a task for another day).

Another surprising thing is that there is 16-bit RGB support, done in a very peculiar way. Frame decoding remains the same, except that now the frame data is actually a pseudo-YUV frame with two chroma planes following the luma plane. And of course the conversion is done using one of two tables (depending on file version) with the formula yuv2rgbtab[(u + v) * 128 + y]. I guess it’s coding luma, colour difference and colour difference difference here.
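The lookup itself is a one-liner; here with a placeholder table, since only the indexing scheme comes from the format:

```python
def lookup_rgb(yuv2rgbtab, y, u, v):
    # the converter indexes a precomputed table by (u + v) * 128 + y;
    # the table contents here are placeholders, not the real tables
    return yuv2rgbtab[(u + v) * 128 + y]
```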

And finally, intra frames in TealMovie are stored raw. But when the frame width exceeds 160, the frame is stored at half size.

That’s why I’m looking at those old formats: there’s significantly more variety there in employed coding methods and storage format nuances. Not all of them make much sense but sometimes they’re entertaining or so original that it makes you wonder why such approaches got no further development.

P.S. Maybe I should take another look at the handheld console video formats.

P.P.S. But I think I’ll have to do some boring things instead. With BaidUTube changing its available formats it seems I finally need my own MP4 muxer. In any case that’s easier than fixing libav.

A tale of three formats

Saturday, April 19th, 2025

Since I have nothing better to do, I keep looking at odd formats here and there and occasionally do something about them. Here are three formats I took a look at recently; they have nothing in common besides being video codecs.

CGDI

This would be a “capture” codec (aka Camcorder Video) if not for the fact that it records events rather than actual image data. I had a suspicion right from the start that it’s organised in the same way as WMF/EMF—opcodes with parameters that invoke the GDI subsystem to draw the actual image—and I finally decided to check that.

Of course it turned out to be true. There are about 64 opcodes in total, some are for drawing things, some are for window management stuff, and some are for GDI state management (e.g. create or delete brush, pen, font and such).

Since implementing a decoder for it would mean replicating a good deal of the Windows graphics subsystem (even if you can borrow code from Wine), I consider it completely impractical and merely improved the codec documentation in The Wiki.

TCA

This is an animation format used on the Acorn platform. It actually has three layers of encapsulation: there’s the raw TCA format that contains only the header and video frames, then there’s TCA wrapped in an ACEF chunk with an optional SOUN chunk following it (no points for guessing what it contains), and finally there’s that format put inside ARMovie (as a single block).

I added support for it to NihAV just for completeness’ sake. Of course not all of the different flavours are supported (video is mostly just plain LZW but it has some alternative coding mode and an uncompressed alternative; audio is IMA ADPCM but sometimes it’s not, with no reliable way to distinguish which is which). And it looks like some animations may have a variable frame rate (with the DIR1 subchunk likely giving frame durations). All the details are there, in raw ARM binaries and semi-compiled BBC BASIC code, but I’m satisfied that it works at least for a couple of random plane samples I tried and have no desire to try supporting every known sample in existence.

Savage Warriors ANM

This one is a curious format. I managed to locate the decoding functions in one of the overlay files, and they looked reasonable (LZ77 compression for intra frames and something a lot like FLI delta frame compression for the rest), but the decoder did not work properly. Curiously, the demo version contains some of the same animations as the full game but in a slightly different format (the initial magic is missing); after comparing them I found out that the release version uses a weird layout with a 32-bit value inserted after each kilobyte of data. I ended up implementing my own buffered reader that loads those kilobyte blocks and skips the additional words for the release version.
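The reader logic amounts to this (a sketch; I’m assuming the inserted value can simply be discarded):

```python
def strip_interleaved_words(data, block=1024, skip=4):
    """Remove the 32-bit value inserted after every kilobyte of payload in
    the release files.  A sketch; the inserted value is simply discarded."""
    out = bytearray()
    pos = 0
    while pos < len(data):
        out += data[pos:pos + block]  # keep one kilobyte of payload
        pos += block + skip           # then skip the extra 32-bit word
    return bytes(out)
```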

Another thing is that the LZ-compressed format had a 17-byte header which the decoder skipped. Of course that made me suspect it of being a third-party compression scheme, and after searching around it turned out to be Diet (you may remember it being used as an executable compressor but apparently it had other uses). It somewhat reminded me of MidiVid Lossless, as it’s yet another codec reusing a third-party general compressor (with special preprocessing for executables, which was a dead giveaway).

In any case, both flavours of this ANM format are now supported by na_game_tool (and will be part of the next release).

Revisiting QPEG

Thursday, March 27th, 2025

Since I’ve done all the improvements to NihAV that I wanted to do (besides vague ones like “improve Indeo 3 encoder somehow” or “add some interesting format support”), I decided to look at the formats in my backlog and discovered that I have a QPEG player with a couple of DVC samples. Considering that I REd their VfW codec over two decades ago, I had to look at this as well.

It turned out to be a straightforward format with a static palette and video frames packed with a moderately complex RLE (it has opcodes for run/copy/skip and literals). The most interesting thing there to me was that values without the high bit set are treated as literals, or rather as indices into a remapping table (which is the first 128 bytes of a frame). Considering that the low 20 colours of the palette seem to be unset, it makes some sense.
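The literal handling can be sketched like this; the opcodes for runs/copies/skips are left out, and the names are mine:

```python
def remap_literals(payload, remap_table):
    """Bytes without the high bit set are literals remapped through the
    128-byte table from the start of the frame; high-bit bytes would be
    run/copy/skip opcodes (not decoded in this sketch)."""
    out = []
    for b in payload:
        if b < 0x80:                 # literal: index into the remapping table
            out.append(remap_table[b])
        else:                        # run/copy/skip opcode, not handled here
            out.append(None)
    return out
```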

The hardest part was reading the binary specification. The executable uses the Phar Lap 386 extender, so it’s actually stored in P3 format right after the loader. At least I have some experience with loading such formats from when I messed with the Linear eXecutable format before Ghidra plugins were available (and sometimes afterwards as well, since e.g. neither of the two known plugins managed to load the Wing Nuts executable). Also I managed to spot that an 0xE0 byte occurs at the end of a packed frame, so I guessed it was the end-of-data marker and searched for code using it as such. I managed to locate four RLE decompression functions, all probably functionally identical, and after figuring out other details (like where the remap table comes from) I ended up with a decoder that works on all four known samples just fine.

Overall, it’s nothing particularly complex but it was still nice to look at.