First Indeo codecs

August 30th, 2025

Recently I’ve posted a short review of DPCM-based video codecs where Indeo 2 and 3 were mentioned, but what about Indeo 1?

Previously I believed it was their raw format, aka IF09 (YVU 4:1:0 with 7 bits per component), but recently I discovered a codec called Indeo YVU9 Compressed, which kinda fills the gap between raw video and the comparatively complex Indeo 2 (which employs not merely delta coding but also vector quantisation and zero-run coding).

This format codes intra-only frames plane by plane with simple delta prediction and fixed Huffman codes for small deltas plus an escape value (which signals that a full 8-bit code value should be read instead). In other words, a perfect initial DPCM-based codec which can be improved in different ways.
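
To give an idea of how simple such a codec is, here is a minimal sketch of decoding one plane; the bit reader, the code table and the prediction rule are placeholders of mine rather than the real YVU9C definitions:

```rust
// Minimal MSB-first bit reader (a stand-in for illustration purposes).
struct BitReader<'a> { data: &'a [u8], pos: usize }

impl<'a> BitReader<'a> {
    fn new(data: &'a [u8]) -> Self { Self { data, pos: 0 } }
    fn read_bit(&mut self) -> Option<u16> {
        let byte = *self.data.get(self.pos >> 3)?;
        let bit = (byte >> (7 - (self.pos & 7))) & 1;
        self.pos += 1;
        Some(bit as u16)
    }
    fn read_bits(&mut self, n: u8) -> Option<u16> {
        let mut v = 0;
        for _ in 0..n { v = (v << 1) | self.read_bit()?; }
        Some(v)
    }
}

// Placeholder code table: a unary prefix selects a small delta and the
// longest prefix serves as the escape meaning "a raw 8-bit value follows".
// The actual YVU9C codes and deltas differ.
const DELTAS: [i16; 6] = [0, 1, -1, 2, -2, 3];

fn decode_pixel(br: &mut BitReader, pred: u8) -> Option<u8> {
    let mut len = 0;
    while br.read_bit()? == 1 {
        len += 1;
        if len == DELTAS.len() {
            // escape: the full 8-bit pixel value is read instead
            return br.read_bits(8).map(|v| v as u8);
        }
    }
    Some((pred as i16 + DELTAS[len]).clamp(0, 255) as u8)
}

// Intra-only, plane-by-plane decoding with simple left/top delta prediction.
fn decode_plane(br: &mut BitReader, plane: &mut [u8], w: usize, h: usize) -> Option<()> {
    for y in 0..h {
        for x in 0..w {
            let pred = if x > 0 { plane[y * w + x - 1] }
                       else if y > 0 { plane[(y - 1) * w] }
                       else { 0x80 };
            plane[y * w + x] = decode_pixel(br, pred)?;
        }
    }
    Some(())
}
```

The point is how little machinery is needed: one prefix code, one escape path and one predictor per pixel.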

I cannot tell if this codec really deserves to be called Indeo 1 (relegating IF09 to Indeo 0) or whether it’s some later simplification of Indeo 2. As you know, Indeo codecs come from DVI (no, not the display interface) where they had different names. From what I can tell there were three video codec families: RTV (real-time video), PLV (production-level video, not as fast) and PIC (whatever that is). RTV2 is now known as Indeo 2 but it’s hard to tell which one was Indeo 1 (if there was any) or YVU9C. What’s worse is that there are next to no software specifications for DVI; you were supposed to use special cards with Intel chipsets to encode and decode it.

In either case, it’s yet another codec reverse engineered.

A quick glance at another bunch of codecs

August 23rd, 2025

Since I can’t do anything but look at various codecs, I did exactly that. So here are some details about codecs nobody cares about.

First, I looked at a video codec used in videos (movies and TV series) for a certain hand-held console. Despite it coming from Majesco, the video data starts with VXGB magic, reminiscent of a certain other codec for a slightly newer hand-held console whose data starts with VXDS. Structurally it’s very close to it as well, being a simplified H.264 rip-off. My REing efforts were thwarted by the organisation of the binary specification: while the code is supposed to reside in the data segment, it constantly calls functions from the fast RAM area with no apparent place where they are initialised. I suspect some of that code is duplicated there for performance reasons, but I haven’t found the place where that copying is performed. Oh well, nobody cares about the format anyway, why should I be an exception?

Then, there’s a whole family of Pixar codecs. The Toy Story game made by them relied on a bunch of QuickTime codecs of their own making. There are decoders provided for the pix0 to pix7 and pixA codecs while the game itself seems to have content only in pix0, pix3, pix4, pix5, pix7 and pixA formats. The binary specification causes Ghidra decompilation failures (mostly in the functions responsible for the decoding) so I could figure some things out while the rest is left as an exercise to the (masochistic) reader.

All codecs are paletted and most of them operate on 4×4 tiles. Pixar codecs 0 and 4 are actually raw formats (with data re-arranged into 4×4 tiles). Codecs 3 and 5 are similar: they maintain a list of tiles (transmitted at the beginning of the frame; a frame can update some tiles in the list) and image data is coded as a series of opcodes meaning “draw tile number N”, “leave next N tiles unchanged” or “restore next N tiles from the background image” (that image is stored in some other file, likely compressed with codec 0 or 4). Codec 7 seems to employ static Huffman coding (and I don’t know much beside that fact). Codec A looks like some kind of RLE but I may be wrong.
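
To make the tile-opcode scheme of codecs 3 and 5 clearer, here is a hypothetical interpreter for it; the opcode representation and helper names are my inventions, only the three operations themselves come from the actual format:

```rust
// Hypothetical in-memory form of the three tile opcodes described above.
enum TileOp {
    Draw(usize),    // draw tile number N at the current position
    Skip(usize),    // leave the next N tiles unchanged
    Restore(usize), // restore the next N tiles from the background image
}

fn apply_ops(
    ops: &[TileOp],
    tiles: &[[u8; 16]], // 4x4 paletted tiles, one palette index per pixel
    background: &[u8],
    frame: &mut [u8],
    width: usize,       // frame width in pixels, assumed a multiple of 4
) {
    let tiles_per_row = width / 4;
    let mut tile_pos = 0; // tiles are visited in raster order
    for op in ops {
        match op {
            TileOp::Draw(n) => {
                copy_tile(&tiles[*n], frame, tile_pos, tiles_per_row, width);
                tile_pos += 1;
            }
            TileOp::Skip(n) => tile_pos += *n,
            TileOp::Restore(n) => {
                for _ in 0..*n {
                    restore_tile(background, frame, tile_pos, tiles_per_row, width);
                    tile_pos += 1;
                }
            }
        }
    }
}

fn copy_tile(tile: &[u8; 16], frame: &mut [u8], pos: usize, tpr: usize, width: usize) {
    let (tx, ty) = (pos % tpr, pos / tpr);
    for row in 0..4 {
        let dst = (ty * 4 + row) * width + tx * 4;
        frame[dst..dst + 4].copy_from_slice(&tile[row * 4..row * 4 + 4]);
    }
}

fn restore_tile(bg: &[u8], frame: &mut [u8], pos: usize, tpr: usize, width: usize) {
    let (tx, ty) = (pos % tpr, pos / tpr);
    for row in 0..4 {
        let off = (ty * 4 + row) * width + tx * 4;
        frame[off..off + 4].copy_from_slice(&bg[off..off + 4]);
    }
}
```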

P.S. I also started some code re-organisation and improvement. For example, I finally got rid of the ByteReader/ByteWriter wrappers over I/O objects so there’s less boilerplate code—but unfortunately I’ll need to convert the existing codebase to the new way. I’ve done that for the main NihAV repositories but na_game_tool is not yet updated. And I fear I’ll need to waste some time fixing and extending my MPEG-4 ASP decoder (so it can play all videos from my collection). All this leaves not so much time for researching (very) old codecs.

When video DPCM codecs roamed the Earth

August 16th, 2025

Back in the mid-1990s there was a situation when video on computers was slowly getting in demand while the processing power was still very limited. Old paletted video formats were slowly going away (Smacker still holding strong though), and for hi-colour video RLE was not a good choice in terms of compression while DCT was not a good choice in terms of CPU cycles required. Thus the niche was partially filled by block-based vector quantisation codecs (like Cinepak) and various codecs that compressed the difference between the previous and current pixel in some way (hence the DPCM name, even if the term is more related to audio compression).

So today I’m going to give a review of these codecs and how Motion Pixels fit in.

MVI2: done

August 14th, 2025

I’m almost done with Motion Pixels at last. Of course I skipped implementing some exotic modes, but at least the files I could find play fine and don’t complain about missing modes. I just need to put the finishing touches and commit it all, probably on Saturday.

The next post should be dedicated to the intricate details of the codec(s) and a comparison to its better-known competitors with similar design (Truemotion 1/RT/2/2X and Indeo 2/3), but for now all I need to say is that frames may be coded in several modes (RGB or YUV with one chroma sample per 2×1, 2×2, 4×2 or 4×4 block); some parts of a frame may use low-resolution delta coding (with its own peculiarities depending on line number and sampling mode); and since that was not enough, they’ve added a smooth delta coding mode (which also has its own peculiarities in low-resolution coding mode). And of course there’s a single-field coding mode. And some features seem to be duplicated using different flags. Since I’ve not found any samples for most of them, I simply implemented the basic modes, 4×4 YUV mode with lowres, and all YUV modes with optional lowres and smooth delta coding (since MovieCD samples seem to exercise them all).
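
For a rough picture, here is how I’d model the per-frame parameters; the names and the exact set are mine, not the ones from the binary specification:

```rust
// A sketch of the coding parameters described above; purely illustrative.
#[derive(Clone, Copy)]
enum ChromaSampling {
    Rgb,    // no subsampling, plain RGB deltas
    Yuv2x1, // one chroma sample per 2x1 block
    Yuv2x2,
    Yuv4x2,
    Yuv4x4, // one chroma sample per 4x4 block, the most common mode
}

#[derive(Clone, Copy)]
struct FrameMode {
    sampling: ChromaSampling,
    lowres: bool,        // low-resolution delta coding for parts of the frame
    smooth_delta: bool,  // smooth delta coding (with its own lowres quirks)
    single_field: bool,  // code only one field of the frame
}
```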

The best part is that nobody cares. NihAV can’t be interfaced with MPlayer easily, and discmaster.textfiles.com is not likely to change anything (e.g. files there are recognised as aviAudio type despite having a video track that nihav-encoder can decode just fine. Or BATH06.MOV—no link since it’s the only one in the database—which can be converted with the same tool but is not even recognised as QT MOV. So I don’t expect that MVI1/2 files will get a video track either.) And I never cared about the codec much myself, not having any content coded with it for starters.

Anyway, this waste of time is over, so what’s next? While searching for the samples I found a couple of other MVI formats that may be good candidates for na_game_tool. There is a lot of janitorial work for NihAV as well (for example, I want to rewrite the AVI demuxer—it’s one of the first pieces of code I implemented for the project and now I see that some things could’ve been done differently and better). And I’ve finally decided on a name for a new tool: na_eofdec (NihAV exotic/obscure formats decoder). Now all that’s left is to RE and implement enough formats for a release of both of those tools.

Don’t expect any of this to happen soon though: I am lazy and work on it only when I’m in the mood. For example, this post might’ve been about why wavelet compression for video (and images) inherently sucks—but I still haven’t got into the proper mood for writing it.

MVI2: some news

August 8th, 2025

First of all, here’s some information for context: MVI codecs rely on out-of-band flags to signal what capabilities and subsampling they use (the fact that they decided to store those flags in the FOURCC is a different annoyance); and despite the potential variety, only a couple of flag combinations are used for each codec. For instance, of all the MVI1 files I saw, only one flag has been in use (golden frame—and it’s only in one game). MVI2 has two distinct sets of flag combinations, 0x804 and 0x200. The former means bog-standard MVI coding (with one chroma sample set per 4×4 block) plus one extension feature; the latter means MVI2 version 2 (if that makes any sense), where they decided to make subsampling and features selectable per frame (as well as adding more of them) and moved the flags to the frame header while at it.
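
In decoder terms it boils down to something like this trivial dispatch (the constants are the observed values from above, everything else is made up):

```rust
// The two flag combinations observed in the wild (values from this post);
// how exactly they are packed into the FOURCC is not shown here.
const MVI2_FLAGS_CLASSIC: u32 = 0x804; // standard MVI coding plus one extension
const MVI2_FLAGS_PERFRAME: u32 = 0x200; // "MVI2 version 2": modes in frame header

// Hypothetical dispatch: only the 0x200 flavour carries per-frame modes.
fn needs_frame_header_modes(flags: u32) -> bool {
    flags == MVI2_FLAGS_PERFRAME
}
```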

So far I have concentrated my efforts on format 0x804 to see what that extension feature is. It turned out to be low-resolution deltas, just like in Truemotion 2. In this mode every odd pixel is coded as the previous pixel plus half of the luma delta for the next pixel. I still have to make the reconstruction run properly, but that’s nothing a lot of debugging can’t fix.
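
As a sketch, the reconstruction of one line in this mode might look as follows (the rounding and clamping behaviour are my guesses, and so is the function interface):

```rust
// Low-resolution delta reconstruction as I currently understand it: full
// deltas land on even pixels and every odd pixel gets the previous pixel
// plus half of the next even pixel's delta.
fn reconstruct_lowres_line(line: &mut [u8], first: u8, deltas: &[i16]) {
    line[0] = first;
    let mut prev = first as i16;
    for (k, &d) in deltas.iter().enumerate() {
        let odd = 2 * k + 1;
        let even = 2 * k + 2;
        if odd < line.len() {
            // odd pixel: previous pixel plus half of the following delta
            line[odd] = (prev + d / 2).clamp(0, 255) as u8;
        }
        if even < line.len() {
            // even pixel: previous pixel plus the full delta
            prev += d;
            line[even] = prev.clamp(0, 255) as u8;
        }
    }
}
```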

This should allow me to decode even some of the MovieCD samples (including the one hidden in samples.mplayerhq.hu/drivers32 for some reason), and I’ve seen quite recognisable frames already.

It’s hard to tell what features the other flavour uses but it’s reasonable to assume that it uses lowres coding as well. Hopefully I’ll get to it soon.

Update from Saturday: after dealing with the annoyance of a different delta coding scheme for each line type, I can now decode the few files I could find (including a couple of MovieCDs from archive.org) just fine. The second half seems to use alternative composing/rendering functions and reads the maps differently as well. So while it’ll take more time, at least I’m closer to completion.

MVI1: done

August 3rd, 2025

In the last post I wrote about how I managed to reconstruct a recognisable picture for the MVI1 codec. After I fixed the prediction code it started to work properly. Surprisingly, the Treasure Quest game proved to be a source of MVI1 files in all formats (RGB, YUV422, YUV420, YUV410 and YUV4½0—the last one has one set of chroma samples per 4×4 block and is the most common MVI format in general). Additionally it has MVI1 samples with the golden frame feature (I named it after a feature in a family of competing codecs that started with a rather similar coding approach): frame 0 is two intra frames, with the second frame serving as the background for the other frames; there is an additional map mode which tells the decoder that certain rectangles should be copied from the golden frame (instead of the previous frame or being filled with one colour). MVI2 seems to have an extension of that mode but I’ll see about it when I get to it (and if I obtain samples using that mode).
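
Here is a rough sketch of how that map mode can be modelled; the data structures and names are hypothetical, and the real map is stored quite differently:

```rust
// The three rectangle sources described above; RGB555 pixels are an
// assumption made for this sketch.
#[derive(Clone, Copy)]
enum RectSource {
    Skip,      // keep the previous frame's pixels
    Fill(u16), // fill with one colour
    Golden,    // copy from the golden (background) frame
}

struct Rect { x: usize, y: usize, w: usize, h: usize, src: RectSource }

fn apply_rects(rects: &[Rect], golden: &[u16], prev: &[u16], cur: &mut [u16], stride: usize) {
    for r in rects {
        for y in r.y..r.y + r.h {
            for x in r.x..r.x + r.w {
                let idx = y * stride + x;
                cur[idx] = match r.src {
                    RectSource::Skip    => prev[idx],
                    RectSource::Fill(c) => c,
                    RectSource::Golden  => golden[idx],
                };
            }
        }
    }
}
```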

So, MVI2 next. Considering the number of extensions they added (and how they interfere with frame reconstruction) it’s probably not going to be easy but now I have a base to extend instead of blind guesses to make.

Motion Pixels: breakthrough!

August 1st, 2025

As I mentioned last month, I decided to reverse engineer the Motion Pixels codecs, and after a lot of failed attempts to make the decoder work I’ve finally got something.

First of all, here are two frames from different videos.

MVITEST.AVI:

And a frame from SHOT09.AVI (from the Apollo 18 game):

As you can see, the images are not perfect but recognisable already. And just last week it would’ve been a mess of colours with some barely recognisable distorted shapes.

The reason for this is that while MVI1 (and MVI2) is indeed based on the stand-alone MVI format (I call it MVI0 for clarity), there are some nuances. At first glance MVI0 and MVI1 are the same—all the decoding steps are the same, and indeed you can use the same code to decode data from either—but the reconstruction steps differ significantly.

Essentially there are four steps: decode the rectangles defining which parts of the frame will be left intact or filled with one colour; decode the deltas used to reconstruct the rest of the pixels; use some of those deltas to generate predictors for each line (where needed); and apply the rest of the deltas to reconstruct the remaining pixels. Additionally MVI employs a chroma subsampling mode, so only one pixel in a 1×1 to 4×4 block (depending on the mode) has deltas applied to chroma while all other pixels update only the luma component. So if you don’t do it correctly you may end up applying e.g. deltas intended for luma to chroma components and vice versa. That’s what I got for a long time and could not understand why.
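
To illustrate why the ordering matters, here is a simplified sketch of the subsampled delta application; the separate planes and delta callbacks are my simplifications (in the real codec the deltas come interleaved from a single stream, which is exactly why a misstep applies luma deltas to chroma):

```rust
// Every pixel gets a luma delta, but only one pixel per cw x ch block gets
// chroma deltas; the planes and callback interfaces are illustrative only.
fn apply_deltas(
    y_plane: &mut [i16], u_plane: &mut [i16], v_plane: &mut [i16],
    w: usize, h: usize, cw: usize, ch: usize,
    mut next_luma: impl FnMut() -> i16,
    mut next_chroma: impl FnMut() -> (i16, i16),
) {
    for y in 0..h {
        for x in 0..w {
            // every pixel carries a luma delta...
            y_plane[y * w + x] += next_luma();
            // ...but only the anchor pixel of each block updates chroma
            if x % cw == 0 && y % ch == 0 {
                let ci = (y / ch) * (w / cw) + x / cw;
                let (du, dv) = next_chroma();
                u_plane[ci] += du;
                v_plane[ci] += dv;
            }
        }
    }
}
```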

It turns out that vertical prediction has its pixel sampling at a different position—or maybe it’s scrambled in the same way as line prediction. There, for the most common mode (one set of chroma components per 4×4 block), each group of four lines is decoded in reverse order (i.e. 3, 2, 1, 0, 7, 6, 5, 4, …). For 2×2 blocks only the lines within each pair are swapped. You can see the artefacts of wrong prediction on the Apollo frame.
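
The mapping itself is simple enough to express as a little helper; a sketch (the function name and signature are mine):

```rust
// Map a line index in decoding order to its position in the frame, assuming
// groups of block_h lines are stored in reverse order (the MVI1 quirk as I
// understand it).
fn display_line(decoded_line: usize, block_h: usize) -> usize {
    let group = decoded_line / block_h;
    let within = decoded_line % block_h;
    group * block_h + (block_h - 1 - within)
}
```

For block_h = 4 this yields 3, 2, 1, 0, 7, 6, 5, 4, … and for block_h = 2 it yields 1, 0, 3, 2, …, matching what I described above.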

Anyway, having a recognisable picture means that the hardest part (for MVI1) is done, so all that’s left now is to fix the remaining bugs, refactor the code and move to MVI2. There are other annoying things there but now I know how to deal with them.

BTW, if you’re curious why it takes so long, the problem is the binary specification being obfuscated to the point that Ghidra refuses to decompile most of the MVI1 decoder functions, and it can’t do much about the reconstruction functions since they’re a mess of spaghetti code (probably written in assembly language directly), so it’s more of a state machine than a decoding loop. And they abuse segment registers to access different parts of the context (and this must be the reason why it cannot work under OSes from this millennium). I got some progress when I resorted to debugging this mess by running the MVI2 player in OllyDbg under Win95 (emulated in DOSBox-X) and constantly referring to Ghidra to see where to put a breakpoint to trace a certain function. That process is definitely not fun for me but it gave results.

Overall, probably it could’ve gone better but I hope the rest won’t take as long.

Random NihAV news

July 24th, 2025

Since I have not tweaked any weights and have not made any releases, I’ll just write about some stuff I’ve been working on but have not released yet. Meanwhile librempeg got support for a bunch of new formats too, so its changelog may be a more interesting read. Anyway, this post is about what I have (and haven’t) done.

First of all, I’ve finally fixed an annoying problem with VA-API decoding on one of my laptops. Counterintuitively, it turned out to be faster to request the hardware to convert the native surface into some other format (NV12 into YUV420) and then use that instead. This made the decoder’s CPU usage drop below 10% at last. Probably it can be optimised further to reduce the load on the graphics subsystem, but I’d rather not mess with OpenGL unless it’s really really really needed.

Then I extended support for two formats in na_game_tool. VDX (used in The 7th Guest) had a different format version for the game demo. It still employs two-colour VQ, but the data for intra frames is split into separate parts for masks and colours, and inter frames code updates to masks and/or colours for each block instead of independent decoding. Additionally, thanks to Discmaster I’ve located DPEG version 2, which employs a completely different algorithm from version 3 (painting 4×4/2×2/1×1 squares for intra frames and skip/update for inter frames).
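
For reference, two-colour VQ for a 4×4 block is about as simple as block coding gets; a sketch (the bit order within the mask is an assumption of mine):

```rust
// Paint one 4x4 block from a 16-bit mask selecting between two colours per
// pixel, the basic two-colour VQ operation used by VDX-like codecs.
fn paint_block(frame: &mut [u8], stride: usize, bx: usize, by: usize,
               mask: u16, colours: [u8; 2]) {
    for row in 0..4 {
        for col in 0..4 {
            let bit = (mask >> (row * 4 + col)) & 1;
            frame[(by * 4 + row) * stride + bx * 4 + col] = colours[bit as usize];
        }
    }
}
```

The demo version splitting masks and colours into separate parts changes only where those two inputs come from, not this reconstruction step.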

I’ve also discovered some new interesting formats like Lantern MOV (which codes DIB changes using 16-bit RLE; there’s a probably related older version stored in IFF instead of RIFF). I’m considering making a sister project to na_game_tool to decode various formats like this one, formats coming from Amiga, recording formats and such—for all the formats that I’d like to try decoding but don’t want in main NihAV. I’ll write about it when I actually have something to write about (i.e. when I have a name and enough formats for a 0.1.0 release). Another curious find was a fractal video codec—not ClearVideo but something with FOURCC FVF1 from Images Incorporated. Who knows, it may be interesting to RE.

And finally, here’s what I really wasted too much time on: the Motion Pixels decoders. The format has a rather annoying binary specification (like using segment registers to address decoder context variables) that decompilers refuse to translate, and from what I heard it’s impossible to run on anything newer than Windows 95 or NT4. Nevertheless the formats pose some interest.

From what I saw a long time ago, MVI2 is MVI1 with certain extensions, and MVI1 is surprisingly close in structure to MVI in its own stand-alone format files—and Gregory Montoir reverse engineered that one a long time ago.

So I started by reimplementing that MVI decoder (since I can debug its behaviour against a known working implementation) while trying to understand what it does. I got it more or less working (the reconstruction is still not perfect but at least it’s recognisable) and my decoder supports other files (found with Discmaster, of course) that trigger demuxer bugs or have different subsampling modes.

Then I moved to implementing an MVI1 decoder, applying the differences found in the binary specification. While it still does not handle decoding properly (the pictures are garbled and I don’t use all the deltas stored in the frame), at least it proves I’m on the right track. Hopefully it’ll decode properly soon and then I can add the MVI2 features. Of course it’s a useless format nobody cares about, but apparently I do.

Strata, or yet another reason for not living in the USA

July 8th, 2025

Disclaimer: this post is not about politics at all, but rather about my personal reason.

Here’s the story that finally made me realise why the USA is not for me (besides many other reasons that have something to do with my tastes. And don’t label them as sour grapes—I got some job offers from there back in the day yet I rejected them in favour of Europe).

I like to spend weekends and holidays travelling around. Before 2020 I liked to travel somewhere far away and explore around that distant point; now I can take only local trips (for health reasons). Last Sunday I decided to visit Seligenstadt, but thanks to Deutsche Bahn I missed the connection and had to wait for almost an hour at Hanau. Since I had nothing better to do, I decided to take a walk there and was shocked.

The town turned out to be not merely Grimm (being the birthplace of the famous brothers) but also grim and soulless. Essentially all it had to offer in terms of historical buildings was its town hall and a church (maybe I could have discovered more, but probably not in the town centre and definitely not in the time I had). You don’t need to read any documents to guess that Hanau was heavily bombed during WWII and nobody bothered to reconstruct it (it’s much more important to have a bank and an airport, after all).

And then I saw Seligenstadt, which is the complete opposite, with a preserved historic centre next to a more than millennium-old monastery. That’s what made me realise that I can’t live comfortably in a place without history. And the USA is exactly a country that is rather poor in that respect (compared even to Mexico). For comparison, I consider my home city rather young—and yet it was founded before the New England royal colonies were chartered, let alone before the USA appeared on the maps as a country. That is why I subconsciously liked Europe; probably other things I like about Europe (like food) also have roots in its rich historical soil. Similarly, probably a lot of things I dislike about the USA also come from its lack of historical soil (again, like food).

Of course other people don’t care about such things, which means less competition for them from my side and vice versa.

P.S. In case it was not obvious, this post’s name comes from Pratchett’s early novel Strata, where one of the ideas was that newly terraformed worlds also included specially-crafted fossils in the different geological strata—because humanity does not feel right living on a planet without history. It turned out to be true, at least for me.

News and foam

July 4th, 2025

…the knowledge of certain principles easily compensates the lack of knowledge of certain facts.

Claude Adrien Helvétius, De l’esprit (1758)

Today I want to rant about a different aspect of the world. There is a constant stream of what is called news every minute, but if you take a closer look at it, most of those pieces of news are not worthy of any attention. That is why I distinguish news—pieces of information about events that affected something—from foam—derivative pieces that bring no useful information, take up more volume than the original news, and quite often are used to obscure the original source. If you have a suspicion that this applies to other produced content (like “X reacts to Y” videos) then you may be right.

Anyway, suppose there is some planned event X. Usually the reports related to it will go like this:

  1. X is going to happen!
  2. random unrelated person comments on X;
  3. famous self-proclaimed expert estimates impact of X;
  4. X is scheduled to happen in next couple of days;
  5. X happens;
  6. the administration comments upon X success/failure/whatever;
  7. random unrelated person comments on X;
  8. random tangentially related person comments on X;
  9. aftermath of X.

It should be obvious that #5 is the real piece of news, with #1 and #4 having some importance (and #9—but only in the case when the event had unforeseen consequences). The rest is just filler for the news feed (those ad views won’t generate themselves, you know). It may be done to keep up interest in the topic itself, but then it’s propaganda and not really news.

The statement from the epigraph can be applied to the news as well: if you know how things normally go you don’t need to follow the news. Here’s a typical example for news I care about: russia commits its umpteenth war crime (the fact by itself is no news, it’s the circumstances that make it news); the Ukrainian president / minister of foreign affairs / other official condemns it (that’s what they always do, so it’s no news); some European official expresses condolences (still no news); russia celebrates and boasts how it hit an important military target (which almost every time is civilian infrastructure—a tenement house, post office, hospital and such; but russians lying is no news either); the USian administration tries its best to ignore the fact that russians did it (if you haven’t spotted the pattern, it’s still no news). There may be some follow-up pieces of actual news eventually (rescuers finishing the operation, new bodies being discovered, some victims of the attack dying at the hospital, a local mourning day being declared if the victim count is too high) but they do not add much to the picture.

Similarly, news from the USA is rather irrelevant if you know a couple of things that have happened recently: USians elected a chaotic president who decided that it’s time to cash in on all the goodwill the USA has been building since the 1940s; his favourite tool is tariffs; his team consists mostly of people picked for their loyalty and not their intellect; after the events of 2020 he decided that the system of checks and balances hinders him and should be dismantled. So every time I see something about his administration violating the law with no repercussions, members of it proving themselves incompetent with no consequences, or tariffs being declared and/or imposed on some random country and then waived again—those are things to be expected. Even the split of the two greatest lovers on Earth was only a question of time—and when it happened, it was a real piece of news, unlike what they wrote about each other on their own social networks (it may be interesting to future historians and current stand-up comedians though). And if you remember the phrase “Will no one rid me of this turbulent priest?” then subordinates acting without an explicit presidential order is no news for you either.

Similarly, global EU news is non-existent if you remember that important decisions require consensus—and there’s Hungary (and Slovakia from time to time) using its veto power to extract benefits (from both the EU and russia, and occasionally China). And of course member countries not willing to spend money on infrastructure and defence are no news either.

In conclusion I want to say that while thinking hurts, it can still save you time. Sometimes important news does happen, but mostly you don’t even need to scan the news headlines that thoroughly.