Author Archive

Some words about DMV format

Friday, November 14th, 2025

I’ve said it before and I’ll say it again: I like older formats because they tried more ideas than modern ones. DMV turned out to be surprisingly full of unconventional approaches to video compression.

First of all, it uses fixed-size blocks, with the palette and image data occupying the initial part of each block and the audio taking up a fixed-size tail part.

Then there’s the image compression itself: each frame is split into 2×2 tiles that can be painted with 1-4 colours using pre-defined patterns, so only the colours and pattern indices are transmitted. Those indices are further compressed with an LZ77-like method that has only two operations, “emit literals” and “copy previous” (with source and destination allowed to overlap, but with at least a 4-byte gap so the copying can be done more efficiently).
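
Here’s a minimal sketch of how I picture that copy operation (the names and code are mine, not taken from any actual decoder): since the source has to stay at least four bytes behind the destination, the copy can always proceed in chunks of up to four already-written bytes.

    // "copy previous": append `len` bytes starting `offset` bytes before the
    // current end of the output; overlap is allowed, but offset >= 4 guarantees
    // that a whole 4-byte chunk is always available
    fn copy_previous(out: &mut Vec<u8>, offset: usize, len: usize) {
        assert!(offset >= 4 && offset <= out.len());
        let mut src = out.len() - offset;
        let mut left = len;
        while left > 0 {
            let chunk = left.min(4);
            for i in 0..chunk {
                let b = out[src + i];
                out.push(b);
            }
            src += chunk;
            left -= chunk;
        }
    }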

And even that is not all! I actually lied a bit, because in reality the tiles are 2×2×2: one pattern (using up to 8 colours) codes two 2×2 blocks in two consecutive frames simultaneously. Now that’s an idea you don’t see every day. There are many codecs that code colours and patterns separately (like Smacker or System Shock video), and some use a variation of LZ77 to compress the data even further, but the only other codec I can name that coded two frames at once would be H.263+ with its PB-frames (and that was mostly about interleaving data in order to avoid frame reordering, not about sharing common coding data between frames).

Searching discmaster.textfiles.com for large unrecognised files returned more hits, so I hope that some of the remaining formats will also feature some refreshingly crazy ideas. Now I only need the time and the mood for reverse engineering.

Quickly about VC2

Wednesday, November 12th, 2025

As I mentioned some time ago, Paul has shared a decoder for the bytedanceVC2 codec. At first glance it looked like an H.266 rip-off. After a more thorough look it seems to be more or less plain H.266.

There are some tell-tale signs of a codec using well-known technologies: simply listing the strings in the binary gives you “VPS/SPS/PPS/APS malloc failed” (with APS being the adaptation parameter set, which was not present in H.EVC but is present in H.266), “affine merge index is out of range” (again, affine motion was added in H.266), “lmcs_dec” (luma mapping with chroma scaling, another feature that was not picked up by H.265 but made it into H.266) and “alf_luma_num_filters_signalled_minus1” (a variable straight from the specification). I hope that this is convincing enough to claim that the codec is based on H.266 (or VVC).

Now what would a rip-off codec change? In my experience such codecs keep the general structure but replace the entropy coding (either completely or at least by changing the codebooks) and the DSP routines (e.g. for H.264 rip-offs it was common to replace the plane intra prediction mode with something similar but not exactly the same; motion compensation routines are the other popular candidate).

I cannot say I dug too deep into it, but the overall decoding is rather straightforward to understand, and from what I saw at least the CABAC was left untouched: it’s exactly the same as in the standard, and the model weights are also exactly the same (at least the initial part). Of course that does not preclude differences in the DSP routines, but I seriously doubt it differs from the standard (or some draft of it).

And with that I conclude my look at it. Those with more motivation and actual content to decode are welcome to try decoding it; I have neither.

Archive extraction support in na_game_tool

Saturday, November 8th, 2025

Back in the day I said that I wanted the 0.5.0 release of na_game_tool to be more than a mere decoder and to also support game data archive extraction. And what do you know, less than half a year later I’ve finally started to work on it.

The idea is rather simple: use the same mechanics to detect the archive type, create a plugin that will go through the archive contents entry by entry, use the information it provides to create an output file, and hand the plugin a writer to dump the contents to.
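
Here’s a rough sketch of how I imagine such a plugin interface (the names and signatures below are my own invention for illustration, not the actual na_game_tool API):

    use std::io::{Read, Seek, Write};

    // information the framework needs in order to create the output file
    struct EntryInfo {
        name: String,
        size: u64,
    }

    // combined trait so plugins can both read and seek in the archive
    trait ReadSeek: Read + Seek {}
    impl<T: Read + Seek> ReadSeek for T {}

    trait ArchivePlugin {
        // detect/parse the archive and prepare for iteration
        fn open(&mut self, src: &mut dyn ReadSeek) -> Result<(), String>;
        // describe the next entry, or None when the archive is exhausted
        fn next_entry(&mut self, src: &mut dyn ReadSeek) -> Option<EntryInfo>;
        // dump (and possibly convert) the current entry into the provided writer
        fn extract(&mut self, src: &mut dyn ReadSeek, dst: &mut dyn Write) -> Result<(), String>;
    }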

Of course such functionality is present in many other tools, but this one will make na_game_tool more than just a decoder for some game multimedia formats: it will support the formats I somewhat care about, and it can perform some format conversion while at it. It will also allow me to offload the throwaway code I wrote for extracting such formats, so that knowledge won’t be lost (as well as letting me use an existing framework when I need to create another extractor). With this the tool becomes more useful to me.

Here’s a simple example: the first archive format I implemented is for Access Software .ap files. The archive contains merely the offsets at which the next file starts and nothing more. But for convenience I added a bit of code that looks at the first bytes of an extracted file and gives it a reasonable extension if possible (e.g. for ACC, DBE, H2O, PTF and WAV files).
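
Just to illustrate the idea (this is not the actual code, and apart from the RIFF signature for WAV the checks are placeholders), the extension sniffing boils down to something like this:

    // guess a file extension from the first bytes of the extracted data
    fn guess_extension(data: &[u8]) -> &'static str {
        if data.starts_with(b"RIFF") {
            "wav"
        } else {
            // the real code would also check the ACC, DBE, H2O and PTF signatures here
            "bin"
        }
    }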

Here’s another example: the third (and currently last) supported archive format is Tsunami Media RLB. Besides mere extraction of all files (which is a necessary step for supporting their animation formats), it also outputs a proper image instead of raw image data for Man Enough (which involves remembering the last decoded palette and scaling the image twice horizontally). In the future I’d like to support images from other games like Ringworld, but that would involve a bit more work.

Ultimately I hope to keep such a hybrid approach to data extraction: convert simple formats (image and audio) in the extractor plugin if it makes sense, and leave more complex formats to another decoder (which may be na_game_tool, na_encoder or some third-party software). And I have a good candidate for that: Monty Python & the Quest for the Holy Grail. You can see a page describing the formats from that game linked in the top corner of this blog and even find a couple of posts about it. I guess that means that I care.

Of course it will take more months to make the next release (I’d still like to keep the rule about decoding at least twelve new formats based on my own research) but I’m not in a hurry anyway.

FFpropaganda

Saturday, November 1st, 2025

Recently this tweet was brought to my attention (by somebody else who saw it and found it hilarious). While I agree with the message (I also find those OMGsecurity people annoying and rather counter-productive), the premise is bullshit, namely these two lines:

Arguably the most brilliant engineer in FFmpeg left because of this. He reverse engineered dozens of codecs by hand as a volunteer.

So I’ll try to explain what’s wrong with that claim and with the FFmpeg twitter account (or FFaccount for short) in general.
(more…)

IFS Fractal codec

Friday, October 31st, 2025

As I mentioned before, apparently this format got popular enough to be licensed and used in three different container formats for three different goals (a generic VfW codec, a game video codec and an interactive video player). Another curious thing is that the codec operates in a GBR420 colourspace (yes, that means a full-resolution green plane and down-scaled red and blue planes—something between Bayer and YUV420). Oh, and of course the fact that it uses true fractal compression makes it even more interesting.

Even though the codec operates on full 8-bit values internally, the reference decoder outputs either 16-bit RGB or a paletted image (a new palette is transmitted for some frames, usually intra ones). And it’s worth mentioning that the decoder always works on 320×200 frames (probably for simplicity), while the IFS image format does not have that limitation.

Internally the codec operates on 4×4 blocks grouped into 8×8 super-blocks, so that some operations may be performed on whole 8×8 blocks instead. Planes are coded with green first and the red and blue planes placed next to each other second, both parts being coded independently (i.e. the format always codes source block offsets relative to the beginning of the current plane and switches planes mid-decoding). Overall, decoding is straightforward: read the frame header data, decode the block opcodes for green, continue with the block opcodes for red and blue, perform some operations depending on the header flags, and output the new frame.

There are several known header flags:

  • 8—repeat last frame;
  • 4—swap frame buffers (and output previous frame after decoding into the new current frame);
  • 2—palette update is present (the first header field is an offset to it);
  • 1—this is a fractal (key)frame, it should be decoded 16 times.

Yes, it’s the hallmark of true fractal compression: it does not matter from which source you start (here the planes are filled with the value 0xAB at the very beginning), after 7-10 iterations you’ll converge to the desired image (sounds perfect for error resilience, doesn’t it?). But since it’s a computationally costly process, inter frames merely do some kind of refinement (including motion compensation).
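
The convergence idea in a nutshell (my own sketch, not the reference decoder): start from a plane filled with 0xAB, apply every block transform once per iteration, and repeat—here the key-frame case simply runs 16 iterations.

    // `apply_all_transforms` stands for applying every block's affine transform once,
    // reading from the previous iteration and writing into the next one
    fn decode_fractal_keyframe<F>(apply_all_transforms: F, plane_size: usize) -> Vec<u8>
        where F: Fn(&[u8], &mut [u8])
    {
        let mut cur  = vec![0xABu8; plane_size]; // the initial contents do not really matter
        let mut next = vec![0u8;    plane_size];
        for _ in 0..16 {
            apply_all_transforms(&cur[..], &mut next[..]);
            std::mem::swap(&mut cur, &mut next);
        }
        cur
    }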

Block opcodes are several bit fields packed into bytes LSB first. The first three bits are the main operation ID, with seven signalling an extended operation with an additional 4-bit operation type. Here’s a table with them all (unlisted extended opcodes do not exist and make the reference decoder crash), with a parsing sketch after the list:

  • 0-3—affine transform block;
  • 4—skip next 1-32 blocks;
  • 5—motion compensation for the next 1-256 blocks;
  • 6—raw block data follows;
  • extended 0-7—affine transform for 8×8 block with an optional refinement for one of 4×4 blocks (in that case another 3-bit opcode is read and applied; the meanings are the same except that skip and motion compensation should apply only to one block and extended opcodes are not allowed);
  • extended 12—skip 33 or more blocks;
  • extended 15—raw 2×2 block put at absolute 16-bit offset. I doubt it’s ever been used.
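
Here is how such an opcode stream can be parsed (the bit reader and dispatch skeleton are my own sketch; only the field sizes and opcode meanings come from the description above):

    // a simple LSB-first bit reader
    struct BitReader<'a> {
        src: &'a [u8],
        pos: usize, // current byte
        bit: u32,   // current bit inside that byte (0 = LSB)
    }

    impl<'a> BitReader<'a> {
        fn new(src: &'a [u8]) -> Self { Self { src, pos: 0, bit: 0 } }
        fn read(&mut self, mut nbits: u32) -> u32 {
            let mut val   = 0u32;
            let mut shift = 0u32;
            while nbits > 0 {
                let take = nbits.min(8 - self.bit);
                let bits = (u32::from(self.src[self.pos]) >> self.bit) & ((1 << take) - 1);
                val   |= bits << shift;
                shift += take;
                self.bit += take;
                nbits -= take;
                if self.bit == 8 { self.bit = 0; self.pos += 1; }
            }
            val
        }
    }

    // 3-bit main opcode, 7 = extended opcode with an additional 4-bit type
    fn parse_block_op(br: &mut BitReader) {
        match br.read(3) {
            0..=3 => { /* affine transform for the current block */ }
            4     => { /* skip the next 1-32 blocks */ }
            5     => { /* motion compensation for the next 1-256 blocks */ }
            6     => { /* raw block data follows */ }
            _     => match br.read(4) {
                0..=7 => { /* 8x8 affine transform, optionally refining one 4x4 block */ }
                12    => { /* skip 33 or more blocks */ }
                15    => { /* raw 2x2 block at an absolute 16-bit offset */ }
                _     => { /* unused; the reference decoder crashes on these */ }
            }
        }
    }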

Motion compensation is performed by copying a block from the previous frame using one of up to 16 possible motion offsets transmitted in the frame header. This approach was not unusual back in the day (Indeo 3 immediately comes to mind).

And now for the juiciest part, namely the affine transforms. Fractal coding works by finding a larger block (called a domain block) which, when down-scaled, rotated/flipped and brightness/contrast adjusted, will correspond to the current one. Here domain blocks are always twice as large (with down-scaling performed by taking each even pixel on every even line) and are located at even positions (so a 14-bit index is enough for them). Contrast adjustment is done as pix*¾+bias, with the bias being in the -64..63 range (so 7 bits are enough for it). The transforms themselves are described by their bit masks: bit 0 means the source block should be mirrored (i.e. left becomes right and vice versa), bit 1 means it should be flipped (top becomes bottom and vice versa) and bit 2 means it should be transposed (i.e. pixel (1,2) becomes pixel (2,1) and so on; this operation is for 8×8 blocks only).
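
Put into code, a single transform boils down to something like this (my own sketch following the description above, not the reference decoder; the transpose case for 8×8 blocks is left out):

    // apply one affine transform: copy a down-scaled 8x8 domain block into a
    // 4x4 range block with optional mirroring/flipping and contrast adjustment
    fn apply_affine_4x4(prev: &[u8], dst: &mut [u8], stride: usize,
                        domain_pos: usize, range_pos: usize,
                        mirror: bool, flip: bool, bias: i32) {
        for y in 0..4 {
            for x in 0..4 {
                let mut sx = x;
                let mut sy = y;
                if mirror { sx = 3 - sx; }
                if flip   { sy = 3 - sy; }
                // down-scaling means taking each even pixel on every even line
                let pix = i32::from(prev[domain_pos + sy * 2 * stride + sx * 2]);
                // contrast adjustment: pix * 3/4 + bias, with bias in -64..63
                dst[range_pos + y * stride + x] = (pix * 3 / 4 + bias).clamp(0, 255) as u8;
            }
        }
    }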

That’s all! It was good enough to compress video with a 4-10x ratio (or twice that if you treat it as 16-bit video instead of a paletted one) without the usual blocky artefacts present in other codecs. And it applied inter-frame compression in order to save both space and decoding time. While that part was not proper fractal compression, affine transforms were still used there (it reminds me somewhat of certain French game formats that combined motion compensation with mirroring or rotating).

The sad thing is, this is probably the only true fractal video codec in existence. Writing a fractal image compressor is so simple that everybody can do it as an experiment; making a proper video codec out of it apparently is not. Even ClearVideo, while coming from the same company and being believed to be a fractal codec, is not one in reality—key-frames are coded a lot like JPEG, and the only things it has in common with fractals are using quadtrees, copying blocks from elsewhere, and adjusting brightness when copying them. If not for the company name one would not think of it as having anything to do with fractals.

And as a bit of philosophy, it looks like this codec was adapted from the original fractal image encoder (as I said before, this kind of compression looks like the first straightforward scheme for fractal compression as it’s usually described in the literature) and adjusted for video by adding skip blocks and motion compensation. Then they probably started experimenting with better fractal compression—using blocks of different sizes with a quadtree to describe that, better compression of block parameters. Then at some stage they discovered that it was much easier and faster to code DCT blocks for key-frames and that plain motion compensation is good enough—that’s how they ended up with ClearVideo. And then they discovered that their newer products couldn’t compete with other codecs, and the company went out of business.

To give credit where credit is due, I must say that the company got one thing right: much of future video compression turned out to be about searching extensively for matching blocks, so if they had started a bit later and applied their know-how, they could’ve ended up with a very competitive video encoder by now.

Why wavelets are a bad choice for image coding

Thursday, October 30th, 2025

I’ve been teasing it for too long and finally the time for this post has come. I really have a reason to believe that wavelets are not the best choice for image compression, so here are my arguments.
(more…)

The most horrifying town in Germany

Sunday, October 26th, 2025

I never had a reason to care about Halloween, and nowadays the real-world news is scarier than any imagined horrors. Yet I remembered one of my old trips, so why not mention a curious fact from it.

There’s a town somewhere between Frankfurt and Frankfurt-lowcost airports named Oppenheim. People coming from there are obviously known as Oppenheimer, as are their descendants and people marrying their descendants, like the famous Lillian Oppenheimer (I know her as the person who popularised origami, but apparently she’s the mother of several well-known mathematicians as well) and some guy claiming to have become death, the destroyer of worlds.

But the town itself is more sinister, even if you disregard its catacombs. There’s Frankensteiner Hof—or residence of Frankenstein (maybe the descendant moved to Bavaria and got famous there). As for real monsters, just around the corner from the previous landmark they have a street named simply Zuckerberg—no suffixes for street or alley at all, just Zuckerberg.

It’s much better on the other side of Hessen—in Bavarian Miltenberg they have a plainly named Hauptstraße (meaning simply “main street”), and parallel to it runs Mainstraße, specially for the foreigners who don’t understand German.

Slow NihAV week

Saturday, October 25th, 2025

I don’t have much energy to work on stuff, so I spent most of my time doing nothing—and occasionally working on the fractal formats decoder.

But first of all I’d like to mention that I’ve added another format and a half to na_game_tool. I’m talking about the full-screen animations in the Hopkins FBI game. The format by itself is very simple: 640×480 frames, the first one stored uncompressed, the rest using extremely simple compression (values 0x00-0xD2 are pixel values, opcodes 0xD3-0xDD are used to code runs, opcode 0xFC signals the end of data, and the rest are used to code skips). Why did I call it a format and a half? Apparently what I described is true for the .anm files that are used to code FMV cutscenes, but there are also .seq files that have the same structure but no opcodes for runs (those are normal pixel values there). Curiously, the demo version of the game had ANM.EXE which can both decode and encode .anm files and has helpful messages too (if you know French).
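
For the curious, here’s roughly what such a frame decoder looks like (a sketch of mine: the opcode ranges are as described above, but the exact way run and skip lengths are coded here is an assumption, not something verified against the game data):

    fn decode_anm_frame(src: &[u8], frame: &mut [u8]) {
        let mut pos = 0; // position in the compressed data
        let mut dst = 0; // position in the 640x480 frame buffer
        while pos < src.len() {
            let op = src[pos];
            pos += 1;
            match op {
                0x00..=0xD2 => {            // a literal pixel value
                    frame[dst] = op;
                    dst += 1;
                }
                0xD3..=0xDD => {            // a run: length from the opcode (assumed), value follows
                    let len = usize::from(op - 0xD2);
                    let val = src[pos];
                    pos += 1;
                    frame[dst..dst + len].fill(val);
                    dst += len;
                }
                0xFC => break,              // end of frame data
                _ => {                      // a skip: length in the following byte (assumed)
                    dst += usize::from(src[pos]);
                    pos += 1;
                }
            }
        }
    }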

Anyway, back to the fractal compression. I’m still working out wrinkles in the TMM-Frac decoder, but it gives a recognisable picture more often than not. The fun thing is that back in the day Alyssa Milburn decompiled the same decoder in FVF (for a ScummVM engine), and the video decoding is the same, only the container is different. Unfortunately it is a decompilation, so it reconstructs the original code from the binary specification in C form with minimal improvements (see for yourself). Mind you, it’s a great accomplishment by itself, considering how tricky the code in question is even for modern decompilers (mostly because it uses segment registers to access different kinds of data and similar tricks). But since I care more about understanding how it works than about having a working decoder, I’m fine with having a buggy implementation that I can fix eventually.

Here’s a sample from a quickly hacked FVC1 decoder (frame 70 from fernlogo.avi, if anybody cares) made by copying my current TMM-Frac decoder core. As you can see, there’s still a lot to fix, but there is some progress there too. Mostly it serves as proof that it’s the same technology in all three cases (I’m yet to write an FVF decoder but it’s undoubtedly the same compression).

Of course when I finish it I’ll try to write a nice format description, as it is a bit more complex than the “apply an affine transform and contrast adjustment to that block” of pure fractal coding.

Meanwhile Paul has shared a byteVC2 decoder with me and I have to look at that codec eventually (big spoiler: it looks like an H.266 rip-off considering how the binary specification mentions ALF, SAO, WPP and such). So many things to procrastinate looking at!

A pair^Wtrio of exotic formats

Tuesday, October 14th, 2025

If it looks like I’m not doing anything, that’s about right. Nevertheless I’d like to discuss two exotic formats that I’d like to write decoders for.

The first one is unlike most of the video codecs I’ve seen so far. For starters, it uses fractal compression. Not surprising, since it comes from Iterated Systems. And unlike the later ClearVideo, it really is a fractal codec. From what I see, it works exactly like the textbook example of fractal compression: split the image into small fixed-size blocks, search for a domain block for each of them, apply a simple affine transform to a scaled-down version of it plus brightness scaling, and output the result. There are additional possible operations like leaving blocks unchanged or reading raw data for a block. Since this works only on greyscale, the frame is stored in YUV420 format, with planes coded sequentially. Unfortunately, since the binary specification is a mixed 16/32-bit VfW driver that Ghidra can’t decompile properly, the work on it goes at a glacial speed.
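
As a reminder of what that textbook scheme looks like on the encoder side (just an illustration of mine, nothing from the binary specification): for each range block you try the candidate domain blocks, fit the brightness and keep the best match. In the real thing the candidates would also include the mirrored and flipped versions of each domain block.

    // pick the best down-scaled domain block for a 4x4 range block (16 pixels),
    // assuming a fixed 3/4 contrast scale and a per-block brightness bias
    fn best_domain(range: &[u8; 16], domains: &[[u8; 16]]) -> (usize, i64) {
        let mut best = (0usize, i64::MAX);
        for (idx, dom) in domains.iter().enumerate() {
            // the average difference gives the brightness bias for this candidate
            let bias: i64 = range.iter().zip(dom.iter())
                .map(|(&r, &d)| i64::from(r) - i64::from(d) * 3 / 4)
                .sum::<i64>() / 16;
            // sum of squared differences after applying the scale and bias
            let err: i64 = range.iter().zip(dom.iter())
                .map(|(&r, &d)| {
                    let diff = i64::from(r) - (i64::from(d) * 3 / 4 + bias);
                    diff * diff
                })
                .sum();
            if err < best.1 {
                best = (idx, err);
            }
        }
        best
    }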

The other codec is like the previous one but it has its own container format and a DOS player. It comes from TMM—not The Multimedia Mike but rather the company known for the RLE-based PH Video format. I don’t see any mention of Iterated Systems in the binary specification, but considering how similar this FRAC codec is to theirs (it uses the same bitstream format with the same opcode meanings and the same assembly instructions) I expect they licensed it from Iterated Systems.

So hopefully when I actually finish it I’ll have two decoders for the price of one.

Update: while refreshing my knowledge of fractal compression, I discovered from the Wickedpedia article on it that two companies claimed to have got an exclusive license for the fractal compression algorithm from Iterated Systems—TMM and Dimension. The latter licensed it to Spectrum Holobyte to be used for FMV. And what do you know, that explains why FVF is named so and why its video bitstream syntax is the same as in the other two (and the code seems to be the same too). So I guess it means I’ll have almost the same decoder (but with different containers) in NihAV, na_game_tool and na_eofdec.

Update from November 4: I’ve finally implemented an FVF decoder in addition to the earlier FVC1 (VfW codec) and TMM-Frac decoders. So the whole trifecta is supported now.

A small rant about compression

Wednesday, October 8th, 2025

The recent news about OpenZL made me think about a tangential issue.

The approach by itself is nothing new really: a lot of archivers include a pre-processing step for the data (I don’t know if there are earlier examples, but de-interleaving or delta-coding floating-point data might be only slightly younger than the geo file in the Calgary Corpus, LZX translates call addresses into absolute offsets for better compression, and so on); more advanced archivers implement flexible processing steps (e.g. RAR had its own custom VM for pre-processing data, which was essentially a cut-down 8086 instruction set and a security nightmare, and there’s ZPAQ, which allows one to define compression steps for data-specific compression that won’t require a new decoder—in other words, something very similar to OpenZL). There’s nothing wrong with the approach, and it’s probably useful beyond, say, genomic data compression; it’s just that it raises two questions: what is the currently accepted compression/resources trade-off for general compression, and what would be a good candidate for an open-source archiver?
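
Before getting to those questions, here’s a trivial example of the kind of pre-processing meant above (my own illustration, not tied to any particular archiver): delta-coding 32-bit floats so that slowly changing values turn into small, repetitive deltas which a generic compressor handles much better.

    // turn a stream of f32 samples into little-endian deltas of their bit patterns
    fn delta_encode_f32(data: &[f32]) -> Vec<u8> {
        let mut out = Vec::with_capacity(data.len() * 4);
        let mut prev = 0u32;
        for &v in data {
            let bits = v.to_bits();
            out.extend_from_slice(&bits.wrapping_sub(prev).to_le_bytes());
            prev = bits;
        }
        out
    }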

The first question is obvious: as time goes by, the available CPU power and RAM grow along with the amounts of data to compress. Back in the day gzip was the gold standard and bzip2 was something that ate too much RAM and worked rather slowly. A bit later .tar.bz2 started to replace .tgz for, say, distribution tarballs. Nowadays it’s .tar.xz or .tar.zstd, which makes me wonder whether that is really the sweet spot for now or whether things will move towards adopting a compression scheme that’s slower but offers a better compression ratio.

The second question follows from the first one: what would be a good candidate, specifically for open-source applications. If you look around, there are not so many of those. You can divide existing formats (don’t confuse them with implementations) into several (sometimes overlapping) categories:

  • proprietary formats with an official open-source decoder at best (like RAR) or unofficial reverse-engineered one (e.g. RAD mythical sea creatures formats and LZNIB);
  • open-source compression libraries targeting fast compression (LZO, LZ4, FLZ, LZF, etc, etc);
  • old open-source compressors (compress, gzip, bzip2, zip);
  • various programs trying to bank on a well-known name while not being related to it (bzip3 and anything with “zip” in its name, really);
  • state-of-the-art compressors that require insane amounts of CPU and RAM (anything PAQ-based, NNCP);
  • corporate-controlled open-source formats (brotli, Zstandard).

The question is what would be a good candidate for the next de-facto compression standard. The current widespread formats are good since they’re easy to implement and there are many independent implementations in various languages, but how much can we trust the next generation—the one with flexible input pre-processing (the third question being whether that’s really the design approach mainstream compression formats will take)?

For instance, I have nothing against LZMA, but considering that its author is russian, how much can we trust that he won’t be visited by FAPSI representatives and forced to make changes to the LZMA3 design that would make Jia Tan green with envy? As for the formats coming from corporations, are you really going to rely on their goodwill? I think the story with LZW should serve as a warning.

The only reassuring thing is that it is still rather easy to design a new compression scheme and even achieve a decent compression ratio and performance (unlike training a neural network or even designing a video codec to rival H.265), so good candidates are likely to appear sooner or later.