This fortnight in na_game_tool

February 15th, 2025

Since last time I’ve added a couple more formats:

  • finally did Dragon’s Lair (PC port) videos—it turned out not as bad as I expected after Space Ace;
  • Jazz Jackrabbit 2 videos (from the decompiled decoder in Jazz² Resurrection);
  • finally implemented Xilam DERF format support (audio and video);
  • ported FutureVision FCMP and FST decoders from main NihAV;
  • freshly implemented scene format for Psychic Detective;
  • and Cyclemania video support.

If you, for some unfathomable reason, wonder when it will be released, I can answer: not soon.

As I mentioned before, I intend to make a release with at least a dozen new REd formats and I’m still far from that goal. Since version 0.2.5 I’ve added support for about fifteen formats but most of them don’t count: some were ported from NihAV, some were based on other sources (like game engines or—for some planned additions—entries in The Wiki not based on my research), and some are of too poor quality. This means only about half of those formats are eligible—and I don’t know where to find the other half (and that’s not counting other things I’d like to work on).

And there’s another small but irritating detail: so far I’ve implemented decoders for all but three letters of the alphabet. Formats with names starting with ‘o’, ‘w’ and ‘y’ are still missing. Of course I could easily fill the gaps by supporting Origin Xan, Westwood VQA (or CPS) and Psygnosis YOP, but I’d prefer to have something original instead of well-known formats supported elsewhere.

CinEApak

February 11th, 2025

First of all I’d like to blame Peter Ross for ignoring this peculiar format from his favourite company. Apparently he prefers to dig Lead instead.

Anyway, as I was scraping the bottom of the barrel for FMV game formats to support in na_game_tool (and I still haven’t found good candidates for some letters—a rant about that will come later), I encountered a game called Psychic Detective.

It stores video in large .str files with some external means to signal where a segment starts and what format it has (as it may switch between 160×120 and 320×240 video and 11kHz stereo and 22kHz mono audio).

Audio is bog-standard IMA ADPCM; the video, while being intra-frame only, is much more curious.

First of all, it employs vector quantisation with the same approach as Cinepak: there is a codebook of 2×2 blocks that get scaled to a 4×4 output block, there is a codebook of 2×2 sub-blocks used to compose 4×4 output blocks, there are block mode bits telling whether it’s a one- or four-vector block, and finally there are vector indices. And for larger (320×240) videos there is even a second codebook that replaces the four-vector codebook after a provided line, a lot like Cinepak slices. The main difference is the structure: while Cinepak has a more flexible frame structure with chunks for codebook updates, indices and such, here all data is stored in a fixed order with the sizes transmitted in the frame header.
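
To illustrate the common scheme, here’s a minimal sketch of such one-/four-vector block reconstruction in Rust (the type and names are mine and the actual bitstream layout differs; this only shows the Cinepak-style idea):

    // each codebook entry is a 2x2 block of pixels (hypothetical layout)
    type Vec2x2 = [u8; 4];

    // Paint one 4x4 output block either from a single vector scaled up
    // ("one-vector" mode) or from four 2x2 vectors ("four-vector" mode).
    fn draw_block(dst: &mut [u8], stride: usize, x: usize, y: usize,
                  one_cb: &[Vec2x2], four_cb: &[Vec2x2],
                  four_vec: bool, indices: &mut impl Iterator<Item = u8>) {
        if !four_vec {
            // one index; every codebook pixel covers a 2x2 output area
            let cw = one_cb[indices.next().unwrap() as usize];
            for j in 0..4 {
                for i in 0..4 {
                    dst[(y + j) * stride + x + i] = cw[(j / 2) * 2 + (i / 2)];
                }
            }
        } else {
            // four indices, one 2x2 vector per quadrant
            for q in 0..4 {
                let cw = four_cb[indices.next().unwrap() as usize];
                let (bx, by) = (x + (q & 1) * 2, y + (q >> 1) * 2);
                for j in 0..2 {
                    for i in 0..2 {
                        dst[(by + j) * stride + bx + i] = cw[j * 2 + i];
                    }
                }
            }
        }
    }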

Oh, and there’s another minor detail: frames simultaneously code 8-bit paletted and YUV video (rendered as 15- or 16-bit RGB). This is achieved by transmitting the actual palette and duplicating the codebook information first as palette indices and then as YUV blocks (so the decoder can decide which one to use). Also the YUV to RGB conversion is simplified to look a lot like the Cinepak formula.
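
For reference, Cinepak’s simplified conversion looks like this (these are Cinepak’s coefficients, not necessarily the exact ones used here):

    // Cinepak-style YUV->RGB: shifts only, chroma is signed
    fn yuv2rgb(y: i32, u: i32, v: i32) -> (u8, u8, u8) {
        let r = y + 2 * v;
        let g = y - u / 2 - v;
        let b = y + 2 * u;
        (r.clamp(0, 255) as u8, g.clamp(0, 255) as u8, b.clamp(0, 255) as u8)
    }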

I keep repeating the same thing again and again because it still holds true: those old codecs are much more interesting to look at, because sometimes you stumble upon crazy methods like this one.

A bit on USA

February 2nd, 2025

As usual, I don’t have to say what has already been put into words much better by somebody else.

At the banquet, last winter, of that organization which calls itself the Ends of the Earth Club, the chairman, a retired regular army officer of high grade, proclaimed in a loud voice, and with fervency,

“We are of the Anglo-Saxon race, and when the Anglo-Saxon wants a thing he just takes it.”

That utterance was applauded to the echo. There were perhaps seventy-five civilians present and twenty-five military and naval men. It took those people nearly two minutes to work off their stormy admiration of that great sentiment; and meanwhile the inspired prophet who had discharged it—from his liver, or his intestines, or his esophagus, or wherever he had bred it—stood there glowing and beaming and smiling, and issuing rays of happiness from every pore—rays that were so intense that they were visible, and made him look like the old-time picture in the almanac of the man who stands discharging signs of the zodiac in every direction, and so absorbed in happiness, so steeped in happiness, that he smiles and smiles, and has plainly forgotten that he is painfully and dangerously ruptured and exposed amidships, and needs sewing up right away.

The soldier man’s great utterance, interpreted by the expression which he put into it, meant, in plain English—

“The English and the Americans are thieves, highwaymen, pirates, and we are proud to be of the combination.”

[…]

The initial welcome of that strange sentiment was not an unwary betrayal, to be repented of upon reflection; and this was shown by the fact that whenever, during the rest of the evening, a speaker found that he was becoming uninteresting and wearisome, he only needed to inject that great Anglo-Saxon moral into the midst of his platitudes to start up that glad storm again. After all, it was only the human race on exhibition. It has always been a peculiarity of the human race that it keeps two sets of morals in stock—the private and real, and the public and artificial.

And here’s a link to the full text in case you haven’t read it already.

Professional metric benders

January 28th, 2025

Today on “things that Kostya cannot change so he rants about them instead” we have something different from the usual political or open-source political rants.

There are several groups of people whose occupation is (in theory) to evaluate certain things. So (again, in theory) you can call them the metric for those things. In practice, though, they rather do the opposite and try to make things conform to the valuations they give, or at least to make the public perceive those things in a way that confirms the original claims (and truth be damned!).

Of course some would see nothing wrong with that, while others would even try to tell you that they’re always right because they cannot be wrong and thus only their opinion is the true one. Well, I’ll present three examples so you can see for yourself.

Let’s start with the most prominent example, namely lawyers. In an ideal world, lawyers are a part of the judicial process, making sure that the side they support is represented fairly—which means that judging is done according to the laws, without glaring mistakes or prejudices. In practice, though, lawyers tend to get associated with paid justice, meaning that quite often the outcome of a trial or litigation depends on the pay grades of the lawyers involved instead of the actual known facts (or even laws). Which sometimes leads to fun systems like the British one with its two mandatory kinds of lawyers (barristers and solicitors) and the USian one—resembling quantum dynamics—where you could call lawyers the elementary particles responsible for any interaction between entities (except that quantum dynamics is easier to comprehend).

Then there’s another often disliked group of people called philosophers. In theory philosophy is a way to explain the world or some of its aspects. So one would expect a philosopher to be a thinker who studies the world (or part of it) and draws some conclusions about how it works and what implications that has for the rest of things. For instance, science may study human morals as a thing emerging in collectives and affecting interactions between members of those collectives, while philosophy may ponder how morality defines the human itself and what should be considered the ideal morality. But the modern philosophers seem to work in reverse: first they start with a conviction (quite often a petty one that benefits them directly) and work up from there to build a system that provides an excuse for their beliefs. Of course this is unlikely to be just a modern trend, but history has preserved enough examples of real philosophers for every epoch and different countries as well—which is hard to say about the modern world.

And finally art critics. One would naïvely expect them to be people with certain tastes who appraise certain kinds of art (paintings, sculptures, books, movies, video games and so on) and tell the public their opinion of it. You may like them or not, agree with them or not, but in either case such reviews should not merely give an abstract score but also provide an explanation of what was done right, what could be improved, and what hidden qualities may make the work even better than your first impression suggested. There’s a reason why people still remember and quote Roger Ebert (of the Chicago Sun-Times) or Scorpia (of Computer Gaming World). But the majority of modern art critics seem to start from the premise of having to praise the reviewed product (often, apparently, out of fear for their salary and other benefits—disappointed owners of a badly-reviewed product may stop advertising in your media or stop providing early access to the next products they release and so on) and construct the review leading to that goal without mentioning the actual reasons. What makes it worse is that often it’s accompanied not by the notion “if you love X and Y then this is definitely a thing for you, otherwise you may want to skip it” but rather “if you don’t like it you’re a dumb bad person”. Thanks, I still remember enough of the Soviet Union to reject that at a visceral level.

So there you have it. Of course this effect is nothing new, but for some reason I felt I needed to say it, so here it is.

This month in na_game_tool

January 21st, 2025

Since I’ve managed to locate a couple of FMV games, I decided to make a short digest of what I’ve been working on (and will keep working on for a bit longer).

Ravenloft ANM

This is a predecessor of DFA, the first format implemented in libav. It is simpler but has its own peculiar decisions: interframes may be updated as a series of line segments on each column, there’s a heavily interlaced raw picture format (for 4×4 blocks), and there’s a special picture mode which says “decode this other delta picture N times” (though I haven’t managed to locate a sample for that).
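
As a rough illustration of the column-segment idea (the actual bitstream layout is not shown here, so the data structure below is purely hypothetical):

    // apply a list of (start row, pixels) segments to one column
    fn update_column(dst: &mut [u8], stride: usize, x: usize,
                     segments: &[(usize, Vec<u8>)]) {
        for (start_y, pixels) in segments {
            for (i, &p) in pixels.iter().enumerate() {
                dst[(start_y + i) * stride + x] = p;
            }
        }
    }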

ReadySoft scenes

This company is known for its ports of the original FMV games (you know, the arcade ones using Laserdisc). I’ve managed to locate samples from three of them, namely Dragon’s Lair, Space Ace and Brain Dead 13. All three turned out to use different formats while sharing some common traits.

I’ll start with the last game and keep going in reverse. Its videos are stored in a single BD.DAT file but are easy to extract (a 32-bit header size followed by the TOC with a 32-bit video ID, offset and size per entry). The videos themselves are composed of two layers—background and sprite(s). Both are updated independently and the final image is composed from them. Background data employs RLE compression, sprites employ RLE with skips. What makes it remarkable is that it not merely updates the background but also issues commands on how to scroll it around.
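
A minimal sketch of reading that table of contents (field order as described above; the endianness and whether the header size counts itself are my assumptions):

    use std::io::{self, Read};

    struct TocEntry { video_id: u32, offset: u32, size: u32 }

    fn read_toc<R: Read>(src: &mut R) -> io::Result<Vec<TocEntry>> {
        let mut word = [0u8; 4];
        src.read_exact(&mut word)?;
        let header_size = u32::from_le_bytes(word) as usize;
        // assume the rest of the header is the TOC, 12 bytes per entry
        let nentries = header_size.saturating_sub(4) / 12;
        let mut toc = Vec::with_capacity(nentries);
        for _ in 0..nentries {
            let mut raw = [0u8; 12];
            src.read_exact(&mut raw)?;
            toc.push(TocEntry {
                video_id: u32::from_le_bytes(raw[0..4].try_into().unwrap()),
                offset:   u32::from_le_bytes(raw[4..8].try_into().unwrap()),
                size:     u32::from_le_bytes(raw[8..12].try_into().unwrap()),
            });
        }
        Ok(toc)
    }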

(Side note: this makes me think about the experimental FFV codec that used field-of-view classification to split video into layers for better compression; IIRC the technology was sold for the use in some video editor and the compression part was ditched completely. Sadly the links to it are equally dead and were not archived.)

Then there’s Space Ace, which is mostly similar but uses an additional mask with each sprite telling which background blocks to update and which to ignore (since they’ll be overwritten anyway). There are two additional annoying features: while the output image is 320×200, the background is 328×208 for some reason (for better scrolling?); and the output is vertically interlaced. As a result it stores every fourth column of both background and sprite grouped together, so you have to do some tricks to restore the proper image. Take the scrolling into account and you’ll understand why I gave up and committed my buggy version with the words “eh, good enough”.
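
Here is one plausible way to undo that interleaving (a sketch of my reading of the layout: four quarter-width column groups stored one after another; the actual format may differ in details):

    // gather every fourth column back into its proper place
    fn deinterleave_columns(src: &[u8], dst: &mut [u8],
                            width: usize, height: usize) {
        let qw = width / 4; // columns per group
        for group in 0..4 {
            let plane = &src[group * qw * height..][..qw * height];
            for y in 0..height {
                for col in 0..qw {
                    dst[y * width + col * 4 + group] = plane[y * qw + col];
                }
            }
        }
    }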

Dragon’s Lair seems to be the oldest of them all and uses the same RLE but with a different file format. I haven’t even finished figuring out the details. Maybe some other day, maybe never.

Ascon videos

This is a format used in some games by ASCARON Entertainment GmbH back when it had a slightly shorter name. The format is very peculiar, and I’m not just talking about the German chunk names like BILD or ENDE.

The earlier version codes planar YUV frames (5-bit luma and 8-bit chroma components) either in raw form or as a delta with possible runs for unchanged areas of each component. Additionally, while some YUV codecs (Cinepak and Indeo 3 come to mind) allowed displaying their output as paletted video, here the files start with a large PALT chunk specifying both the palette and a way to map a 15-bit YUV value to a palette index.
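
The mapping itself can then be a simple table lookup; a sketch under the assumption that the 15-bit key packs the 5-bit luma with the top five bits of each chroma component (the real packing may differ):

    // look up the palette index for a YUV triple via the PALT-provided table
    fn yuv_to_pal(y5: u8, u: u8, v: u8, yuv2pal: &[u8; 1 << 15]) -> u8 {
        let key = ((y5 as usize) << 10)
                | ((u as usize >> 3) << 5)
                | (v as usize >> 3);
        yuv2pal[key]
    }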

Then the format was extended to support 6-bit luma as well and an additional frame format that first codes an offset into the reference buffer for each 8×8 block. But that’s not all: there’s a completely new frame compression method that works in RGB and employs LZSS directly on pixels, using the whole frame instead of a limited window.
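
The copy operation in such a scheme is the usual LZSS one, just with offsets allowed to reach anywhere into the already decoded frame; a sketch (not the actual decoder):

    // copy `len` pixels from `offset` bytes back; ranges may overlap,
    // so the copy has to proceed byte by byte
    fn lzss_copy(dst: &mut Vec<u8>, offset: usize, len: usize) {
        let start = dst.len() - offset;
        for i in 0..len {
            let b = dst[start + i];
            dst.push(b);
        }
    }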

I still have to figure out some details like special cases for offset wraparound but the decoder should be complete soon.


Discovering such crazy codecs is definitely more interesting than, say, watching AV2 development. I’ve yet to see how the console codec (suggested by Paul) works, but I don’t expect much from it.

Call for a new container format!

January 17th, 2025

Sometimes I remember that Matroska exists, and today I also remembered how it came into existence. Its author proudly admits to mixing all the buzzwords of the Web 2.0 era like XML, Semantic Web and so on and coming up with that format. Since we’re in the Web 3.0 era, we should have something more modern.

That is why I’m calling for a modern multimedia container format to supplant the outdated formats of old. It should encompass all the features that make the modern Web great:

  • binary JSON as the backbone of the format;
  • a central repository for the downloadable descriptions of the parts of the format (but not the codecs themselves! Think of it as the MXF specification if it helps);
  • blockchain (as well as clusterchain and framechain);
  • metaverse integration;
  • decentralised storage (so that the container may refer to some data in the cloud as well as on the local disk; even MOV could do something like this);
  • and of course AI!

Some of you may ask where AI can be applied in this scenario. The answer is obvious—transforming input data for better compression (let alone generating metadata or enabling better integration with other Web 3.0 products). A good model should be able to achieve the same savings as Matroska did by e.g. shaving off common header bytes from each frame, but without a special mapping. An excellent model may generate the content from an embedded description instead of transmitting AV2 video. And of course the central repository will contain the descriptions of the models and parameters to be used (in addition to the descriptions of better representations of the container layout). The possibilities are limitless!

Proposals should be sent to the Alliance for Open Media; I have worse things to deal with.

Another fun bit about image formats

January 11th, 2025

Since I have nothing better to do (or the pending stuff makes me think I have nothing better to do), I’ve been searching for games to look at (preferably ones with a custom video format with an extension starting with ‘o’, ‘w’ or ‘y’, but any undocumented custom video format would do).

One of those games was Blood & Lace (despite the name, it’s not about somebody cutting their fingers while trying to put shoes on; whatever—I mostly study games, not play them). It had some videos using Indeo 4 or 5 plus some animations apparently done in the game engine. Those were compressed, as were all the TGA files. So out of curiosity I looked at the binary specification, and apparently most of the files are compressed with LZH (the old compression method with dynamic Huffman coding used in LHA) using a custom header starting with JFX1. And the TGA files use a JGF5 header which is almost the same but also stores the image width and height before the LZH-compressed data.
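
A sketch of parsing such a header (the field sizes and endianness here are my guesses, not verified against the actual files):

    use std::io::{self, Read};

    // read the JGF5 signature plus image dimensions; LZH data follows
    fn read_jgf5_header<R: Read>(src: &mut R) -> io::Result<(u16, u16)> {
        let mut magic = [0u8; 4];
        src.read_exact(&mut magic)?;
        if &magic != b"JGF5" {
            return Err(io::Error::new(io::ErrorKind::InvalidData, "not JGF5"));
        }
        let mut dim = [0u8; 4];
        src.read_exact(&mut dim)?;
        let width  = u16::from_le_bytes([dim[0], dim[1]]);
        let height = u16::from_le_bytes([dim[2], dim[3]]);
        Ok((width, height))
    }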

That’s another one for my collection of common LZ77-based methods used in multimedia compression; I wonder what will be next.

A quick review of LZ77 matching techniques

December 31st, 2024

You can blame Paul for getting me interested enough to spend some time on research. He discussed how he implemented deflate support for librempeg and I got curious how other fast LZ77-based encoders do the matching.

The traditional approach is to have a hash chain (i.e. associate each dictionary position with a hash, usually calculated from three symbols, pointing to the next position with the same hash). It’s simple and works surprisingly well. Almost all compressors I looked at used either this approach or a hash map.
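
For illustration, a minimal hash-chain matcher sketch (the window handling, hash size and chain limit are arbitrary choices of mine, not taken from any particular compressor):

    const HASH_BITS: usize = 15;
    const MAX_CHAIN: usize = 128; // how many candidates to examine

    struct Matcher {
        head: Vec<i32>, // most recent position for each hash value
        prev: Vec<i32>, // previous position with the same hash
    }

    fn hash3(data: &[u8], pos: usize) -> usize {
        let h = ((data[pos] as u32) << 16)
              | ((data[pos + 1] as u32) << 8)
              | (data[pos + 2] as u32);
        (h.wrapping_mul(2654435761) >> (32 - HASH_BITS as u32)) as usize
    }

    impl Matcher {
        fn new(data_len: usize) -> Self {
            Self { head: vec![-1; 1 << HASH_BITS], prev: vec![-1; data_len] }
        }

        // return the best (distance, length) match for `pos`,
        // then insert `pos` into the chain
        fn find_and_insert(&mut self, data: &[u8], pos: usize)
            -> Option<(usize, usize)>
        {
            if pos + 3 > data.len() { return None; }
            let h = hash3(data, pos);
            let mut cand = self.head[h];
            let mut best: Option<(usize, usize)> = None;
            let mut steps = 0;
            while cand >= 0 && steps < MAX_CHAIN {
                let cpos = cand as usize;
                let len = data[cpos..].iter().zip(&data[pos..])
                    .take_while(|(a, b)| a == b).count();
                if len >= 3 && best.map_or(true, |(_, bl)| len > bl) {
                    best = Some((pos - cpos, len));
                }
                cand = self.prev[cpos];
                steps += 1;
            }
            self.prev[pos] = self.head[h];
            self.head[h] = pos as i32;
            best
        }
    }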

The only exception I saw was GLZA, which describes its approach as counting symbol frequencies first and then building a partial suffix tree just for the most common symbols.

And finally I encountered a paper by Sadakane and Imai from 2000 where the authors proposed two improvements: introducing a secondary hash for the collisions in the hash chain or, alternatively, using suffix sorting (in a suffix array). They report that the two-level hash improves performance significantly for large dictionary sizes while the gain is negligible for 64kB or less; suffix-sorting time, meanwhile, seems to grow logarithmically with the dictionary size, beating the one-level hash at dictionary sizes of 256kB and the two-level hash at dictionary sizes above 1MB.

So, apparently, this is yet another case where keeping it simple is advantageous. Compressors with larger dictionaries will benefit from a different approach, while deflate has a maximum dictionary size of 32kB, where anything but brute-force search will be about equally good.

In either case, I don’t regret spending time on this quick research.

On the sorry state of opensource multimedia

December 25th, 2024

I’ve been wanting to write this post for a long time, with a focus on the difference between a hobby project and a product, and only about NihAV. But a recent FFdrama made me re-think both the structure and the conclusions.

Apparently there’s another surge of developer discontent in jbmpeg over receiving the mushroom treatment (not for the first time and probably not for the last). IMO they need to realise the project is as free and democratic as the Soviet Union, and you simply need to agree to the things proposed by the General Secretary (definitely not the leader)—that would save the time and nerves of everybody involved. As I wrote countless times before, I do not fear for the future of that project, as it can keep up such an existence indefinitely, and here I’ll try to present my reasons why.

First of all, a revolution à la libav won’t work—Michael has learned the lesson and he won’t be kicked out again (not that it really worked in 2011, but now there are no chances for that at all).

Second, if you split off and form an alternative, it doesn’t have many chances of replacing the original. And if you decide to write anything from scratch, your chances are next to zero. The rest of this post is dedicated to answering why.

Recently I re-read The Mythical Man-Month, which tells not only about the author’s experience designing IBM OS/360 but also presents more general observations and ideas. And right at the beginning he talks about the difference between a program, a programming product, and a programming systems product. Essentially, a program is something a programmer writes that works for him on his system; a programming product is a program with documentation and support; and a programming systems product is one that works as a component in a larger system. And moving from one stage to another requires an effort several times larger than the previous one (I’m simplifying a lot and probably misremember something—so you’d better read the original book, it’s worth reading anyway).

Here we have a similar situation: writing a tool just to do things for you is straightforward, even I have managed to do it with NihAV; making it into a product requires offering much wider support for different platform configurations (for example, my video player has VA-API hardware decoding enabled by default, while it’s not available, say, on Windows, so you need to switch that feature off there in order to build it) and different features (e.g. nihav-encoder works for testing encoding per se, but lacks the ability to encode input into a good intermediate format supported by other players and encoders). And it gets even worse if you try to make it into a library ready to be used by others—besides the usual things like documentation, you’re expected to guarantee some API stability and a certain level of quality. So while I may not care that my app panics or crashes in certain circumstances, it’s significantly less forgivable for a library. And of course achieving such a quality level requires a lot of unexciting work on small details. Debugging is even worse.

Suppose you decide to create a fork and work from that. Then you’re still in a much worse position—you may have the same codebase, but there are no killer features you can offer and you don’t have the recognition. libav managed to succeed for a while since it was supported by some distribution maintainers—and even then users complained because the de facto brand name was replaced with some unknown thing. And I guesstimate that 40% of current jbmpeg developers contribute to it in order to upstream the changes they make while using it in their employer’s product or pipeline. So how can you convince those companies to use your fork instead? And that’s not taking the patent situation into account, which makes substantial support from any large company for your project rather improbable.

Good thing I’ve never intended NihAV to be competition, but what about other projects? rust-av died because of lack of interest (Luca claims that he started it mostly to learn Rust and see how performant it can get—mission accomplished, no further development required). librempeg fares better, but I doubt that Paul wants to deal with all the demands other parties make for the honour of having your stuff included in their distribution (or being used without even a credit).

Another thing that needs to be mentioned is that multimedia is no longer an attractive field. Back when I started to dabble in it, it was rather exciting: there were many different formats around—in active use as well—and people wanted to play them not only with the proprietary players. There were libraries and players supporting only a specific subset of formats, like avifile or libquicktime or a DVD-only player. Nowadays it’s usually a combination of H.26x+AAC in MP4 or VP9/AV1+Opus in WebMKV, all formats have specifications (unless you lack the Swiss Francs to pay for the ones from ISO) and new formats are not introduced that often either. Of course we might have H.267 standardised soon, but who uses even H.266? When was the last time you heard AV2 development news? The codec was supposed to be released a couple of years ago, did I miss it along with AV3? Do you remember the Ghost audio codec from Xiph? Of course Fraunhofer will keep extending the AAC patent lifetime by inventing new formats and naming them things like OMGWTFBBQ-AAC, but who really cares?

That is why I believe that no matter how dysfunctional jbmpeg is, it will keep existing in this undead state indefinitely, as it’s good enough for most users and there’s no compelling reason (e.g. new popular formats or radically different ways to process data) to switch to anything else. The only winning move is not to play.

To NIH or not to NIH

December 22nd, 2024

Paul of librempeg fame informs me about his achievements occasionally (and in my turn I try to remind the world from time to time that this project exists and may provide functionality hardly found elsewhere, like various filters or console formats support). His recent work was implementing an inflate routine specifically for multimedia needs. This made me wonder whether it makes sense to have a custom deflate decompressor and packer in a multimedia project when zlib exists, and I think it makes perfect sense. Of course in NihAV I NIHed it because the project concept demands it and it was a nice exercise, but it makes sense in more serious projects too, and below I’ll try to explain the reasons.

Zeroth of all, a quick reminder about flexibility. RFC 1951 merely specifies the format, so implementations can output varying bitstreams that will all decompress correctly. Back in the day when I worked on it I mentioned how you can compress data in different ways, dedicating more time to achieve better compression. And there are more tricks not mentioned there, like parallel compression.

Now, the first reason to have your own implementation is that you can adapt it to your custom needs. As Paul demonstrated, inflating data into a custom format might be beneficial. If all you need in 99% of cases is unpacking data from one buffer into another, or into a frame buffer (which has padding, so you have to output, say, 27 bytes, then skip 5 bytes, output 27 bytes more and so on), you can do without all the additional functions and not bother with a sequence of calls to partially inflate data.
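
As a sketch of that idea (the names are mine; a real inflate core would feed every decoded byte through something like this):

    // write decompressed bytes into a padded frame buffer, skipping the
    // padding at the end of every line automatically
    struct PaddedSink<'a> {
        dst: &'a mut [u8],
        stride: usize, // full line length, including padding
        width: usize,  // visible bytes per line
        x: usize,
        y: usize,
    }

    impl<'a> PaddedSink<'a> {
        fn put(&mut self, b: u8) {
            if self.x == self.width {
                // jump over the padding to the start of the next line
                self.x = 0;
                self.y += 1;
            }
            self.dst[self.y * self.stride + self.x] = b;
            self.x += 1;
        }
    }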

Then there’s the question of consistency. You cannot be sure that a new version of zlib will produce the same output as the previous one (I vaguely remember a small scandal when GitHub releases were re-generated using a different deflate implementation, which resulted in a lot of headache because old archive hashes were no longer valid; there’s also a story of some Linux distros replacing zlib with zlib-ng and getting failed tests; and apparently even the “no compression” format may change). The case of liblzma is probably a good demonstration of yet other reasons why it’s not always wise to rely on third-party components.

And finally, you can not merely adapt the interface to your needs, you can tune it to handle your data better too. There’s a reason why there exist compressors targeting e.g. genome data. So when you compress image data, it may be beneficial to search for matches around the position right above the current line first, and the presets for the compression levels may be tuned to different sets of trade-offs. After all, deflate is often used in screen-capture codecs where real-time performance is more important than the compression ratio. But who can imagine people tinkering with an encoder trying to improve its performance in a multimedia project?
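
For example, a match finder for image data could probe the position exactly one line above before doing any general search (a sketch; `stride` here stands for the assumed line length of the uncompressed image):

    // check for a match starting exactly one line above the current position
    fn probe_above(data: &[u8], pos: usize, stride: usize,
                   min_len: usize) -> Option<(usize, usize)> {
        if pos < stride { return None; }
        let cand = pos - stride;
        let len = data[cand..].iter()
            .zip(&data[pos..])
            .take_while(|(a, b)| a == b)
            .count();
        if len >= min_len { Some((stride, len)) } else { None }
    }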

I hope this helped to convince you that there are circumstances where NIHing something may prove worthwhile. As a rule of thumb, if it’s easier to implement something yourself than to re-use an existing library, then maybe you should do so. That is the right way; down the other way lies left-pad.