Archive for the ‘NihAV’ Category

NihAV Game Tool: the official release

Sunday, July 21st, 2024

I’m finally proud (well, not too ashamed) to present a side project I’ve been wasting my time on.

The rationale behind it is simple: I sometimes write throwaway decoders in order to check if I understood a format properly or if I really want to see the decoded content. Usually it’s written in C with the same code (usually for dumping output as a PPM image sequence or reading 16-bit values) copied over and over again. So I thought to borrow bits from NihAV and finally make a framework for handling output creation, plus various utilities for handling input (e.g. reading integers of different sizes and endianness). It’s still better than doing nothing and it may be marginally useful to somebody else.
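To give an idea, such input helpers boil down to something like this (a minimal sketch with made-up names, not the actual na_game_tool API):

// Sketch of endian-aware readers over a plain byte slice.
pub struct ByteReader<'a> {
    data: &'a [u8],
    pos:  usize,
}

impl<'a> ByteReader<'a> {
    pub fn new(data: &'a [u8]) -> Self { Self { data, pos: 0 } }
    pub fn read_byte(&mut self) -> Option<u8> {
        let b = *self.data.get(self.pos)?;
        self.pos += 1;
        Some(b)
    }
    // The 16-bit little-endian read that used to be copied from one
    // throwaway decoder to another.
    pub fn read_u16le(&mut self) -> Option<u16> {
        let lo = self.read_byte()? as u16;
        let hi = self.read_byte()? as u16;
        Some(lo | (hi << 8))
    }
    pub fn read_u16be(&mut self) -> Option<u16> {
        let hi = self.read_byte()? as u16;
        let lo = self.read_byte()? as u16;
        Some((hi << 8) | lo)
    }
}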

Not all of the included decoders are completely my own work: some come from ScummVM via the documentation I created for them on The Wiki (sometimes with improvements, for example ScummVM does not handle compressed sections in PMV) and the AV format comes from the Lord of the Rings engine re-implementation (again, via The Wiki).

I should also mention a nasty surprise. Apparently, when AVI streams are sufficiently de-synchronised (e.g. the format pre-buffers a second of audio before sending any video frames), the libavformat AVI demuxer (which is used by a lot of multimedia tools nowadays) switches to a different mode and treats palette change chunks as normal video data (which works wonders on such a stream, of course; apparently nobody has hit this problem since I introduced palette change support there about twenty years ago; my own player is unaffected, too). So for such formats I had to introduce manual audio buffering (it’s not nice but the alternatives are worse).

And some words about the releases and release schedule. I don’t want to bother my friend who hosts the public NihAV repositories to add another one, and I want to get involved with the usual platforms even less. As a result I’ll simply dump source tarballs with a brief changelog on the site. Releases should happen irregularly, when I accumulate, say, another dozen formats or have other features implemented (like detecting a format by regex instead of just by extension, or OpenDML AVI support for being able to output annoyingly large files).

But for now the source code along with some formal Git history is available at a NihAV special page. Grab it while it’s not that stale.

A side project for NihAV

Sunday, July 7th, 2024

Since I still have nothing much to do (and messing with browsers is a bad idea unless the situation is desperate), I decided to make a NihAV-lite project. So I’m announcing na_game_tool.

This is going to be a simple tool to convert various game and image formats (and related ones) into an image sequence, WAV or raw AVI (which can then be played or processed with anything conventional). I’ve begun work on it already, but the release will happen only when I implement all the planned features (namely writing image sequences in BMP format, AVI output, and porting the two dozen half-baked decoders I wrote to test if I understood the formats).

Why a new project? Because I have nothing better to do, it still may be marginally useful for somebody (e.g. me), I can do some stuff not fitting into NihAV (for example, decode the 3DO version of TrueMotion video split into four files), and I don’t have to bother with other stuff that fits the demuxer-decoder paradigm poorly and requires inventing ways to convey format-specific information from the demuxer to the decoder. In my case I simply feed the input name to the input plugin and it returns frames of decoded audio or video data. Some hypothetical Antons might ask how to deal with the formats that use a variable delay in milliseconds between frames instead (and I’ve implemented one such format already). To which I can reply that one can fit a square peg in a round hole in two ways—by taking a larger hole or by applying more force. The former corresponds to having, say, a fixed 1000fps rate and sending mostly the same frames just to maintain the constant rate; the latter is forcing a constant framerate and hoping for the best. I chose the latter.

The design is rather simple: there’s a list of input plugins and output plugins. An input plugin takes the input name, opens whatever files it needs, outputs information about the streams and then decoded data. An output plugin takes the output name, creates whatever files it needs, accepts stream information and then receives and writes frames.
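In rough pseudo-Rust the two interfaces boil down to something like this (a sketch of the idea; the trait and type names are made up, not the actual na_game_tool code):

// Illustrative types; the real tool's stream/frame structures are not shown here.
pub struct StreamInfo { /* video dimensions, audio parameters, etc. */ }
pub enum Frame { Video(Vec<u8>), Audio(Vec<i16>) }

pub trait InputPlugin {
    // Open whatever files the format needs, starting from the input name.
    fn open(&mut self, name: &str) -> Result<(), String>;
    // Report the streams found in the input.
    fn get_streams(&self) -> Vec<StreamInfo>;
    // Return the next decoded frame and its stream index (None once exhausted).
    fn get_frame(&mut self) -> Option<(usize, Frame)>;
}

pub trait OutputPlugin {
    // Create output file(s) based on the requested name.
    fn create(&mut self, name: &str) -> Result<(), String>;
    // Accept stream information before any frames arrive.
    fn set_streams(&mut self, streams: &[StreamInfo]) -> Result<(), String>;
    // Receive and write a single decoded frame.
    fn write_frame(&mut self, stream_no: usize, frame: &Frame) -> Result<(), String>;
}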

Probably there’s a better alternative with librempeg but you’d better go read about it on Paul’s blog.

ARMovie: trying codec wrappers

Tuesday, May 7th, 2024

I’ve managed to locate two ARMovie samples with video formats 600 and 602 (which are M$ Video 1 and Cinepak respectively), so I got curious whether I could decode them. The problem is that ARMovie stores video data clumped together in large(r) chunks, so you need to parse each frame somehow in order to determine its length.

Luckily in this case the wrapped codec data has a 16-byte header (the first 32-bit word is either the frame size or a special code for e.g. a palette or a skipped frame, followed by some unknown value and the real codec width and height), so packetising the data was not that hard. The only problem was that the Video 1 stream was 156×128 while the container declared it as 160×128, but after editing the header it worked fine.
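A sketch of the header parsing, assuming the 16-byte header is simply four 32-bit little-endian words (which matches the description above but is not verified against the actual code):

// Returns (size_or_code, width, height, payload offset) or None if the chunk ran out.
fn parse_wrapper_header(chunk: &[u8], pos: usize) -> Option<(u32, u32, u32, usize)> {
    if pos + 16 > chunk.len() { return None; }
    let rd = |off: usize| u32::from_le_bytes(chunk[pos + off..pos + off + 4].try_into().unwrap());
    let size_or_code = rd(0);   // frame size, or a special code (palette, skipped frame)
    let _unknown     = rd(4);   // some unknown value
    let width        = rd(8);   // real codec width (may disagree with the container)
    let height       = rd(12);
    Some((size_or_code, width, height, pos + 16)) // wrapped codec data starts right after
}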

Supporting such wrappers actually poses more of a design question for NihAV—how to link those rather container-specific packetisers to a demuxer. Making the demuxer parse all possible formats is rather stupid, and my current solution of simply registering a global packetiser and hoping there’s no need for another one is a bit hacky. I should probably make it namespaced, so that the code first probes e.g. an “armovie/cinepak” packetiser before the plain “cinepak” one, but it’s additional effort for no apparent gain. Speaking of which, I should probably also change the code to feed the stream information provided by the packetiser to the decoder (instead of always using the one from the demuxer) but since I’m lazy I’m not going to do that either.
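The namespaced lookup itself would be trivial, something along these lines (an illustration only, with the registry reduced to a list of names):

// Try the container-specific packetiser name first, fall back to the generic one.
fn find_packetiser<'a>(registry: &'a [&'a str], container: &str, codec: &str) -> Option<&'a str> {
    let specific = format!("{}/{}", container, codec);
    registry.iter().copied()
        .find(|&name| name == specific.as_str())                             // e.g. "armovie/cinepak"
        .or_else(|| registry.iter().copied().find(|&name| name == codec))    // plain "cinepak"
}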

Anyway, I’m not going to spend more time on ARMovie unless new samples for the formats I don’t support show up (besides the newer Eidos Escape codecs, which are supported elsewhere already). There are other formats left to look at. For example, I’ve made good progress with the Adorage animation format.

ARMovie: towards NihAV support

Saturday, April 27th, 2024

Since I had nothing better to do, I’ve documented various ARMovie codecs in The Wiki. Essentially what’s left is to document the newer Moving Blocks variants (and implement a decoder for the one with samples) and the early Eidos Escape codecs (no idea where to find samples for them though). Also there’s the Iota Software codec 500 which has no samples but looks like it’s used primarily in the ACE Film format, so figuring out the nuances of the LZW compression there may help with yet another obscure format.

From the technical side the most annoying thing is that data is stored as multiple frames in a single chunk without any kind of size, separator or frame end marker being present. The original player simply expected decoders to report the number of bytes consumed after decoding a frame. Which means writing packetisers that parse the stream in the same way as the decoders do and report the frame size—nothing too complex but still annoying (and I had to augment the NAPacketiser API to take reported stream details into account, otherwise it has no way to know the frame dimensions). The worst one was the Moving Blocks parser. The main problem is that unlike most of the formats this one has frames not aligned to 16 bits—while video chunks are—so it has to deal with a possible padding byte at the end of a chunk.
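The packetising loop itself is simple enough; here is a sketch of the idea (parse_frame stands in for the format-specific parsing that mirrors the decoder and reports the number of bytes consumed):

// Split a chunk into (offset, size) pairs, one per frame.
fn split_chunk(chunk: &[u8], mut parse_frame: impl FnMut(&[u8]) -> Option<usize>) -> Vec<(usize, usize)> {
    let mut frames = Vec::new();
    let mut pos = 0;
    while pos < chunk.len() {
        // A single trailing byte can only be the 16-bit alignment padding.
        if chunk.len() - pos == 1 { break; }
        match parse_frame(&chunk[pos..]) {
            Some(size) if size > 0 => { frames.push((pos, size)); pos += size; }
            _ => break, // malformed data, stop rather than loop forever
        }
    }
    frames
}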

Another annoying thing I haven’t really dealt with is detecting all those various sound formats from the format string.

I don’t know if I’ll bother adding support for the various raw video formats, but in either case I don’t regret looking at ARMovie. Even the simple codecs turned out to be not so simple, sometimes with interesting peculiarities like runs of colour pairs and even four-colour pattern painting in what should’ve been a simple RLE codec.

NihAV: nothing left to do

Saturday, November 11th, 2023

If anybody has read my previous posts, they might’ve picked up on me complaining that there’s nothing left to do for NihAV, and it really is a problem I have.

Since the (re)start of the project in 2017 it has grown from a small package that could only read bits and bytes into a collection of crates supporting various multimedia formats and a set of tools to use them. I had two principal goals: to experiment with the framework design and learn how various multimedia concepts are implemented, and also (ideally) to make an independent converter and player so I don’t have to rely on external projects for most of my multimedia needs.

HW accel for NihAV player: fully done

Saturday, October 21st, 2023

As mentioned in the previous post, I’ve managed to make hardware acceleration work with my video player and there was only some polishing left to be done. Now that part is complete as well.

The worst part was forking the cros-libva crate. Again, I could have done without that but it was too annoying otherwise. For starters, it had rather useless dependencies for error handling, covering cases that are either too unlikely to happen (e.g. destroying some buffer/config failed) or reported unhelpfully (it may return a detailed error when opening a device has failed, but for the rest of the operations it’s a generic “VA-API error N” with an optional explanation if libva bothered to provide it). I’ve switched it to enums because e.g. VAError::UnsupportedEntrypoint is easier to handle and understand when you actually care about the returned error codes.

The other annoying part was all the bindgen-produced enumerations (and flags). For example, surface allocation is done with:

display.create_surfaces(
                bindings::constants::VA_RT_FORMAT_YUV420,
                None, width, height,
                Some(UsageHint::USAGE_HINT_DECODER), 1)

In my slightly cleaned version it now looks like this:

display.create_surfaces(
                RTFormat::YUV420,
                None, width, height,
                Some(UsageHint::Decoder.into()), 1)

In addition to less typing it gives better argument type checking: in some places you use both VA_RT_FORMAT_ and VA_FOURCC_ values and they are quite easy to mix up (because they describe about the same thing and are both stored as 32-bit integers). VAFourcc and RTFormat are distinct enough even if they get cast back to u32 internally.
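A toy illustration of why distinct types help here (the variants shown are just examples and the function signature is made up):

// Toy illustration only; the real RTFormat/VAFourcc have many more variants.
#[derive(Clone, Copy)]
pub enum RTFormat { YUV420 = 0x01, YUV422 = 0x02 }
#[derive(Clone, Copy)]
pub enum VAFourcc { NV12 = 0x3231564E, YV12 = 0x32315659 }

fn create_surfaces(format: RTFormat, fourcc: Option<VAFourcc>) {
    // Both end up as u32 at the FFI boundary, but the caller can no longer
    // pass a fourcc where a render target format is expected.
    let _raw_format = format as u32;
    let _raw_fourcc = fourcc.map(|f| f as u32);
}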

And finally, I don’t like the libva init info being printed every time a new display is created (which happens every time a new H.264 file is played, in my case), so I added a version of the function that does not print it at all.

But if you wonder why fork it instead of improving the upstream: besides the obvious considerations (I forked off version 0.0.3 while they’re working on 0.0.5 already, with many underlying things being different), there’s also CONTRIBUTING.md that outright tells you to sign a Contributor License Agreement (no thanks), which would also require using their account (something so inconvenient for me that I moved away from it over a year ago). At least the license does not forbid creating your own fork—which I did, mentioning the original authorship and source in two or three places and preserving the original 3-clause BSD license.

But enough about that, there’s another fun thing left to be discussed. After I completed the work I also tried it on my other laptop (also with an Intel® “I can’t believe it’s not GPU”, half a decade newer but still with slim chances of getting hardware-accelerated decoding via the Vulkan API on Linux in the near future). Surprisingly, decoding was slower than the software decoder again, but for a different reason this time.

Apparently accessing decoded surfaces is slow, and it’s better to leave processing and displaying them to the GPU as well (or to offload them into main memory in advance), but that would require too many changes in my player/decoder design. Also, Rust could not optimise the chroma deinterleaving code (in the NV12 to planar YUV conversion) and loads/stores data byte by byte, which is extremely slow on my newer laptop. Thus I quickly wrote some simple SSE assembly to deinterleave data reading 32 bytes at once and it works many times faster. So it’s good enough and I’m drawing a line here.
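For reference, the chroma deinterleaving in question is essentially this byte-by-byte loop (a scalar sketch; the real code works on whole frame buffers with strides):

// Scalar NV12 chroma deinterleaving: interleaved UVUVUV... becomes separate U and V planes.
// This is the loop the compiler failed to vectorise; the SSE version simply loads
// 32 bytes of UV data at once and splits them with shuffles.
fn deinterleave_chroma(uv: &[u8], u_plane: &mut [u8], v_plane: &mut [u8]) {
    for ((pair, u), v) in uv.chunks_exact(2).zip(u_plane.iter_mut()).zip(v_plane.iter_mut()) {
        *u = pair[0];
        *v = pair[1];
    }
}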

So while this has been rather useful experience, it was not that fun and I’d rather not return to it. I should probably go and reverse engineer some obscure codec instead, I haven’t done that for long enough.

Hardware acceleration for NihAV video player

Wednesday, October 18th, 2023

Since I was not fully satisfied with the CPU load from my H.264 decoder (and optimising it further is too tedious), I decided to take a look at VA-API hardware accelerated decoding once again (no Vulkan for me).

It turned out that the documentation is not as lacking as I expected it to be; it’s just that most of it was eaten by bindgen, so e.g. you can get a VAImage from the decoded surface but you have to look into the source code for its definition because it’s just an alias for the semi-hidden _VAImage. And even if you look at the original header files from libva, that documentation is rather scarce anyway.

Bink video encoder tricks

Sunday, October 8th, 2023

As I mentioned in the introductory post, there are nine block coding modes and my encoder tries them all to see which one is good (or good enough) to be used. What I have not mentioned is that some of those blocks have sixteen variations (quantisers for DCT-based blocks and scan patterns for RLE blocks), which makes the search even longer.

First of all, I use the rather obvious approach to trying blocks: order them in a sequence, motion block types first, then the simple ones (fill, two-colour pattern, RLE), with intra DCT and raw being the last. And if the block metric is low enough then the block is good enough and the rest of the modes need not be tried.
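In pseudo-Rust the search looks roughly like this (a sketch of the idea with stand-in types, not the actual encoder code):

#[derive(Clone, Copy)]
enum BlockMode { Skip, Motion, Fill, Pattern, Rle, IntraDct, Raw }
struct Block;
// Stand-in for the real per-mode cost/distortion computation.
fn try_encode(_block: &Block, _mode: BlockMode) -> u64 { 0 }

// Try the modes in order and stop early once one is good enough.
fn pick_block_mode(modes: &[BlockMode], block: &Block, threshold: u64) -> (BlockMode, u64) {
    let mut best = (modes[0], u64::MAX);
    for &mode in modes {          // motion modes first, simple ones next, intra DCT and raw last
        let metric = try_encode(block, mode);
        if metric < best.1 {
            best = (mode, metric);
        }
        if metric < threshold {   // good enough, skip the remaining modes
            break;
        }
    }
    best
}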

Second, there are two encoding modes: quality-based and bitrate-based. For the bitrate-based mode I simply manipulate the lambda ratio between block distortion and bits and that’s it. For the quality mode I actually collected statistics on different files at different quality settings to see which block types are used more or less often. For example, on the highest quality setting intra blocks are not used at all, while on low quality settings you don’t see lossless residue or raw blocks.
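The metric behind the bitrate-based mode is the usual rate-distortion cost (a sketch; how the encoder actually derives and adjusts lambda is not shown here):

// Lambda trades distortion against the number of bits a block costs:
// a higher lambda penalises expensive blocks more, lowering the bitrate.
fn rd_cost(distortion: u64, bits: u64, lambda: u64) -> u64 {
    distortion + lambda * bits
}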

So I simply used the option to disable individual block coding modes (introduced to make debugging other coding modes easier) and modified the list depending on the quality setting.

Then I went even further and observed the statistics of the DCT block quantisers used depending on the quality setting. As one could reasonably expect, low quality settings resulted in quantisers 12-15 (and rarely 11), while high quality settings used quantisers 0-10 the most. So limiting the quantisers depending on quality was the next logical step.

And here are some other tricks:

  • on lower quality levels I use a non-zero threshold for RLE blocks so that e.g. a sequence 4, 3, 5, 4 will be treated as a run of fours (see the sketch after this list);
  • in the previous version of the encoder I used Block Truncation Coding (probably the only possible application of it, even if a bit unwise); now I’m simply calculating the averages of the values above/below the block mean value;
  • in the rate-distortion metric I scale the bits value; doing otherwise often leads to the essentially free skip block being selected almost every time, and blocky moving pictures are better than a slightly less blocky still one.
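The thresholded run detection from the first item is about as simple as it sounds (a sketch; the threshold is passed in rather than tied to a quality level here):

// Values within `threshold` of the first one count as part of the run,
// so 4, 3, 5, 4 becomes a run of fours at threshold 1.
fn run_length(values: &[u8], threshold: u8) -> usize {
    if values.is_empty() { return 0; }
    let first = values[0];
    values.iter()
          .take_while(|&&v| v.abs_diff(first) <= threshold)
          .count()
}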

Of course it is easy to come up with more, but this has been enough for me.

Bink encoding: format and encoder designs

Friday, October 6th, 2023

Here I’m going to give a brief review of how Bink formats (container, video and audio codecs) are designed and how it affected the overall encoder design in NihAV.

NihAV: now with GIF support

Monday, September 11th, 2023

One would wonder why. The answer is simple: I merely wanted to play with LZW encoding and created a GIF encoder. Then I needed to debug it and thus ended up with a decoder as well. And while I was at it I added animated GIF support for both decoding and encoding.

I’m not going to repeat how it works, as it’s been written up in all possible textbooks and tutorials on data compression (and then some); I’m more interested in implementing an efficient encoder for it. I should remind you that LZW compression was patented (at least by Sperry and IBM) and that patent introduced a lot of chaos in multimedia before MP3 or H.264 were a thing.

The algorithm simply tells you to find a match for the input string in the dictionary and add a new string to the dictionary. You can keep the actual strings in the dictionary (simple but not very effective), you can keep a reference to them in the input (simple and saves a lot on dictionary space, but not very effective and you can’t feed it input in chunks), or you can keep the same dictionary structure as the decoder and try to match the input to the code sequence (the most compact way to store the dictionary but not so straightforward to use; more on a practical extension of this approach at the end of this post). Since I do not care about wasting a whopping two megabytes of RAM on the dictionary, I implemented a simple trie: each node contains the code ending at that node and 256 pointers to the next trie nodes (each code/pointer fits into a 16-bit integer, with zero as the “invalid value” pointer). Of course it can be made more compact but I prefer simplicity and clarity.
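A minimal sketch of such a trie (the actual encoder’s structure may differ in details; 4096 entries times 256 children times two bytes is where the two megabytes come from):

// One trie node per dictionary entry, 16-bit indices, 0 meaning "no entry".
struct TrieNode {
    code: u16,         // code ending at this node
    next: [u16; 256],  // child node index for each possible next byte, 0 = none
}

struct Dictionary {
    nodes: Vec<TrieNode>,
}

impl Dictionary {
    // Walk the trie as far as the input matches, returning the code for the
    // longest match and the number of bytes it covers.
    fn longest_match(&self, root: u16, input: &[u8]) -> (u16, usize) {
        let mut node = root as usize;
        let mut len = 0;
        for &b in input {
            let next = self.nodes[node].next[b as usize];
            if next == 0 { break; }
            node = next as usize;
            len += 1;
        }
        (self.nodes[node].code, len)
    }
}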

Now, the question is whether the compression efficiency can be improved somehow. Since the decoder expects the encoder to work in a certain manner, we can’t change the format. The tricks that work well for LZ77, like lazy matching, do not work that great for LZ78: the dictionary state changes at each step and you don’t know which entry will be more useful in the future. Also, emitting shorter matches means we litter the dictionary with duplicates, which does not help compression in the long run either.

Thus the only thing we can use is creative dictionary clearing. E.g. we can emit a reset code after every N symbols encoded to keep the code length constant (that’s the trick various libraries used to produce a compliant stream without doing any compression and thus not infringing the patent), or you can keep the dictionary filled with what it was until the very end (and if the data properties change, your compression ratio suffers), or (as most encoders do) you simply reset the dictionary after it gets full. But can you do better?

I tried two heuristics. First, I tried segmenting data by having a sliding window and comparing the character occurrences in each half; if those differed by a lot then it was a good place to restart. From what I heard, this simple approach allows better compression for LZ77- and BWT-based compression schemes. In this case it helped only when compressing data that could not be compressed at all (e.g. greyscale continuous-tone images); in other cases the results were worse than with the default strategy. The second approach was simple: keep a rolling average of the last, say, sixteen match lengths and if it drops significantly when the dictionary is full then it’s time to clear it. Surprisingly enough it seemed to improve compression by several fractions of a percent in general cases (and sometimes worsened the ratio on hard-to-compress files by about the same amount), so I kept it.
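A sketch of that second heuristic; since all that is specified above is that the average “drops significantly” when the dictionary is full, the long-term average and the 0.5 factor below are arbitrary stand-ins for that condition:

// Track the last sixteen match lengths and compare them against a slowly
// adapting long-term average to decide when to clear a full dictionary.
struct ResetHeuristic {
    recent: [usize; 16],   // last sixteen match lengths
    pos:    usize,
    long_term_avg: f32,
}

impl ResetHeuristic {
    fn push(&mut self, match_len: usize) {
        self.recent[self.pos & 15] = match_len;
        self.pos += 1;
        let recent_avg = self.recent.iter().sum::<usize>() as f32 / 16.0;
        // Slowly track the long-term behaviour of the data.
        self.long_term_avg = self.long_term_avg * 0.95 + recent_avg * 0.05;
    }
    fn should_reset(&self, dict_full: bool) -> bool {
        let recent_avg = self.recent.iter().sum::<usize>() as f32 / 16.0;
        dict_full && recent_avg < self.long_term_avg * 0.5 // "drops significantly"
    }
}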


And while talking about GIF and LZW, here are some words about FFmpeg’s GIF support: it’s rather laughable. While encoding to GIF has been supported from the early days (from 2002, to be less vague), actual LZW compression support was introduced only in 2009 (using the work of some Summer of Code student from 2007; the notorious patent on LZW compression had expired by that time). A bit later in the same year the encoder was improved to work with paletted input instead of RGB24 (which was maimed to use a fixed palette anyway). At the same time the original animated GIF encoder+muxer (not to be confused with the now-improved single-image GIF encoder) kept existing with all the original deficiencies (RGB24 input, fixed palette, no compression), so by encoding to .gif without specifying the image2 muxer you’d get significantly worse results. I heard it’d finally been improved back in 2018, but I hadn’t cared before so why start now?

The interesting thing there is that the LZW compression employed for both GIF and TIFF seems to be based on the compress approach with a hash table. Essentially the dictionary is stored in the same form as in the decoder (a reference to the previous code plus a suffix character) but the indices are hash codes instead of sequential entries, and the encoder works in a decoder mirror mode: it traverses the table to find the code that corresponds to the current code plus a new character (and if it’s not found, the prefix code is output and a new code pair is added to the dictionary). Also it seems to compress data slightly worse than the straightforward implementation in some cases (e.g. in my tests one 320×240 PGM file got expanded to 83159 bytes instead of 83132 bytes and another 112×288 PPM file got compressed to 10371 bytes instead of 10366 bytes; the difference is negligible but it’s there). I suspect this is caused by hash collisions, i.e. in some rare cases a shorter code and a longer one have the same hash and the former gets selected even when the latter would be better.


In either case, this might not have been a very useful exercise, but it was fun and somewhat educational too. Now only to find a thing to do next…