Archive for the ‘NihAV’ Category

NihAV: nothing left to do

Saturday, November 11th, 2023

If anybody has read my previous posts, they might have noticed me complaining that there’s nothing left to do for NihAV, and it really is a problem I have.

Since the (re)start of the project in 2017 it has grown from a small package that could only read bits and bytes into a collection of crates supporting various multimedia formats plus a set of tools to use them. I had two principal goals: to experiment with the framework design and learn how various multimedia concepts are implemented, and also (ideally) to make an independent converter and player so I don’t have to rely on external projects for most of my multimedia needs.
(more…)

HW accel for NihAV player: fully done

Saturday, October 21st, 2023

As mentioned in the previous post, I’ve managed to make hardware acceleration work with my video player and there was only some polishing left to be done. Now that part is complete as well.

The worst part was forking the cros-libva crate. Again, I could have done without that but it was too annoying. For starters, it had rather useless dependencies for error handling, covering cases that are either too unlikely to happen (e.g. destroying some buffer or config failing) or reported unhelpfully (it may return a detailed error when opening a device has failed, but for the rest of the operations you get a generic “VA-API error N” with an optional explanation if libva bothered to provide one). I’ve switched it to enums because e.g. VAError::UnsupportedEntrypoint is easier to handle and understand when you actually care about the returned error codes.
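
For illustration, a sketch of what that enum-based error type may look like (only VAError::UnsupportedEntrypoint is taken from the text above; the other variant names are my assumptions, not the fork’s actual code):

#[derive(Debug, Clone, Copy, PartialEq, Eq)]
pub enum VAError {
    OperationFailed,       // the generic “VA-API error N” case
    AllocationFailed,
    InvalidParameter,
    UnsupportedEntrypoint, // easy to match on and report meaningfully
    UnsupportedRTFormat,
}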

The other annoying part was all the bindgen-produced enumerations (and flags). For example, surface allocation is done with:

display.create_surfaces(
                bindings::constants::VA_RT_FORMAT_YUV420,
                None, width, height,
                Some(UsageHint::USAGE_HINT_DECODER), 1)

In my slightly cleaned version it now looks like this:

display.create_surfaces(
                RTFormat::YUV420,
                None, width, height,
                Some(UsageHint::Decoder.into()), 1)

In addition to less typing, it gives better argument type checking: in some places both VA_RT_FORMAT_ and VA_FOURCC_ values are used and they are quite easy to mix up (because they describe roughly the same thing and are both stored as 32-bit integers). VAFourcc and RTFormat are distinct enough even if they get cast back to u32 internally.
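
Such wrapper types are trivial to define; here is a minimal sketch of one of them (the constant values come from libva’s va.h, but the exact shape of the enums in my fork may differ):

// a distinct type that only becomes a raw u32 at the FFI boundary,
// so it cannot be confused with a FourCC at a call site
#[derive(Clone, Copy, Debug, PartialEq, Eq)]
pub enum RTFormat {
    YUV420,
    YUV422,
    RGB32,
}

impl From<RTFormat> for u32 {
    fn from(fmt: RTFormat) -> u32 {
        match fmt {
            RTFormat::YUV420 => 0x00000001, // VA_RT_FORMAT_YUV420
            RTFormat::YUV422 => 0x00000002, // VA_RT_FORMAT_YUV422
            RTFormat::RGB32  => 0x00010000, // VA_RT_FORMAT_RGB32
        }
    }
}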

And finally, I don’t like the libva init info being printed every time a new display is created (which happens every time a new H.264 file is played, in my case) so I added a version of the function that does not print it at all.

But if you wonder why fork it instead of improving the upstream: beside the obvious considerations (I forked off version 0.0.3 while they’re already working on 0.0.5, with many underlying things being different by now), there’s also CONTRIBUTING.md that outright tells you to sign a Contributor License Agreement (no thanks), which would also require using their account (something so inconvenient for me that I moved away from it over a year ago). At least the license does not forbid creating your own fork—which I did, mentioning the original authorship and source in two or three places and preserving the original 3-clause BSD license.

But enough about it, there’s another fun thing left to be discussed. After I completed the work I also tried it on my other laptop (also with an Intel® “I can’t believe it’s not GPU”, half a decade newer but still with slim chances of getting hardware-accelerated decoding via the Vulkan API on Linux in the near future). Surprisingly, the decoding was again slower than the software decoder, but for a different reason this time.

Apparently accessing decoded surfaces is slow and it’s better to leave processing and displaying them to the GPU as well (or offload them into main memory in advance), but that would require too many changes in my player/decoder design. Also, Rust could not optimise the chroma deinterleaving code (in the NV12 to planar YUV conversion), loading/storing the data byte by byte instead, which is extremely slow on my newer laptop. Thus I quickly wrote a simple bit of SSE assembly to deinterleave the data reading 32 bytes at once and it works many times faster. So it’s good enough and I’m drawing a line.
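
For illustration, here is roughly what that deinterleaving does, expressed with SSE2 intrinsics rather than the hand-written assembly actually used in the player (a sketch, not the real routine): each iteration consumes 32 bytes of interleaved UVUV… data and produces 16 U and 16 V bytes.

#[cfg(target_arch = "x86_64")]
unsafe fn deinterleave_uv_sse2(src: &[u8], dst_u: &mut [u8], dst_v: &mut [u8]) {
    use std::arch::x86_64::*;

    let mask = _mm_set1_epi16(0x00FF); // keep the low (U) byte of each pair
    for ((chunk, du), dv) in src.chunks_exact(32)
            .zip(dst_u.chunks_exact_mut(16))
            .zip(dst_v.chunks_exact_mut(16)) {
        let lo = _mm_loadu_si128(chunk.as_ptr() as *const __m128i);
        let hi = _mm_loadu_si128(chunk.as_ptr().add(16) as *const __m128i);
        // in NV12 the U bytes sit at even offsets, the V bytes at odd ones
        let u = _mm_packus_epi16(_mm_and_si128(lo, mask), _mm_and_si128(hi, mask));
        let v = _mm_packus_epi16(_mm_srli_epi16::<8>(lo), _mm_srli_epi16::<8>(hi));
        _mm_storeu_si128(du.as_mut_ptr() as *mut __m128i, u);
        _mm_storeu_si128(dv.as_mut_ptr() as *mut __m128i, v);
    }
}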

So while this has been a rather useful experience, it was not that much fun and I’d rather not return to it. I should probably go and reverse engineer some obscure codec instead; I haven’t done that for long enough.

Hardware acceleration for NihAV video player

Wednesday, October 18th, 2023

Since I was not fully satisfied with the CPU load from my H.264 decoder (and optimising it further is too tedious), I decided to take a look at VA-API hardware-accelerated decoding once again (no Vulkan for me).

It turned out that the documentation is not as lacking as I expected; it’s just that most of it was eaten by bindgen, so e.g. you can get a VAImage from the decoded surface but you have to look into the source code for its definition because it’s just an alias for the semi-hidden _VAImage. And even if you look at the original header files from libva, the documentation there is rather scarce anyway.
(more…)

Bink video encoder tricks

Sunday, October 8th, 2023

As I mentioned in the introductory post, there are nine block coding modes and my encoder tries them all to see which one is good (or good enough) to be used. What I have not mentioned is that some of those block types have sixteen variations (quantisers for DCT-based blocks and scan patterns for RLE blocks), which makes the search even longer.

First of all, I use the rather obvious approach to trying blocks: order them in a sequence, motion block types first, then the simple ones (fill, two-colour pattern, RLE), with intra DCT and raw being the last. And if the block metric is low enough then the block is good enough and the rest of the modes are not tried.

Second, there are two encoding modes: quality-based and bitrate-based. For the bitrate-based mode I simply manipulate the lambda ratio between block distortion and bits and that’s it. For the quality mode I actually collected statistics on different files at different quality settings to see which block types are used more or less often. E.g. on the highest quality setting intra blocks are not used at all, while on low quality settings you don’t see lossless residue or raw blocks.
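
A condensed sketch of how such a search may look (all the types and names here are invented for illustration, not taken from the actual encoder): modes are tried in a fixed order and the search stops early once one of them is good enough, with lambda weighting bits against distortion.

// hypothetical types to make the sketch self-contained
struct Block;

#[derive(Clone, Copy)]
enum BlockMode { Motion, Fill, Pattern, Rle, IntraDct, Raw }

// stub standing in for the real per-mode encoding attempt,
// returning (distortion, bits)
fn try_encode(_blk: &Block, _mode: BlockMode) -> (f32, u32) { (0.0, 0) }

const MODE_ORDER: [BlockMode; 6] = [
    BlockMode::Motion, BlockMode::Fill, BlockMode::Pattern,
    BlockMode::Rle, BlockMode::IntraDct, BlockMode::Raw,
];
const GOOD_ENOUGH: f32 = 42.0; // “good enough” threshold, value made up

fn pick_mode(blk: &Block, lambda: f32) -> BlockMode {
    let mut best_mode = MODE_ORDER[0];
    let mut best_metric = f32::MAX;
    for &mode in MODE_ORDER.iter() {
        let (dist, bits) = try_encode(blk, mode);
        let metric = dist + lambda * bits as f32; // rate-distortion metric
        if metric < best_metric {
            best_metric = metric;
            best_mode = mode;
        }
        if best_metric < GOOD_ENOUGH {
            break; // good enough, skip the costlier modes
        }
    }
    best_mode
}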

So I simply used the option to disable different block coding modes (introduced to make debugging other coding modes easier) and modified the list depending on quality setting.

Then I went even further and observed the statistics of the DCT block quantisers used depending on the quality setting. As one could reasonably expect, low quality settings resulted in quantisers 12-15 (and rarely 11) while high quality settings used quantisers 0-10 the most. So limiting the quantisers depending on quality was the next logical step.
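
A toy illustration of that step (the exact ranges and the quality scale are my assumptions based on the statistics above, not the encoder’s real tables):

// restrict the DCT quantiser search range depending on the quality setting
fn allowed_quants(quality: u8) -> std::ops::RangeInclusive<u8> {
    match quality {
        0..=33  => 11..=15, // low quality: only coarse quantisers show up
        34..=66 => 6..=13,  // middle ground
        _       => 0..=10,  // high quality: fine quantisers dominate
    }
}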

And here are some other tricks:

  • on lower quality levels I use a non-zero threshold for the RLE block so that e.g. the sequence 4, 3, 5, 4 will be treated as a run of fours;
  • in the previous version of the encoder I used Block Truncation Coding (probably the only possible application of it, even if a bit unwise); now I’m simply calculating the averages of the values above/below the block mean value (see the sketch after this list);
  • in the rate-distortion metric I scale the bits value, since doing otherwise often leads to the essentially free skip block being selected almost every time, and blocky moving pictures are better than a slightly less blocky still one.
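
Here is a minimal sketch of the two-level averaging mentioned in the second item (my reconstruction of the idea rather than the actual encoder code): split the pixels by the block mean and use the average of each half as the two colours.

// assumes a non-empty block
fn two_level(block: &[u8]) -> (u8, u8) {
    let mean = block.iter().map(|&p| u32::from(p)).sum::<u32>() / block.len() as u32;
    let (mut lo_sum, mut lo_cnt) = (0u32, 0u32);
    let (mut hi_sum, mut hi_cnt) = (0u32, 0u32);
    for &p in block {
        if u32::from(p) > mean {
            hi_sum += u32::from(p);
            hi_cnt += 1;
        } else {
            lo_sum += u32::from(p);
            lo_cnt += 1;
        }
    }
    let lo_avg = (lo_sum / lo_cnt.max(1)) as u8;
    // a flat block has no pixels above the mean, so reuse the low average
    let hi_avg = if hi_cnt > 0 { (hi_sum / hi_cnt) as u8 } else { lo_avg };
    (lo_avg, hi_avg)
}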

Of course it is easy to come up with more but it’s been enough for me.

Bink encoding: format and encoder designs

Friday, October 6th, 2023

Here I’m going to give a brief review of how Bink formats (container, video and audio codecs) are designed and how it affected the overall encoder design in NihAV.
(more…)

NihAV: now with GIF support

Monday, September 11th, 2023

One would wonder why. The answer is simple: I merely wanted to play with LZW encoding and created a GIF encoder. Then I needed to debug it and thus ended up with a decoder as well. And while I was at it I added animated GIF support for both decoding and encoding.

I’m not going to repeat how it works; that’s been described in all possible textbooks and tutorials on data compression (and then some). I’m more interested in implementing an efficient encoder for it. I should remind you that LZW compression was patented (at least by Sperry and IBM) and that patent introduced a lot of chaos in multimedia before MP3 or H.264 were a thing.

The algorithm simply tells you to find a match for the input string in the dictionary and add a new string to the dictionary. You can keep the actual strings in the dictionary (simple but not very effective); you can keep references into the input instead (simple and saves a lot of dictionary space, but not very effective either and you can’t feed it input in chunks); or you can keep the same dictionary structure as the decoder and try to match the input against the code sequence (the most compact way to store the dictionary but not so straightforward to use; more on a practical extension of this approach at the end of this post). Since I do not care about wasting a whopping two megabytes of RAM on the dictionary, I implemented a simple trie: each node contains the code ending at that node and 256 pointers to the next trie nodes (each code/pointer fits into a 16-bit integer, with zero as the “invalid value” pointer). Of course it can be made more compact but I prefer simplicity and clarity.
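
For the curious, a minimal sketch of such a trie (a simplification, not the actual NihAV code; the root sits at index 0 and seeding the single-byte entries is omitted):

// a flat array of nodes, each holding the code ending at that node plus
// 256 child indices, all 16-bit with 0 meaning “no entry”; with 4096 codes
// that is about 4096 * 257 * 2 bytes, i.e. the two megabytes mentioned above
struct TrieNode {
    code:     u16,        // LZW code ending at this node (0 = unset)
    children: [u16; 256], // index of the next node per input byte (0 = none)
}

struct LzwTrie {
    nodes: Vec<TrieNode>,
}

impl LzwTrie {
    // walk the trie as far as the input matches and return the code
    // of the longest match plus its length
    fn longest_match(&self, input: &[u8]) -> (u16, usize) {
        let mut node = 0;
        let mut len = 0;
        for &b in input {
            let next = self.nodes[node].children[usize::from(b)];
            if next == 0 {
                break;
            }
            node = usize::from(next);
            len += 1;
        }
        (self.nodes[node].code, len)
    }
}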

Now, the question is whether the compression efficiency can be improved somehow. Since the decoder expects the encoder to work in a certain manner, we can’t change the format. The tricks that work well for LZ77, like lazy matching, do not work that well for LZ78: the dictionary state changes at each step and you don’t know which entry will be more useful in the future. Also, emitting shorter matches means littering the dictionary with duplicates, which does not help compression in the long run either.

Thus the only thing we can use is creative dictionary clearing. E.g. you can emit a reset code after every N symbols encoded to keep the code length constant (that’s the trick various libraries used to produce a compliant stream without doing any compression and thus without infringing the patent), or you can keep the dictionary as it was filled until the very end (and if the data properties change, your compression ratio suffers), or (as most encoders do) you can simply reset the dictionary after it gets full. But can you do better?

I tried two heuristics. First, I tried segmenting the data by having a sliding window and comparing the character occurrences in each half; if those differed by a lot then it was a good place to restart. From what I’ve heard, this simple approach allows better compression for LZ77- and BWT-based compression schemes. In this case it helped only when compressing data that could not be compressed at all (e.g. greyscale continuous-tone images); in the other cases the results were worse than with the default strategy. The second approach was simple: keep a rolling average of the last, say, sixteen match lengths, and if it drops significantly when the dictionary is full then it’s time to clear it. Surprisingly enough, it seemed to improve compression by several fractions of a percent in the general case (and sometimes worsened the ratio on hard-to-compress files by about the same amount) so I kept it.
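
A sketch of what that second heuristic may look like (the window size comes from the description above; the exact threshold for “drops significantly” is my assumption):

// rolling average over the last sixteen match lengths; once the dictionary
// is full and matches become much shorter than the average, signal that it
// is time to clear the dictionary
struct ResetHeuristic {
    history: [usize; 16],
    pos:     usize,
}

impl ResetHeuristic {
    fn new() -> Self {
        Self { history: [0; 16], pos: 0 }
    }

    fn should_reset(&mut self, match_len: usize, dict_full: bool) -> bool {
        let avg = self.history.iter().sum::<usize>() / self.history.len();
        self.history[self.pos] = match_len;
        self.pos = (self.pos + 1) & 15;
        // the factor of two is an arbitrary pick for “significantly”
        dict_full && avg > 0 && match_len * 2 < avg
    }
}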


And while talking about GIF and LZW, here are some words about FFmpeg’s GIF support: it’s rather laughable. While encoding to GIF has been supported from the early days (from 2002, to be less vague), actual LZW compression support was introduced only in 2009 (using the work of some Summer of Code student from 2007; the notorious patent on LZW compression had expired by that time). A bit later in the same year the encoder was improved to work with paletted input instead of RGB24 (which was maimed to use a fixed palette anyway). At the same time the original animated GIF encoder+muxer (not to be confused with the now-improved single-image GIF encoder) kept existing with all the original deficiencies (RGB24 input, fixed palette, no compression), so by encoding to .gif without specifying the image2 muxer you’d get significantly worse results. I heard it was improved at last back in 2018 but I had not been caring before so why start now?

The interesting thing there is that the LZW compression employed for both GIF and TIFF seems to be based on the compress approach with a hash table. Essentially the dictionary is stored in the same form as in the decoder (a reference to the previous code plus a suffix character) but the indices are hash codes instead of sequential entries, and the encoder works in a decoder mirror mode: it traverses the table to find the code that corresponds to the current code plus a new character (and if it’s not found, the prefix code is output and a new code pair is added to the dictionary). Also it seems to compress data slightly worse than the straightforward implementation in some cases (e.g. in my tests one 320×240 PGM file got expanded to 83159 bytes instead of 83132 bytes and another 112×288 PPM file got compressed to 10371 bytes instead of 10366 bytes; the difference is negligible but it’s there). I suspect this is caused by hash collisions, i.e. in some rare cases a shorter code and a longer one have the same hash and the former gets selected even when the latter should be better.
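
For comparison, a rough sketch of that approach (using a HashMap for clarity, whereas compress uses a fixed-size open-addressed hash table, whose collision handling is presumably what causes the anomaly described above; code width handling and the clear/end codes are omitted):

use std::collections::HashMap;

// the dictionary maps (prefix code, suffix byte) to a code, i.e. it is
// stored in the same form the decoder uses
fn lzw_compress(input: &[u8], mut emit: impl FnMut(u16)) {
    if input.is_empty() {
        return;
    }
    let mut dict: HashMap<(u16, u8), u16> = HashMap::new();
    let mut next_code = 258u16; // 256 literals plus the clear and end codes
    let mut cur = u16::from(input[0]);
    for &b in &input[1..] {
        if let Some(&code) = dict.get(&(cur, b)) {
            cur = code; // the current string can be extended
        } else {
            emit(cur); // output the longest match found so far
            if next_code < 4096 {
                dict.insert((cur, b), next_code);
                next_code += 1;
            }
            cur = u16::from(b);
        }
    }
    emit(cur);
}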


In either case, this might not have been a very useful exercise but it was fun and somewhat educational too. Now I only have to find a thing to do next…

NihAV: adding SGA support

Saturday, September 2nd, 2023

Since I had nothing better to do this week I decided to finally add Digital Pictures SGA decoding support to NihAV. While there are many different formats described in The Wiki, I’ve decided to handle only those not described there (namely $81/$8A, $85, $86 and $89).

In my previous post on this matter I mentioned that the formats I took an interest in use 8×8 tiles that may be subdivided into 8×4 or 4×4 parts and filled with several colours using a predefined pattern (or an arbitrary one for an 8×8 tile, if requested) plus some bits selecting one of two possible colours for each tile pixel. The main difference between the $81/$8A scheme and the others is that it codes all data in the same bitstream, while the later versions split the colours and the opcode+pattern bits into two separate partitions (maybe they had plans for compressing them?) and additionally store the audio data inside the frame.
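
As an illustration, a minimal sketch of the basic tile-filling operation for the 4×4 case (the bit order within the pattern is my assumption, not necessarily what the format actually uses):

// fill a 4x4 part: one bit per pixel selects between the two colours
fn fill_part_4x4(dst: &mut [u8], stride: usize, pattern: u16, clrs: [u8; 2]) {
    for (row, line) in dst.chunks_mut(stride).take(4).enumerate() {
        for x in 0..4 {
            let bit = (pattern >> (row * 4 + x)) & 1;
            line[x] = clrs[bit as usize];
        }
    }
}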

And here are some notes on the games (I think most of those are PC or Macintosh ports but it’s possible the same files were used in console versions of some of those games as well):

  • Double Switch—this one uses $81 compression (in still images, cutscenes embed them along with $A2 audio in $F1 chunks);
  • Quarterback Attack—this one uses $8A compression in $F9 chunks;
  • Night Trap—$85 compression and megafiles (i.e. almost all cutscenes are stored in a single NTMOVIE file that requires some external index to access them). Also the PC release had a short documentary about the moral panic around that game (in the same format of course; in two resolutions, even);
  • Corpse Killer—$86 compression and one megafile for all cutscenes;
  • Supreme Warrior—$89 compression, one megafile and no frame dimensions given. For most of the cutscenes it’s 256×160 but at the end (logo and maybe something else) it’s different. Additionally there are two audio tracks: some audio chunks contain twice as much data (and have the high bit of the size set); in that case the first half corresponds to English speech and the second half to Chinese, otherwise the audio is the same for both versions (e.g. background music, fighters grunting, sound effects and so on).

Overall, it was an interesting experience even if I don’t care about the games themselves.

H.264 decoder postmortem

Sunday, August 27th, 2023

I have mentioned a couple of times before that NihAV has its own functioning H.264 decoder. And after my failed attempts to use hardware-accelerated decoding instead, I spent some time trying to optimise it but eventually gave up. On one hand it’s fast enough for my needs; on the other hand it’s too tedious to optimise it further (even if I can spare the time for it, I’d rather not).

To put it into perspective, initially it was about three times slower than the libavcodec one with its SIMD optimisations disabled; now it’s only about two times slower (with SIMD turned on it’s about five times as slow, feel free to laugh at me). But at the same time playing 720p content (and I have next to no files with larger resolutions) in multi-threaded mode takes 20-25% of a core, so it’s not that bad.

So how are the cycles wasted, and is there potential for serious optimisation?
(more…)

NihAV: giving up on hardware acceleration

Thursday, August 3rd, 2023

After several attempts at adding hardware-accelerated decoding support to NihAV I’m giving up, the reason being the sorry state of it all in general.

I’m aware of two major APIs for hardware-accelerated video decoding for Linux, those are VDPAU and VA-API. Plus there are some specific toolkits e.g. from Intel but from what I remember those are even more complicated.

So, VDPAU has only bare-bones documentation without an actual explanation of what is expected for each codec in order to decode it. VA-API turned out to be even worse: it points to 01.org for documentation, which no longer exists (and redirects to some Intel page blurbing about how great they are at open source). And web.archive.org shows that that page essentially contained a link to the libva and libva-utils repositories plus some references to projects that have VA-API support implemented. “…so shut up and go away” was not written but implied.

At least VA-API has three crates implementing its bindings in Rust, and not just one that has not been updated in four years like VDPAU, but how usable are those? There’s FeV, which seems to support JPEG decoding only (and has a strict warning against AMD GPUs); there’s libva-sys, which is a pile of auto-generated bindings; and there’s cros-libva. The latter seems to be the cleanest and the most actively developed one (too actively developed for my taste, as it changes base APIs every couple of months). Unfortunately it’s still not exactly clear how to use it for H.264 decoding (and the cros-codecs crate provides an equally confusing API). And the final straw is that it seems to be intended for single-threaded use only by design, which means it’s not possible to use it with my library (e.g. my player uses separate threads for audio and video decoding, so I can’t use the standard decoder interface for hardware-accelerated decoding without some ugly hacks).

Oh well, I’ll work on improving my own H.264 decoder’s performance—while it’s not much fun either, at least it’s clear what I can do with it and how it can be done.

P.S. This reminds me of the situation with ALSA. From what I’ve heard, it’s one of the worst documented subsystems in Linux with a far too flexible interface, to the point that it took help from the ALSA developers to make at least MPlayer support ALSA output. The most probable reason is that it’s common to smoke weed in Czechia (where ALSA was developed), but what is the excuse for the other libraries?

Why I work on NihAV

Sunday, July 30th, 2023

I started NihAV as a more or less toy project to play with different concepts and try new stuff, like finding out how vector quantisation works or attempting to write an encoder. Having had enough experience with libavcodec and libavformat, I did not want to touch them again (and still don’t), and there was a hope that rust-av would provide a viable albeit limited alternative for multimedia playback (it still hasn’t). In theory I’ve achieved my original goals—NihAV supports decoding a lot of exotic formats (some of which are not handled by any other open-source project), it even has some encoders and its own transcoder tool, and there are even two players (one for audio files; the other one can also play videos). So I could relax and do something else entirely, and yet I keep working on adding new features to NihAV that take a lot of effort and do not bring me joy. Why?

(more…)