Archive for the ‘NihAV’ Category

NihAV: adding SGA support

Saturday, September 2nd, 2023

Since I had nothing better to do this week I decided to finally add Digital Pictures SGA decoding support to NihAV. While there are many different formats described in The Wiki, I decided to support only those variants not described there (namely $81/$8A, $85, $86 and $89).

In my previous post on this matter I mentioned that the formats I took interest in use 8×8 tiles that may be subdivided into 8×4 or 4×4 parts and filled with several colours using a predefined pattern (or an arbitrary one for an 8×8 tile if requested), plus some bits to select one of the two possible colours for each tile pixel. The main difference between the $81/$8A scheme and the others is that the former codes all data in the same bitstream, while the later versions split colours and opcode+pattern bits into two separate partitions (maybe they had plans for compressing them?) and additionally store audio data inside the frame.
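To illustrate, here is a minimal sketch of such a two-colour pattern fill for a 4×4 part (the function name and the bit order are my assumptions, not the actual NihAV code):

```rust
// Fill a 4x4 tile part: one pattern bit per pixel selects which of the
// two colours to use (the bit order here is an assumption).
fn fill_part_4x4(dst: &mut [u8], stride: usize, clr: [u8; 2], pattern: u16) {
    for y in 0..4 {
        for x in 0..4 {
            let bit = (pattern >> (15 - (y * 4 + x))) & 1;
            dst[y * stride + x] = clr[bit as usize];
        }
    }
}
```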

And here are some notes on the games (I think most of those are PC or Macintosh ports but it’s possible the same files were used in console versions of some of those games as well):

  • Double Switch uses $81 compression (in still images; cutscenes embed those images along with $A2 audio in $F1 chunks);
  • Quarterback Attack uses $8A compression in $F9 chunks;
  • Night Trap uses $85 compression and megafiles (i.e. almost all cutscenes are stored in a single NTMOVIE file that requires some external index to access them). Also the PC release had a short documentary about the moral panic around that game (in the same format of course, and even in two resolutions);
  • Corpse Killer uses $86 compression and one megafile for all cutscenes;
  • Supreme Warrior uses $89 compression, one megafile and no frame dimensions given. For most of the cutscenes the size is 256×160 but at the end (the logo and maybe something else) it is different. Additionally there are two audio tracks: some audio chunks contain twice as much data (and have the high bit of the size field set), in which case the first half corresponds to English speech and the second half to Chinese; otherwise the same data serves both versions (e.g. background music, fighters grunting, sound effects and so on). A sketch of this selection follows the list.
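Here is a rough sketch of the dual-language audio chunk selection described in the last item (the exact field layout is my assumption):

```rust
// Return the audio payload for the requested language track.
// If the high bit of the chunk size is set, the data contains the
// English speech followed by the Chinese one; otherwise the same
// payload (music, effects) serves both language tracks.
fn select_audio_track(size: u16, data: &[u8], chinese: bool) -> &[u8] {
    if size & 0x8000 != 0 {
        let half = data.len() / 2;
        if chinese { &data[half..] } else { &data[..half] }
    } else {
        data
    }
}
```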

Overall, it was an interesting experience even if I don’t care about the games themselves.

H.264 decoder postmortem

Sunday, August 27th, 2023

I mentioned a couple of times before that NihAV has its own functioning H.264 decoder. And after my failed attempts to use hardware-accelerated decoding instead, I spent some time trying to optimise it but eventually gave up. On one hand it’s fast enough for my needs, on the other hand it’s too tedious to optimise it further (even if I could spare the time on it, I’d rather not).

To put it into perspective, initially it was about three times slower than the libavcodec decoder with its SIMD optimisations disabled; now it’s only about two times slower (with libavcodec’s SIMD turned on mine is about five times as slow, feel free to laugh at me). But at the same time playing 720p content (and I have next to no files with larger resolutions) in multi-threading mode takes 20-25% of a core, so it’s not that bad.

So where are the cycles wasted, and is there potential for serious optimisation?

NihAV: giving up on hardware acceleration

Thursday, August 3rd, 2023

After several attempts at adding hardware-accelerated decoding support to NihAV I’m giving up, the reason being the sorry state of hardware acceleration in general.

I’m aware of two major APIs for hardware-accelerated video decoding on Linux: VDPAU and VA-API. Plus there are some vendor-specific toolkits, e.g. from Intel, but from what I remember those are even more complicated.

So, VDPAU has only bare-bones documentation without any actual explanation of what is expected for each codec in order to decode it. VA-API turned out to be even worse: it points to 01.org for documentation, which no longer exists (and redirects to some Intel page blurbing about how great they are at open source). And web.archive.org shows that that page essentially contained a link to the libva and libva-utils repositories plus some references to projects that have VA-API support implemented. “…so shut up and go away” was not written but implied.

At least VA-API has three crates implementing its bindings in Rust, and not just one that has not been updated in four years like VDPAU, but how usable are those? There’s FeV, which seems to support JPEG decoding only (and has a strict warning against AMD GPUs), there’s libva-sys, which is a pile of auto-generated bindings, and there’s cros-libva. The latter seems to be the cleanest one and the most actively developed (too actively developed for my taste, as it changes base APIs every couple of months). Unfortunately it’s still not exactly clear how to use it for H.264 decoding (and the cros-codecs crate provides an equally confusing API). And the final straw is that it seems to be intended for single-threaded use only by design, which means it’s not possible to use with my library (e.g. my player uses separate threads for audio and video decoding, so I can’t use the standard decoder interface for hardware-accelerated decoding without some ugly hacks).

Oh well, I’ll work on improving my own H.264 decoder’s performance instead; while it’s not much fun either, at least it’s clear what I can do with it and how it can be done.

P.S. This reminds me of the situation with ALSA. From what I heard it’s one of the worst documented subsystems in Linux, with an interface that is too flexible, to the point that it took help from the ALSA developers to make even MPlayer support ALSA output. The most probable reason is that it’s common to smoke weed in Czechia (where ALSA was developed), but what is the excuse for the other libraries?

Why I work on NihAV

Sunday, July 30th, 2023

I started NihAV as a more or less toy project to play with different concepts and try new stuff, like finding out how vector quantisation works or attempting to write an encoder. Having enough experience with libavcodec and libavformat, I did not want to touch them again (and still don’t), and there was a hope that rust-av would provide a viable albeit limited alternative for multimedia playback (it still hasn’t). In theory I’ve achieved my original goals: NihAV supports decoding a lot of exotic formats (some of which are not handled by any other open-source project), it even has some encoders and its own transcoder tool, and there are even two players (one for audio files; the other one can also play videos). So I could relax and do something else entirely, and yet I keep working on adding new features to NihAV that take a lot of effort and do not bring me joy. Why?


NihAV: updated for Rust 1.69

Thursday, July 27th, 2023

Since I had nothing better to do I decided to optimise my H.264 decoder a bit more, and that required a rather recent version of rustc that supports the sym construct in asm!{} (so I can reference data tables in the inline assembly). Why this specific version though? I picked whatever was both recent enough to support the aforementioned feature (and older versions had multiple micro releases, which hints at some problems with them) and not too recent either (again, I’m no beta tester of the compiler and I don’t need the other shiny features).

And while at it I decided to make the code a bit more up to date. cargo-clippy is still annoying with its default warning about all-caps names, and some lints have changed names so their suppressors no longer work. Getting rid of some leftover hints for the old versions of the compiler (like explicit drop()s for objects borrowing code and some type hints) was nice though. Inline assembly support is still only halfway done, especially considering that using const in it won’t be possible in stable Rust for a long time and that sym sucks compared to GCC inline assembly (it provides just a symbol name and you should magically know yourself how the target platform works in order to load it correctly; on AMD64 it’s rather simple but on aarch64 and 32-bit ARMs it depends on the target OS and PIC mode). Who would’ve thought that assembly may be platform-dependent! Looks like the current solution to that problem is to expose the current configuration to the user, so it’s up to you to check all the environment variables and write the appropriate code. And of course even that solution will only become available some time in the future since the developers haven’t thought about it at all.
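For illustration, here is roughly what referencing a data table via sym looks like on AMD64 (a minimal sketch, not code from the actual decoder):

```rust
use std::arch::asm;

static TABLE: [u8; 8] = [0, 1, 2, 3, 4, 5, 6, 7];

// AMD64-only sketch: RIP-relative addressing makes loading the symbol
// easy here, while aarch64 or 32-bit ARM would need different
// instructions depending on the OS and PIC mode.
#[cfg(target_arch = "x86_64")]
fn table_lookup(idx: usize) -> u8 {
    let val: u8;
    unsafe {
        asm!(
            "lea {ptr}, [rip + {table}]",
            "mov {val}, byte ptr [{ptr} + {idx}]",
            table = sym TABLE,
            idx = in(reg) idx,
            ptr = out(reg) _,
            val = out(reg_byte) val,
            options(nostack, readonly)
        );
    }
    val
}
```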

Anyway, now my H.264 decoder features some more assembly optimisations and decodes video even faster than before. Though I fear it still takes too much CPU for comfortable playback of my typical content, so I’ll have to dabble in hardware video acceleration. NihAV is a learning project after all.

NihAV: another step to being self-sufficient

Tuesday, June 13th, 2023

I’ve mentioned previously that I played with my H.264 decoder trying to make it multi-threaded. Now I’ve gone a bit further and plugged it into my video player. So now, instead of hopelessly lagging on 720p video, it can play it in real time just fine, so after improving my player even further (and enabling assembly optimisations when the Rust compiler is good enough for that) I can use it to play most of the videos I care about without resorting to external decoders or players. And in theory using it more will lead to fixing and polishing it more, thus forming a stable loop.

Anyway, the code is not public yet as I hacked this new decoder together in a separate crate and I still need to merge it back and clean it up a bit, but I’d like to describe the interfaces and my reasons behind them.

So, the multi-threaded decoder has a separate interface (for obvious reasons). I thought about writing a wrapper to make single-threaded decoders behave like multi-threaded ones but decided against it (at least for now). NADecoderMT has the following methods (a rough sketch of the whole interface follows the list):

  • init()—initialises the decoder. One of the parameters is the number of threads to use. IMO it’s the caller that should decide how many threads it can spare, as the decoder does not know what else will be done in parallel (maybe another multi-threaded decoder or two are running);
  • can_take_input()—queries if the decoder is ready to queue the next frame for decoding. Of course you can call queue_pkt() and check if it accepted the input, but that may not always be desirable (e.g. when we need to retrieve an input packet and then hold on to it until the decoder is ready to accept it);
  • queue_pkt()—tries to queue the next frame for decoding;
  • has_output()—checks if the decoder has produced some frames for output. Since get_frame() waits for a frame to be decoded, this function is necessary unless you want to block the thread calling the decoder;
  • get_frame()—waits until at least one frame is decoded and returns it (or a special error if there are no frames to be decoded);
  • flush()—stops decoding all frames and clears the state (e.g. after seek).
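Put together, a simplified sketch of what such an interface can look like (the real NihAV packet, frame and error types are replaced with placeholders here):

```rust
pub struct Packet;       // placeholder for the real packet type
pub struct Frame;        // placeholder for the real frame type
pub struct DecoderError; // placeholder for the real error type

pub trait NADecoderMT {
    /// Initialise the decoder with the number of threads chosen by the caller.
    fn init(&mut self, nthreads: usize) -> Result<(), DecoderError>;
    /// Check whether the decoder is ready to queue the next packet.
    fn can_take_input(&mut self) -> bool;
    /// Try to queue a packet (with its sequential ID) for decoding;
    /// returns whether the input was accepted.
    fn queue_pkt(&mut self, pkt: &Packet, id: u32) -> Result<bool, DecoderError>;
    /// Check whether any decoded frames are ready, since get_frame() blocks.
    fn has_output(&mut self) -> bool;
    /// Wait until at least one frame is decoded and return it with its ID
    /// (or an error if there is nothing left to decode).
    fn get_frame(&mut self) -> Result<(Frame, u32), DecoderError>;
    /// Stop decoding and clear the internal state (e.g. after a seek).
    fn flush(&mut self);
}
```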

Another peculiarity of this decoder interface is that it operates on pairs of a frame and its sequential number. The reason is simple: you get decoded frames out of order so you need to distinguish them somehow (and in case of a decoding error we need to know which frame caused it).

This also leads to a special frame reordering mechanism for such codecs. I’ve created MTFrameReorderer that requires you to “register” a frame for decoding (providing you with an ID that is fed to the decoder along with the frame data) and to “unregister” a frame on error (that’s one of the places where the returned frame ID comes in handy). Unfortunately it’s not possible to create a generic reorderer that would a) work completely codec-agnostically, b) not require a whole file (or an indefinitely long sequence of frames) to be buffered before output, and c) produce a monotonically increasing sequence of frames. Consider how H.264 has no real concept of frames and can build a pyramid of reference frames, adding layer by layer (and mind you, some frames may fail to decode and thus be missing from the output). I simply gave up and made a heuristic that checks if enough initial frames have been decoded and outputs some of them when possible. At least it seems to work rather fine on the conformance suite (except for a couple of specially crafted files, but oh well).
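Here is a very rough sketch of the bookkeeping part of that mechanism; the actual output heuristics described above are left out and the names are illustrative:

```rust
// F stands for whatever frame type the decoder outputs.
struct MTFrameReorderer<F> {
    next_id: u32,           // ID to hand out on the next register call
    next_out: u32,          // ID of the next frame to output in order
    pending: Vec<(u32, F)>, // decoded frames waiting for their turn
}

impl<F> MTFrameReorderer<F> {
    // "Register" a frame for decoding: the returned ID is fed to the
    // decoder along with the frame data.
    fn register_frame(&mut self) -> u32 {
        let id = self.next_id;
        self.next_id += 1;
        id
    }
    // "Unregister" a frame after a decoding error so the output does not
    // stall waiting for it (a real implementation tracks all skipped IDs).
    fn unregister_frame(&mut self, id: u32) {
        if id == self.next_out {
            self.next_out += 1;
        }
    }
    // Store a decoded frame and return the next in-order frame if ready.
    fn reorder_frame(&mut self, id: u32, frm: F) -> Option<F> {
        self.pending.push((id, frm));
        let pos = self.pending.iter().position(|&(i, _)| i == self.next_out)?;
        self.next_out += 1;
        Some(self.pending.remove(pos).1)
    }
}
```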

Maybe in the future I’ll try writing more multi-threaded decoders but for now even one decoder is enough, especially such a practical one. Still, I need to find something more interesting to do.

Further Cinepak experiments

Monday, June 5th, 2023

Having nothing better to do, I kept experimenting with the Cinepak encoder.

I considered implementing some variant of the codebook decomposition scheme suggested by Tomas in the comments to the previous post, but I’m still not sure if I should bother even if it looks promising. So I tried the old thresholds-based scheme instead.

And what do you know, it speeds things up considerably: my usual test sample gets encoded in 27-35 seconds (depending on the thresholds) instead of 44 seconds in the usual mode. But since I don’t know what good thresholds would be, I did the opposite and added a refinement mode: after deciding which codebook to use for each block, I re-generate the codebook using only those blocks that belong to it. Of course this increases the processing time; for example, that same file takes 75 seconds to encode with refinement, which is 70% more time but still less than double (for comparison, in full ELBG mode it’s an increase from about 160 seconds to 270 seconds).
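In code the refinement pass could look roughly like this (the block type and the generation helper are stand-ins, not the actual encoder internals):

```rust
type Block = [u8; 16]; // simplified vector of block samples

// Stand-in for the real median cut + ELBG codebook generation.
fn generate_codebook(blocks: &[Block]) -> Vec<Block> {
    blocks.to_vec() // placeholder body
}

// After deciding which codebook each block belongs to, rebuild the
// codebook from only the blocks that were assigned to it.
fn refine_codebook(blocks: &[Block], assigned: &[bool]) -> Vec<Block> {
    let own: Vec<Block> = blocks
        .iter()
        .zip(assigned.iter())
        .filter(|&(_, &used)| used)
        .map(|(&blk, _)| blk)
        .collect();
    generate_codebook(&own)
}
```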

So by a rough estimate, selecting only the relevant blocks for codebook generation shaves 20-40% off the encoding time, and splitting the data into partitions and generating a codebook by parts made the process about three times faster. I suspect that with a proper approach to clustering, vector quantisation can be made two to three times faster still, but I don’t think I want to experiment with that. I should call it a day and move to something else instead.

Quick experiments with Cinepak encoder vector quantisation

Saturday, June 3rd, 2023

Out of curiosity I decided to check how partitioning the input before creating a codebook affects the encoding speed. So I’ve added a mode to the Cinepak encoder that partitions vectors by luma variance and creates a part of the common codebook just for them. The other two modes are median cut (the simplest one, but with mediocre output) and ELBG (which uses median cut to create the initial codebook; if that codebook is not full it means we already have all possible entries and do not need to perform ELBG at all).
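The partitioning itself can be as simple as the following sketch (the threshold and the block layout are arbitrary here):

```rust
type Block = [u8; 16];

// Sample variance of the block, used as the partitioning key.
fn luma_variance(block: &Block) -> u32 {
    let mean = block.iter().map(|&p| u32::from(p)).sum::<u32>() / 16;
    block
        .iter()
        .map(|&p| {
            let d = i32::from(p) - mean as i32;
            (d * d) as u32
        })
        .sum::<u32>()
        / 16
}

// Split the input vectors into flat and detailed groups; each group
// then gets its own part of the common codebook generated for it.
fn partition_blocks(blocks: &[Block], threshold: u32) -> (Vec<Block>, Vec<Block>) {
    blocks
        .iter()
        .copied()
        .partition(|b| luma_variance(b) < threshold)
}
```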

Here are rough results from encoding several different files (using different numbers of strips): median cut took 11-14 seconds, ELBG took 110-160 seconds, and the new mode (I decided to call it fast) took 43-62 seconds. I think even such approximate numbers speak for themselves. Also there’s an interesting side effect: because of the partitioning it tends to produce smaller codebooks overall.

And while we’re speaking about quantisation results, here’s the first frame of the waterfall sample encoded in the different modes:

(comparison images: median cut / fast / full ELBG)

As you can see, median cut produces not-so-good images, but maybe those artefacts will make people think of the original Cinepak more. Fast mode is much nicer even if it still has some artefacts (just look at the left edge of the waterfall), but if you don’t pay too much attention it’s not much worse than full ELBG.

Are there ways to improve it even further? Definitely. For starters, the original encoder exploits the previous codebook to create a new one, while my encoder always generates a new codebook from scratch (in theory I could skip the median cut stage for inter strips, but I suspect that ELBG would take much longer in that case). The second way is to speed up ELBG itself. From what I could see, it spends most of the time determining which cluster each of the points belongs to (the naive form of that loop is sketched below). With some smarter structure (something like a k-d tree, plus some caching to skip recalculating certain clusters altogether) it should be possible to speed it up several times. Unfortunately I value clarity more in this case, so I’ll leave it as is.
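For reference, here is the naive form of that hot loop; this is what a k-d tree or cached distances would speed up:

```rust
type Vec16 = [u8; 16];

// Squared Euclidean distance between two vectors.
fn dist(a: &Vec16, b: &Vec16) -> u32 {
    a.iter()
        .zip(b.iter())
        .map(|(&x, &y)| {
            let d = i32::from(x) - i32::from(y);
            (d * d) as u32
        })
        .sum()
}

// For every point find the index of the closest codebook entry;
// this is where ELBG spends most of its time.
fn assign_clusters(points: &[Vec16], codebook: &[Vec16]) -> Vec<usize> {
    points
        .iter()
        .map(|pt| {
            (0..codebook.len())
                .min_by_key(|&i| dist(pt, &codebook[i]))
                .unwrap_or(0)
        })
        .collect()
}
```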

P.S. I may also try to see how using thresholds and block variance to decide each block’s coding mode affects the speed and quality (in this case we first decide how to code the blocks and then form codebooks for them, instead of forming codebooks first and then deciding which mode suits the current block better; this way we’ll also have smaller sets to make codebooks from). But I may do something different instead. Or nothing at all.

NihAV experiments: multi-threaded decoder

Thursday, June 1st, 2023

In my efforts to have an independent player (one that relies on third-party libraries merely for doing input and output, while the demuxing and decoding are done purely by NihAV) I had to explore the ways of writing a multi-threaded H.264 decoder. And while it’s not working perfectly, it’s a good proof of concept. Here I’ll describe how I hacked my existing decoder to support multi-threading.

rv4enc: probably done

Saturday, May 13th, 2023

In one of the previous posts I said that this encoder would likely keep me occupied for a long time. Considering how bad that estimation was, I must be a programmer.

Anyway, there were four main issues to resolve: compatibility with the reference player, B-frame selection, motion estimation for interpolated macroblocks in B-frames, and rate control.

I gave up on the compatibility. The reference player is unwieldy and I’d rather not run it at all, let alone debug it. Nowadays the majority of players use my decoder anyway, and the produced videos seem to play fine with it.

The question of motion vector search for interpolated macroblocks was discussed in the previous post. The solution is there but it slows down encoding several times over. As a side note, by omitting intra 4×4 mode in B-frames I got a significant speed-up (ten to thirty percent depending on the quantiser), so I decided to keep it that way by default.

The last two issues were resolved with the same trick: estimating frame complexity. This is done in a relatively simple way: calculate the SATD (the sum of absolute values of the Hadamard-transformed block) of the difference between the current frame and some previous frame with motion compensation applied. For speed reasons you can downsample those frames and use a simpler motion search (like one with pixel precision only). The calculated value can then be used to estimate some frame properties.
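Here is a minimal 4×4 SATD sketch matching that description (the actual encoder may use different block sizes and downsampling):

```rust
// Hadamard-transform the block difference and sum the absolute values.
fn satd4x4(diff: &[i32; 16]) -> u32 {
    let mut tmp = [0i32; 16];
    // horizontal 4-point Hadamard butterflies on each row
    for (row_in, row_out) in diff.chunks(4).zip(tmp.chunks_mut(4)) {
        let a = row_in[0] + row_in[1];
        let b = row_in[0] - row_in[1];
        let c = row_in[2] + row_in[3];
        let d = row_in[2] - row_in[3];
        row_out[0] = a + c;
        row_out[1] = b + d;
        row_out[2] = a - c;
        row_out[3] = b - d;
    }
    // vertical transform on each column plus the final sum of magnitudes
    let mut sum = 0u32;
    for x in 0..4 {
        let a = tmp[x] + tmp[x + 4];
        let b = tmp[x] - tmp[x + 4];
        let c = tmp[x + 8] + tmp[x + 12];
        let d = tmp[x + 8] - tmp[x + 12];
        sum += (a + c).unsigned_abs()
            + (b + d).unsigned_abs()
            + (a - c).unsigned_abs()
            + (b - d).unsigned_abs();
    }
    sum
}
```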

For example, if the difference between frames 0 and 1 is about the same as the difference between frames 1 and 2, then frame 1 should probably be coded as a B-frame. I’ve implemented this as a simple dynamic frame selector that allows one B-frame between reference frames (it could be extended to allow several B-frames but I didn’t bother) and it improved coding compared to the fixed frame order.
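The decision itself boils down to something like this sketch (the tolerance value is mine, not the encoder’s):

```rust
// Frame 1 is a B-frame candidate when the complexity of the 0->1
// difference is close enough to that of the 1->2 difference.
fn is_b_frame_candidate(diff01: u32, diff12: u32) -> bool {
    let lo = u64::from(diff01.min(diff12));
    let hi = u64::from(diff01.max(diff12));
    hi * 10 <= lo * 13 // within roughly 30% of each other
}
```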

Additionally there seems to be a correlation between frame complexity and output frame size (also depending on the quantiser, of course). So I reworked the rate control system to rely on those factors when selecting the quantiser for I- and P-frames (adjusting it if the predicted and actual sizes differ too much). B-frames simply use the P-frame quantiser plus a constant offset. The system seems to work rather well, except that it tends to assign too high a quantiser to some frames, resulting in a rather crisp I-frame followed by more and more blurry frames.
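And a rough sketch of such a rate control scheme (the model and the constants are my illustration, not the actual encoder code):

```rust
struct RateControl {
    // per-quantiser factors mapping complexity to expected frame size
    size_factor: [f32; 32],
}

impl RateControl {
    // Pick the quantiser whose predicted size is closest to the budget.
    fn pick_quant(&self, complexity: f32, budget: f32) -> usize {
        (0..32)
            .min_by(|&a, &b| {
                let da = (complexity * self.size_factor[a] - budget).abs();
                let db = (complexity * self.size_factor[b] - budget).abs();
                da.partial_cmp(&db).unwrap()
            })
            .unwrap()
    }
    // Adjust the model when the predicted and actual sizes differ.
    fn update(&mut self, quant: usize, complexity: f32, actual_size: f32) {
        let observed = actual_size / complexity.max(1.0);
        self.size_factor[quant] = 0.5 * (self.size_factor[quant] + observed);
    }
}
```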

I suppose I’ll play with it for a week or two, hopefully improving it a bit, and then I shall commit it and move to something else.

P.S. The main goal of NihAV is to provide me with a playground for learning and testing new ideas. If it becomes useful besides that, that’s a bonus (for example, I’m mostly using nihav-sndplay to play audio nowadays). So the RealVideo 4 encoder has served its purpose by allowing me to play more with various concepts related to B-frames and rate control (plus there were some other tricks). Even if its output makes RealPlayer hang, even if it’s slow, that does not matter much as I’m not going to use it myself and nobody else is going to use it either (the VP6 encoder had an initial burst of interest from some people but none afterwards, and nobody has cared about RV4 from the start).

Now the challenge is to find myself an interesting task, because most of the tasks I can think of involve improving some encoder or decoder or (shudder) writing a MOV/MP4 muxer. Oh well, I hope I’ll come up with something regardless.