Looking at Digital Pictures video format(s)

June 26th, 2023

Since the OSQ format requires obtaining a copy of an unknown old version of some expensive software, I’ll leave it to somebody else. Meanwhile I’ve looked closer at the AVC format mentioned in the previous post and at its relatives.

For those of you who don’t recognize the name immediately, Digital Pictures is the company responsible for some FMV-based action games, including the infamous Night Trap. As I rediscovered previously, the AVC files they use are really SGA files with compression method 0x81 but what about the other formats?

About half of the games I could look at contain an archive occupying most of the CD space, and inside that archive are the same AVC files. The other half of the games usually has one or two AVC files with a company logo plus one megamovie in various formats. After some research it turned out to be the same 0x81 compression format, just with audio data and varying headers.

And since nobody bothered to document it for The Wiki, I’ll explain it here.

Looking at even more game formats

June 23rd, 2023

Since I have nothing better to do as usual, I decided to look at some game formats.

For instance, there’s a game called The Fuel Run promoting a product from a Swiss company supporting russian war crimes. This game has animations in VDO format. A VDO file header starts with the string “Artform” and the format employs RLE compression, with each line of the frame prefixed by its size. What’s funny is that its RLE uses only the bottom 6 bits for the run length while the top bit is completely unused. For obvious reasons the format is not worth documenting further.
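
Still, for illustration, here’s roughly what such a line decoder looks like. The size-prefixed lines and the 6-bit run length are as described above; treating bit 6 as a run/copy flag is purely my guess for this sketch.

    /// Decodes one line of VDO RLE data; `src` is the line's data located
    /// via its size prefix. The 6-bit run length and the unused top bit
    /// are as described above; bit 6 selecting between a run and a literal
    /// copy is an assumption made for this sketch.
    fn decode_vdo_line(src: &[u8], dst: &mut [u8]) -> Option<()> {
        let mut pos = 0;
        let mut out = 0;
        while pos < src.len() && out < dst.len() {
            let op = src[pos];
            pos += 1;
            let len = (op & 0x3F) as usize; // bottom 6 bits: run length
            if op & 0x40 != 0 {
                // assumed: repeat the next byte `len` times
                let val = *src.get(pos)?;
                pos += 1;
                dst.get_mut(out..out + len)?.fill(val);
            } else {
                // assumed: copy the next `len` bytes verbatim
                dst.get_mut(out..out + len)?
                    .copy_from_slice(src.get(pos..pos + len)?);
                pos += len;
            }
            out += len;
        }
        Some(())
    }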

Or there’s a game called Double Switch; this one has AVC videos. It uses an RGB555 palette and 8×8 tiles that, depending on the opcode, may be either coded with one of 25 predefined patterns and 1-7 colours or split into smaller tiles. Only afterwards did I decide to look into The Wiki, and it seems to match the SGA format description (except that this particular format variant is not documented). I don’t know if I should bother writing a decoder for it, but with the lack of PC codecs to RE I might try my hoof at console ones instead.

What optimisation possibilities I miss in Rust

June 23rd, 2023

Since a certain friend of mine keeps asking what features I need in Rust and then forgets the answer, I decided to write it all down here. Hopefully it will become outdated sooner rather than later.

And I’d like to start with some explanations and conditions. I develop a certain multimedia project, so I have certain common flows (e.g. processing 16×16 macroblocks in a frame) and I’d like to be able to optimise for them. Also I do not like to use the nightly/unstable version of Rust (as unstable features may take an extremely long time to hit stable and they change in the process, as happened with asm!{} support, to give one example). And finally I do not accept the answer "there’s a crate X for that": out of design considerations I prefer to avoid external dependencies (short explanation: they get out of control fast; my encoder and player projects depend only on my own crates for doing everything, but the player additionally pulls in the sdl2 dependency and suddenly it’s 33 crates instead of 19; IIRC with a newer version of the sdl2 crate the total gets to fifty).
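
To make the macroblock example concrete, here’s the kind of hot loop I mean (an illustration made up for this post, not actual NihAV code); every row access carries bounds checks that the compiler can rarely prove away without contorting the code:

    /// Adds a 16×16 residual block to a frame plane, a typical hot loop in
    /// a video decoder. (Made-up illustration, not actual NihAV code.)
    fn add_residual(plane: &mut [u8], stride: usize, x: usize, y: usize, residual: &[i16; 256]) {
        for row in 0..16 {
            // each of these range indexings is bounds-checked
            let line = &mut plane[(y + row) * stride + x..][..16];
            for (dst, &diff) in line.iter_mut().zip(&residual[row * 16..][..16]) {
                *dst = (i32::from(*dst) + i32::from(diff)).clamp(0, 255) as u8;
            }
        }
    }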

Anyway, here are the features I miss with some explanations why that should be relevant probably not just to me.

Looking at yet another weird game format

June 21st, 2023

So I decided to look at yet another game at archive.org with an FMV tag. This time it was The Lawnmower Man. It has an NXL format that would be a rather unremarkable container with raw video and audio except for one fact: it is EGA full-motion video (320×200, 16 colours, a custom palette for each frame). It is the first time I’ve seen video stored in planar form (i.e. for each line first the lowest bits of each colour index value are stored, then bits 1, bits 2 and finally bits 3). Now consider that the format is big-endian (even though the DOS version reportedly came out earlier than the Mac or SEGA CD versions) and it feels positively weird.
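
For illustration, unpacking one such line could look like this (a sketch of mine: the lowest-bits-first plane order is from the format, while the MSB-first bit packing within each byte is my assumption):

    /// Unpacks one 320-pixel line of 4-plane EGA data (4 × 40 bytes) into
    /// palette indices. Plane order (bit 0 first) is as described above;
    /// MSB-first packing within each byte is an assumption.
    fn unpack_ega_line(src: &[u8; 160], dst: &mut [u8; 320]) {
        dst.fill(0);
        for plane in 0..4 {
            let plane_data = &src[plane * 40..][..40]; // 320 pixels = 40 bytes
            for (i, pix) in dst.iter_mut().enumerate() {
                let bit = (plane_data[i / 8] >> (7 - (i & 7))) & 1;
                *pix |= bit << plane;
            }
        }
    }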

Some of the details I bothered to document are at the usual place (somebody should keep The Wiki alive besides Mike, after all).

Conquering yet another game format

June 18th, 2023

Sometimes I watch reviews of various old video games (usually of the adventure type but not necessarily). This time it was the Conquest Earth RTS from 1997, and the reviewer said the magic words: “I was not able to extract or play videos with anything”. Okay, you got my attention.

I could find some .rpl files there using the Escape 124 codec (yes, Eidos was the publisher, how could you tell?), some standalone .flh files and some flicN.wad archives. Despite the name, the archives turned out to have a footer, i.e. all metadata is stored at the end for a change. The video files inside seem to be some .flc but mostly .flh.

As one could reasonably expect, it turned out to be yet another hack of the old venerable FLIC format, now with RNCv2 compression. I did not look further to find out whether it’s raw frames or RLE-compressed data inside, but it does not matter much as the main features are already discovered.

There are many various FLIC extensions, like high(er)-bitdepth video, custom RLE coding schemes and even audio support. And that’s not counting fun things like FLIC-in-AVI. But I leave documenting all those format variations to The Multimedia Mike, as gaming and documenting FLIC are his passions.

One last experiment with Cinepak encoder

June 17th, 2023

I’ve remembered that back in the day there was an encoder for the RoQ format (the format that uses a codebook with 2×2 YUV vectors, what a coincidence!) called Switchblade, and it used NeuQuant before it was integrated into FFmpeg, where it switched to ELBG. So I decided to give that approach a try.

In case you have forgotten, NeuQuant is an application of a Kohonen neural network to the task of generating a palette for an image. I’ve implemented that kind of thing already, so I tried my hoof at adapting it to a larger vector size. The good thing: it works and it’s reasonably fast (2-3 times slower than median cut and faster than partitioned ELBG, and that with code that uses doubles for the majority of its calculations). The bad thing: the result quality is mediocre. The results can obviously be improved by adjusting various factors (wait, am I talking about a neural network or string theory?) and by changing the pseudo-random order in which the candidates are sampled, but I don’t feel enthusiastic about tweaking all those parameters just to see which ones work well for the widest selection of video sequences.
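
For reference, the core update is trivial; here’s a minimal sketch of the Kohonen-style training step adapted to Cinepak-sized six-component vectors (four luma values plus U and V). The real NeuQuant also pulls the winner’s neighbours along, decays the learning rate over time and keeps per-entry bias and frequency terms, all omitted here.

    /// One training step: find the best matching codebook entry and pull
    /// it towards the sample by the learning rate `alpha`.
    fn train_step(codebook: &mut [[f32; 6]], sample: &[f32; 6], alpha: f32) {
        // find the best matching unit by squared Euclidean distance
        let mut best = 0;
        let mut best_dist = f32::MAX;
        for (i, entry) in codebook.iter().enumerate() {
            let dist: f32 = entry.iter().zip(sample).map(|(a, b)| (a - b) * (a - b)).sum();
            if dist < best_dist {
                best_dist = dist;
                best = i;
            }
        }
        // move the winner towards the sample
        for (c, &s) in codebook[best].iter_mut().zip(sample) {
            *c += alpha * (s - *c);
        }
    }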

So I’m drawing a line here. It was a quick and failed experiment, I should find something better to do.

Yet another MOV quirk

June 15th, 2023

Since I had nothing better to do, I was browsing FMV games at archive.org, and in one of them I found a rather peculiar sample: avconv gets the palette wrong in the first half of it, while nihav-tool gets the palette wrong in the second half of the clip. And I thought MOV was not supposed to have palette changes at all.

It turned out they used a multiple sample descriptors trick: it’s possible to provide several codec descriptions for one track and use one or another for different frames. That file has two descriptors for the video track, each with a different palette. Mystery solved.
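
For the curious: the stsd atom may store several sample descriptions, and the stsc (sample-to-chunk) table says which description applies to which run of chunks. Resolving it looks roughly like this (a simplified sketch with my own types, not code from a real demuxer):

    /// One entry of the 'stsc' (sample-to-chunk) table.
    struct StscEntry {
        first_chunk: u32,              // 1-based number of the first chunk in this run
        samples_per_chunk: u32,
        sample_description_index: u32, // 1-based index into the 'stsd' entries
    }

    /// Returns which sample description a given chunk uses: entries are
    /// sorted by first_chunk, and the last one starting at or before the
    /// chunk applies.
    fn description_for_chunk(stsc: &[StscEntry], chunk: u32) -> u32 {
        stsc.iter()
            .take_while(|e| e.first_chunk <= chunk)
            .last()
            .map_or(1, |e| e.sample_description_index)
    }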

And it also solved another mystery with a different file from that game where some frames are not decoded properly. It turned out that it also has two sample descriptors for the video track: one is A**le Graphics and another one is Cinepak.

Back in the day I ranted that MOV is too flexible and this proves once again how true that is. Good thing I don’t have to care about supporting such files properly.

NihAV: another step to being self-sufficient

June 13th, 2023

I’ve mentioned previously that I played with my H.264 decoder trying to make it multi-threaded. Now I went a bit further and plugged it into my video player. Now, instead of hopelessly lagging on 720p video, it can play it in real time just fine; after improving my player even further (and enabling assembly optimisations once the Rust compiler is good enough for that) I can use it to play most of the videos I care about without resorting to external decoders or players. And in theory, using it more will lead to fixing and polishing it more, thus forming a stable loop.

Anyway, the code is not public yet as I hacked this new decoder up in a separate crate and I still need to merge it back and clean it up a bit, but I’d like to describe the interfaces and my reasons behind them.

So, the multi-threaded decoder has a separate interface (for obvious reasons). I thought about writing a wrapper to make single-threaded decoders behave like multi-threaded ones but decided against it (at least for now). NADecoderMT has the following methods (a rough sketch in code follows the list):

  • init()—initialises the decoder. One of the parameters is the number of threads to use. IMO it’s the caller that should decide how many threads it can spare, as the decoder does not know what else will be done in parallel (maybe another multi-threaded decoder or two are running);
  • can_take_input()—queries whether the decoder is ready to queue the next frame for decoding. Of course you can call queue_pkt() and check if it accepted the input, but that is not always desirable (e.g. if we need to retrieve an input packet and then hold on to it until the decoder is ready to accept it);
  • queue_pkt()—tries to queue the next frame for decoding;
  • has_output()—checks if the decoder has produced some frames for output. Since get_frame() waits for a frame to be decoded, this function is necessary unless you want to block the thread calling the decoder;
  • get_frame()—waits until at least one frame is decoded and returns it (or a special error if there are no frames to be decoded);
  • flush()—stops decoding all frames and clears the state (e.g. after seek).
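
Put together, the interface could look roughly like this (a sketch with placeholder types; the actual NihAV definitions take more parameters, such as codec information, and use richer error types):

    // Placeholder types standing in for the real NihAV ones.
    struct NAPacket;
    struct NAFrame;
    struct DecoderError;

    trait NADecoderMT {
        /// Initialises the decoder for the given number of worker threads.
        fn init(&mut self, nthreads: usize) -> Result<(), DecoderError>;
        /// Checks whether the decoder is ready to queue the next packet.
        fn can_take_input(&mut self) -> bool;
        /// Tries to queue a packet (with its sequential ID) for decoding;
        /// returns whether it was accepted.
        fn queue_pkt(&mut self, pkt: NAPacket, id: u32) -> Result<bool, DecoderError>;
        /// Checks whether any decoded frames are ready to be retrieved.
        fn has_output(&mut self) -> bool;
        /// Waits until a frame is decoded and returns it along with its
        /// ID; on error the ID tells which frame failed.
        fn get_frame(&mut self) -> (Result<NAFrame, DecoderError>, u32);
        /// Drops all queued input and in-flight frames (e.g. after a seek).
        fn flush(&mut self);
    }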

Another peculiarity of this decoder interface is that it operates on pairs of a frame and its sequential number. The reason is simple: you get decoded frames out of order, so you need to distinguish them somehow (and in case of a decoding error we need to know which frame caused it).

This also leads to a special frame reordering mechanism for such codecs. I’ve created MTFrameReorderer, which requires you to “register” a frame for decoding (providing you with an ID that is fed to the decoder along with the frame data) and to “unregister” a frame on error (that’s one of the places where the returned frame ID comes in handy). Unfortunately it’s not possible to create a generic reorderer that would a) work completely codec-agnostically, b) not require a whole file (or an indefinitely long sequence of frames) to be buffered before output, and c) produce a monotonically increasing sequence of frames. Considering that H.264 has no real concept of frames and can build a pyramid of referenced frames adding layer by layer (and mind you, some frames may fail to decode and thus be absent from the output), I simply gave up and made a heuristic that checks whether we have enough initial frames decoded and outputs some of them if possible. At least it seems to work rather fine on the conformance suite (except for a couple of specially crafted files, but oh well).
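
As a sketch, the reorderer interface amounts to something like this (the signatures are my guesses from the description above; NAFrame is the same kind of placeholder as in the previous sketch):

    struct NAFrame; // placeholder, as in the previous sketch

    /// Guessed sketch of the reorderer interface described above.
    struct MTFrameReorderer {
        // pending frame IDs, buffered out-of-order frames, ...
    }

    impl MTFrameReorderer {
        /// Registers a frame about to be queued for decoding and returns
        /// the ID that is fed to the decoder along with the frame data.
        fn register_frame(&mut self) -> u32 { todo!() }
        /// Forgets a registered frame after a decoding error so the
        /// reorderer does not wait for it forever.
        fn unregister_frame(&mut self, _id: u32) { todo!() }
        /// Accepts a decoded frame and, once the heuristic decides enough
        /// initial frames have piled up, returns the next one in order.
        fn reorder_frame(&mut self, _frm: NAFrame, _id: u32) -> Option<NAFrame> { todo!() }
    }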

Maybe in the future I’ll try making more decoders multi-threaded, but for now even one decoder is enough, especially such a practical one. Still, I need to find something more interesting to do.

Further Cinepak experiments

June 5th, 2023

Having nothing better to do, I kept experimenting with the Cinepak encoder.

I considered implementing some variant of the codebook decomposition scheme suggested by Tomas in the comments to the previous post, but I’m still not sure if I should bother, even though it looks promising. So I tried the old thresholds-based scheme instead.

And what do you know, it speeds things up considerably: my usual test sample gets encoded in 27-35 seconds (depending on the thresholds) instead of 44 seconds in the usual mode. But since I don’t know what good thresholds would be, I did the opposite and added a refinement mode: after deciding which codebook to use for each block, I re-generate the codebook using only the blocks that belong to it. Of course this increases processing time: for example, that file takes 75 seconds to encode with refinement, about 70% more time but still less than double (for comparison, in full ELBG mode it’s an increase from about 160 seconds to 270 seconds).
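
The refinement pass itself is short; here is a hypothetical sketch where choose_codebook() and generate_codebook() stand in for the real encoder machinery:

    // Placeholders standing in for the actual encoder types and helpers.
    struct Block;
    struct Codebook;
    fn choose_codebook(_blk: &Block, _cbs: &[Codebook]) -> usize { todo!() }
    fn generate_codebook(_blocks: &[&Block]) -> Codebook { todo!() }

    /// Refinement: each block first picks a codebook, then every codebook
    /// is re-generated from only the blocks assigned to it.
    fn refine(blocks: &[Block], codebooks: &mut [Codebook]) {
        let mut assigned: Vec<Vec<&Block>> = vec![Vec::new(); codebooks.len()];
        for blk in blocks {
            let idx = choose_codebook(blk, codebooks); // initial decision
            assigned[idx].push(blk);
        }
        for (cb, blks) in codebooks.iter_mut().zip(&assigned) {
            *cb = generate_codebook(blks); // rebuild from "its" blocks only
        }
    }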

So by a rough estimate, selecting only the relevant blocks for codebook generation shaves 20-40% off the encoding time. And splitting the data into partitions and generating a codebook by parts made the process about three times faster. I suspect that with a proper approach to clustering, vector quantisation can be made two to three times faster, but I don’t think I want to experiment with that. I should call it a day and move on to something else instead.

Quick experiments with Cinepak encoder vector quantisation

June 3rd, 2023

Out of curiosity I decided to check how partitioning the input before creating a codebook affects encoding speed. So I’ve added a mode to the Cinepak encoder that partitions vectors by luma variance and creates a part of the common codebook just for each partition. The other two modes are median cut (the simplest one but with mediocre output) and ELBG (which uses median cut to create the initial codebook; additionally, if that codebook is not full it means we already have all possible entries and do not need to perform ELBG at all).
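
The partitioning itself is as simple as it sounds; an illustration (not the actual encoder code) with 2×2 luma blocks stored as four bytes each:

    /// Returns the sum of squared luma deviations of a 2×2 block (a fixed
    /// multiple of the variance, which is fine for bucketing).
    fn luma_variance(y: &[u8; 4]) -> u32 {
        let sum: i32 = y.iter().map(|&v| i32::from(v)).sum();
        y.iter()
            .map(|&v| {
                let d = i32::from(v) * 4 - sum; // (v - mean) * 4
                (d * d) as u32
            })
            .sum::<u32>()
            / 16
    }

    /// Splits the input vectors into buckets by variance thresholds; each
    /// bucket then gets its own part of the common codebook.
    fn partition_by_variance(vectors: &[[u8; 4]], thresholds: &[u32]) -> Vec<Vec<[u8; 4]>> {
        let mut parts: Vec<Vec<[u8; 4]>> = vec![Vec::new(); thresholds.len() + 1];
        for v in vectors {
            let var = luma_variance(v);
            let bucket = thresholds.iter().position(|&t| var < t).unwrap_or(thresholds.len());
            parts[bucket].push(*v);
        }
        parts
    }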

Here are rough results from encoding several different files (using different numbers of strips): median cut took 11-14 seconds, ELBG took 110-160 seconds, and the new mode (I decided to call it “fast”) takes 43-62 seconds. I think even such approximate numbers speak for themselves. There’s also an interesting side effect: because of the partitioning it tends to produce smaller codebooks overall.

And while we’re speaking about quantisation results, here’s the first frame of the waterfall sample encoded in the different modes:

[images: the first frame encoded with median cut, fast mode and full ELBG]

As you can see, median cut produces not-so-good images, but maybe those artefacts will make people think of the original Cinepak more. Fast mode is much nicer, though it still has some artefacts (just look at the left edge of the waterfall); if you don’t pay too much attention, it’s not much worse than full ELBG.

Are there ways to improve it even further? Definitely. For starters, the original encoder exploits the previous codebook to create a new one, while my encoder always generates a new codebook from scratch (in theory I could skip the median cut stage for inter strips, but I suspect that ELBG would take much longer in that case). The second way is to speed up the ELBG itself. From what I could see, it spends most of its time determining which cluster each of the points belongs to. With some smarter structure (something like a k-d tree, plus some caching to avoid recalculating certain clusters altogether) it should be possible to speed it up several times over. Unfortunately I value clarity more in this case, so I’ll leave it as is.
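
For reference, here’s the hot spot in its naive form: every point scans the whole codebook, which is O(points × entries) and is exactly what a k-d tree or cluster caching would cut down.

    /// The assignment step that dominates ELBG's run time: for every input
    /// point, find the closest centroid by scanning all of them.
    fn assign_clusters(points: &[[f64; 6]], centroids: &[[f64; 6]], assignment: &mut [usize]) {
        for (point, slot) in points.iter().zip(assignment.iter_mut()) {
            let mut best = 0;
            let mut best_dist = f64::MAX;
            for (i, c) in centroids.iter().enumerate() {
                let dist: f64 = point.iter().zip(c).map(|(a, b)| (a - b) * (a - b)).sum();
                if dist < best_dist {
                    best_dist = dist;
                    best = i;
                }
            }
            *slot = best;
        }
    }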

P.S. I may also try to see how using thresholds and block variance to decide a block’s coding mode affects the speed and quality (in that case we first decide how to code the blocks and then form codebooks for them, instead of forming codebooks first and then deciding which mode suits the current block better; this way we’d have smaller sets to make codebooks from, too). But I may do something different instead. Or nothing at all.