Looking at PACo

August 23rd, 2021

I was asked to look at this format, used at least in the game Iron Helix, and it turned out to be a somewhat interesting one.

It turned out to come from the Macintosh. There are two small signs hinting at it: big-endian numbers inside the file and the fact that it uses the default QuickTime palette.

The container is simple but functional: there’s a header containing frame sizes among other things, and frames consisting of several records (usually it’s just video data and a frame data end marker; the first frame has an initialisation data chunk as well). Each frame specifies one of two compression methods along with the coded area size and offset.
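
To make that structure more concrete, here is a rough Rust model of it; the names and the exact field layout are my own, only the logical structure follows the description above.

    // A rough model of the container structure described above; names and
    // types are mine, the real byte-level layout is not implied here.
    struct PacoHeader {
        frame_sizes: Vec<u32>,        // one size per frame, stored big-endian
    }

    enum Record {
        Init(Vec<u8>),                // first frame only: initialisation data
        Video {
            method: u8,               // 1 = plain RLE, 2 = interlaced variant
            width: u16,               // coded area size...
            height: u16,
            x_off: u16,               // ...and its offset inside the frame
            y_off: u16,
            data: Vec<u8>,
        },
        EndOfFrame,                   // frame data end marker
    }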

Compression is just a slightly advanced RLE: codes 0x01–0x7F mean copying data, codes 0x80–0xFD are used to signal runs, code 0x00 is used to code long operations, code 0xFE is used to signal skips, and code 0xFF is used for runs of either pairs or quads of pixels (depending on the run length). Each line is coded independently (i.e. runs or copies can’t go past the current line end). So what’s the tricky part there?
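
A minimal sketch of the per-line decoding loop; the opcode ranges come from the format, but the exact length derivations marked in the comments are my assumptions:

    // Sketch of decoding one line (no bounds checking for brevity); the
    // run/skip length encodings are guesses made for illustration only.
    fn decode_line(src: &mut impl Iterator<Item = u8>, dst: &mut [u8]) -> Option<()> {
        let mut pos = 0;
        while pos < dst.len() {
            match src.next()? {
                0x00 => { /* long operation: extended length follows (assumption) */ }
                op @ 0x01..=0x7F => {                 // copy `op` bytes verbatim
                    for _ in 0..op {
                        dst[pos] = src.next()?;
                        pos += 1;
                    }
                }
                op @ 0x80..=0xFD => {                 // run of a single value
                    let len = 0x100 - op as usize;    // length derivation assumed
                    let val = src.next()?;
                    for _ in 0..len {
                        dst[pos] = val;
                        pos += 1;
                    }
                }
                0xFE => pos += src.next()? as usize,  // skip: pixels stay unchanged
                0xFF => { /* run of pixel pairs or quads depending on the length */ }
            }
        }
        Some(())
    }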

That’s compression method 1 and it works quite well. Compression method 2 is essentially the same but it codes lines in an interlaced manner, for which I haven’t managed to get a good picture in all cases yet (it seems to code more lines than declared sometimes, and the interlacing seems to depend on both the decoded area position and its height). But hopefully it won’t take long and I’ll be able to document it in The Wiki.

Some words on QT Graphics (SMC) codec

August 10th, 2021

A recent question about buggy SMC decoding led me deep into the QuickTime specification to look at the codec’s missing opcode. And there are some noteworthy things here as well.

Back in the day there was a multimedia player for Unix called XAnim. Its last release was in 1999—before the other opensource multimedia player projects had started! It was both feature-rich (e.g. it could step frames forward and backwards, something that not all current media players can do) and had excellent codec support for its time.

Somehow its author reverse engineered a lot of codecs (long before the era of decompilers too) and even managed to obtain the sources for e.g. Indeo; while he could not provide those sources, he offered binary builds for a wide variety of architectures—Alpha, MIPS, Sparc, PowerPC, x86. It was a treasure trove of formats: lots of the decoders were ported to other projects (even I did that for one or two codecs), and the binary codecs were a great help in reverse-engineering efforts as well.

Now to SMC itself. Formally it’s the QuickTime Graphics codec, but people call it after its FOURCC, which is “smc “, probably after the author’s initials.

Opensource SMC decoders come from the same source (I based mine on the description in The Wiki, but you can guess what that description is based on; and yes, back in the day e.g. MPlayer and Xine had their own decoders for various codecs before relying on libavcodec for everything). After looking at the binary specification I can say it looks exactly like it was reverse engineered from it directly (it has the same logic and data types but lacks sensible names). Anyway, the thing is that this shared code does not handle opcode 0xF0, and I finally had an occasion to look into it.

I took the QuickTime 6.3 binary specification for Windows (somehow the decoder ended up in QuickTimeInternetExtras.qtx) and looked inside. It turns out there are several decoding functions there (for different output formats) but they all do the same thing: handle the 0xF0 opcode in exactly the same way as the 0xE0 opcode (raw blocks); there are no differences whatsoever.
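
In other words, a third-party decoder only needs to accept 0xF0 wherever it accepts 0xE0. A toy sketch of just that case (the helper structure is made up; the low-nibble block count mirrors the other SMC opcodes):

    // Toy dispatch sketch, not the QuickTime code itself: the one point it
    // illustrates is that 0xF0 decodes as raw 4x4 blocks exactly like 0xE0.
    fn handle_op(op: u8, src: &mut impl Iterator<Item = u8>,
                 blocks: &mut Vec<[u8; 16]>) -> Option<()> {
        match op & 0xF0 {
            0xE0 | 0xF0 => {
                let count = (op & 0x0F) as usize + 1; // block count in the low nibble
                for _ in 0..count {
                    let mut blk = [0u8; 16];          // 16 palette indices per block
                    for px in blk.iter_mut() {
                        *px = src.next()?;
                    }
                    blocks.push(blk);
                }
            }
            _ => { /* the other opcodes are not relevant here */ }
        }
        Some(())
    }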

That’s one mystery less, even if the answer is a bit disappointing. At least I could reminisce about good old times hardly anybody else remembers.

Playing with trellis and encoding

August 8th, 2021

I said before that I want to play with encoding algorithms within NihAV and here’s another step (the previous major step was vector quantisation and a simple Cinepak encoder using it). Now it’s time for trellis search in encoding.

The idea by itself is very simple: you want to encode a sequence optimally (or decode a transmitted sequence with distortions), so you represent your data as a set of possible states for each sample and search for a path from one state to another with the minimum error. Since each state of a sample is connected with all the states of the previous sample, the graph looks like a trellis.

The search itself is performed by selecting, for each state, the transition from a previous state that gives the minimal error, then selecting the state with the least error for the last sample and tracing back the path that led to it from the beginning. You just need to store a pointer to the previous state, the error value, and whatever decoder state you require.
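
The bookkeeping for that is tiny; here is a sketch (the names are mine, not actual NihAV code):

    // Every trellis node stores the accumulated error and the index of its
    // best predecessor; the final path is recovered by walking those links.
    struct Node {
        error: u64,   // accumulated error along the best path to this state
        prev:  usize, // index of the best predecessor state
    }

    fn best_path(trellis: &[Vec<Node>]) -> Vec<usize> {
        // pick the state with the least error for the last sample...
        let last = trellis.last().unwrap();
        let mut state = (0..last.len()).min_by_key(|&i| last[i].error).unwrap();
        // ...and trace the `prev` pointers back to the beginning
        let mut path = vec![0; trellis.len()];
        for (time, states) in trellis.iter().enumerate().rev() {
            path[time] = state;
            state = states[state].prev;
        }
        path
    }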

I’ve chosen an IMA ADPCM encoder as a test playground since it’s simple but useful. The idea of the format is very simple: you have a state consisting of the current sample value and a step size used as a multiplier for the transmitted 4-bit difference value; you reconstruct the difference, add it to the previously stored value, and adjust the step size (small delta—decrease the step, large delta—increase it). You have 16 possible states for each sample, which keeps the search reasonably fast.
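
The per-sample state transition is just the standard IMA ADPCM decoding step; a sketch of it using the standard IMA tables:

    // One IMA ADPCM decoding step, doubling as the trellis transition: it
    // yields the reconstructed sample (to measure the error against the
    // input) and the next state.
    const STEPS: [i32; 89] = [
            7,     8,     9,    10,    11,    12,    13,    14,    16,    17,
           19,    21,    23,    25,    28,    31,    34,    37,    41,    45,
           50,    55,    60,    66,    73,    80,    88,    97,   107,   118,
          130,   143,   157,   173,   190,   209,   230,   253,   279,   307,
          337,   371,   408,   449,   494,   544,   598,   658,   724,   796,
          876,   963,  1060,  1166,  1282,  1411,  1552,  1707,  1878,  2066,
         2272,  2499,  2749,  3024,  3327,  3660,  4026,  4428,  4871,  5358,
         5894,  6484,  7132,  7845,  8630,  9493, 10442, 11487, 12635, 13899,
        15289, 16818, 18500, 20350, 22385, 24623, 27086, 29794, 32767,
    ];
    const INDEX_ADJUST: [i8; 8] = [-1, -1, -1, -1, 2, 4, 6, 8];

    #[derive(Clone, Copy)]
    struct ImaState {
        predictor: i32, // current sample value
        step_idx:  i32, // index into STEPS
    }

    fn decode_sample(st: ImaState, code: u8) -> (i32, ImaState) {
        let step = STEPS[st.step_idx as usize];
        let mut diff = step >> 3;                     // reconstruct the difference
        if code & 1 != 0 { diff += step >> 2; }
        if code & 2 != 0 { diff += step >> 1; }
        if code & 4 != 0 { diff += step; }
        if code & 8 != 0 { diff = -diff; }            // top bit is the sign
        let pred = (st.predictor + diff).clamp(-32768, 32767);
        let idx  = (st.step_idx + INDEX_ADJUST[(code & 7) as usize] as i32).clamp(0, 88);
        (pred, ImaState { predictor: pred, step_idx: idx })
    }

For each input sample the encoder then tries all 16 codes from every surviving state and keeps the cheapest path per resulting state.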

There’s another tricky question: selecting the initial step size (it will adapt to the samples but you need to start with something). I select it to be close to the difference between the first and second samples, and I actually abuse the first state to store not the index of the previous state but rather a step index. This way I start with (ideally) 16 different step sizes around the selected one and can use whichever gives a slightly lower error in the end.

And another fun fact: this way I need just the code for decoding a single ADPCM sample and no actual compression code at all—the search already traverses all possible compressed codes.

I hope this demonstrates that it’s an easy method that improves quality significantly (I have not conducted proper testing, but in a quick test it reduced the mean squared error by 10–20% for me).

It should also come in handy for video compression but unfortunately rate-distortion optimisation does not look that easy…

About upcoming AV2…

August 6th, 2021

So today I’ve seen an article titled AV2 Video Codec — Early Performance Evaluation of the Research, which of course drew my attention.

Fun things are that it is a sponsored article and that it’s written by three engineers from ViCueSoft. This is strange, but so far it still looks more promising than the original AV1 feature review article with over 20 authors and too much marketing in it (my review of it is here; and to be fair it was followed by a more serious paper with fewer authors, but that one exists as well). Anyway, let’s see what is presented here.

I don’t care much about the performance so I’ll just quote a phrase from the conclusion: “…rough approximation shows only 1.2x times encoding complexity increase and 1.4x time decoding”. I find it a bit strange that the increase in decoding complexity is larger than the increase in encoding complexity; normally you’d expect encoding difficulty to rise faster because of the nature of the coding approach in modern codecs (an encoder needs to search for the best combination of coding tools and their parameters, and then apply the same steps as the decoder in order to have the coded frame in the same state as the decoder would have it). Let’s look at the features then; that’s the most interesting part for me anyway.

  • distant weighted compound mode and dual interpolation filter are removed;
  • semi-decoupled partitioning is introduced—this feature allows splitting luma and chroma blocks and coding their contents independently below a certain level. The paper also says VVC has a Dual Tree feature that does the same;
  • quantiser step overhaul—instead of the six tables in AV1 you now have just one simple formula for all quantiser steps;
  • extending motion sample selection to work with compound blocks as well;
  • more partitioning modes, to be more like HEVC;
  • multiple reference line selection for intra prediction—this allows you to select not just the neighbouring row/column for directional intra prediction. The same tool exists in VVC. It also reminds me of X8 frames in WMV2/WMV9, the first case known to me of intra prediction using more than one line;
  • offset-based intra prediction refinement—adding some offset to the top/left intra predicted edge of the block to make it even smoother (the offset is calculated from the neighbouring blocks as well);
  • intra secondary transform—this tool tries to improve compression by applying a special secondary transform to the low-frequency coefficients. VVC has a low-frequency non-separable transform doing the same;
  • simplifications in intra mode signalling;
  • some improvements in motion prediction coding;
  • cross-component sample offset—another chroma-from-luma tool: for the whole CTU, a DC offset is calculated from the luma values between the deblocking and CDEF stages and applied to the chroma values.

Essentially there are three kinds of improvements: simplification or generalisation of an existing feature (including its complete removal—I approve of either), picking a tool used by VVC/H.266 (that approach works but lacks originality), and an occasional improvement of an existing tool (too few of those and not too original). Of course nobody knows when AV2 will be declared finished and some things will surely have changed by then, but I don’t expect radical changes.

Once I said that I’d review H.266 when AV2 is released, but these guys have essentially done my work for me. Thanks!

Why codecs are designed like this and why they are not very interchangeable

August 2nd, 2021

Sometimes I have to explain the role of various codecs and why in most cases it’s pointless to adapt compression tricks from image codecs to audio codecs (and vice versa), or even from lossy to lossless codecs for the same kind of content. If you understand that already then you’ll find no new information here.

Yours truly
Captain Obvious

Looking at Tsunami games

August 1st, 2021

You may remember Tsunami Media as a company founded by ex-Sierra people that released a couple of games and then ceased to exist.

MobyGames lists the following titles (the characterisations are mine):

  • Ringworld: Revenge of the Patriarch—an adventure game based on the Ringworld novel by Larry Niven. ScummVM supports it but I played it long before that.
  • Wacky Funsters! The Geekwad’s Guide to Gaming—a collection of arcade games with wacky design (I’ve only played its sequel though).
  • Protostar: War on the Frontier—a reportedly good strategy game inspired by the same source as Star Control II, so they share a lot of game design (and yes, they have a common ancestor so it’s not a rip-off). I’ve never played it myself but what I saw looks interesting.
  • Blue Force—Police Quest in anything but name. ScummVM supports it but again, I played it long before that.
  • The Geekwad: Games of the Galaxy—another collection of standard arcade games but with wacky design. I especially liked the quizzes there.
  • Flash Traffic: City of Angels—one of the first interactive movies. The Mike has blogged about it.
  • Return to Ringworld—a sequel to Ringworld, obviously. ScummVM supports it (so I can re-play it and check whether that empty platform where you search for the details is really that horrible).
  • Man Enough—an FMV dating game.
  • Silent Steel—yet another interactive movie.
  • Free Enterprise—some business simulator.

As you can see, some of the adventure games are supported by ScummVM already but the FMV-based ones are not, which made me wonder why.

Man Enough uses the same engine as the previous games (I checked personally). The FMV sections there turned out to be animations in the same format as in Ringworld II (I hacked up a quick RLB extractor and animation decoder to check that). Side note: whoever added support for the engine was tired (for a very good reason given below) and hadn’t recognised that it uses LZW compression (so the decompressor is still slightly beautified REd code). Also, in this game you have both RLBs with elements following each other directly and ones where the elements are aligned to 16-byte boundaries, so maybe it’d be better just to check whether the first entry comes right after the library header or not (the entries all have a TMI- header so you can’t mistake it for anything else).

Flash Traffic is special: while it uses the same engine, all its resources are stored separately instead of in a single library archiving them all. Additionally, while the TMI format it uses is the same, the compression is different. Here you have an LZ77-like method which can copy bytes verbatim, copy 32-bit words from the already decoded area, fill a region with a repeating two-byte pattern, or simply zero a region. I have a working implementation of it, so I can unpack various resources from the game just fine (but besides the BFI or MPEG files that have been supported for a long time, why should I bother?).
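
For illustration, here is a sketch of those four operations; how they are signalled in the bitstream is omitted since I’d have to make up the exact layout, so the Op type below is just my model:

    // The four unpacker operations described above; the Op enum and the
    // way operands are obtained are modelling assumptions.
    enum Op {
        Verbatim(usize),                           // copy N bytes from the input
        CopyWords { offset: usize, count: usize }, // copy 32-bit words from output
        PatternFill { pat: [u8; 2], len: usize },  // repeat a two-byte pattern
        ZeroFill(usize),                           // zero out N bytes
    }

    fn apply(op: Op, src: &mut impl Iterator<Item = u8>, dst: &mut Vec<u8>) -> Option<()> {
        match op {
            Op::Verbatim(len) => {
                for _ in 0..len {
                    dst.push(src.next()?);
                }
            }
            Op::CopyWords { offset, count } => {
                for w in 0..count {
                    for b in 0..4 {                // copy word by word from history
                        let byte = dst[offset + w * 4 + b];
                        dst.push(byte);
                    }
                }
            }
            Op::PatternFill { pat, len } => {
                for i in 0..len {
                    dst.push(pat[i & 1]);          // alternate the two pattern bytes
                }
            }
            Op::ZeroFill(len) => dst.resize(dst.len() + len, 0),
        }
        Some(())
    }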

Silent Steel is their newer FMV game that’s mostly just one large MPEG video file and some logic around it (that’s why there was an interactive DVD re-release of the game later).

And now here’s the most important reason why Flash Traffic and Man Enough are not supported to this day: Tsunami Media hardcoded the game logic into the binaries, so while for SCI or SCUMM games you can write a virtual machine and interpret the original bytecode, here you need to decompile the game logic from the DOS executable (not an easy task even today) and re-implement it yourself. That’s why the tsage engine in ScummVM is full of ${game_name}/${game_name}_scenesN.cpp files for each of the three games it supports. I’m pretty sure the developers won’t refuse somebody else’s contribution adding support for the other tsage games, so you’re welcome to try.

Final news about H4M and Nightlong FMV

July 29th, 2021

In my previous post I said I was looking at them, and while some details remained unknown, I had gained some understanding.

But it turns out that The Multimedia Mike, despite what he claims, still hasn’t lost the knack for REing codecs—the first step being to check whether somebody else has done it already. And indeed, somebody else had spent two years on reverse engineering the codec. So all that was left for me was to improve its description (which I did).

The video compression there is quite curious. Huffman-coded data chunks are not that rare (TrueMotion 2 had them, for example). Huffman-coded data chunks where one chunk has a tree description coded in the beginning that is then used for several other chunks are more interesting. But the transforms were the juiciest part. It turns out you have several modes of operation: fill block, raw block, smoothed DC block (that one uses the DCs from the neighbouring blocks on all sides to create a smooth transition between them all), and the actual Adaptive Orthogonal Transform that represents a block as 1–5 scaled patterns selected from a single matrix. It reminds me a bit of SVQ1, where you had a block constructed from 2D patterns of varying size. But even more it reminds me of the Matching Pursuit experimental video codec from Berkeley University that also performed a partial transform and selected only some bases from a large set (does anybody remember that codec at all?).
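
My mental model of that last mode is a plain sum of scaled basis patterns, something like this (the names and the block size are assumptions, not the codec’s actual layout):

    // AOT reconstruction as I understand it: the block is the sum of up to
    // five scaled patterns taken from a single fixed matrix (8x8 assumed).
    fn reconstruct_aot(block: &mut [i32; 64], basis: &[[i32; 64]], picks: &[(usize, i32)]) {
        for &(pattern, scale) in picks {           // 1..=5 (pattern, scale) pairs
            for (dst, &src) in block.iter_mut().zip(basis[pattern].iter()) {
                *dst += src * scale;
            }
        }
    }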

Nightlong FMV is a bit different. The format is simple but FMV2 has packed frames, and there was no code for handling them in the 68k version of the binary. Nevertheless I could unpack them simply by dumping the frame data and extracting it with unar (PowerPacker is a strange compression format BTW: it starts reading data from the end of the buffer and outputs data starting from the end as well). The unpacked data turned out to use the same format, except that the frame size is now a full 640×360 instead of 320×180 trying to look twice as large. I documented it as well.

P.S. I still have some stuff to look at but more suggestions are always welcome.

REing codecs from exotic platforms with Ghidra

July 28th, 2021

In a recent post I asked for some stuff to RE and I got some (but more pointers are welcome).

Among those there were the H4M format for Gamecube (kudos to The Mike for providing me with the samples) and the FMV format (yes, that’s the extension as well) from the Nightlong game on Amiga. This finally gave me a chance to look at how Ghidra 10.0.1 fares with such exotic executables.

First I had to install special loaders for the executable formats (and hack their manifests to claim they are for 10.0.1 and not 10.0) and also ISA support for the special PPC extensions in the Gamecube PowerPC CPU. After that it was just a matter of loading the binaries and analysing them. And there’s nothing else to write about it: it just works.

Now some words about the formats. I’ve not fully REd them (and I don’t know how much time it’ll take either, it all depends on the weather and my mood) but there are some things I can say about them already.

FMV is a fixed 320×180 paletted video coded in 4×4 tiles. The compression is very simple: you can have a raw 4×4 tile, a sub-divided 4×4 tile where only some pixels in each 2×2 block are updated, or a skip command (for a single block or for whole rows). There is also fmv2, where each frame seems to be further (or instead?) compressed with PowerPacker (at least that’s how I interpret frame data starting with PP20).
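
Modelled in code, the tile operations look something like this; only the set of operations comes from the format, the names and the command encoding below are my own placeholders:

    // A hedged model of the FMV tile operations on a 320x180 paletted frame.
    const WIDTH: usize = 320;

    enum Tile {
        Raw([u8; 16]),                // a full 4x4 tile, 16 palette indices
        Sparse([Option<[u8; 4]>; 4]), // per-2x2 sub-block update, None = keep old
        Skip(usize),                  // leave N tiles (or whole rows) unchanged
    }

    fn draw_tile(frame: &mut [u8], x: usize, y: usize, tile: &Tile) {
        match tile {
            Tile::Raw(px) => {
                for row in 0..4 {
                    frame[(y + row) * WIDTH + x..][..4]
                        .copy_from_slice(&px[row * 4..row * 4 + 4]);
                }
            }
            Tile::Sparse(subs) => {
                for (i, sub) in subs.iter().enumerate() {
                    if let Some(px) = sub {        // update only the coded 2x2 blocks
                        let (sx, sy) = (x + (i & 1) * 2, y + (i >> 1) * 2);
                        for row in 0..2 {
                            frame[(sy + row) * WIDTH + sx..][..2]
                                .copy_from_slice(&px[row * 2..row * 2 + 2]);
                        }
                    }
                }
            }
            Tile::Skip(_) => {}        // the caller just advances its tile position
        }
    }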

H4M seems to be a rather advanced format. From what I see it has small frames combined into large blocks, with each block also storing the previous block size (for seeking backwards like FLV, I suppose). The audio codec looks like a typical console speech codec. The video codec looks interesting: it has B-frames for starters and it’s definitely not an H.264 rip-off. From what I could gather, it codes several data sources in the frame (i.e. each kind of data is stored separately and not interleaved with the rest), some of those data sources contain Huffman-packed data, others are used just to read plain bits and bytes (yes, I saw only functions that read a single bit or eight bits, nothing else). Then the data from those sources is used to code coefficients and zero runs for whole planes (I’m not sure yet whether it has blocks at all).

I’ve looked just at I-frame decoding, no reconstruction or P/B-frame decoding yet, but what I’ve seen already makes it a very interesting codec to examine. I was asked to look at it many years ago but I declined the request since I had little experience with both PowerPC assembly and the executable format the Gamecube uses (the only time I had looked at PowerPC assembly was for Fruity Intermediate Codec, and even then IIRC I finished REing it only after an x86 binary became available). Now, thanks to Ghidra with its decompiler and to the enthusiasts providing support for such fringe formats, I can look at these formats too.

Rust needs proper stand-alone assembler support

July 27th, 2021

Back when I gave my arguments why I don’t consider Rust a mature language, one of those arguments was that it lacks proper assembler support, which a systems programming language requires since some of the tasks you need to perform (including optimisation) need as low-level access as you can get. Here I would like to argue why, while asm!{} may be enough for most cases, it’s definitely not for mine.

Looking for formats to look at

July 21st, 2021

As I mentioned in one of the previous posts, I’ve achieved all the goals I initially set for NihAV except for trying to write a proper encoder (and no, world domination has never been on my list). Unfortunately that will not be so easy to do and I’d like to have a distraction from time to time.

Usually I distracted myself with reverse-engineering some format and maybe implementing decoding support for it in NihAV, but recently I realised that I had run out of low-hanging fruit. There should be interesting codecs and game containers out there still waiting for their chance, but I could not remember any. I even went through the ScummVM code and documented the video formats from there in The Wiki, that’s how bored I was.

So I’d be grateful if somebody could point me to a thing to RE. Last time, when Peter drew my attention to VGM/XVD, it turned out to be a very fulfilling experience.