Bink: ‘lossy’ coefficients reading.

August 29th, 2009

The RTMP client seems to work fine, and RTMP support in FFserver is not that close, so I'm working on REing a codec which seems to be rather widespread in games.

OK, now to technical details. ‘Lossless’ coefficient coding is similar but a bit more complicated.

For each 8×8 block there is a 7-bit number specifying the number of masks to read (a mask being a part of the coefficient), which slightly resembles progressive JPEG coding: a coefficient value may be composed from several masks, with the high bits decoded first. Decoding continues until all masks are read.

The coding method is not that comprehensible though: there is a list of (start coefficient, mode) entries, and decoding iterates over this list, performing some action depending on the mode. Have I mentioned that the aforementioned list may change during operation?

And here's the decoding algorithm (if I got it right):


mask = 1 << get_bits(3)

iterate over already decoded coefficients:
    if read bit = 1 then add mask to the coefficient

iterate over list of modes until end is reached,
if (coef, mode) == (0, 0) or read bit = 0 then skip current entry:

    mode = 0:
        set current entry to (cur_coef+4; mode = 1)
        for(i = 0; i < 4; i++, cur_coef++){
            if(get_bit()) prepend list with (cur_coef, mode = 3)
            else coeffs[cur_coef] = get_bit() ? -mask : mask;
        }

    mode = 1:
        set current entry to (cur_coef; mode = 2)
        append (cur_coef+4; mode = 2), (cur_coef+8; mode = 2), (cur_coef+12; mode = 2) to the list

    mode = 2:
        set current entry to (0; mode = 0)
        for(i = 0; i < 4; i++, cur_coef++){
            if(get_bit()) prepend list with (cur_coef, mode = 3)
            else coeffs[cur_coef] = get_bit() ? -mask : mask;
        }

    mode = 3:
        coeffs[cur_coef] = get_bit() ? -mask : mask;
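
To make the bit-plane part a little more concrete, here is a minimal C sketch of a single pass. BitReader, get_bits() and get_bit() are hypothetical stand-ins for a real bit reader, the list-of-modes bookkeeping is left out, and the refinement follows the pseudocode literally.

/* Minimal sketch of one bit-plane pass from the pseudocode above.
 * BitReader, get_bits() and get_bit() are assumed helpers, not real API. */
#include <stdint.h>

typedef struct BitReader BitReader;           /* assumed bit-reader type      */
unsigned get_bits(BitReader *br, int n);      /* assumed: read n bits         */
unsigned get_bit (BitReader *br);             /* assumed: read a single bit   */

static void bink_plane_pass(BitReader *br, int32_t coeffs[64], int num_decoded)
{
    int32_t mask = 1 << get_bits(br, 3);      /* value of the current bit-plane */

    /* refine already decoded coefficients: one bit per coefficient tells
     * whether this mask is added to it (the pseudocode literally says
     * "add mask"; a real decoder also has to respect the sign here) */
    for (int i = 0; i < num_decoded; i++)
        if (get_bit(br))
            coeffs[i] += mask;
}

/* the terminal action used by modes 0, 2 and 3: a freshly started
 * coefficient gets the current mask as magnitude plus a sign bit */
static void bink_new_coeff(BitReader *br, int32_t *coeff, int32_t mask)
{
    *coeff = get_bit(br) ? -mask : mask;
}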

Brief notes about Bink

July 13th, 2009

If you play a lot of games (or maybe not that much) and are interested in watching their FMVs, you will hear about Bink sooner or later.

This is a rather widespread codec in games and it's sad that we still don't have an opensource decoder for it.

Here are some facts about it:

  • Container format seems to inherit a lot from Smacker.
  • There are two different audio codecs differing by transform.
  • Bink video is mostly static Huffman coding + vector quantisation or DCT.

So, why not reimplement it?
Here are some more details about video:

  1. It employs static Huffman coding – there are 16 predefined trees which are used to decode most of the data; the only exception is block coefficients. A tree definition includes only the tree number and how to reorder the table of symbols for the current data.
  2. Almost all values are coded in bundles for several blocks at once (usually for half of a frame).
  3. 8-bit values may be encoded as independent nibbles, or the high nibble may get context-dependent encoding, where it is decoded with the tree whose number equals the last high nibble (so you need 16+1 trees for that, but who cares). A small sketch follows this list.
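
Point 3 is easier to see in code. Here is a rough sketch with a hypothetical read_tree_symbol() helper; which tree handles the low nibble is my assumption, and the names are mine, not Bink's.

/* Rough sketch of the context-dependent nibble scheme from point 3.
 * BitReader, Tree and read_tree_symbol() are assumed, illustrative types. */
typedef struct BitReader BitReader;
typedef struct Tree      Tree;
unsigned read_tree_symbol(BitReader *br, const Tree *tree);   /* assumed */

static unsigned read_byte_with_context(BitReader *br, const Tree *const trees[17],
                                       unsigned *prev_high)
{
    unsigned hi = read_tree_symbol(br, trees[*prev_high]); /* tree picked by the last high nibble */
    unsigned lo = read_tree_symbol(br, trees[16]);         /* assumed: the extra tree for the low nibble */

    *prev_high = hi;                                        /* becomes the context for the next value */
    return (hi << 4) | lo;
}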

The rest will be available as it goes.

A bit on Interplay MVE 16-bit

April 29th, 2009

For those who are interested in playing 16-bit MVE files (yes, Mike, I am talking about you), here are some bits of information I've gathered at my leisure:

  • you have to skip 16 bytes of the block map at the beginning instead of the 14 used for 8-bit MVE
  • colours are now stored as 15-bit values (obvious, isn't it?), and the high bit may be set to signal the pattern fill order (8-bit MVE just compared the colour values, which still works); see the sketch after this list
  • for some opcodes the pattern fill order (i.e. the subblock scan order) was changed a bit
  • the meaning of some opcodes was changed completely: opcode 3 does not require additional bytes to be read anymore.
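
Here is a tiny sketch for the colour-related point. Little-endian byte order and the function names are my assumptions, not verified facts about ipvideo.c.

#include <stdint.h>

/* read one colour: the low 15 bits are the RGB555 value */
static uint16_t mve16_read_colour(const uint8_t *buf)
{
    return (uint16_t)(buf[0] | (buf[1] << 8));
}

/* bit 15 may signal the pattern fill order explicitly; 8-bit MVE derived
 * the same information by comparing the two colour values */
static int mve16_fill_order_flag(uint16_t colour)
{
    return (colour >> 15) & 1;
}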

I didn't have the desire to complete it, especially because it's no fun to debug how motion is stored, so I just hacked the existing decoder a bit to decode 16-bit files. Here's a picture produced by the maimed libavcodec/ipvideo.c:

[image: interplay16]

In the memory of my ThinkPad

April 29th, 2009

I bought my brand new (well, refurbished) IBM ThinkPad 390 six years ago. While its hardware may be laughable by current standards – PII-266, 192MB RAM, 4GB HDD – it was the computer where I started developing for FFmpeg. GCC compiling libavcodec/motion_est.c was the reason for adding 128MB to the original 64MB of RAM. IIRC, all of my codec development until the 2006 GSoC (the VC-1 decoder) was done on it.

When I moved to a Mac Mini it still served me – as a router (it's hard to find a COM port on modern hardware, so the modem was connected to the TP390; later it was an ADSL modem and a second PCMCIA network card), as an x86 platform (mostly for running IDA and binary codecs) and for Internet-related stuff (CVS and git server, mail fetching, a small web server, downloading and such).

Here's how it looked in its last years:

[image: i390]

Rest in peace.

Now I have an Asus EEE 701 working in its place. Since it's more compact, I can also fit the BeagleBoard on the table next to it.

A bit of new hardware

March 21st, 2009

I've wanted to write another useless rant about idiocy as a governing policy in our lives (for example, 1st class railroad cars being worse than 2nd class yet more expensive, or how "express" is translated into Ukrainian as "пришвидшений" or "прискорений", both meaning "accelerated" or "sped-up"), but instead I have some more pleasant news.

I've spent the rest of my GSoC money on a BeagleBoard and it took about 15 days to be delivered (which is rather impressive by local standards). So I hope to start hacking on it too (I'm pretty sure it would be good for both FFmpeg and me if I learn ARM assembly and the NEON unit). In my opinion it would really benefit from a built-in network adapter (there's a place for it on the PCB too) though; since this is not a Mac, saying that USB should be enough for everything is rather lame.

Notes on AAC quantisation

March 19th, 2009

I should have written this earlier if not for the non-FFmpeg work I have to do here. BTW, are there any linguists around who can explain the relation between the bureaucratic and the textile ("bureaucracy" comes from a sort of cloth used to cover tables, "red tape" is rather obvious, and the Russian "волокита", "канитель" and "проволочка" are also related to the process of obtaining thin threads)? Ahem.

AAC coding has two computationally costly operations — MDCT and coefficient quantisation. While the former takes more cycles per call, the latter is usually invoked several times for each frame, so those times add up and outweigh the MDCT in bad encoders (like mine). From rate-distortion theory we know how to determine proper quantisers for AAC: the distortion caused by quantisation multiplied by lambda, plus the number of bits needed to code the band with that quantiser, should be minimal for a given value of lambda.
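
As a tiny illustration of that criterion (not my encoder's actual code), here is a brute-force search over quantisers minimising lambda*D + R for one band; band_distortion() and band_bits() are hypothetical helpers.

/* Pick the quantiser q minimising lambda * D(q) + R(q) for one band.
 * The two helpers are assumed, not part of any real encoder. */
double band_distortion(const float *coeffs, int n, int q);   /* assumed */
int    band_bits      (const float *coeffs, int n, int q);   /* assumed */

static int best_quantiser(const float *coeffs, int n, double lambda,
                          int q_min, int q_max)
{
    int    best_q    = q_min;
    double best_cost = 1e300;

    for (int q = q_min; q <= q_max; q++) {
        double cost = lambda * band_distortion(coeffs, n, q)
                    + band_bits(coeffs, n, q);
        if (cost < best_cost) {
            best_cost = cost;
            best_q    = q;
        }
    }
    return best_q;
}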

How could we achieve this? Well, use one of three approaches:

  1. Assign some fixed quantizers
  2. Use some ad hoc rule to determine the quantiser and then refine its value a bit (aka the heuristic approach; since it gives good speed, it is widely used)
  3. Try all possible quantisers by brute force or Viterbi method (optimal but very slow)

There is one catch with heuristics: if your initial guess at the quantiser is not good, then refining it either takes a lot of time or gives you a far from optimal result. Trellis-based search is implemented in my encoder and results in encoding around 20x slower than realtime (i.e. encoding one second of audio takes 20 seconds of CPU time) on modern CPUs. I'm playing with something heuristic and fast instead.

Now to quantising itself.

Each coefficient is quantised as out = (int)pow(in / quantiser, 0.75);. Division of floating-point numbers is slow, and raising a number to a power is even slower. You can convert MDCT coefficients to the power of three fourths up front (quantisers are likewise converted in a precomputed table), thus getting rid of the power operation. FAAC also pre-multiplies the coefficients so that quantising them only amounts to taking the integer part. My encoder just multiplies the possible codebook vectors by the quantiser and compares them with the input coefficients, leaving those intact. I also had an idea to represent MDCT coefficients in base pow(2, 0.25), which would make them easy to manipulate, but it still has to be tested whether the base conversions would eat all of the gain. I have also tried several other optimisations, like not matching coefficients against all codebook vectors but only against close enough ones. More approaches to try.
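
Here is a small sketch of the "get rid of pow() in the inner loop" idea: coefficients are raised to 0.75 once and the quantiser steps come from a precomputed table, so quantising becomes one multiply per coefficient. The step formula 2^(q/4) and the table size are illustrative, not the exact AAC constants.

#include <math.h>

#define QUANT_TAB_SIZE 256                     /* illustrative size             */
static float quant_mult[QUANT_TAB_SIZE];       /* quant_mult[q] = (2^(q/4))^-0.75 */

static void init_quant_mult(void)
{
    for (int q = 0; q < QUANT_TAB_SIZE; q++)
        quant_mult[q] = powf(2.0f, -0.75f * q * 0.25f);
}

static void quantise_band(const float *in, int *out, int n, int q)
{
    for (int i = 0; i < n; i++) {
        float c = powf(fabsf(in[i]), 0.75f);   /* in a real encoder: done once per frame   */
        out[i]  = (int)(c * quant_mult[q]);    /* no pow()/division per quantiser tried    */
    }
}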

(I hope these notes will form “How I Wrote the Best Opensource AAC Encoder Around (to Accompany x264)” memoirs :-S )

My proposal on roadmap for FFmpeg

March 17th, 2009

Here's the thing that either Compn, known for his passion for documenting codecs, or Mike, known for his passion for diagrams, charts and codecs, should have done a loooooong time ago.

While the same information may be obtained from the Multimedia Wiki, a graphical layout should be handier for claims like "… include reverse-engineering of all Real video formats" here. I am also aware of the list of supported codecs in the MPlayer documentation, but it's boring and not very useful as a reference.

Here's how I like it — green status for supported codecs, red for unsupported. From a glance at it you can see what's missing and what should be added to my beloved video conversion tool.

[image: scheme]

Note: I know that we have to enhance FFmpeg in areas other than format support (the filter system, for example). Patches welcome.

General psychoacoustic <-> coding interaction principles

March 5th, 2009

OK, let's suppose we have some abstract subband coder. What does it do? It performs some transform on an input block of data (like an MDCT or a QMF filterbank), then the obtained frequencies are grouped, quantised and coded.

There could be many approaches but usually there are two general principles employed:

  • Some frequencies matter more than others.
  • Energy carried by subbands matters too.

The psychoacoustic model gives us a list of subband weights indicating their importance. Now what could the encoder do with them? Quantise the input data and code it. There are three approaches:

  1. Perform optimal coding using psychoacoustic data (good but slow)
  2. Do some heuristics to get some quick and dirty approximation (most popular approach)
  3. Ignore psychoacoustics completely (seems to be popular too)

Optimal coding may be done by employing the Viterbi method in one form or another. Heuristics usually work this way: take some initial prediction for the quantiser, then refine it a bit until the result is close enough to the desired one (a small sketch follows).
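
A minimal sketch of that refinement loop, with a hypothetical quantise_and_count_bits() helper and arbitrary thresholds, could look like this:

/* Start from a guess and nudge the quantiser until the coded size is close
 * to the bit budget.  The helper and the thresholds are assumptions. */
int quantise_and_count_bits(const float *coeffs, int n, int q);   /* assumed */

static int refine_quantiser(const float *coeffs, int n,
                            int q_guess, int target_bits)
{
    int q = q_guess;

    for (int iter = 0; iter < 16; iter++) {      /* cap the number of refinements   */
        int bits = quantise_and_count_bits(coeffs, n, q);

        if (bits > target_bits)
            q++;                                 /* coarser quantiser -> fewer bits */
        else if (bits < target_bits - target_bits / 10)
            q--;                                 /* finer quantiser -> more bits    */
        else
            break;                               /* close enough to the budget      */
    }
    return q;
}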

More on AAC-specific coding later.

AAC encoder and psy model

March 5th, 2009

As you may know, I am working (mostly NOT working though :(, but I still remember about it) on an AAC encoder. This morning I made a simpler psychoacoustic model inspired by FAAC (yes, Dark Shikari, FAAC has some sort of hardcoded psy model) work with my encoder.

I'll try to use this blog for its original purpose — to formalise my thoughts on the subject at hand. I think many posts on different aspects of psychoacoustics will follow before a more or less suitable encoder appears. "More or less suitable" means it should be at least a good audio encoding counterpart to x264 (while "fully suitable" means total world domination).

Too bad there’s not enough time (always).

A bit of news

March 3rd, 2009

Looks like I've neglected my blog for some time. In order to improve the situation a bit, here are some assorted pieces of news:

  • FFmpeg release — probably we will have one Really Soon Now. The previous release was made before I started developing for FFmpeg.
  • RV3/4 is improving bit by bit. For now most of the trouble lies in incorrect motion vector prediction for B-frames. I hope to fix it one day (or, preferably, that someone else will, but that's even more unlikely).
  • SwScaler is slowly moving towards being usable under the LGPL. Probably only the x86 SIMD code will be left under the GPL.
  • PB-frames support was added, so the only one who cares about Intel codecs (Benjamin, son of Lars) can watch i263 with the lavc decoder now.
  • I took some time to understand the ELBG code in lavc and wrote a simple 15-bit MS Video1 encoder. Patch pending.
  • I've tried to RE BMV (the video format employed in Discworld II and Discworld Noir). A Discworld II decoder is in the ScummVM sources, so I gave DW Noir a shot, since it is unlikely to be supported by any opensource engine. While figuring out the header and container format was a piece of cake and working out the audio compression scheme was easy (boy, do they like SWAR!), I'm having trouble determining which function is used for video decoding.

More news will follow eventually.