Archive for October, 2015

NihAV — Processing Graph Notes

Saturday, October 10th, 2015

I’m giving only a short overview for now; more to come later.

Basically, you have NAGraph that connects different workers (processing units with queues).
The lock-free NAQueue should accept only objects of a certain type (side note: introduce libnaarch/naatomics.h).
Also, those objects should have a common base (NAGraphObject) that extends NAClass by adding side data and signalling the subtype (NAPacket, NARawData or NAFrame for now).
Multiple object types with a common base make it possible to have the same processing interface for everything (after all, both encoders and decoders simply take some input data and output something else).
An elementary stream from a demuxer can either be fed to a parser filter that produces proper packets or be accepted directly by a decoder or muxer.
Later I should make those parser filters auto-inserted too.
A uniform interface should also allow easier integration of third-party components, even in binary form (if there’s somebody willing to use a not-yet-existing library like this).
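Something like this, probably (nothing here is final; only the type names above are decided, every field and function below is provisional):

#include <stddef.h>
#include <stdint.h>

typedef struct NAClass {             /* stand-in for the real base class */
    const char *name;
    int         refcount;
    void      (*destroy)(struct NAClass *obj);
} NAClass;

typedef enum NAGraphObjectType {
    NA_OBJECT_PACKET,    /* NAPacket:  coded data with timestamps   */
    NA_OBJECT_RAW_DATA,  /* NARawData: unparsed elementary stream   */
    NA_OBJECT_FRAME,     /* NAFrame:   decoded audio or video frame */
} NAGraphObjectType;

typedef struct NASideData {
    int                 type;
    size_t              size;
    uint8_t            *data;
    struct NASideData  *next;
} NASideData;

typedef struct NAGraphObject {
    NAClass             base;      /* extends NAClass...              */
    NAGraphObjectType   subtype;   /* ...by signalling the subtype    */
    NASideData         *side_data; /* ...and by carrying side data    */
} NAGraphObject;

/* and every worker can expose the same processing interface:
 * take one graph object in, possibly produce another one */
typedef int (*na_worker_process)(void *worker_ctx,
                                 NAGraphObject *in,
                                 NAGraphObject **out);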


Zonal partitioning of the graph (inputs, processing, outputs) should maybe include generic filters (e.g. de/noise) too, and maybe a flag should be set on NAFrame to tell whether it can be modified in place or should be kept intact.
Errors should be caught right at the graph processing stage (do that with callbacks and have fun).
Roughly, this should be the architecture for NihAV till the end of its days since it should be future-proof.
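The frame flag and the error callback could look roughly like this (again, just a provisional sketch, all names are subject to change):

typedef struct NAGraph NAGraph;

/* flag set on a frame when some other consumer still needs it intact,
 * so in-place filters know they have to copy it first */
#define NA_FRAME_FLAG_READ_ONLY  (1 << 0)

/* errors get reported right at the graph processing stage through a
 * callback installed on the graph instead of return codes bubbling up
 * through every single worker */
typedef void (*na_graph_error_cb)(void *opaque,
                                  const char *worker_name,
                                  int error_code);

int na_graph_set_error_callback(NAGraph *graph,
                                na_graph_error_cb cb,
                                void *opaque);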

Of course, all things described here should be implemented too eventually (sigh).

Rants on Data Compression

Friday, October 9th, 2015

… When I was a young piglet I liked to read the rather famous paper by Bell, Cleary and Witten discussing general data compression and PPM. The best phrase there was that the progress in data compression is mostly defined by larger amounts of RAM available. I still believe those words to be true and below I present my thoughts on the current state of data compression. Probably it’s trivial, well-known, obvious or wrong to anybody who knows a bit about data compression, but well, it’s my blog and my discarded thoughts dumpster.

General data compression

Let’s start from the very end — entropy coding. There are two approaches: coding into an integer amount of bits or coding as close to Shannon’s entropy limit as possible. For both we have had optimal coding methods for about half a century (Huffman coding — 1952, arithmetic coding — mid-1970s). You cannot improve the compression ratio here, so the following schemes are mostly tradeoffs sacrificing a bit of compression for speed gains (especially in the form of (pseudo-)arithmetic coders operating only on binary symbols). The only outstanding thing is the so-called Asymmetric Numeral Systems, but I suspect they are isomorphic to traditional entropy coders.
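A toy illustration of the gap between the two approaches (numbers made up): for a three-symbol alphabet with probabilities 0.9, 0.05 and 0.05 no whole-bit code can spend less than 1.1 bits per symbol on average, while the entropy is only about 0.57 bits.

#include <math.h>
#include <stdio.h>

int main(void)
{
    /* three symbols, one of them very likely; probabilities made up */
    double p[3]   = { 0.90, 0.05, 0.05 };
    /* optimal whole-bit code lengths for this distribution: 1, 2, 2 */
    int    len[3] = { 1, 2, 2 };

    double entropy = 0.0, huffman = 0.0;
    for (int i = 0; i < 3; i++) {
        entropy += -p[i] * log2(p[i]);
        huffman +=  p[i] * len[i];
    }
    /* prints roughly 0.569 vs 1.100 bits per symbol - an arithmetic or
     * range coder can approach the former, a prefix code cannot go
     * below the latter */
    printf("entropy %.3f bps, Huffman %.3f bps\n", entropy, huffman);
    return 0;
}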

Now let’s look at what feeds data to the entropy coders. There are two main approaches (often combined): context modeling (probably the real foundation for the current highest-compression methods — PPM — was proposed in the mid-1980s) and LZ77 (guess the year yourselves). Are there improvements in this area? Yes! The principle is simple — the better you can predict the input, the better you can code it. So if you combine different methods to better handle your data you can get some gains.
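In its most boiled-down form that principle is just an adaptive model, e.g. the kind of binary model that feeds a binary coder (the precision and adaptation rate below are arbitrary example values):

#include <stdint.h>

#define PROB_BITS   12
#define PROB_ONE    (1 << PROB_BITS)
#define ADAPT_SHIFT 5

/* keep an estimate of P(bit = 1) and nudge it towards every coded bit;
 * a context model keeps many of these, one per context */
typedef struct BitModel {
    uint16_t p1;   /* current estimate of P(bit = 1), scaled to PROB_ONE */
} BitModel;

static void bit_model_init(BitModel *m)
{
    m->p1 = PROB_ONE / 2;           /* start from "no idea", i.e. 1/2 */
}

static void bit_model_update(BitModel *m, int bit)
{
    if (bit)
        m->p1 += (PROB_ONE - m->p1) >> ADAPT_SHIFT;
    else
        m->p1 -=  m->p1             >> ADAPT_SHIFT;
}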

And yet the main compression gain here lies in proper preprocessing. From table or executable code preprocessing (table data usually differs only a little between entries, and for executables you can get some gains if you replace relative jump/call addresses with absolute values) to the Burrows–Wheeler transform plus move-to-front plus RLE if needed, etc.
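Move-to-front, for instance, is just a few lines; after BWT it turns clusters of similar symbols into lots of small values that RLE and the entropy coder handle much better:

#include <stddef.h>
#include <stdint.h>
#include <string.h>

/* move-to-front as used after BWT: recently seen byte values get small
 * indices, so the output is dominated by values that RLE and the
 * entropy coder like; plain illustration, nothing project-specific */
static void mtf_encode(uint8_t *buf, size_t size)
{
    uint8_t table[256];
    for (int i = 0; i < 256; i++)
        table[i] = i;

    for (size_t i = 0; i < size; i++) {
        uint8_t sym = buf[i];
        uint8_t idx = 0;
        while (table[idx] != sym)
            idx++;
        /* output the position and move the symbol to the front */
        memmove(table + 1, table, idx);
        table[0] = sym;
        buf[i]   = idx;
    }
}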

Audio compression

You have four main targets here: general lossy compression, speech compression, lossless fast compression and lossless crazy compression.

General lossy compression follows the scheme established in the 1990s or earlier: transform to the frequency domain, group the frequencies and code the frequency bands. Most of the methods are quite old and progress is defined mostly by how much RAM and CPU users are willing to sacrifice on it. For example, CELT (the main part of Opus; the other part, SILK, is an ordinary speech codec) is not that much different in design from G.722.1 from the late 1990s.
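In code the skeleton looks roughly like this (band edges and the quantiser are made up for the illustration; real codecs put psychoacoustics and smarter bit allocation on top):

#include <math.h>

#define NUM_BANDS 4

static const int band_start[NUM_BANDS + 1] = { 0, 16, 48, 112, 256 };

/* after the time-to-frequency transform: group coefficients into bands,
 * derive one scale factor per band and quantise the coefficients
 * against it */
static void code_bands(const float *coeffs, float *scales, int *quant)
{
    for (int b = 0; b < NUM_BANDS; b++) {
        float energy = 0.0f;
        for (int i = band_start[b]; i < band_start[b + 1]; i++)
            energy += coeffs[i] * coeffs[i];
        float scale = sqrtf(energy / (band_start[b + 1] - band_start[b]));
        scales[b] = scale;

        /* coefficients are then coded as small integers relative to it */
        for (int i = band_start[b]; i < band_start[b + 1]; i++)
            quant[i] = scale > 0.0f ? (int)lrintf(coeffs[i] / scale) : 0;
    }
}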

Speech coding follows the canons of the 1980s too — performing LPC, coding the filter coefficients and other information enhancing signal reconstruction (pulse position, pitch tilt, etc.).

Lossless fast compression (aka compression for normal usage) follows suit too — you have LPC or some adaptive filters used for prediction plus residue coding (usually with Golomb/Rice codes from the 1960s-1970s; BTW the original Golomb paper is AWESOME, they don’t write papers like that nowadays).
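The residue coding part is tiny, which is part of the charm; a Rice coder is roughly this (put_bits() stands for whatever bit writer the codec actually uses):

#include <stdint.h>

void put_bits(uint32_t value, int nbits); /* placeholder bit writer */

/* Rice coding of a prediction residue: fold the sign, send the
 * quotient in unary and k low bits verbatim */
static void rice_encode(int32_t residue, int k)
{
    /* fold the sign: 0, -1, 1, -2, 2, ... -> 0, 1, 2, 3, 4, ... */
    uint32_t val = residue >= 0 ? (uint32_t)residue * 2
                                : (uint32_t)(-(int64_t)residue) * 2 - 1;

    uint32_t q = val >> k;             /* quotient goes out in unary   */
    for (uint32_t i = 0; i < q; i++)
        put_bits(0, 1);
    put_bits(1, 1);                    /* terminating one bit          */
    if (k > 0)                         /* then k raw remainder bits    */
        put_bits(val & ((1u << k) - 1), k);
}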

Lossless crazy compression (aka spend hours compressing it and as much on decompressing it) follows the same approach but with longer filters, usually even several filters of different sizes applied one after another, plus better residue coding schemes.

Image compression

Here you have a greater variety of coding methods but most of them are very old (just look up when the Haar wavelet was proposed). Especially funny is that JPEG is still holding strong despite being more than twenty years old. I still remember the so-called fourth-generation image compression (separating an image into region borders and textures to fill them, and coding those); it still hasn’t taken off despite being introduced in the late 1980s or so.

The only interesting development happens in lossless image compression, but neither 2-D LZ77 (WebP) nor context modeling (FLIF) is a particularly new idea.

Video compression

Modern codecs are all very similar and they are usually ripoffs of H.26x (there are two exceptions — Thor, which is not a ripoff just because it was designed while openly acknowledging that some parts are taken from H.265, and Daala, which is more original and is discussed below).

So nowadays you have a very limited subset of the ideas that were present in video codecs from the 1990s — it’s boring macroblocks (now with quadtree partitioning instead of a fixed size), motion compensation (though now you have more reference frames to choose from) and a binary entropy coder (except for Thor, which went the way of RealVideo 3/4 with context-adaptive VLCs). Even the trend of adding special coding tools for special content doesn’t look that original (if you remember the countless screen codecs and MPEG-4 Audio, *barf*).
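For illustration, the quadtree partitioning idea itself fits in a dozen lines: keep splitting a block into four quadrants while some cost metric says it’s worth it (block_cost() and the minimum size below are placeholders, not anybody’s real encoder):

#define MIN_BLOCK_SIZE 8

double block_cost(int x, int y, int size); /* placeholder RD cost  */
void   code_block(int x, int y, int size); /* placeholder coder    */

static void partition(int x, int y, int size)
{
    if (size > MIN_BLOCK_SIZE) {
        int    half  = size / 2;
        double whole = block_cost(x, y, size);
        double split = block_cost(x,        y,        half)
                     + block_cost(x + half, y,        half)
                     + block_cost(x,        y + half, half)
                     + block_cost(x + half, y + half, half);
        if (split < whole) {
            partition(x,        y,        half);
            partition(x + half, y,        half);
            partition(x,        y + half, half);
            partition(x + half, y + half, half);
            return;
        }
    }
    code_block(x, y, size);
}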

The only exception for now is Daala, which uses more original ideas, but I fear it will end up the same boring codec because it is not crazy enough to make a breakthrough. I believe it should do more crazy preprocessing at least and maybe better modeling, e.g. taking more than the nearest neighbours into account (maybe even use something PPM-like for element coding and not just probability mixing). Look at JBIG for inspiration maybe 😉

Conclusions

Don’t expect miracles in data compression to happen anytime soon, but a couple of percent of improvement in specialised fields at least once in a decade is possible and even expected.

FAQ

Saturday, October 3rd, 2015

Since I’ve been asked the same questions over and over again I’ve decided to make a short (for now) FAQ page.

  • How many years does it take to get citizenship in Germany? 7–8 years.
  • How long have you been living in Germany? Since Spring 2010, do the math yourself.
  • So you’ll get your German citizenship in a couple of years, right? Maybe. It’s the same kind of maybe as in ‘Berlin-Brandenburg airport will be open in a couple of years.’ And it does not depend on me much.
  • Can you help me with a ProRes issue … I can but I have neither the desire nor the obligation. All the Trocadero I got for writing an encoder is long gone and I don’t participate in any projects that offer ProRes support, so inquire there.
  • Can you look at this codec … I can but no promises — I rarely have a desire to do anything these days.
  • Is NihAV real? More or less: it still lacks a lot of design and code but there are some bits implemented already. The design is described in this blog as it appears; the code is developed as who-cares-source.
  • Why do you blame lu_zero? Oh, there are so many reasons for that and new ones keep appearing almost every day. Mostly it’s for the things he was supposed to do but still hasn’t done (and is unlikely to do in the foreseeable future): AVScale design and implementation, writing blog posts on certain topics (often I end up writing them, which is yet another reason to blame him), not doing much about ASF or RealMedia demuxers and the related delayed work, for personal stuff (like preventing me from trying Torino trams and underground), for missing technical stuff in a wiki. Oh, and for being at least two different persons. There’s more that I can’t remember right now.
  • When will you visit Pelhřimov? Dunno, maybe when I have more than three free days.