NihAV — Processing Graph Notes

October 10th, 2015

I’m giving only a short overview for now, more to come later.

Basically, you have NAGraph that connects different workers (processing units with queues).
A lock-free NAQueue should accept only objects of a certain type (side note: introduce libnaarch/naatomics.h).
Also those objects should have a common base (NAGraphObject) that extends NAClass by adding side data and signalling the subtype (NAPacket, NARawData or NAFrame for now).
Multiple object types with a common base make it possible to have the same processing interface (after all, both encoders and decoders simply take some input data and output something else).
An elementary stream from a demuxer can either be fed to a parser filter that will produce proper packets or be accepted directly by a decoder or muxer.
Later I should make those parser filters auto-inserted too.
A uniform interface should allow easier integration of third-party components, even in binary form (if there’s somebody willing to use a not-yet-existing library like this).
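
To make this a bit more concrete, here is a minimal sketch of how such a common base could look; the subtype enum, the field layout and the names in it are my assumptions for illustration, not a settled design:

[sourcecode language="c"]
/* Hypothetical sketch of the common base for graph objects.
 * NAGraphObjType and the field names are illustrative only. */
typedef enum NAGraphObjType {
    NA_GRAPH_OBJ_PACKET,
    NA_GRAPH_OBJ_RAW_DATA,
    NA_GRAPH_OBJ_FRAME,
} NAGraphObjType;

typedef struct NAGraphObject {
    NAClass        base;      /* common NihAV object base */
    NAGraphObjType subtype;   /* tells queues and workers what this really is */
    void          *side_data; /* attached side data, if any */
} NAGraphObject;
[/sourcecode]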


Zonal partitioning of the graph (inputs, processing, outputs) should maybe include generic filters (e.g. denoising) too, and maybe NAFrame should carry a flag saying whether it can be modified or should be kept intact.
Errors should be caught right at the graph processing stage (try doing that with callbacks and have fun).
Roughly, this should be the architecture for NihAV till the end of its days since it should be future-proof.

Of course, all things described here should be implemented too eventually (sigh).

Rants on Data Compression

October 9th, 2015

… When I was a young piglet I liked to read the rather famous paper by Bell, Cleary and Witten discussing general data compression and PPM. The best phrase there was that progress in data compression is mostly defined by the larger amounts of RAM available. I still believe those words to be true and below I present my thoughts on the current state of data compression. Probably it’s trivial, well-known, obvious or wrong to anybody knowing a bit about data compression but well, it’s my blog and my discarded thoughts dumpster.

General data compression

Let’s start from the very end — entropy coding. There are two approaches: coding into an integer amount of bits or coding as close to Shannon’s entropy limit as possible. For both we have had optimal coding methods for about half a century (Huffman coding — 1952, arithmetic coding — mid-1970s). You cannot improve the compression ratio here, so the following schemes are mostly tradeoffs sacrificing a bit of compression for speed gains (especially in the form of (pseudo-)arithmetic coders operating only on binary symbols). The only outstanding thing is the so-called Asymmetric Numeral Systems but I suspect they are isomorphic to traditional entropy coders.

Now let’s look at what feeds data to the entropy coders. There are two main approaches (often combined): context modelling (probably the real foundation for the current highest-compression methods — PPM — was proposed in the mid-1980s) and LZ77 (guess the year yourselves). Are there improvements in this area? Yes! The principle is simple — the better you can predict the input, the better you can code it. So if you combine different methods to better handle your data you can get some gains.

And yet the main compression gain here lies in proper preprocessing. From table or executable code preprocessing (table data usually differs only a little between entries, and for executables you can get some gains if you replace relative jump/call addresses with absolute values) to the Burrows–Wheeler transform plus move-to-front plus RLE if needed, etc.
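
As a toy illustration of that last pipeline, here is a minimal move-to-front sketch (my own example, not taken from any particular compressor): after BWT the recently seen symbols get small indices, which a later entropy coder can exploit.

[sourcecode language="c"]
#include <stdint.h>
#include <string.h>

/* Toy move-to-front transform: replace each byte with its current
 * position in a recency list, then move that byte to the front. */
static void mtf_encode(const uint8_t *src, uint8_t *dst, size_t len)
{
    uint8_t table[256];
    for (int i = 0; i < 256; i++)
        table[i] = i;
    for (size_t i = 0; i < len; i++) {
        uint8_t c   = src[i];
        uint8_t pos = 0;
        while (table[pos] != c)
            pos++;
        dst[i] = pos;
        memmove(table + 1, table, pos); /* shift the more recent symbols down */
        table[0] = c;                   /* and put this one at the front */
    }
}
[/sourcecode]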

Audio compression

You have four main targets here: general lossy compression, speech compression, lossless fast compression and lossless crazy compression.

General lossy compression follows the scheme established in the 1990s or earlier: transform to the frequency domain, group frequencies and code frequency bands. Most of the methods are quite old and progress is defined mostly by how much RAM and CPU users are willing to sacrifice on it. For example, CELT (the main part of Opus; the other part, SILK, is an ordinary speech codec) is not that different in design from G.722.1 from the late 1990s.

Speech coding follows the canons from the 1980s too — perform LPC, code the filter coefficients and whatever other information enhances signal reconstruction (pulse positions, pitch tilt etc.).

Lossless fast compression (aka compression for normal usage) follows suit too — you have LPC or some adaptive filters used for prediction plus residue coding (usually with Golomb/Rice codes from the 1960s–1970s; BTW the original Golomb paper is AWESOME, they don’t write papers like that nowadays).
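
As a reminder of how simple that residue coding is, here is a minimal Rice coding sketch (my own illustration; the bit-writer interface is assumed, and signed residues would normally be zigzag-mapped to unsigned values first):

[sourcecode language="c"]
#include <stdint.h>

/* Assumed bit-writer interface, not an existing API. */
typedef struct BitWriter BitWriter;
void put_bit (BitWriter *bw, int bit);
void put_bits(BitWriter *bw, uint32_t value, int nbits);

/* Rice code with parameter k: value = q * 2^k + r.
 * The quotient q goes out in unary (q ones and a terminating zero),
 * the remainder r in k plain bits. */
static void rice_encode(BitWriter *bw, uint32_t value, int k)
{
    uint32_t q = value >> k;
    while (q--)
        put_bit(bw, 1);
    put_bit(bw, 0);
    if (k > 0)
        put_bits(bw, value & ((1u << k) - 1), k);
}
[/sourcecode]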

Lossless crazy compression (aka spend hours compressing it and as much decompressing it) employs the same approach but with longer filters, usually several filters of different sizes applied one after another, plus better residue coding schemes.

Image compression

Here you have more variety of coding methods but most of them are very old (just look at when the Haar wavelet was proposed). Especially funny is that JPEG is still holding strong despite being more than twenty years old. I still remember the so-called fourth-generation image compression (separating an image into region borders and textures to fill them, and coding those); it hasn’t taken off yet despite being introduced in the late 1980s or so.

The only interesting development happens in lossless image compression, but neither 2-D LZ77 (WebP) nor context modelling (FLIF) is a particularly new idea.

Video compression

Modern codecs are all very similar and they are usually ripoffs of H.26x (there are two exceptions — Thor, which is not a ripoff only because it was designed while openly acknowledging that some parts are taken from H.265, and Daala, which is more original and is discussed below).

So nowadays you have a very limited subset of the ideas that were present in video codecs from the 1990s — boring macroblocks (now with quadtree partitioning instead of a fixed size), motion compensation (though now you have more reference frames to choose from) and a binary entropy coder (except for Thor, which went the way of RealVideo 3/4 with context-adaptive VLCs). Even the trend of adding special coding tools for special content doesn’t look that original (if you remember the countless screen codecs and MPEG-4 Audio, *barf*).

The only exception for now is Daala, which uses more original ideas, but I fear it will end up the same boring codec because it is not crazy enough to make a breakthrough. I believe it should at least do more crazy preprocessing and maybe better modelling, e.g. taking more than the nearest neighbours into account (maybe even use something PPM-like for element coding and not just probability mixing). Look at JBIG for inspiration maybe 😉

Conclusions

Don’t expect miracles in data compression to happen anytime soon, but a couple of percent improvement in specialised fields over a decade or so is possible and even expected.

FAQ

October 3rd, 2015

Since I’ve been asked the same questions over and over again I’ve decided to make a short (for now) FAQ page.

  • How many years does it take to get a citizenship in Germany? 7-8 years.
  • How long have you been living in Germany? Since Spring 2010, do the math yourself.
  • So you’ll get your German citizenship in a couple of years, right? Maybe. It’s the same kind of maybe as in ‘Berlin-Brandenburg airport will be open in a couple of years.’ And it does not depend on me much.
  • Can you help me with ProRes issue … I can but I have neither the desire nor the obligation. All the Trocadero I got for writing an encoder is long gone and I don’t participate in projects that offer any ProRes support; inquire there.
  • Can you look at this codec … I can but no promises — I rarely have a desire to do anything these days.
  • Is NihAV real? More or less; it still lacks a lot of design and code but some bits are implemented already. The design is described in this blog as it appears; the code is developed as who-cares-source.
  • Why do you blame lu_zero? Oh, there are so many reasons for that and new ones keep appearing almost every day. Mostly it’s for the things he was supposed to do but still hasn’t done (and is unlikely to do in the foreseeable future): AVScale design and implementation, writing blog posts on certain topics (often I end up writing them, which is yet another reason to blame him), not doing much about ASF or RealMedia demuxers and related delayed work, for personal stuff (like preventing me from trying Torino trams and underground), for missing technical stuff in a wiki. Oh, and for being at least two different persons. There’s more that I can’t remember right now.
  • When will you visit Pelhřimov? Dunno, maybe when I have more than three free days.

NihAV: Data Delivery Channels

September 27th, 2015

Disclaimer: this should’ve been written by a certain Italian (and discussed during VDD too) but since he is even lazier than me (and stopped at writing only this), I ended up writing it.

One of the main concerns in framework design is how data will be passed from sources to destinations. There are two ways that are mistakenly called synchronous and asynchronous, the main difference (lack of a CLK signal aside) being how the producing function passes data further — on return, when it’s done with whatever it wanted to do, or by passing control to some external procedure immediately when data is available.

In the worst case of synchronous data passing you have a fixed pipeline: one unit of input goes in, one unit of output is expected to be produced (and maybe one more on the final flush), repeat for the next stage. In the worst case of asynchronous data passing you have one thread per stage waiting for another thread to signal that it has some data ready to process. Or in single-threaded mode you simply have nested callbacks, again one per stage, each calling the next callback in the chain when it has something to deliver.

Anyway, how should this all apply to NihAV?

In the simplest case we have one chain:

[Demuxer] --> (optional stream splitter) --> Packets --> [Decoder] --> Frames --> [output]

In a real-world scenario all the complexity starts at the stage marked [output]. There you usually feed frames to some filter chain, then to an encoder, and combine several streams to push all that stuff to some muxer(s).

As for data passing, you can have it in several modes: main processing loop, synchronous processing graph, asynchronous processing graph or an intricate maze of callbacks all alike.

With the main processing loop you have something like this:

/* pull everything from the input, push it down the chain and write out
 * whatever becomes available; drain the remaining output at the end */
while (has_input()) {
    pkt = get_input();
    send_output(pkt);
    while (has_output()) {
        write_output();
    }
}
while (has_output()) {
    write_output();
}

It’s ideal if you like micromanagement and want to save on memory but with many stages it might get rather hairy.

The synchronous processing graph (if you think that’s not what it’s called, look at the project name again) adds queues and operates on stages:

while (can_process(graph)) {
    for (i = 0; i < num_stages; i++) {
        graph->stages[i]->process(graph->inqueues[i], graph->outqueues[i]);
    }
}

In this situation you have not a single [input] -> [output] connection but rather [input] -> (queue) -> [output]; every stage is connected to another stage by some queues and can consume/produce several elements (or none) in any of the queues.
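
A minimal sketch of what such a stage callback might look like (the queue function names and the decoder helper are my assumptions, not a settled API):

[sourcecode language="c"]
/* Hypothetical decoder stage: drain the input queue and push any
 * produced frames into the output queue. Zero or more items may be
 * consumed or produced per call. */
int decoder_stage_process(NAQueue *in, NAQueue *out, void *ctx)
{
    void *pkt;
    while ((pkt = na_queue_pop(in)) != NULL) {
        void *frame = decode_packet(ctx, pkt);
        if (frame)
            na_queue_push(out, frame);
    }
    return 0;
}
[/sourcecode]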

The asynchronous processing graph relies on the output stages pulling data from all previous stages or input stages pushing data to the following stages.

int filter_process(Filter *f, void *data)
{
    do_something(f->context, data);
    /* pass the (possibly modified) data further down the chain */
    return f->next->filter(f->next, data);
}

And the callback variant looks like this:

int demux(void (*consumer)(...))
{
    while (!eof) {
        read_something();
        if (has_packet)
            consumer(ctx->consumer_id, packet);
    }
    return 0;
}

Callbacks are nice but they move a lot of the burden onto the callback writer and they are hard to get right, especially if you do something nontrivial.

So far I’m inclined to take the second approach. E.g. the Bink demuxer has packets for several streams stored together in one block — so the demuxer will create them all and put them into its output queue. A hardware-accelerated codec may send several frames for decoding at once — so it will read as many as possible (or as many as needed) from the input queue and add all decoded frames into the output queue when they’re done. Also all the shitty tasks like proper synchronisation can be made into filters too.

Of course that will create several new problems especially with resource management but they should be solvable.

For the NihAV implementation that means adding NAQueue and several new datatypes (packet, frame, whatever) plus a mechanism in NAClass to recognise them, because you would not want the wrong thing going into the wrong pipeline (e.g. a decoder expects NAPacket and gets NAFrame instead).
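
As a sketch of that recognition mechanism (the subtype field, the helper names and the error code are my inventions for illustration; none of this exists yet):

[sourcecode language="c"]
/* Hypothetical guard at the decoder input: reject anything that is
 * not a packet instead of silently processing garbage. */
int decoder_feed(Decoder *dec, NAGraphObject *obj)
{
    if (obj->subtype != NA_GRAPH_OBJ_PACKET)
        return NA_ERR_INVALID_DATA; /* assumed NihAV-specific error code */
    return decode_packet_obj(dec, (NAPacket *)obj);
}
[/sourcecode]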

NihAV — I/O

September 3rd, 2015

And now let’s talk about probably the hairiest part of a multimedia framework — the input-output layer.

The current libavformat design looks too messy to me and thus my design will be different. I know that some Lucas prefer the old way but I’d rather split protocol handling into several layers.

First, there’s the base I/O handler used solely for I/O operations. NAIOHandler contains functions for reading data, reading data asynchronously, writing data, seeking, and an ioctl() for some I/O-specific operations. There will be very few I/O handlers — just file, network (TCP/UDP) and null.

On top of that there’s the protocol handler. A protocol handler employs an I/O handler to perform I/O operations and provides only reading, writing, seeking and flushing functions. There may be buffered and unbuffered wrappers over an I/O handler or something less trivial like an HTTP handler.

On top of that there may be one or several layers of other protocol handlers if they need to build on other protocols (e.g. Some-Streaming-Protocol-over-HTTP).

And on the very top there are I/O functions using protocol handlers for reading and writing data (bytes, X-bit integers, buffers). Those are used by (de)muxers.
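
Here is a rough sketch of how the two lower layers could look as structs with function pointers; the exact layouts and signatures are guesses for illustration, and NAProtocolHandler is a name I made up, not a committed interface:

[sourcecode language="c"]
#include <stdint.h>

/* Hypothetical base I/O handler: raw byte transport only
 * (file, TCP/UDP, null). */
typedef struct NAIOHandler {
    int     (*read)      (void *ctx, uint8_t *buf, int size);
    int     (*read_async)(void *ctx, uint8_t *buf, int size);
    int     (*write)     (void *ctx, const uint8_t *buf, int size);
    int64_t (*seek)      (void *ctx, int64_t offset, int whence);
    int     (*ioctl)     (void *ctx, int request, void *arg);
    void    *ctx;
} NAIOHandler;

/* Hypothetical protocol handler built on top of an I/O handler
 * (plain buffered wrapper, HTTP, etc.). */
typedef struct NAProtocolHandler {
    NAIOHandler *io;
    int     (*read) (struct NAProtocolHandler *p, uint8_t *buf, int size);
    int     (*write)(struct NAProtocolHandler *p, const uint8_t *buf, int size);
    int64_t (*seek) (struct NAProtocolHandler *p, int64_t offset, int whence);
    int     (*flush)(struct NAProtocolHandler *p);
} NAProtocolHandler;
[/sourcecode]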

That’s the plan, insert usual rant about lack of time, interest and such here.

Random Thoughts on Format Design Process

August 12th, 2015

In my experience a lot of codecs have some wrong things in them, and those things are usually introduced during codec creation. As for containers, I’ve expressed my opinion before.

It is very bad when some codec is being developed and then suddenly it’s declared released. You’re left with a pile of code that has somehow evolved into its current shape and probably even the author has no idea how it works. Two examples — Snow and Speex. The first one is a wavelet-based codec that performed quite well back in the DivX 3/4 days, the other one is a speech codec that also gained some popularity and was even included as one of the Flash audio codecs. So the codecs by themselves should not be that bad but there’s only one implementation and no specification. There were several attempts to make the Snow developer write a specification for it (for money!) but he always refused. FFV1 is faring somewhat better since it has some rudimentary specification and hopefully the standardisation efforts will bring us independent implementations and a full specification (yes, I’m an idiot optimist).

What would be a proper way to design a codec in my opinion? Create a test version, play with it till you achieve a good result or release it as a known beta, write the specification, throw away the old code and reimplement version 1 from scratch. Repeat for version 2 etc.

I think I’ve complained before that this situation is very common with proprietary codecs too. They have in-house encoder and decoder implementations with encoder bugs compensated for in the decoder. The stupid motion compensation in RealVideo 4 is one of those “features”. Or pre-RTM WMV9 with its block pattern coding, though that one is supposed to be a beta anyway.

There is an even worse case — when the codec author decides to embed the whole development history into the decoder while maintaining backward compatibility. The worst offender is Monkey’s Audio with its subtle bitstream changes at every version (and it has two dozen versions). Another “good” example is HEVC with its ever-changing bitstream format. Different major versions of the reference software introduce serious bitstream changes, e.g. the HM8 -> HM10 transition remapped all NAL IDs. IIRC the superseded version of ITU-T H.265 was for 4:2:0 subsampling only. Honestly, I shan’t cry if this codec dies because of its idiotic licensing terms (and maybe it should really be contained only in FLV). Speaking of HEVC idiocies, VP9 got new features in new profiles too, including 4:4:0 subsampling. In my opinion one should kill this creeping featurism, especially without a proper profiling/versioning system, and even then introduce new features sparingly.

At least there’s still hope for Daala to be developed properly.

NihAV — Guidelines

August 8th, 2015

The weather here remains hellish so there’s little I can do besides suffering from it.

And yet I’ve spent two hours this morning implementing the main part of NihAV — the set implementation. The other crucial part is options handling but I’ll postpone it for later since I can write proof-of-concept code without it.

Here’s a list of NihAV design guidelines I plan to follow:

  • Naming: component functions should be called na_component_create(), na_component_destroy(), na_component_do_something().
  • More generally, prefixing: public functions are prefixed with na_, public headers start with na as well, e.g. libnadata/naset.h. The rest is private to the library. Even other NihAV libraries should use only the public interfaces, otherwise you get ff_something and avpriv_that called from outside and as a result you have MPlayer.
  • NihAV-specific error codes to be used everywhere. Having AVERROR(EWHATEVER) and AVERROR_WHATEVER together is ridiculous. Especially when you have to deal with some error codes being missing from some platforms and others being nonportable (what if on nihOS they decided to map ENOMEM to 42? Can you trust an error code returned by a service running on some remote platform then?).

And here’s how the actual set interface looks:
[sourcecode language="c"]
#ifndef NA_DATA_SET_H
#define NA_DATA_SET_H

#include "nacommon.h"

struct NASet *na_set_create(NALibrary *lib);
void na_set_destroy(struct NASet *set);
int na_set_add(struct NASet *set, const char *key, void *data);
void* na_set_get(struct NASet *set, const char *key);
void na_set_del(struct NASet *set, const char *key);

struct NASetIterator;

typedef struct NASetEntry {
    const char *key;
    void *data;
} NASetEntry;

struct NASetIterator* na_set_iterator_get(struct NASet *set);
void na_set_iterator_destroy(struct NASetIterator* it);
int na_set_iterator_next(struct NASetIterator* it, NASetEntry *entry);
void na_set_iterator_reset(struct NASetIterator* it);

#endif
[/sourcecode]

As you can see, it’s nothing special, just basic set manipulation functions (well, it’s really a dictionary but NIH terminology applies to this project) plus an iterator to scan through it — quite useful for e.g. showing all options or invoking all registered parsers. Implementation-wise it’s a simple hash table with the djb2 hash.
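
For reference, djb2 itself is tiny; here is the hash function as it is usually written, plus a purely illustrative bucket selection (the bucket count and the helper are my examples, not necessarily what NihAV uses):

[sourcecode language="c"]
#include <stdint.h>

/* Classic djb2 string hash: start from 5381 and do hash = hash * 33 + c. */
static uint32_t djb2_hash(const char *key)
{
    uint32_t hash = 5381;
    int c;
    while ((c = (unsigned char)*key++) != 0)
        hash = hash * 33 + c;
    return hash;
}

/* Example bucket selection for a hypothetical 64-bucket table. */
static unsigned bucket_index(const char *key)
{
    return djb2_hash(key) & 63;
}
[/sourcecode]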

Predicting NGV^W RMHD

July 19th, 2015

Here’s an occasional prediction of what RMHD should look like, knowing nothing about it besides press release claims.

  • Base standard — for RV 1 and 2 it was H.263, for RV 3 and 4 it was H.264. Obviously, RMHD and RMUHD should be based on H.265;
  • MV precision — RV 2 had ½-pel MV, RV 3 had ⅓-pel MV, RV 4 had ¼-pel MV. Obviously, RMHD will have ⅛-pel MV. Or still ¼-pel because H.265 has not improved MV precision compared to H.264;
  • Bitstream coding — usually that one is kept from the previous generation of the ripoff codec. Thus, H.265 keeps decoding VLCs further compressed with CABAC, AVS2 (aka HEVS) keeps doing the same with its own coder, and VPx uses the range coder from VP<x-1> and static-probability Huffman codes. RMHD is supposed to have context-dependent Huffman tables with some bitcoder following them, i.e. determine the code from the element’s neighbours and then code each bit of it using some context-adaptive coder (and add some context dependency somewhere there too).
  • Special features — probably none, it will just follow the standard in codec design and the main difference will be in coefficients coding. There’s a chance they’ll build in some scalability feature though.

Let’s wait and see what RMHD will really be. It would be nice if none of these predictions turned out to be correct.

Springtime for H.265 clones!

July 15th, 2015

Previously I feared there would not be any H.265 clones besides the VP<git-experimental> codec but luckily I was proved wrong.

There’s the second announcement of Really?Networks RMHD, intended for China (RealMedia was popular there after all). Either it’s their completely new codec (NGV) that has finally buffered to 100% and is based on some original ideas, or it’s an H.265 ripoff. I’d bet on the latter.

Second, I’ve finally read a book describing the upcoming AVS2 (again, intended for China and being a Chinese standard). Well, if the first paragraph describing it has such abbreviations as CU, PU and TU you may be sure it’s an original codec that has nothing to do with H.265. Coding concepts like variable block-size transform, splitting the motion compensation block unevenly and having 34 intra prediction modes — those concepts are completely original and are not used anywhere else for sure. Of course there’s some Chinese logic involved in some decisions and thus the codec has such gems ripped off from HEVC as coding motion vectors in integer precision instead of quarter-pel if they exceed a certain limit, coding coefficients in zigzags of 4×4 blocks, special treatment for 64×64 blocks (such a block is downscaled first and then transformed with a conventional 32×32 transform — and they call it a Logical Transform BTW), or a special motion vector prediction mode for F-frames.

But that’s not all — they’ve introduced special “scene coding”. It relies on G-frames or GB-frames that contain the scene background and may not be displayed (who said VPx?!), while S-frames contain the foreground motion. Though I’m pretty sure one can emulate it using H.265 features too, maybe long-term references plus the no_display flag. I’m also pretty sure that if HEVC lacks some coding approach for now it will soon be added as a special extension (at least what I’ve read in the screen coding extension looked completely logical — like a saddle as one of the car seats).


Now I can at last be sure that the codec future is looking good.

UPD: And there’s Cisco Thor now as well (simplified HEVC with VLCs instead of CABAC). It does two things simultaneously — it expands the H.265 ripoff family and borrows more from H.264. Now the only thing missing is Sorenson SVQ5 (or Double Spark or whatever name they want to give it).

On Greece

July 12th, 2015

I see too much bullshit about Greece on the Internet these days, so much of it that I could not refrain from writing this post.

First of all, I come from a country with an even worse economic situation (fun fact — the former Ukrainian ostrich-supporting president complained about how hard it is to repay debts during his visit to Greece at the time of the first Greek debt crisis). Unlike in Greece, most people got no money from the government, companies had a large tax burden (in the later years the government decided to press companies to pay taxes in advance and in amounts decided by the tax inspection, with tax refunds working only for selected companies), there were lots of debts that went to no good purpose…

But enough about similarities between countries (certain Italians are not happy about similarities between Ukraine and Italy either), let’s get to the bullshit statements.

It’s not their fault. Of course it is; they had to forge their financial statistics at gunpoint in order to join and remain in the Eurozone. Of course they share the blame with the Eurobureaucracy that wanted to extend the EU even with a Greece and was willing to overlook its faults in order to keep it. Yet the active part was played by the Greek government — it’s easy to buy voters with borrowed money that somebody else has to return in the future (in other words — not our problem). Another point of tension is Schengen area membership: thanks to their good border control they have a lot of illegal immigrants, which is just what the EU needs; hopefully when some neighbouring lands connect Greece to the rest of the Schengen area it will bring joy to everyone, especially the UK.

The whole world is in debt to Greece for their achievements in culture and science. First of all, that sounds like typical copyright thinking. “My grandfather once wrote a song that was played on the radio, I deserve not to work ever in my life.” (some Slashdot comment as I remember it from a decade ago or so). Second, most of the current countries have nothing to do with the nations that were on that territory a thousand or two thousand years ago. Look at the Arab Republic of Egypt — there was nothing Arabic about the people who built the pyramids, temples and sphinxes. If you believe David Ben-Gurion’s thesis, then the Palestinians are the true Israeli people who lost their culture because of the Arab conquests — they seem to oppose their original religion even to this day. Same story with the Balkan nations and the Ottoman Empire: modern Greece has nothing to do with ancient Greece except the territory (say hello to Macedonia) and a similar language. So, nice knowing you, but don’t claim the old history for yourself; and while I’m grateful for those past achievements, they are not yours. I have lived in a country that tried to exploit that (mostly in the form of the Soviet legacy and what the colloquial “they” did for everyone); no thanks.