Archive for the ‘Useless Rants’ Category

A Call for Modern Audio Codec

Wednesday, February 11th, 2015

We need a proper audio codec to accompany state-of-the-art video codecs, so here’s an outline of the features that should be present:

  • the audio codec should make more use of its context: it should have a system of forward and backward reference frames like the B-pyramid in H.264 or H.265;
  • together with that it should employ tonal compensation — tracking frequency changes relative to the references (e.g. the same note being continued or changing pitch);
  • time-domain prediction via FIR or IIR filters (a minimal sketch follows the list);
  • flexible subdivision into subframes, e.g. via a binary tree;
  • a raw (or at least non-transformed) coding mode for transients or noise;
  • an integer-only bit-exact transform that passes for an MDCT in bad light;
  • high bit depth sound support (up to 64 bits per sample).
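
To illustrate the time-domain prediction item from the list above, here is a minimal sketch of FIR prediction over previous samples. The filter order and the fixed-point coefficients are made up purely for illustration; a real codec would choose and transmit them per (sub)frame:

    #include <stddef.h>
    #include <stdint.h>

    #define FIR_ORDER 4

    /* hypothetical Q8 fixed-point coefficients, chosen only for illustration */
    static const int32_t fir_coef[FIR_ORDER] = { 200, -100, 30, -2 };

    /* predict each sample from the previous ones and keep only the residual */
    static void fir_predict_residual(const int32_t *samples, int32_t *residual,
                                     size_t len)
    {
        for (size_t i = 0; i < len; i++) {
            int64_t pred = 0;
            for (size_t j = 0; j < FIR_ORDER && j < i; j++)
                pred += (int64_t)fir_coef[j] * samples[i - 1 - j];
            residual[i] = samples[i] - (int32_t)(pred >> 8);
        }
    }

The decoder would run the same filter and add the residual back; coding the residual in fewer bits than the raw samples is where the gain comes from.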

The project name is transGhost (hopefully no Monty will be hurt by this).

And if you point out that this is stupid — well, audio codecs should have the same rights as video codecs, including PTS/DTS differences and similar coding methods.

Why one should not be overexcited about new formats

Saturday, January 10th, 2015

Today I’ll talk about Opus and BPG and argue why they are not the silver bullets everyone was expecting.

Opus

I cannot say this is a bad codec: it has a modern design (a hybrid speech+music coder) and impressive performance. What’s wrong with it? Usage.

The codec is ideal for streaming, broadcasting and such. It does not have a special multichannel audio mode: you can combine mono and stereo Opus streams in whatever way you like, and you don’t have to care about passing a special configuration for it in some special way.

What’s bad about that? When you try to apply it to stored media all those advantages turn into drawbacks. There was no standard way to store it (IIRC the Opus-in-TS and Opus-in-MP4 specifications were developed by people who had little in common with the Opus developers, although some of the latter were present too). And there is still one big problem with an ugly hack as the “solution”: Opus has no keyframes, and the workaround is preroll (i.e. “decode a certain number of audio frames before the needed one and discard them”). And not all containers support that feature.
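
To make it clear what that hack means for a player, here is a toy sketch: to seek you decode a few frames you do not need and keep only the last one. The decoder below is a stub and the preroll length is an arbitrary number picked for illustration; real players take it from the codec or the mapping specification:

    #include <stdio.h>

    #define PREROLL_FRAMES 4    /* arbitrary illustrative value */

    /* stub standing in for a real audio frame decoder */
    static void decode_frame(int idx)
    {
        printf("decoded frame %d\n", idx);
    }

    static void seek_to_frame(int target)
    {
        int start = target - PREROLL_FRAMES;
        if (start < 0)
            start = 0;
        for (int i = start; i < target; i++)
            decode_frame(i);    /* decoded only to warm up decoder state, output discarded */
        decode_frame(target);   /* the first frame whose output is actually kept */
    }

    int main(void)
    {
        seek_to_frame(100);
        return 0;
    }

With a keyframe-capable codec the container would simply point you at the nearest independently decodable frame instead.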

That reminds me of MoosePack SV1-SV7. That was a project intended to improve on MPEG Audio Layer II compression and turn it into a new codec (yes, there’s Layer III, but it was one of the reasons MoosePack, Vorbis and other audio codecs were born). It enjoyed some limited popularity (I’ve implemented MPC decoding support for a reason) but it had two major drawbacks:

  • a very brief file format — IIRC it’s just a header plus audio blocks prefixed by a 20-bit size, with no padding to a byte boundary either (if you’ve ever worked with raw FLAC streams you should have no problem imagining how “good” the MPC format was); a small sketch of what this means for a parser follows the list;
  • no intra frames — again, IIRC their solution was to simply decode and discard 12 frames before the requested one in the hope that the sound would converge.
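
To illustrate the first point above, here is a simplified sketch (not the actual MPC parsing code, and the assumption that the 20-bit prefix counts bits is mine): with bit-sized prefixes and no byte padding, blocks start at arbitrary bit positions, so even skipping to the next block needs a bit reader and byte-level scanning for resync is out of the question.

    #include <stddef.h>
    #include <stdint.h>

    typedef struct {
        const uint8_t *data;
        size_t bitpos;              /* position in bits, not bytes */
    } BitReader;

    static uint32_t get_bits(BitReader *br, int nbits)
    {
        uint32_t val = 0;
        for (int i = 0; i < nbits; i++) {
            size_t byte = br->bitpos >> 3;
            int    bit  = 7 - (int)(br->bitpos & 7);
            val = (val << 1) | ((br->data[byte] >> bit) & 1);
            br->bitpos++;
        }
        return val;
    }

    /* skip one audio block: read its 20-bit size prefix and jump over the payload;
     * the next block almost never starts on a byte boundary */
    static void skip_block(BitReader *br)
    {
        uint32_t block_bits = get_bits(br, 20);
        br->bitpos += block_bits;
    }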

MusePack SV8 tried to address all those issues with a new chunked format that could be easily embedded into other containers, and its audio blocks could be decoded independently because the first frame in each block was a keyframe. But it was too late, and I don’t know who uses that format at all.

Opus is more advanced and performs better by offloading those problems to the container, but I still don’t think Opus is an ideal codec for all cases. If you play it continuously it’s fine; once you try to seek, the problems start.

BPG

This is a quite recent example of the idea “let’s stick intra-frame coding from some video codec into an image format”.

Of course such an approach saves time, especially if you piggyback on a state-of-the-art codec, but it’s not the optimal solution. Why? Because still image coding and video sequence coding have different goals and working conditions.

In video coding you have a large amount of data that you have to (de)compress efficiently, mostly under specific constraints like framerate. While coding an individual frame well is important, it’s much more convenient to spend effort on evening out the decoding load across all frames. After all, hardly anyone would like the first frame to take 0.8s to decode and the other 24 frames 0.1s. That reminds me of ClearVideo, which had the inverse problem: intra frames were coded very simply (just IDCT plus static Huffman) while inter frames employed something fractal and took much more time.

Another difference is content. For video you usually have common frame sizes (like 1920×1080 or 1280×768), and modern video codecs are actually targeted at handling bigger and bigger resolutions. Images, on the other hand, come in various sizes, even ridiculous ones like 173×69, and they contain stuff you usually don’t expect to see in video form — pixel art, synthetic images, line art etc. (Yes, some people care about monochrome FMV, but it’s a very rare case.)

Another problem is efficient coding of palettised and monochrome images, lossily or losslessly. For lossless compression it’s much better to operate on whole lines, while video coding standards nowadays are block-based, and specialised compression schemes beat generic ones. For instance, the same test page compresses to an 80kB PNG, a 56kB Group 4 TIFF or a 35kB JBIG image. JPEG-LS beats PNG too, and both are very simple compression standards compared even to H.261.
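
For a taste of how simple such line-based schemes can be, here is the median edge detector predictor at the heart of JPEG-LS; the rest of the codec (context modelling, Golomb coding) is omitted, so treat this as a sketch only:

    /* JPEG-LS median edge detector: predict a pixel from its left (a),
     * above (b) and above-left (c) neighbours, i.e. from the current and
     * the previous line only */
    static int med_predict(int a, int b, int c)
    {
        int mn = a < b ? a : b;
        int mx = a < b ? b : a;
        if (c >= mx)
            return mn;          /* edge suspected: take the smaller neighbour */
        if (c <= mn)
            return mx;          /* edge suspected: take the larger neighbour */
        return a + b - c;       /* smooth area: planar prediction */
    }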

There’s also alpha plane coding; not many video codecs support it because of its limited use in video. You have it mostly in intermediate codecs or game ones (hello, Indeo 4!). So if the selected video codec doesn’t support alpha natively you have to glue it on somehow (that’s what BPG does).

Thus, we come to the following points:

  • images are coded individually while a video codec has to care about the whole sequence;
  • images come in all kinds of sizes while video sizes are usually one of a few standard ones;
  • images have different content that’s not always compressed well by a video coder, and a specialised compression scheme is usually better and maybe faster;
  • images might need some additional features not required by video.

This should also explain why I have some respect for WebPLL but none for WebP.

I’ve omitted the obvious problems with adoption, low-power hardware and such because hardly anything beats (M)JPEG there. So next time you choose a format for images, choose wisely.

A Codec Family Proposal

Monday, September 29th, 2014

There are enough general-use standardised codecs; there’s even the VPx family for those who want more. But there are not enough niche codecs with free/open specifications.

One such niche codec would be an intermediate codec, suitable for capturing and quick editing of video material. The main requirements are a modest compression ratio and fast processing (being scalable is a plus too). Maybe SMPTE VC-5 will be the answer, maybe Ogg Chloe, maybe something completely different. Let’s discuss it some other time.

Another niche codec that desperately needs an open standard is a screen video codec. Such a codec may also be used for recording webcasts, presentations and such. And here I’d like to discuss a whole family of such codecs based on the same coding principles.

It makes sense to make the codec fast by employing multithreading where possible. That’s why a frame should be divided into tiles that are neither too large nor too small, maybe 192×128 pixels or so.

Each tile should be coded independently, preferably with its distinct features coded separately too. It makes sense to separate tile data into smooth features (like gradients and real-life pictures) and sharp transitions (like text and UI elements). Let’s call the former a natural layer and the latter a synthetic layer. We’ll also need a mask telling which layer to use for each pixel (a rough sketch of such separation follows). And using these main blocks and employing different coding methods we can make a whole family of codecs.
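
Here is a very rough sketch of that layer separation; the heuristic of counting distinct colours in a 3×3 neighbourhood is invented purely for illustration, and this is exactly the part that would need real research:

    #include <stdint.h>

    #define TILE_W 192
    #define TILE_H 128

    /* mark a pixel as synthetic (mask = 1) when its 3x3 neighbourhood contains
     * very few distinct colours (flat fills, text, UI elements) and as natural
     * (mask = 0) otherwise; tile borders are left natural for brevity */
    static void build_layer_mask(const uint32_t tile[TILE_H][TILE_W],
                                 uint8_t mask[TILE_H][TILE_W])
    {
        for (int y = 0; y < TILE_H; y++)
            for (int x = 0; x < TILE_W; x++)
                mask[y][x] = 0;

        for (int y = 1; y < TILE_H - 1; y++) {
            for (int x = 1; x < TILE_W - 1; x++) {
                uint32_t colours[9];
                int ncolours = 0;
                for (int dy = -1; dy <= 1; dy++) {
                    for (int dx = -1; dx <= 1; dx++) {
                        uint32_t c = tile[y + dy][x + dx];
                        int known = 0;
                        for (int i = 0; i < ncolours; i++)
                            if (colours[i] == c)
                                known = 1;
                        if (!known)
                            colours[ncolours++] = c;
                    }
                }
                mask[y][x] = ncolours <= 2;
            }
        }
    }

The natural layer coder then gets the pixels where the mask is zero, the synthetic layer coder gets the rest, and the mask itself is coded losslessly.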

Here’s the list of example codecs (with a random FOURCC assigned):

  • J-B0 — employ JPEG for the natural layer and GIF/PNG for mask and synthetic layer coding;
  • J-B1 — employ Snow for natural layer coding and FFV1 for synthetic layer coding;
  • J-B2 — employ JPEG-2000 for natural layer coding, JBIG for mask coding and something like PPM modeller for synthetic layer;
  • J-BG — employ WebP for natural layer and WebP LL for synthetic layer.

As one can see, it’s rather easy to build such a codec since all the coding blocks are there and only the natural/synthetic layer separation might need a bit of research. I see no reason why, say, VLC couldn’t use it for recording and streaming a desktop for e.g. a virtual meeting.

On Railways Electrification

Sunday, September 21st, 2014

So what I’ve discovered today.

There’s the Schwarzwaldbahn going through the Schwarzwald from Offenburg to Konstanz, and there’s a station on it — Villingen. That station bears a plaque saying that the 10000th kilometre of electrification of the DB network was completed there in 1975 (DDR railways, on the other hoof, lost most of their electrification after the war because it was more important to electrify Soviet railways, but that’s another story).

And there’s a branch connecting Villingen (Baden) with Rottweil (Württemberg) — unelectrified. And that branch has its own subbranch to Trossingen Stadt. That subbranch is also served by a diesel railbus. But unlike the branch it connects to, it’s electrified! And that electrification is used only by museum vehicles from the 1930s-1960s that are electric only (or in one case a carriage with an electric locomotive).

On most such lines in Germany trains are hauled by a steam locomotive or a diesel railbus while the main traffic is electric, but in this case it’s the other way round. I have only one possible explanation — Württemberg.

P.S. Still, it’s hard to find a stupider situation with electrification than in Denmark. The only countries it has rail connections to chose the 15 kV 16⅔ Hz system. Denmark settled on 25 kV 50 Hz. But looking at their other railway-related decisions (i.e. IC4) it seems logical.

P.P.S. For Ukraine the situation is sadder — once I was on a Uzhgorod-Kharkiv train and it had to change locomotives twice because there are two electrification systems there (which make three areas). They claim it was done to better account for the relief, i.e. different electrification for the flatter and the more mountainous regions. Hopefully there will be more dual-system trains in the future (and there will be a future too).

On Quack VPx

Tuesday, September 16th, 2014

I think most of you have read this piece of news about G**gle VPx plans already. After some thoughts I’ve decided to comment on it as well.

So, here’s a bit of history:

  1. Duck TrueMotion — an original codec;
  2. Duck TrueMotion 2 — a development of TrueMotion 1 (same coding principles but now Huffman coding is employed);
  3. On2 TrueMotion VP3 — something like TrueMotion 2 and MPEG-2 (aka H.262) mixed together;
  4. On2 TrueMotion VP4 — most likely some improvements over VP3 (shame on Mike and/or Peter for not REing it yet!);
  5. On2 TrueMotion (or was it TrueCast?) VP5 — MPEG-4 ASP/H.263 ripoff with On2-specific stuff (no B-frames, different coder etc.);
  6. On2 TrueMotion VP6 — minor improvements over VP5;
  7. On2 TrueMotion VP7 — H.264 ripoff with On2-specific stuff (no B-frames, different coder etc.);
  8. On2 TrueMotion VP8 — minor improvements over VP7;
  9. G**gle VP9 — H.265 ripoff with some On2-specific stuff (almost the same as in VP7/VP8);
  10. G**gle VP10 — is not released yet but I can predict it will be just VP9 with some minor improvements and no real specification available (you have Chromium source, just look at the stable branch there).

It is easy to see that there’s a huge issue to deal with if they want to release a new VPx every 18 months — they need a corresponding ITU H.26x standard (or at least some draft of it) available. The only alternatives are to keep polishing VP9 and call it a new version whenever some incompatible feature is added, or to start ripping off Daala, Dirac and Bink 3. Good luck.

KBS 743

Friday, August 1st, 2014

I haven’t written anything about one of the crucial topics of this blog for ages, so here’s the long-awaited update.

Today I’d like to talk about probably the most interesting railway in Germany — the Wutachtalbahn, or Kursbuchstrecke 743 (Waldshut-Immendingen). It was built as a route to the southern border of Germany that does not cross Swiss territory (the line along the Rhine it connects to goes through Basel and the canton of Schaffhausen).

Now, what makes it so interesting?

Despite being a rather unimportant line nowadays and only about 60 km long (and there are no branches either!), it is operated by three different rail companies:

  1. northern part (Immendingen — Blumberg-Zollhaus) is operated by SWEG
  2. central part (Blumberg-Zollhaus — Weizen) is operated by WTB
  3. southern part (Weizen — Lauchringen — Waldshut) is operated by DB

[Plan of the central part, image shamelessly stolen from Wickedpedia]

So you have three different companies running trains on roughly 20 km of track each. Is it the same rolling stock? Of course not!

SWEG runs a class 650 (aka Stadler RS1) diesel unit, Deutsche Bahn employs a class 641 diesel unit, and WTB runs a steam locomotive (a Württembergische T.14, or class 52.80, or something similar) with a bunch of outdated carriages from various places (like Switzerland).

And for some unknown reason it’s nicknamed “Pig’s Tail Railway” (see the map above, I have no clue why) and the name somehow appeals to me.

I’ve visited it in three parts: one year I saw the middle part, the next year I saw the northern part, and later I saw the last part too. Curiously, while DB runs the most modern train, its route seems the most outdated: the rails are uneven so you can get a bit seasick, the signalling is implemented by the driver’s assistant standing at the level crossing with a red flag while the train passes it, and the train does not stay at Weizen station for long because it has to make room for the WTB train (as a result it comes to the station, waits a bit, cowardly retreats back onto the track and waits there till the WTB train is gone).

In general I’d recommend visiting it if you happen to be around. If you want to see something better, go to Sweden and try the Uppsala-Lenna railway, it’s the best (now I want to visit it again — oh wait, I wanted that before too).

Why I Shan’t Design a New Format

Friday, May 16th, 2014

From time to time I’m asked that question, and since people can’t see why I’m not going to design a new format (even though the reasons are obvious), here’s the answer. Format in this context means both codec and container.

There are too many of them already. And they suck in different ways. And I believe it’s impossible to make a format that will appeal to everybody, so it will suck in some aspect: either it will lack some features or it will be so extensible that it imposes too much complexity on the implementation. Lossless codecs are often written in such a way that they require a special container because not even Matroska can encapsulate them properly. Lossless video codecs all offer about the same compression level, and it’s the law of diminishing returns in action (exponentially more time spent on compression yields a single percent of compression gain at best). Intermediate codecs sacrifice compression for speed. Advanced codecs are often ripoffs of some standard (e.g. if the progression keeps, VP11 will be based on H.266 but with multiple alternative reference frames and their peculiar binary coder). Containers suck at either complexity, compliance or flexibility. And there’s Ogg.

It is hard to write good tools for it. I have written some encoders, and here’s what I have:

  1. Zip Motion Blocks Video encoder (palettised) — I might be the only user;
  2. IMA ADPCM QT encoder — no one cares;
  3. M$ Video 1 encoder — got a nice review in 2009 and was merged as is into FFmpeg in 2011 just because. Probably no one cares about it either;
  4. AAC bitstream writer — it sucked so much that many talented people who tried to improve it afterwards just gave up and never returned to it again;
  5. ProRes encoder — for some reason it became popular and made me realize that no one caring about your encoder is a good thing.

Writing an encoder for a new format requires a lot of testing and tuning (especially for audio) and that requires both hardware and time which I lack. I had enough fun with AAC.

It is very hard to get adoption for a new format. See the previous two items: you need good tools to interest users, and there are already too many different formats to compete with. These are not the times when people were so desperate that they’d accept anything that was open source and somewhat fulfilled their wishes (like Vorbis, despite it not being hardware-decoder-friendly and being bundled with Ogg, or Matroska, despite it being Matroska).

So, I shan’t develop a new format because it would take a lot of my time with extremely little chance that the results of that work would ever be used. Pity that the creators of lossless codecs didn’t think about that.

Utilite

Sunday, April 13th, 2014

Finally I’ve found some time to play with the i.MX6-based Utilite, which I intended to use as a home box for various stuff (like running fetchmail, irssi, a simple web server etc. — in other words, not a desktop). So here’s a quick review:

  • does not work with my display (1920×1200, DVI input)
  • does not allow logging in via SSH (it refuses passwords) and the same problem with sudo later
  • does not have IPv6 enabled (not a grave problem but my provider has moved to IPv6 already)
  • the serial port works about as well as a telegraph in a magnetic storm (honestly, it gives all types of characters on the terminal except the ones you can read, let alone want; typing one character per minute works somewhat better)

I might be really old, but this is not a development board (at least it’s positioned as a desktop), so I expect it to work. And unlike the previous product by the same company, one cannot blame it on the hardware — it’s i.MX6, not Tegra2.

So I’ve already ordered a Cubietruck (I have a Cubieboard2 at work and it has been running fine right from the start).

P.S. Raspberry Pi can go to hell.

A Bit More on Security

Thursday, March 27th, 2014

This is a translation of this page by an unknown author. It’s rather old, but recently I remembered it for some reason and decided to share it.


Day One

A hacker comes to a diner and sees that the salt shaker can be opened by anyone and anything can be put inside. The hacker comes home and writes a letter to the diner manager: “I, meG@Duc, have found a salt shaker vulnerability in your diner. A malicious person can open it and put poison there! Please fix it!”

Day Two

The diner manager gets that letter along with the other correspondence and shrugs: “What an imagination.”

Day Five

The hacker comes to the diner and puts some poison into every salt shaker. Three hundred people are dead; a criminal case against the manager is closed after three months because no crime was committed on his side. The hacker writes a letter: “See now?”

Day Ninety Six

The manager orders special salt shakers with a combination lock. The diner guests feel that they are missing something.

Day Ninety Seven

The hacker discovers that the holes in a salt shaker pass salt in both directions (and other substances too). He writes a letter and pisses into every salt shaker. Three hundred people never come back, thirty people end up in hospital with poisoning. The hacker sends the manager an SMS: “How d’ya like it?”. The manager spends three months being interrogated and a year on probation.

Day 188

The manager vows never to work at any diner ever again and to become a lumberjack instead. Engineers are working on a one-way salt shaker design. Meanwhile the waiters remove all the old salt shakers and give out salt on demand.

Day 190

The hacker steals a salt shaker and studies it at home. Then he writes a new letter to the manager: “I, meG@Duc, stole a salt shaker and find this outrageous! Anyone can steal a salt shaker from your diner!” The so far abstinent manager goes home and drinks vodka.

Day 193

The hacker discovers that all the salt shakers are chained to the tables. He talks about his achievements at the next hacker conference and receives an award for protecting society and customers’ needs. At least the manager doesn’t find out about it.

Day 194

All the hackers from the conference come up with a devious plan. They go to the diner and take all the salt from the shakers. meG@Duc then writes another complaint about the poor customer service and about the fact that anyone can deprive everyone else of salt.

Thus a new salt shaker design is needed. Engineers are working on it while the waiters still give out salt on demand. The manager goes abroad and uses room service only — no cafes, bars or restaurants.

Day 200

Customers discover that in order to get salt they have to call a waiter, show their ID and get a special 8-digit one-time code for the salt shaker. Repeat the same for pepper.

All Containers Suck

Tuesday, March 25th, 2014

It’s pretty obvious but I got requests to write this nevertheless.

All known containers suck, some of them suck gloriously, some of them plainly suck. And there’s Ogg Matroska Ogg.

There are several features that determine how useful a container is:

  • flexibility (supporting various codecs and number of streams);
  • ease of parsing;
  • a well-defined specification (there must be a format with such a thing);
  • metadata support;
  • low overhead (bytes needed to define frame size and other properties);
  • advanced features for insane people.

Now let’s review containers grouped by design.

Raw or raw with a header. These are the simplest and codec-specific. Besides being designed (usually) for only one stream and one codec, they often decide to save bits on frame headers, and as a result you have a hard time implementing seeking (say hello to FLAC or MoosePack SV7). Some at least have a seek table (old Monkey’s Audio has two — for byte and for bit position).

Your favourite FLV belongs to this category — it has one audio and one video stream with no headers (and that’s why it has its own flavour of VP6 with the frame dimensions stored in every frame), though one can abuse it to add a data stream. And of course some Chinese developers used it to store HEVC too, in the stupidest way possible (for starters they have introduced half a dozen different video codec IDs for it).

Chunk-based. The most popular category, and one that refuses to go away. The best representative is RIFF (the M$ ripoff of the EA IFF format; there are many specific RIFF variants around — AVI, RMF, WAV, WebP) and the runner-up is MOV/MP4. AVI is verily the pinnacle — flexible, extensible, every frame is its own chunk. What can go wrong with it? The usual thing: abuse. Too many idiots implemented their own AVI writers with whatever bugs they could introduce, and it got even worse when codecs started to employ B-frames. Intel worked around that by adding a combined I+B-frame plus a dummy frame afterwards so the decoder would handle it internally (you can see it both in Indeo 4 and in their I.263). DiVX on the other tentacle… And variable framerate is not for AVI either (unless you simply use zero-sized frames to define skips).
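
For reference, the appeal of the chunk-based design is that the top-level parser can be tiny: a chunk is a FOURCC, a 32-bit little-endian size and a payload padded to an even length. The sketch below only lists the top-level chunks of a RIFF file; descending into LIST chunks (and coping with everything broken writers get wrong) is left out:

    #include <stdint.h>
    #include <stdio.h>
    #include <string.h>

    static void list_riff_chunks(FILE *f)
    {
        uint8_t hdr[12], ch[8];

        /* the file is one big RIFF chunk: "RIFF", size, form type (e.g. "AVI ") */
        if (fread(hdr, 1, 12, f) != 12 || memcmp(hdr, "RIFF", 4) != 0)
            return;

        /* every subchunk is a FOURCC, a little-endian 32-bit size and a payload */
        while (fread(ch, 1, 8, f) == 8) {
            uint32_t size = ch[4] | (ch[5] << 8) | (ch[6] << 16) |
                            ((uint32_t)ch[7] << 24);
            printf("chunk %c%c%c%c, %u bytes\n", ch[0], ch[1], ch[2], ch[3], size);
            /* payloads are padded to an even number of bytes */
            if (fseek(f, (long)(size + (size & 1)), SEEK_CUR) != 0)
                break;
        }
    }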

As for MOV/MP4 there seems to be a problem with parsing custom atoms (there are too many atom types around). And of course you have nice abuse like ASF packets stored inside MOV packets if you use Flip4Mac.

And if you replace chunks with an unholy mix of tags and UIDs you get MXF. That format doesn’t have a specification but rather a swarm of them so you don’t know which ones you’ll need to demux some file.

There’s NUT — probably the only format out there with two specifications and three or four implementations, each disagreeing with all the others.

MPEG-TS inspired. MPEG-TS is such an overengineered container format that nothing in this world is able to demux a TS file using all its possible features. And forget about seeking (unless you have an external index or build the index yourself).

Of course such a design inspired a lot of other formats that borrow some of its features, but often those features are used without understanding why they are there. But the result is good for streaming!!!1one

There’s ASF with crazy GUIDs for everything and a fixed packet size (which means there’s no direct correspondence between an ASF packet and a stream packet anymore).

And there’s Ogg. Read this if you still haven’t.

Matroska. That’s a cancer — when you design a container that should be able to contain everything and support every feature possible, and it gets out of control, you get Matroska. It’s based on binary XML and it can have any feature. And it stores every codec in its own unique way — see what they call codec specs. So they save bytes here and there and the demuxer has to put them back, which is not nice, especially if you believe that demuxers and decoders should not need to know about each other.


If you wonder why I haven’t mentioned RealMedia, it’s because this format is an unholy mix of all categories:

  • Old RealAudio is a rather simple raw + header format;
  • RealMedia in general is a chunk-based format (with a hack for B-frames even);
  • Video frames can be split into several packets, or several frames can be merged into a single packet, much like in the MPEG-TS-inspired formats;
  • And they had mangled audio streams long before Matroska was here: actually only some audio codecs’ data is stored as is, the rest is XORed or has its subpackets permuted.