Archive for the ‘Useless Rants’ Category

Springtime for H.265 clones!

Wednesday, July 15th, 2015

Previously I feared there wouldn't be any H.265 clones besides the VP<git-experimental> codec, but luckily I was proved wrong.

First, there's the second announcement of Really?Networks RMHD, intended for China (RealMedia was popular there after all). Either it's their completely new codec (NGV) that has finally buffered to 100% based on some original ideas, or it's an H.265 ripoff. I'd bet on the latter.

Second, I've finally read a book describing the upcoming AVS2 (again, intended for China and being a Chinese standard). Well, if the first paragraph describing it contains such abbreviations as CU, PU and TU, you may be sure it's an original codec that has nothing to do with H.265. Coding concepts like variable block-size transforms, splitting a motion compensation block unevenly and having 34 intra prediction modes — those concepts are completely original and are not used anywhere else, for sure. Of course there's some Chinese logic involved in some decisions, and thus the codec has such gems not ripped off from HEVC as coding motion vectors in integer precision instead of quarter-pel if they exceed a certain limit, or coding coefficients in zigzags of 4×4 blocks, or special treatment of 64×64 blocks (such a block is downscaled first and then transformed with a conventional 32×32 transform — and they call it a Logical Transform, BTW), or a special motion vector prediction mode for F-frames.
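
Out of curiosity, here's a rough sketch of what such a "logical transform" could look like, assuming plain 2×2 averaging as the downscaler (the actual AVS2 filter may well differ) and with avs2_transform32x32() as a mere placeholder:

    #include <stdint.h>

    /* placeholder for a conventional 32x32 transform */
    void avs2_transform32x32(int32_t coefs[32][32], int16_t block[32][32]);

    /* "Logical Transform": shrink a 64x64 block to 32x32 (here by plain
     * 2x2 averaging, which is an assumption) and run the usual 32x32
     * transform on the result. */
    void logical_transform64x64(int32_t coefs[32][32], const int16_t block[64][64])
    {
        int16_t half[32][32];

        for (int y = 0; y < 32; y++) {
            for (int x = 0; x < 32; x++) {
                int sum = block[2*y][2*x]     + block[2*y][2*x + 1]
                        + block[2*y + 1][2*x] + block[2*y + 1][2*x + 1];
                half[y][x] = (int16_t)((sum + 2) >> 2); /* rounded average */
            }
        }
        avs2_transform32x32(coefs, half);
    }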

But that's not all — they've introduced special "scene coding". It relies on G-frames or GB-frames that contain the scene background and may not be displayed (who said VPx?!), while S-frames contain the foreground motion. Though I'm pretty sure one can emulate it using H.265 features too, maybe long-term references plus the no_display flag. I'm also pretty sure that if HEVC lacks some coding approach for now, it will be added soon as a special extension (at least what I've read in the screen coding extension looked completely logical — like a saddle as one of the car seats).


Now I can be sure at last that the future of codecs is looking good.

UPD: And there's Cisco Thor now as well (simplified HEVC with VLC coding instead of CABAC). It does two things simultaneously — it expands the H.265 ripoff family and borrows more from H.264. Now the only thing missing is Sorenson SVQ5 (or Double Spark or whatever name they want to give it).

On Greece

Sunday, July 12th, 2015

I see too much bullshit about Greece on the Internet these days, so much of it that I could not refrain from writing this post.

First of all, I come from a country with an even worse economic situation (fun fact — the former Ukrainian ostrich-supporting president complained how hard it is to repay debts during his visit to Greece during the first Greek debt crisis). Unlike in Greece, most people got no money from the government, companies carried a large tax burden (in later years the government decided to press companies into paying taxes in advance, in amounts decided by the tax inspection, with tax refunds working only for selected companies), lots of debts went to no good purpose…

But enough about similarities between countries (certain Italians are not happy about similarities between Ukraine and Italy either), let’s get to the bullshit statements.

“It's not their fault.” Of course it is, they had to forge their financial statistics at gunpoint in order to join and remain in the Eurozone. Of course they share the blame with the Eurobureaucracy that wanted to extend the EU even with a Greece and was willing to overlook their faults in order to keep it. Yet the active part was played by the Greek government — it's easy to buy voters with borrowed money that somebody else will have to return in the future (in other words — not our problem). Another point of tension is Schengen area membership: because of their good border control they have a lot of illegal immigrants, and that's just what the EU needs; hopefully when some neighbouring lands connect Greece to the rest of the Schengen area it will bring joy to everyone, especially the UK.

“The whole world is in debt to Greece for their achievements in culture and science.” First of all, that sounds like typical copyright reasoning: “my grandfather once wrote a song that was played on the radio, so I deserve never to work in my life” (some Slashdot comment as I remember it from a decade ago or so). Second, most of the current countries have nothing to do with the nations that lived on the same territory a thousand or two thousand years ago. Look at the Arab Republic of Egypt — there was nothing Arabic about the people who built the pyramids, temples and sphinxes. If you believe David Ben-Gurion's thesis, the Palestinians are the true Israeli people who lost their culture because of the Arab conquests — they seem to oppose their original religion even to this day. Same story with the Balkan nations and the Ottoman Empire: modern Greece has nothing to do with ancient Greece except the territory (say hello to Macedonia) and a similar language. So, nice knowing you, but don't claim the old history for yourself; and while I'm grateful for those past achievements, they are not yours. I lived in a country that tried to exploit that (mostly in the form of Soviet legacy and what the colloquial “they” did for everyone), no thanks.

LZ77-based compressors — a story similar to lossless codecs

Tuesday, May 12th, 2015

What do LZ77 compressors and lossless codecs have in common? They both perform lossless compression and there are too many of them because everyone tries to invent their own. And, like lossless audio codecs, quite often they come in their own container too.

In case you don't know (shame on you!), the LZ77 scheme parses the input into pieces like <literal> <copy> <literal> ... A literal means “copy these input bytes verbatim”, a copy means “we had that substring some time ago, copy N bytes from the history at offset M”.
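
To make those two primitives concrete, here's a minimal illustration (not any particular format) of what a decoder does with a literal and a copy; note that a copy may overlap its own output, which is why it's done byte by byte:

    #include <stddef.h>
    #include <stdint.h>

    /* literal: append len bytes from the input verbatim */
    static void copy_literal(uint8_t *dst, size_t *dpos, const uint8_t *src, size_t len)
    {
        for (size_t i = 0; i < len; i++)
            dst[(*dpos)++] = src[i];
    }

    /* copy: repeat len bytes that start offset bytes back in the output;
     * byte-by-byte copying handles the overlapping case (offset < len) too */
    static void copy_match(uint8_t *dst, size_t *dpos, size_t offset, size_t len)
    {
        for (size_t i = 0; i < len; i++) {
            dst[*dpos] = dst[*dpos - offset];
            (*dpos)++;
        }
    }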

The idea by itself is rather simple, and thus it's easy to implement some LZ77 parsing with some coding of its results, slap your name on it and present it as a new algorithm. There are three branches of implementation goals here — fast (but somewhat decent) compression, high (but not so fast) compression, and experimental research that may lead to implementations in the first two branches.

Fast compression schemes usually pack everything into bytes so no time is wasted on bit reading. Usually the format is like this — if the top three bits of the next byte have a certain value, read a literal run length and copy that many input bytes, otherwise determine the offset size, read it and copy a string from the dictionary. Quite often there are small tweaks to make compression faster (like using hashes) or slightly better (using escape values to code long values, coding small offsets/lengths into the opcode etc.). There are lots of implementations like that and they still keep appearing: LZO, LZF, FastLZ, snappy, chameleon… And lots of old games used such compression for their resources (including video) too.
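
For illustration, a decoder for a completely made-up byte-oriented format in that spirit might look like this (the token layout here is invented, it's not LZO or any of the codecs named above, and there's no bounds checking since it's only a sketch):

    #include <stddef.h>
    #include <stdint.h>

    /* Decode a hypothetical byte-oriented LZ77 stream:
     *   0LLLLLLL          -> copy L+1 literal bytes from the input
     *   1LLLOOOO OOOOOOOO -> copy L+2 bytes from offset O+1 (1..4096) back
     * Returns the number of bytes written to dst. */
    size_t toy_lz_decode(uint8_t *dst, const uint8_t *src, size_t src_len)
    {
        size_t ip = 0, op = 0;

        while (ip < src_len) {
            uint8_t tok = src[ip++];
            if (!(tok & 0x80)) {                    /* literal run */
                size_t len = (size_t)(tok & 0x7F) + 1;
                for (size_t i = 0; i < len; i++)
                    dst[op++] = src[ip++];
            } else {                                /* match */
                size_t len    = ((tok >> 4) & 0x07) + 2;
                size_t offset = (size_t)((tok & 0x0F) << 8 | src[ip++]) + 1;
                for (size_t i = 0; i < len; i++, op++)
                    dst[op] = dst[op - offset];
            }
        }
        return op;
    }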

High compression schemes use much better coding of the data produced by LZ77 parsing and spend more cycles on finding the best parsing of the input. It all started essentially with LZHUF, when someone decided to employ Huffman codes instead of writing values in a fixed number of bits. If you've never heard of LHA/LZH you need your Amiga box confiscated. This approach reached its peak with Deflate — by modern standards it's not the best format (not the fastest, doesn't compress the best, etc.), but it's the standard available everywhere and in any form. Deflate uses custom per-block Huffman codes with their definition stored in compressed form as well, so there's hardly anything to improve there radically. And thus (patent expiration helped greatly too) another form of LZ77-based compression started to bloom — LZA (using modelling and arithmetic coding on LZ77 parsing results). The current favourite LZMA (and the main RAR compression scheme) uses this approach too, albeit in a very sophisticated form — preprocessors to increase the compression ratio on some kinds of known data, Markov models, you name it.

And here's my rant — leave Deflate alone! It's the JPEG of data compression — old and seemingly not very effective, but ubiquitous, well supported and still with some improvement potential (as demonstrated by e.g. 7-zip and zopfli). I'd hate to have as many compression schemes to support as there are video codecs. Deflate and LZMA are enough for now and I doubt anything significantly more effective will appear soon. Work on something lossy — like H.265 encoder optimisations — instead.

Some Travel Notes

Monday, May 4th, 2015

So I've finally visited the disunited state of Austria-Hungary and can share some impressions with those who like to read my travel notes (all zero people).

First, I'd like to talk about the rail magazines present in InterCity or express trains in different countries. The ones I know are issued monthly and have national peculiarities (for starters, they are written in the national language). The one from Deutsche Bahn (German railways) covers a lot of different topics — culture, travel, a short story or an excerpt from one, DB plans, a kids' corner etc. The ÖBB (Austrian railways) one is mostly dedicated to advertising Austria for tourists (with maybe a bit or two about neighbouring resorts to visit). The TGV magazine (obviously French) is something in between (not all advertisements but not much serious stuff either), plus advertisements for night clubs. Yet it's the only one of the three that features a map of IC and TGV routes. And the best one is of course Kupe from SJ (Swedish railways). It has articles on various topics and it also includes things close to my heart: a full map of Swedish railways (I need to travel more there!), a description of the SJ fleet (I like to ride all those kinds of trains, plus Inlandsbanan's Y1, SL X60 and X10, and I definitely need to go to Lennakatten again!) and the most important thing — a page where a locomotive driver (it was Peter, now it's Jenny) answers railway-related questions (e.g. what's the difference between trains like X2 and X40, what's the longest route they have to travel, why the train sometimes goes slowly etc.). Anyway, back to the actual travel.

For the Hungarian part I visited Budapest. If you ignore the river, the buildings in the centre and the people, it looks and feels like Kharkiv. The same neglected buildings (often in the same architectural style), the same neglected streets. The transport is verily the same — Tatra trams, Ikarus buses, even the underground rolling stock is the same and even painted the same! Heck, even most of the people I talked with there were from Kharkiv. And their suburban rail lines (like H5, H6 or H8/H9) are as shaky as Ukrainian roads.

Also, as I'm, to put it politically correctly, a fat cripple, I really appreciated how the lines are connected there — you often have to cross a road or use an underground pass without any elevators. Tram routes are so well designed that they simply end somewhere in the middle of the street with no loop to turn around. And the airport reminds me of Kharkiv too — it's connected only by a bus (on a Ukrainian-grade road), and they check your documents thoroughly. The only difference is that at Kharkiv airport I never had to take off my shoes at the security check. At least after visiting it I have no desire to go back to Ukraine (not that I had one before…).

The Austrian part is represented by Innsbruck. It's a stereotypical town in the Austrian Alps. The transport system is rather strange — trams have numbers like 1, 3, 6 and STB while buses have numbers like D, H, LK, O or TS. For skiers there are Alps with funiculars all around the town, for idiots who believe that fake should cost more than real there are tours to Swarovski, and for me there was a museum of local rail lines (meaning both local trams and railways in different parts of Tirol, including Italy). The museum ticket also entitles you to a ride around the town on a museum tram. While the museum itself is small (only two rooms, mostly photos and plans), it also has a depot full of museum trams from probably the 1920s to the 1970s (that feeling when you see a DÜWAG GT6 only in a museum while they are still common here). Two tram lines (6 and STB) go into the mountains, at least the STB being single-track there with passing loops at some stations (and trams take the left track there, like on proper railways). One of those stations surprised me by having an emergency broom tied to a pole.

It's also worth noting that there are two rivers flowing through Innsbruck — the Inn, obviously, and the Sill. I don't care what that means for them, I know what it means for me — “sill” is salt-water herring in Swedish and that's what I was thinking about.

Overall, Innsbruck looked nice and a bit like Bavaria; I honestly expected it to be worse (mostly because of the Austrians I know). And understanding German is much easier than understanding Hungarian, unless you were born Hungarian. It's worth visiting again sometime.

NihAV — A New Approach to Multimedia Pt. 5

Saturday, April 25th, 2015

Structures and functions

The problem with structures in libav* is that they quite often contain a lot of useless information and easily break the ABI when someone needs to add yet another crucial field like grandmother's birthday. My idea for solving some of those problems was adding side data — something that is passed along with the main data (e.g. a packet) and that decoders don't have to care about. It would be even better to make it more generic, so you don't have to care about enums for it either. For instance, most codecs don't have to care about broadcast-grade metadata (but some containers and codecs like ATSC A/52 provide a lot of it) or stupid DVD shit (pan&scan anyone?). So if a demuxer or decoder wants to provide it — fine, just don't clutter existing structures with it; add it to the metadata, and if a consumer (encoder/muxer/application) cares it can check whether such non-standard information is present and use it.

That's the general approach I want, quite similar to the FCC certification rule: producers (any code that outputs data) can attach any kind of additional data, but consumers (code that takes that data as input) do not have to care about it and can ignore it freely. It's easy to mark some options as essential (like PNG chunks — they are self-marked so that you can distinguish chunks that may be ignored from those that must be handled in any case) to ensure that such an option won't be silently ignored and the input handler can error out when it doesn't understand it.
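
A minimal sketch of that rule in code (all the names here are made up, nothing from an existing library): side data is just a list of key/value entries, and the only thing a consumer checks is whether an entry it doesn't recognise is marked as essential:

    #include <stddef.h>
    #include <stdio.h>
    #include <string.h>

    typedef struct SideDataEntry {
        const char *key;        /* e.g. "pan_scan", "broadcast_metadata" */
        const void *value;
        size_t      size;
        int         essential;  /* consumer must understand this entry or fail */
    } SideDataEntry;

    /* Consumer side: ignore unknown entries unless they are essential. */
    int check_side_data(const SideDataEntry *entries, size_t count,
                        const char *const *known, size_t known_count)
    {
        for (size_t i = 0; i < count; i++) {
            int understood = 0;
            for (size_t j = 0; j < known_count; j++)
                if (!strcmp(entries[i].key, known[j]))
                    understood = 1;
            if (!understood && entries[i].essential) {
                fprintf(stderr, "cannot handle essential side data '%s'\n",
                        entries[i].key);
                return -1;  /* error out, as with unknown critical PNG chunks */
            }
        }
        return 0;           /* everything unknown was safely ignorable */
    }

Producers can keep attaching whatever entries they like without touching a single structure definition.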

As for proper function calls — Luca has described them quite well here (pity no one reads his blog).

NihAV — A New Approach to Multimedia Pt. 4

Friday, April 24th, 2015

On colourspaces and such

I think the current situation with pixel formats is brain-damaged as well. You have a list of pixel formats longer than two arms, and yet it's insufficient for many use cases (e.g. Canopus HQX needs 12-bit YUVA422 but no such format is supported, so 16-bit has to be used instead, or ProRes with an 8- or 16-bit alpha channel and 10-bit YUV). In this case it's much better to have a pixel format descriptor with all the essential properties covered and all the exotic stuff (e.g. Bayer to RGB conversion coefficients) in options. Why introduce a dozen IDs for packed raw formats when you can describe them in a uniform way (i.e. read this as big/little-endian, use these shifts and masks to extract the components etc.)? Even if you need to convert YUV with different subsampling for the chroma planes (which can happen in JPEG) into some special packed 10-bit RGB format, you can simply pass those pixel format descriptors to the library and it will handle them despite encountering such formats for the first time.
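
For illustration, such a descriptor could look roughly like this (all the field names are invented for this sketch): a handful of per-component properties plus chroma subsampling is enough to describe even the formats nobody bothered to add an enum value for:

    #include <stdint.h>

    typedef struct PixComponent {
        uint8_t  plane;        /* which plane the component lives in */
        uint8_t  depth;        /* bits per component, e.g. 10 or 12 */
        uint8_t  shift;        /* bit offset inside the packed element */
        uint32_t mask;         /* mask to extract it after shifting */
    } PixComponent;

    typedef struct PixFormaton {
        uint8_t      model;         /* 0 = RGB, 1 = YUV, 2 = Bayer, ... */
        uint8_t      num_comps;     /* Y/U/V/A, R/G/B/A and so on */
        uint8_t      be;            /* packed elements are read big-endian */
        uint8_t      elem_size;     /* bytes per packed element, 0 = bit-packed */
        uint8_t      log2_chroma_w; /* horizontal chroma subsampling */
        uint8_t      log2_chroma_h; /* vertical chroma subsampling */
        PixComponent comp[4];
        /* exotic properties (Bayer->RGB coefficients etc.) go into options */
    } PixFormaton;

    /* 12-bit YUVA 4:2:2 (planar, 2 bytes per sample) described directly
     * instead of waiting for somebody to add a dedicated format ID for it */
    static const PixFormaton yuva422p12 = {
        .model = 1, .num_comps = 4, .be = 0, .elem_size = 2,
        .log2_chroma_w = 1, .log2_chroma_h = 0,
        .comp = {
            { .plane = 0, .depth = 12, .shift = 0, .mask = 0xFFF },
            { .plane = 1, .depth = 12, .shift = 0, .mask = 0xFFF },
            { .plane = 2, .depth = 12, .shift = 0, .mask = 0xFFF },
            { .plane = 3, .depth = 12, .shift = 0, .mask = 0xFFF },
        },
    };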

P.S. I actually wrote some test code to demonstrate that idea but no one got interested in it.

NihAV — A New Approach to Multimedia Pt. 3

Friday, April 24th, 2015

More on codecs handling

First of all, people are often AVI-centric and decide that you can always use a 4-character code to identify a codec. Well, technically it's true because there are significantly fewer than 4 billion codecs in existence (I hope). The problem is the uneven mapping — MPEG containers use integers for codec IDs, AVI uses a 4-character code for video and a 2-byte integer for audio, MOV uses a 4-character code for both audio and video, Matroska uses long strings like V_MPEG4/MS/V3, etc. So in any case you have the problem of mapping codecs found by demuxers to internal decoders. In libavcodec it's handled by an insane enumeration of codec IDs, and I've mentioned in part 2 that I'm not a fan of such an approach.

So what do I suggest instead? A global registry of codec names in string form. And splitting out the media information database explicitly. After all, why not provide some codec information even if we cannot support it? Less effort when you add a new decoder, and you can query some information about a codec even if it's not supported. The demuxer maps its internal ID to a codec name (if it can), the codec database can be queried about that codec at any time to see what information is known about it, and a decoder can be requested for that codec as well.

Here’s an example:

  1. Bink demuxer encounters KB2g;
  2. It reports the codec name binkvideo2;
  3. (optional) From database one can retrieve its name — “Bink Video 2”;
  4. A decoder for binkvideo2 is requested, but that request fails because no one has bothered to write such a decoder;
  5. Or a decoder implemented by a special plugin that calls TotallyRADVideo.dll is called.

Just replace the enum with a string and you get better flexibility, and only VideoLAN won't like it.
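
A small sketch of how that lookup chain could look (all names here are hypothetical): the demuxer only deals with the string, the codec database is a separate lookup, and an unsupported codec simply means the decoder request returns nothing:

    #include <stddef.h>
    #include <stdio.h>
    #include <string.h>

    typedef struct CodecInfo {
        const char *name;       /* global string identifier */
        const char *long_name;  /* human-readable description */
    } CodecInfo;

    static const CodecInfo codec_db[] = {
        { "binkvideo2", "Bink Video 2" },
        /* ...the database can list codecs nobody has a decoder for... */
    };

    const CodecInfo *codec_db_query(const char *name)
    {
        for (size_t i = 0; i < sizeof(codec_db) / sizeof(codec_db[0]); i++)
            if (!strcmp(codec_db[i].name, name))
                return &codec_db[i];
        return NULL;
    }

    int main(void)
    {
        /* 1-2. the Bink demuxer saw KB2g and reported "binkvideo2" */
        const char *name = "binkvideo2";

        /* 3. optional: ask the database what it is */
        const CodecInfo *info = codec_db_query(name);
        printf("codec: %s\n", info ? info->long_name : "unknown");

        /* 4-5. request a decoder; a plugin may register one, or we get NULL */
        /* Decoder *dec = decoder_registry_find(name); */
        return 0;
    }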

NihAV — A New Approach to Multimedia Pt. 2

Thursday, April 23rd, 2015

Common design principles

I've been participating in FFmpeg and then Libav development for about ten years and I've touched many parts of its codebase except libavfilter and libavresample, so I know what I dislike in its design.

Enumerations. Maybe people like them, but I think it's much better to have a list of string identifiers instead. You still specify a codec, format or protocol by name on the command line, so why should the code have that bulky and incompatible enumeration? It would be more convenient for the library user to work with string identifiers — you try to find a format handler for the given name, and if you don't have it or its support is disabled then no luck (of course VideoLAN prefers enums but that's their problem).

Large pointless structures. AVCodecContext and AVFrame are good examples of that (especially the old versions). They lug around many members that are applicable only to a very limited subset of video codecs and nothing else. A much better approach IMO would be to have substructures with the minimal information needed for all audio/video/subtitle data (both in the frame and the context) and to put the rest into a dictionary (maybe as subobjects, like motion information or rate control structures).

API variations. The current approach is to shoehorn everything into a specific structure. My opinion is that public functions should take as flexible (or simple) input as possible and do the same with the output. For example, why have avcodec_decode_video2(), avcodec_decode_audio4() and avcodec_decode_subtitle2() if a single function is enough? You feed in input bytes and you obtain output bytes — no matter what you actually do (encode, decode, filter or pass through). Anything optional should be passed as optional — in a dictionary, for example.
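
For illustration, a single entry point along those lines might look like this (hypothetical names, not a proposal of a concrete API); even a trivial pass-through handler fits the same signature as a decoder or encoder would:

    #include <stddef.h>
    #include <stdint.h>
    #include <string.h>

    typedef struct Dictionary Dictionary;   /* opaque key/value options */

    /* One entry point instead of avcodec_decode_video2()/audio4()/subtitle2():
     * feed input bytes, get output bytes; anything optional goes via opts.
     * Returns the number of bytes written to out, or -1 on error. */
    typedef ptrdiff_t (*process_func)(void *ctx,
                                      const uint8_t *in,  size_t in_size,
                                      uint8_t       *out, size_t out_size,
                                      Dictionary    *opts);

    /* The simplest possible handler, a pass-through "codec", uses the
     * same signature a real decoder, encoder or filter would. */
    static ptrdiff_t passthrough(void *ctx,
                                 const uint8_t *in,  size_t in_size,
                                 uint8_t       *out, size_t out_size,
                                 Dictionary    *opts)
    {
        (void)ctx; (void)opts;
        if (in_size > out_size)
            return -1;
        memcpy(out, in, in_size);
        return (ptrdiff_t)in_size;
    }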

Various stuff. Parsing, probing, timestamp handling. All these things need to be reinvented because it's hard to imagine them being much worse than they are or were a couple of years ago.

I'd also like to have some small building blocks for codecs. In libavcodec many video decoders were forced to be built around MpegEncContext and no one likes that structure (except one guy who even named a video player after it, but then again he doesn't want to disclose his real name…). I prefer to have more independent decoders reusing the same methods somehow (e.g. this codec needs this frame management and this motion compensation). How to implement it (boost::codec::video::block_decoder templating and macros, or function pointers for codec-specific functions like block decoding) is yet to be conceived.
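
Purely as a sketch of the function-pointer variant (nothing here is a settled design): a generic block-decoding loop where only the bitstream-parsing hook is codec-specific, while block output and frame management are shared building blocks:

    #include <stdint.h>

    typedef struct Frame Frame;             /* shared frame management */

    typedef struct BlockDecoder {
        void *priv;                                     /* codec-specific state */
        int  (*decode_block)(void *priv, int mb_x, int mb_y,
                             int16_t coeffs[64]);       /* codec-specific parsing */
        void (*idct_put)(Frame *dst, int mb_x, int mb_y,
                         int16_t coeffs[64]);           /* shared building block */
    } BlockDecoder;

    /* Generic loop over macroblocks: only the bitstream hook differs
     * between codecs, everything else is reused. */
    int decode_picture(BlockDecoder *dec, Frame *dst, int mb_w, int mb_h)
    {
        int16_t coeffs[64];

        for (int mb_y = 0; mb_y < mb_h; mb_y++) {
            for (int mb_x = 0; mb_x < mb_w; mb_x++) {
                if (dec->decode_block(dec->priv, mb_x, mb_y, coeffs) < 0)
                    return -1;
                dec->idct_put(dst, mb_x, mb_y, coeffs);
            }
        }
        return 0;
    }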

To be continued eventually…

NihAV — A New Approach to Multimedia Pt. 1

Thursday, April 23rd, 2015

Foreword or Why?!?!

There are two curses in program design (among many others) — legacy and monolithic design.

Legacy means two things: first, there is such a thing as backward compatibility that you sometimes have to maintain, or the users will complain about broken APIs and ABIs; second, there's code legacy, i.e. decisions taken in the past that are kept for some reason (e.g. no one understands how it works any more). Like the AVI demuxer in libavformat containing special cases for handling specific files that no one has ever seen.

Monolithic design is yet another problem that creeps into many projects with time. I don't know why, but quite often code gathers itself into tangled chunks, and with time those chunks grow and get uglier. Anyone who has worked with FFmpeg might take pleasure in looking at mpegvideo in libavcodec, libswscale and libpostproc (especially in the versions from about 2010).

So there are two ways to deal with it — evolution (slowly change interfaces in the hope of them being better one day, deprecate stuff etc.) and revolution (simply forget it and write new stuff from scratch).

In this and the following posts I'll describe a new framework (or whatever buzzword applies here) called NihAV (Not-Invented-Here Audio-Video). Maybe I'll even implement it for my own needs, and the name should hint at how much I care about existing design decisions.

Decompilation Horror

Saturday, April 18th, 2015

In the old days I found the PackBits (also DxTory) decoding routine monstrous. That Japanese codec had a single decoding function 349549 bytes long (0x1003DFC0–0x1009352D) and that was bad style in my opinion.

Well, what do you know? Recently I've looked at the AMV3 codec. Its encode function is 445048 bytes long (0x10160C20–0x101CD698). And the decode function? 1439210 bytes (0x10001150–0x1016073A)! I've seen many decoders smaller than that function alone.

There's one thing common to those two codecs that might explain this — both DxTory/PackBits and AMV3 are Japanese codecs. It might be their programming practice (no, it's not that bad), but remember that other codecs have crappy code too, for other reasons. And some of them actually look better in compiled form than in source form (hello there, Ad*be and Micro$oft!). Yet I find it somewhat easier to deal with code that doesn't frighten IDA (it refuses to show those functions in graph form because of too many nodes; maybe I'll run the decompiler on the decode function in autumn – because it will keep my apartment warm till spring).