A Tale of Two Failed Projects

June 25th, 2016

Yes, it’s about FFmpeg and Libav again. And yes, I consider them both to be failed projects (it’s not that their basic goal has failed and they provide even less multimedia support than GStreamer with no external libraries used); I mean the state of each project as a living and developing entity.

Even if I mostly emulate Derek nowadays (i.e. I’ve unsubscribed from the FFmpeg and Libav mailing lists, do nothing productive and wait for somebody to reverse engineer the codecs I somewhat care about—that would be ClearVideo, thank you very much), I still peruse development-related resources for both projects (mostly for laughs) and sometimes I see gems like this (it was also pointed at in the comments since it answers some questions I’ve asked before).

First, I’d like to outline how large projects are organised and what to expect in general. So, if you have a large and used project you’ll have at least these components:

  • codebase (normal projects have some code to run after all);
  • developers (to add features, fix bugs and such);
  • users (to annoy developers and, once in a while, to provide a sensible bug report or feature request);
  • infrastructure (hosting for code, means to communicate for developers, maybe even support for users).

Developers can also be divided into three main categories:

  1. core developers—the ones who do the main work on the codebase and do it in a regular manner (they might intersect with the next category too);
  2. corporate developers—the ones who do work mostly on behalf of their companies (e.g. add a feature they need internally so they don’t have to maintain it themselves);
  3. contributors—developers who add some feature or provide some bugfix because they needed it themselves; they do it irregularly or even just once (again, they might intersect with the previous category).

This division is by no means perfect but it shows the main forces behind development: those who treat it as a hobby, those who do it for their benefit (i.e. making money with/from it) and those who use it and just want it to be a bit more suited to their personal needs.

So, with that all in mind let’s look at the projects:

FFmpeg

Codebase. It’s a complete mess. And its git history is even worse. The running joke is that nobody cares what a piece of code does—it’s an FFeature so it must be kept at whatever cost (that’s how you get duplicated decoders, demuxers and encoders; an outstanding example would be the libutvideo wrappers—refer to the ffmpeg-devel mailing list for the details).

Developers. Because of the merging policy (which is likely to be codified soon—see this document again) many developers of FFmpeg code are not FFmpeg developers. And yet they are dictating the API to be used in FFmpeg. The first example also involves me: I proposed side data for packets in Libav, FFmpeg hesitated for a bit yet included it with such a flattering message; most of the other examples involve Anton’s work, from introducing refcounted buffers to splitting codec parameters into a separate structure—in any case FFmpeg simply takes it and converts its code to comply with the new practice (even if it has to include some horrible hacks). If that doesn’t cry out loud “a failed project” I don’t know what does.

Also (even if I’m stepping onto a minefield) some FFmpeg developers are completely unfit for collective work because of their personal qualities. People may make jokes about providing the full console output of an ffmpeg command but it’s not Carl who’s the main problem in FFmpeg (yes, people who didn’t work on MPlayer might think otherwise; I still believe he’d be a decent leader for FFmpeg—mostly because he doesn’t focus just on the technical side and he’s unlikely to be treated as a technical god who can’t make any mistake or write less than perfect code). Here it’s more about Michael and Clément—the former never really understood what being a leader really is or what resigning from leadership means (anyone disagreeing, please ban yourself from a mailing list of your choice for 24 hours), the latter does not understand people at all (neither does Michael)—I’m not going to paste the link to the same document for the third time, I’ll simply quote the relevant part:

Any Libav developer is of course welcome anytime to contribute directly to the
FFmpeg tree. Of course, we fully understand and are forced to accept that very
few Libav developers are interested in doing so, but we still want to recognize
their work.

Here’s an excerpt from Michael’s mail:

> Don’t you think you should remove Diego, Måns, Kostya, … as well?

They didnt ask me to remove them, they didnt remove themselfs even
though they could, they didnt post a patch to remove themselfs.
No contributor said that he contacted them and they no longer maintain
the code they are listed for. (or i missed that)

Well, if it’s hard to realize that Libav developers don’t want to contribute to FFmpeg and don’t want to have anything to do with it even though it’s been over five years, then you really have a problem. And I’ve already expressed my thoughts on reuniting both projects.

Users. You know, there’s a difference between catering to your users and selling out completely (to put it mildly). When you see some changes being done in the interests of some third party, often without mentioning it, that looks suspicious. I’m not against making money off your work, but when the fact is not even mentioned it looks strange; when you have a decoder with the copyright assigned to some company it’s fine, but when you have fixes for files nobody has seen, or FFv1 features added because it was all paid for by somebody (see here, slide 12), it looks not completely honest even if there’s nothing wrong with it.

Infrastructure. From what I understood, FFmpeg services are now hosted on various boxes with no plan or idea (i.e. if somebody could provide a box for something, they took it) and there’s no system administrator for these boxes. Again, as I understand it, they were kicked out of Hungary for some reason, and even though they got a free server and hosting in Bulgaria they cannot use that box properly because there’s nobody to set it up and maintain it afterwards. Sounds like a fail to me.

Libav

This project has failed for different reasons, but failed nevertheless.

Codebase. While it’s mostly fine, sadly new features hardly ever come in. Just two examples: there have been talks about replacing libswscale for ages—two years ago a new design was started (and went nowhere), then I offered my own design with PoC (yes, piece of that) code to test it (that’s how NAScale was born), people worked a bit on integrating it into Libav and that’s all—nothing has happened yet. The second example is the bitstream reader replacement—since its submission in April nothing has come out of it, as all traction was lost in bikeshedding. Is that failure or what?

Developers. Here we have two problems—some FFmpeg folks and some core developers. I’ve written about the former before so let’s talk about the latter. Surprisingly or not, there are counterparts to the Austrian FFmpeg developers in Libav. Where in FFmpeg you have Carl Eugen, in Libav there’s Diego, and I guess many have suffered from his perfectionism (in the form of proper formatting). And instead of Michael there’s Anton. While he is not that leadery in the general sense, he’s the one introducing big API changes that are hardly discussed beforehand. And an even worse thing—he tries to make all nontrivial code go through him; QSV support is a good example: Maxym Dmytrychenko had submitted initial support but it was not deemed good enough, so Luca Barbato had to rework it into proper form. And what do you know? It turned out to be not good enough for Anton either, so he worked on it himself with the result not being much different from Luca’s. And since nothing is being done about that, I consider it to be a failure.

Users. Sadly, there don’t seem to be that many of them, which is a fail. On the other hoof, they don’t need to deal with distros and Baidu and that’s a blessing by itself. Though there is still an issue with FFmpeg users who bother (ex-)developers about features present in FFmpeg but not in Libav (or present in a different form), like Blackmagic card support or the prores_ks encoder (hint: there’s no encoder with such a name in Libav and it’s my personal pleasure to ignore mails about it).

Infrastructure. From what I heard thanks to Attila and Janne everything is working fine.


Well, maybe I should continue with the Actimagine VX codec at last and forget about multimedia outside work matters afterwards (insert the obvious joke about this not hurting NihAV development at all).

A Quick Look on Perseus

June 21st, 2016

So, unlike those breakthrough codecs everybody talks about (I mean RMHD and ORBX.js), V-Nova Perseus was delivered (but what do you expect from a codec announced on the first of April?) and is available in some Android app. So I’ve looked at it.

The implementation seems bafflingly simple: there’s a base layer, it gets upscaled 2x and an enhancement is applied to the upscaled image. And those enhancements are essentially quantised differences after a 2×2 Haar transform, plus runs, all coded with context-dependent Huffman codes. If that reminds you of RealVideo—don’t worry, they code those codebook descriptions too, so it’s different.
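
For illustration, here is roughly how I understand that enhancement step, as a C sketch—the function names, the coefficient layout and the Haar normalisation are my guesses, not anything taken from the actual decoder:

#include <stdint.h>

/* A sketch of how I read the Perseus enhancement step; names, coefficient
 * layout and normalisation are my assumptions. */
static uint8_t clip8(int v) { return v < 0 ? 0 : v > 255 ? 255 : (uint8_t)v; }

/* Upscale the base layer 2x (nearest neighbour for simplicity) and add the
 * enhancement: four dequantised 2x2 Haar coefficients per output 2x2 block. */
static void apply_enhancement(const uint8_t *base, int bw, int bh, int bstride,
                              const int16_t *coef, /* 4 per block: LL HL LH HH */
                              uint8_t *dst, int dstride)
{
    for (int y = 0; y < bh; y++) {
        for (int x = 0; x < bw; x++) {
            int p  = base[y * bstride + x];
            int ll = coef[0], hl = coef[1], lh = coef[2], hh = coef[3];
            coef += 4;
            /* inverse 2x2 Haar of the enhancement deltas */
            int d00 = (ll + hl + lh + hh) >> 2;
            int d01 = (ll - hl + lh - hh) >> 2;
            int d10 = (ll + hl - lh - hh) >> 2;
            int d11 = (ll - hl - lh + hh) >> 2;
            uint8_t *out = dst + 2 * y * dstride + 2 * x;
            out[0]           = clip8(p + d00);
            out[1]           = clip8(p + d01);
            out[dstride]     = clip8(p + d10);
            out[dstride + 1] = clip8(p + d11);
        }
    }
}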

I don’t know if it really works as well as promised (sorry, marketed) but it’s an interesting approach and it introduces some variety into the world of codecs that look alike—mostly because they all use the same principles as the standard video codec with some small enhancements or building blocks replaced with functional analogues; yes, I completely forgot about Daala, please remind me about it when they settle on a final design—it might be the codec of choice for GNU HURD NG by then too.

On H.264 Coding Schemes Names

June 3rd, 2016

Continuing the theme set by the previous post, let’s talk more about confusing names introduced by H.264. I mean CAVLC and CABAC.

CAVLC stands for Context-based Adaptive Variable Length Coding. While technically true, because it employs variable-length codes and the code set is selected based on context, it’s nothing special (and I’ve not spotted anything there that would make it “adaptive”). Again, it’s a trivial thing exercised less in the past because there was less ROM for codebooks. The idea of “let’s select the codebook depending on top and/or left decoded values” is too trivial to get its own name IMO.
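
To show just how trivial it is, here’s a sketch of that kind of table selection—roughly what H.264 does for coeff_token if I remember correctly, so treat the thresholds as approximate:

/* Select a VLC table from the top/left context. nA/nB are the non-zero
 * coefficient counts of the left and top neighbour blocks, -1 if a
 * neighbour is not available. */
static int select_vlc_table(int nA, int nB)
{
    int nC;
    if (nA >= 0 && nB >= 0)      nC = (nA + nB + 1) >> 1;
    else if (nA >= 0)            nC = nA;
    else if (nB >= 0)            nC = nB;
    else                         nC = 0;

    if (nC < 2)  return 0;  /* codebook tuned for few coefficients */
    if (nC < 4)  return 1;
    if (nC < 8)  return 2;
    return 3;               /* "many coefficients": fixed-length codes */
}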

CABAC stands for Context-based Adaptive Binary Arithmetic Coding and the name is partly stupid and partly misleading. But before I explain why I want to present some history and terminology.

Arithmetic coding was developed in the late sixties to early seventies but is mostly known from the work of Rissanen and Langdon that resulted in many IBM patents. The idea is that you can assign probabilities to various symbols, send them to the coder, and the coding result is a long fraction belonging to the range obtained by multiplying the ranges in sequence. I.e. if we have probabilities for A, B and C as ranges [0; 1/3), [1/3; 2/3) and [2/3; 1) then AB is coded in the [1/9; 2/9) range and BA is coded in the [1/3; 4/9) range. It’s the ideal coding method since it codes probabilities in the minimum possible number of bits (unless you remember it’s the real world and we don’t have infinite-precision arithmetic; still, the losses are very small and there’s no better coding method).
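
Here’s a toy version of that narrowing in C, using the A/B/C ranges above (floating point for clarity, which is exactly the luxury a real coder doesn’t have):

#include <stdio.h>

/* Toy interval narrowing for the alphabet above: A=[0,1/3), B=[1/3,2/3),
 * C=[2/3,1). Any number from the final interval identifies the message. */
static const double cum[4] = { 0.0, 1.0 / 3, 2.0 / 3, 1.0 };

static void encode(const int *syms, int n, double *lo, double *hi)
{
    *lo = 0.0;
    *hi = 1.0;
    for (int i = 0; i < n; i++) {
        double width = *hi - *lo;
        *hi = *lo + width * cum[syms[i] + 1];
        *lo = *lo + width * cum[syms[i]];
    }
}

int main(void)
{
    double lo, hi;
    int ab[2] = { 0, 1 }, ba[2] = { 1, 0 };   /* 0=A, 1=B, 2=C */
    encode(ab, 2, &lo, &hi);
    printf("AB -> [%f, %f)\n", lo, hi);       /* [1/9, 2/9) */
    encode(ba, 2, &lo, &hi);
    printf("BA -> [%f, %f)\n", lo, hi);       /* [1/3, 4/9) */
    return 0;
}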

And in 1979 G. N. N. Martin (no, not the writer known for Tuf Voyaging) introduced range coding, which is absolutely the same thing except that the (de)coder maintains low and range values instead of the low and high values of a conventional arithmetic coder (hence the name). Since it was kinda not covered by patents it gained more popularity over the years.
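
And the same toy rewritten range-coder style, to show that the only real difference is what gets stored (integer renormalisation and carry handling, where the actual work goes, are left out):

#include <stdio.h>

/* Same narrowing as above, keeping (low, range) instead of (low, high). */
static const double cum2[4] = { 0.0, 1.0 / 3, 2.0 / 3, 1.0 }; /* A, B, C */

static void encode_range(const int *syms, int n, double *low, double *range)
{
    *low = 0.0;
    *range = 1.0;
    for (int i = 0; i < n; i++) {
        *low   += *range * cum2[syms[i]];                /* skip lower symbols */
        *range *= cum2[syms[i] + 1] - cum2[syms[i]];     /* scale by P(symbol) */
    }
}

int main(void)
{
    double low, range;
    int ba[2] = { 1, 0 };                               /* 0=A, 1=B, 2=C */
    encode_range(ba, 2, &low, &range);
    printf("BA -> [%f, %f)\n", low, low + range);       /* same [1/3, 4/9) */
    return 0;
}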

And because dealing with arbitrary probabilities usually involves division by an arbitrary integer (and maybe increased coder precision), further improvements kept sacrificing efficiency for speed until it boiled down to coding just two symbols and building more elaborate models to code input that takes more than one bit. The arithmetic mode in JPEG seems to simply feed bits from Huffman codes and such to the Q-coder (patented by IBM and thus extremely popular in the wild), which squeezes a bit more entropy out of them. Then you create an advanced version (the MQ-coder), push it into JPEG 2000, and soon binary coding is popular in image and video coding.

So, CABAC is:

  1. Context-based — yes, static coding would be a tad more effective than Huffman coding applied to bits (hint: it gives no savings). The problem is that it’s the first step of the classical scheme: modelling and providing a probability to the entropy coder;
  2. Adaptive — see above;
  3. Binary — true (just remember it codes not bits but most and least probable symbols);
  4. Arithmetic — actually it uses range coding;
  5. Coding — nothing to argue with here.

In general, the naming is a lot like the USSR, which was hardly a union, probably not soviet (whatever that word means—the literal meaning is “belonging to the councils”), whose republics were just provinces (or local despotisms), but which was more or less socialistic (to the point that its ideology can be called international socialism, and it was founded by the SDAPR(B) too).

And I’d like to point out that CABAC should refer to the whole process of binarisation+context selection plus coding the result, not just the exact implementation used in ITU H.264 and ITU H.EVC (even if it’s called CBAC in AVS and “we have completely different coding” in VPx). And if you want an example of context-based adaptive binary non-arithmetic coding look at ELS used in G2M2 (and if you drop binary coding then you have examples in every other advanced lossless audio codec).
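
To make it clearer what I mean by “the whole process”, here’s a small sketch: unary binarisation, a context per bin position and an adaptive model; the “coder” at the end merely counts ideal bits, while a real one would narrow a low/range pair as described above. All the names are mine:

#include <math.h>
#include <stdio.h>

/* One adaptive binary context: a simple counter-based probability estimate. */
typedef struct {
    int zeros, ones;
} Ctx;

static void code_bit(Ctx *c, int bit, double *bits)
{
    double p1 = (c->ones + 1.0) / (c->zeros + c->ones + 2.0); /* P(bit = 1) */
    *bits += -log2(bit ? p1 : 1.0 - p1);   /* ideal cost; a real coder would
                                              narrow its low/range pair here */
    if (bit) c->ones++; else c->zeros++;   /* adaptation */
}

/* Binarise a small value as unary and code it, selecting a context per bin
 * position (the "context-based adaptive binary" part; the actual entropy
 * coder behind it is interchangeable). */
static void code_value(Ctx ctx[4], unsigned v, double *bits)
{
    for (unsigned i = 0; i < v; i++)
        code_bit(&ctx[i < 3 ? i : 3], 1, bits);  /* "one more" bins */
    code_bit(&ctx[v < 3 ? v : 3], 0, bits);      /* terminator bin  */
}

int main(void)
{
    Ctx ctx[4] = {{ 0, 0 }};
    double bits = 0.0;
    unsigned data[8] = { 0, 1, 0, 0, 2, 0, 1, 0 };
    for (int i = 0; i < 8; i++)
        code_value(ctx, data[i], &bits);
    printf("ideal size: %.2f bits\n", bits);
    return 0;
}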

On Variable-Length Codes Names

May 31st, 2016

After seeing the recent commit in Libav this rant simply wrote itself.

People, Solomon Wolf Golomb was a genius whose work influenced various areas of science (I’ve read about his work in Martin Gardner’s books plus some of his papers) but please stop attributing to him stuff he did not invent. I’m talking about universal variable-length codes for integers (or Xine for short).

He introduced (in the late sixties) a specific kind of Xine for optimal coding of integers with certain distributions (I’ve recommended reading the paper introducing them before; it’s awesome and I wish more papers were written like that one). Those codes have a parameter k that is the distribution parameter and is also used to split the code into two parts—a unary prefix coding N/k and log2(k) bits coding N%k (for some values it’s rounded down, for others rounded up). Later Robert Rice had a similar idea and independently introduced codes that were Golomb codes with parameter 2^k (and thus they’re often called Rice codes, and they’re used more because they are easier to manipulate on a computer). And that’s all—there are no other Golomb codes.
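
Here’s a sketch of the actual Golomb code as described above, with the bit output as a plain string for clarity (the Golomb parameter is called m in the code so it doesn’t clash with the Rice exponent k, and Rice codes fall out as the m = 2^k case):

#include <stdio.h>
#include <string.h>

/* Append 'nbits' bits of 'val' (MSB first) to a string of '0'/'1' chars. */
static void put_bits(char *out, unsigned val, int nbits)
{
    size_t len = strlen(out);
    for (int i = nbits - 1; i >= 0; i--)
        out[len++] = ((val >> i) & 1) ? '1' : '0';
    out[len] = 0;
}

/* Golomb code with parameter m: unary quotient N/m, then N%m in
 * floor(log2 m) or ceil(log2 m) bits (truncated binary). */
static void golomb(char *out, unsigned n, unsigned m)
{
    unsigned q = n / m, r = n % m;
    int b = 0;
    while ((1u << b) < m) b++;               /* b = ceil(log2 m) */
    for (unsigned i = 0; i < q; i++)
        put_bits(out, 1, 1);                 /* unary prefix ...    */
    put_bits(out, 0, 1);                     /* ... with terminator */
    if (r < (1u << b) - m)
        put_bits(out, r, b - 1);             /* short remainders: b-1 bits */
    else
        put_bits(out, r + (1u << b) - m, b); /* the rest: b bits */
}

int main(void)
{
    char buf[64];
    /* Rice codes are simply Golomb codes with m = 2^k */
    for (unsigned n = 0; n < 6; n++) {
        buf[0] = 0; golomb(buf, n, 3);
        printf("golomb(%u, m=3) = %s\n", n, buf);
        buf[0] = 0; golomb(buf, n, 4);
        printf("rice  (%u, k=2) = %s\n", n, buf);
    }
    return 0;
}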

Yet thanks to ITU H.264 standard (aka GNUMPEG-4 AVC) we have exp-Golomb codes and interleaved exp-Golomb codes. I don’t know who decided on the name but it’s misleading and wrong (but because it’s in the standard people insist on using it; that also shows how much people designing codecs know about general compression methods). Maybe if some other Xines were rediscovered they’d go under equally ridiculous names like geo(metric)-Golomb or norm(al)-Golomb or quad-Golomb or recursive Golomb codes (because people have never heard of Levenstein coding).

Again, back in the seventies Peter Elias proposed a scheme for coding: let’s call the unary code (i.e. the one that codes an integer as a series of one value terminated with the other value, like 000001 or 1111110) the alpha code and the fixed-bit representation of an integer the beta code; then we can arrive at the gamma code that combines both.

So the “exp-Golomb” code is really the Elias gamma code? NOPE! Like with TV, interlacing came first: the actual Elias gamma code is what is incorrectly called “interleaved exp-Golomb” (i.e. first you have a flag bit telling whether the code is over or there are more bits left, then a data bit, then a flag bit again, rinse, repeat). And the progressive version, Elias gamma prime, has a unary prefix, i.e. the alpha code for the length of the second part concatenated with the beta code for that part (I’ve rechecked the original paper, freely available at sci-hub—because the only time I paid for IEEE paper access was when my scientific supervisor sent me with money to pay for IEEE membership renewal for our chair). Then you can construct the delta code that codes an integer value in three parts (the actual bits, their length and the length of the length part) and jump to omega codes that code mostly lengths of the following length part (very meta!).
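
To make the difference concrete, here’s a sketch of both forms with string output again—the prefix form is what H.264 calls “exp-Golomb”, the interleaved form is, by my reading of the paper, the original gamma code:

#include <stdio.h>
#include <string.h>

static void put_bit(char *out, int bit)
{
    size_t len = strlen(out);
    out[len] = bit ? '1' : '0';
    out[len + 1] = 0;
}

/* Prefix form (Elias gamma' / "exp-Golomb"): alpha code for the number of
 * extra bits, then the value itself (beta code, leading 1 included). */
static void gamma_prime(char *out, unsigned v)        /* v >= 1 */
{
    int bits = 0;
    for (unsigned t = v; t > 1; t >>= 1) bits++;      /* bits = floor(log2 v) */
    for (int i = 0; i < bits; i++) put_bit(out, 0);
    for (int i = bits; i >= 0; i--) put_bit(out, (v >> i) & 1);
}

/* Interleaved form (the original Elias gamma): flag bit "more data follows",
 * then a data bit, then a flag bit again, terminated by a 1 flag. */
static void gamma_interleaved(char *out, unsigned v)  /* v >= 1 */
{
    int bits = 0;
    for (unsigned t = v; t > 1; t >>= 1) bits++;
    for (int i = bits - 1; i >= 0; i--) {
        put_bit(out, 0);                /* not finished yet */
        put_bit(out, (v >> i) & 1);     /* next bit after the leading 1 */
    }
    put_bit(out, 1);                    /* the code is over */
}

int main(void)
{
    char a[64], b[64];
    for (unsigned v = 1; v <= 6; v++) {
        a[0] = b[0] = 0;
        gamma_prime(a, v);
        gamma_interleaved(b, v);
        printf("%u: prefix %-8s interleaved %s\n", v, a, b);
    }
    return 0;
}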

So there’s another thing to dislike in the H.264 standard besides all the interlaced modes and the scalable and multi-view coding: forcing wrong terminology on the world (feel free to correct me if there were earlier uses). In the same way there is confusion between arithmetic and range coding, and various binary coders are not free from it either. But that’s a rant for another day.

On Italian Literature

May 28th, 2016

One cannot be called a true reverse engineer unless one has tried (and failed) to RE the Italian literature collection. I’ve finally tried it (and, obviously, failed).

What’s so special about it? Here is Mike’s description. From what I’ve seen, on the first CD the videos occupy 280MB out of maybe 300MB total (and over 200MB of that is a single tutorial video), while the actual library data occupies about five megabytes.

The main library application is written in Visual Basic 4 (the 16-bit version) and it’s not compiled to machine code but rather to P-code, and I’ve failed to find a decompiler for that exact version (32-bit? seems no problem; 16-bit or even 32-bit Visual Basic 3? also no problem; 16-bit VB4? keep searching). There are some utility apps of unknown purpose there written in Borland Delphi (also 16-bit, and I’m pretty sure it was simply Borland Delphi then, no version numbers needed). And while those are in sane machine code (well, 16-bit x86 machine code is hardly sane but manageable) there’s a lot of Delphi cruft compiled in, with TThis and TThat and TOtherThing and such (plus additions in Italian).

Despite the files having the extension .LZ[1-3] I doubt they employ any kind of Lempel-Ziv compression; I’d expect some different dictionary-based scheme (you have an index with all possible words, after all). And it looks like they’ve licensed some DBT thing (obviously it stands for Text DataBase in Italian) from some Italian Institute of Computational Linguistics, and this DBT is responsible for the file formats, but I’m too lazy to RE those half-megabyte .dlls (written in Delphi too) without a decompiler.

A Quick Look on DLI

May 26th, 2016

So yesterday I had a quick look at the DLI image format. It turns out to be somewhat related to video codecs (and JPEG of course): there’s an 8×8 fast integer DCT approximation, quantisation and bit coding of the block. And the bit coding is the most interesting part really—this format employs a binary model with old-school arithmetic coding and context selection for the model; coefficients are coded as, first, an array of coded-coefficient flags (plus a flag for the last coded coefficient), then each non-zero coefficient gets an additional flag to signal whether it’s larger than one, and in that case the rest is coded as a unary code (with the bits coded by the arithmetic coder, of course).
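
Here’s how I understand that block coding, as a structural sketch only—the real format drives an adaptive binary arithmetic decoder with proper context selection, which is replaced here by a dummy bit source, and the names and the sign handling are my guesses:

#include <stdio.h>

/* Structural sketch of the DLI block coding as I read it. */
static const int demo_bits[] = { 1,0, 1,1, 0,0, 1,1,1,0, 0 };
static int bit_pos;

static int decode_bit(int ctx)
{
    (void)ctx;  /* the real decoder would pick an adaptive model by context */
    if (bit_pos >= (int)(sizeof(demo_bits) / sizeof(demo_bits[0])))
        return 0;
    return demo_bits[bit_pos++];
}

static void decode_block(int coef[64])
{
    int coded[64] = { 0 };
    int last = 0;

    /* 1. flags for coded coefficients, plus a "was that the last one?" flag */
    for (int i = 0; i < 64 && !last; i++) {
        coded[i] = decode_bit(0);
        if (coded[i])
            last = decode_bit(1);
    }
    /* 2. each non-zero coefficient: a ">1" flag, then a unary-coded remainder,
     *    then a sign (the sign handling is purely my assumption) */
    for (int i = 0; i < 64; i++) {
        if (!coded[i])
            continue;
        int level = 1;
        if (decode_bit(2)) {
            level = 2;
            while (decode_bit(3))
                level++;
        }
        coef[i] = decode_bit(4) ? -level : level;
    }
}

int main(void)
{
    int coef[64] = { 0 };
    decode_block(coef);
    printf("coef[0] = %d, coef[1] = %d\n", coef[0], coef[1]); /* 1 and 4 here */
    return 0;
}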

And I still don’t like the notion of “let’s turn our video codec’s I-frames into an image codec” (the reverse, “motion <image format>”, is not much better but at least it makes sense for intermediate formats). Image and video codecs have different use cases and required features, but I think I’ve ranted about that once already.

When Old Will Beat New Again?

May 19th, 2016

Since my previous post hasn’t brought me the answers I sought, here’s another philosophical (i.e. no answers again) post on a question that bothers me.

The concept is rather simple: some old tricks and methods become more appealing over time, when other, more competitive methods lose their traction. So I often wonder when those old methods, approaches and tricks will become relevant again.

For instance, quadtree coding was not popular for some time and yet we see it again in codecs, where it handles blocks of smaller sizes inside some coding unit (ITU H.EVC, VP9, AOMv1—you name the codec). There’s a similar story with vector quantisation—it still lives on in some GPU-assisted form and is interesting again.

Now let’s talk about classical arithmetic coding. Over time it was mostly supplanted by some variation of binary coding. But with time binary coding becomes more and more unwieldy, since you have to code bits with different contexts and you often don’t code bits per se but rather the bits of some variable-length code for integers. So I wonder whether classical arithmetic coding may come into use again and bring back saner coding while still being fast enough. Of course one could point me to One Xiphophorus, the company that made the best VP3 encoder, since they’ve found this approach worked fine in Celt and should work fine in Daala (unrelated to them: is FFA1 still a thing?). But really, is the CABAC/boolean coder still the coder of the future or will we see more interesting things from the past? And yes, I’m aware how rANS can be used for faster coding of probabilities and that ANS is used in VP10 experiments. But what about, say, better modelling with, say, order-10 contexts (or ones that take parameters from both neighbouring blocks and blocks higher in the hierarchy into account)?

And another question is not related to my usual stuff but is still quite interesting: will raytracing return again? From what I know, the current way is to have lots of triangles, lots of textures, lots of crazy additional maps and lots of even crazier shaders. I believe it went this way:

  • let’s approximate everything by triangles and draw them;
  • simple colours are not good enough—let’s add textures;
  • not good enough—let’s add shading (like Gouraud or Phong);
  • not good enough—let’s introduce bump maps for better realism;
  • not good enough—let’s introduce light maps;
  • not good enough—let’s introduce computable shaders;
  • still not good enough—let’s render scene once, calculate different parameters from it, create new light/shadow/whatever maps, add them to the scene and rerender again;
  • you know what, it’s still not good enough—let’s …

(I don’t know much about computer graphics since our university course didn’t go much farther than Bresenham’s line algorithm and simple image formats.)

With all this trickery you still haven’t achieved a realistic picture, especially when it comes to dynamic light, shadows and reflections. Yet during all this time there was raytracing, which is simple as hell (and equally slow): you have a scene and for each pixel you simply trace its path until you end up in some light source or simply give up. With the massive parallelism of GPUs and complex shaders it looks to me like switching to raytracing might be easier (sure, there’s the problem of legacy, making all those developers switch from Magma and Vulcan to a new approach etc. etc.) but I still wonder whether it makes sense from a technical point of view, or will in the near future.

And as usual—I hope for answers but I don’t expect to receive any.

Schizophrenia in Open-source Projects

May 14th, 2016

Disclaimer: the word “schizophrenia” is used here as it’s perceived by majority, not to denote a certain psychological condition. Feel free to be offended.

I’ve wasted about a decade working on two multimedia projects (plus a patch or two to unrelated projects) and what I’ve seen there leads me to the conclusion that they both suffer from schizophrenia, albeit in different forms.

FFmpeg

FFmpeg features two forms of schizophrenia—developers and code.

Developer schizophrenia can be seen in how some developers believe they are also Libav developers. Mostly they brag because they’ve sent a patch or two to Libav and can now use it as a free review service. While I dislike Carl Eugen, he’s at least honest and acts according to his beliefs (here, an amended Elenril’s Law fulfilled; in case you’ve forgotten, it says “Every FFmpeg-related discussion ends up mentioning Michael. Or Carl Eugen.”).

Code schizophrenia is more celebrated. The most prominent example is ProRes support—they offer two decoders and two encoders for it. There are two ASF demuxers as well. And two audio resampling libraries. And there are talks about adding a second libswscale (*shudder*). The best part is that if you ask why, it will probably go like this:

— Why do you have feature X in two versions?
— Because Libav has it.
— But why do you take it if you have your own version?
— To make merging Libav codebase easier.
— But why do you need to merge it at all?
— To make merging Libav codebase easier.

Please please tell me I’m wrong and provide proper reasons why FFmpeg keeps merging Libav stuff and keeps several versions of the same feature.

Libav

Here it’s somewhat more interesting—you have developers with a physical multiple personality disorder. Unlike the case in FFmpeg, here you have people who work solely on Libav but as several different people. The most prominent examples are Luca (known as lu_zero and koda on IRC), who is really several instances not always agreeing with each other (and if you subscribe to Lu_zianism then he’s also both Michael and Carl Eugen too; if you don’t believe it—you should, because it annoys him/them). And there’s also Alexandra (aka beta elenril) and Anton (aka sasshka 2.0). And that’s the majority of the core Libav developers anyway.

But at least the project seems to be happier with itself and probably has a dream similar to the Ukrainian dream (which is “bugger off you damned Russians” in case you didn’t know).


It was somewhat fun to watch the fate of the proposed bitstream reader replacement. Alexandra (who is also Top Libav Blogger №2 by the way—simply because she blogs) proposed a new bitstream reader to replace the old horror (which is a good idea), and that new bitstream reader turned out to be faster than the old mess too, and what was the result? If I were British I’d call it sheep-worrying.

Those developers from FFmpeg who believe they should also have a say in the Libav process started to express their opinions. While there was independent benchmarking proving the new implementation is indeed faster (which is a good thing to provide), those benchmarks were also run on a decoder not present in Libav, with badly converted code-reading functions, and that turned out to have some problems because the encoder used (also not in Libav) produced a nonconforming stream and screwed up the multi-threaded decoding benchmarks (that one can be seen as both trollish and arrogant—kinda like judging a Beatles performance from an excerpt sung by your not very talented neighbour).

But mostly it was bikeshedding and asking why it was not using the old get_bits interface. The answer to the latter is simple—because the old reader is built from horrible macros that are used directly in half of the places, so you’d either have to make everything follow that macro design or convert the old UPDATE_CACHE(); LAST_SKIP_BITS(); ... CLOSE_READER(); sequences into saner get_bits(); skip_bits(); calls anyway. And the Libav developers decided it’s better to have a fully new interface and to make it consistent with bytestream reading while at it.
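
For illustration only, here’s a minimal cached bit reader in the function-call style such a conversion boils down to—it is neither the proposed Libav interface nor FFmpeg’s get_bits.h, just a sketch with names of my own:

#include <stdint.h>
#include <stddef.h>

/* A small MSB-aligned bit cache refilled on demand, read via plain functions
 * instead of macros. Reads of 1..32 bits within the buffer are supported. */
typedef struct {
    const uint8_t *buf;
    size_t size, pos;     /* byte position                 */
    uint64_t cache;       /* MSB-aligned bit cache         */
    int bits;             /* number of valid bits in cache */
} BitReader;

static void br_init(BitReader *br, const uint8_t *buf, size_t size)
{
    br->buf = buf; br->size = size; br->pos = 0;
    br->cache = 0; br->bits = 0;
}

static void br_refill(BitReader *br)
{
    while (br->bits <= 56 && br->pos < br->size) {
        br->cache |= (uint64_t)br->buf[br->pos++] << (56 - br->bits);
        br->bits += 8;
    }
}

static unsigned br_get_bits(BitReader *br, int n)   /* 0 < n <= 32 */
{
    br_refill(br);
    unsigned val = (unsigned)(br->cache >> (64 - n));
    br->cache <<= n;
    br->bits   -= n;
    return val;
}

static void br_skip_bits(BitReader *br, int n)
{
    br_get_bits(br, n);
}

int main(void)
{
    static const uint8_t data[4] = { 0xDE, 0xAD, 0xBE, 0xEF };
    BitReader br;
    br_init(&br, data, sizeof(data));
    unsigned a = br_get_bits(&br, 4);   /* 0xD  */
    br_skip_bits(&br, 4);               /* 0xE  */
    unsigned b = br_get_bits(&br, 8);   /* 0xAD */
    return (a == 0xD && b == 0xAD) ? 0 : 1;
}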

So why did people who should have nothing to do with it bikeshed that much? Probably because they know in their hearts that as soon as it hits Libav the work on copying it into FFmpeg starts, and sooner or later it will reside in the FFmpeg codebase, probably alongside the old get_bits.h, with most decoders switched to the new bitstream reader anyway. Why? See the theoretical conversation above. I’d like to know the answer as to why the merges are really done but I guess I’ll get it no sooner than this bitstream reader is accepted into Libav master (i.e. never).

On QuickTime Codecs

May 7th, 2016

The number of interesting codecs is dangerously low, so I’ll probably stop writing about them at all (and that raises the question of whether this blog should be kept alive at all).

So, scraping the bottom of the barrel, I come to QuickTime codecs.

There are two codecs from the standard QuickTime package that are yet to be implemented in opensource: QDesign Music and Apple Pixlet. The former is (obviously) an audio codec with simple tones+noise coding, and I hope to document it soon. The latter is an intermediate codec based on wavelets, so it should not be that hard to RE. The main problem is that I don’t know where to find a decoder (and I’m too lazy to search for one actively). It’s said that the only version of QuickTime able to decode it was the one on Mac OS X Panther (yes, not just when it was still called Mac OS X but also when it was purely PowerPC only). I estimate this codec would be rather simple—on par with SMPTE VC-5 (and probably even without codebooks but rather with generic variable-length codes like in Pear Intermediate Codec and AmateurRes). And PowerPC assembly is not that bad after you get the hang of the rlwinm instruction; I’ve REd most of AIC from a PowerPC binary after all.

And there are some third-party extensions even Compn doesn’t know about, like NewTek SpeedHQ or the Digital Anarchy Microcosm codec. The former is an ordinary DCT-based intermediate codec any koda can RE, the latter is a somewhat funny lossless codec (funny because it uses a range coder just to decode bytes and then uses them in 8- or 16-bit RLE) that is better left to Derek to RE. SheerVideo was documented a long time ago, ZyGo video was just another DiVX, VP3 and Indeo 4 have other decoders etc. etc.

Life is boring.

Update: so there is a more modern Pixlet decoder. I’ve looked at it. There’s per-plane wavelet compression, parametrised Rice codes, everything rather trivial. The only interesting things are the coding of the zeroeth subband (it’s split into the first coefficient, the top row, the left column and all other coefficients, coded with top+left prediction) and the fact they have a subband header with the magic 0xDEADBEEF. Nice touch!
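
For the curious, here’s my reading of that zeroeth-subband reconstruction as a sketch, with the entropy coding (those parametrised Rice codes) replaced by a plain array of residuals and the exact predictor being a guess:

/* My reading of Pixlet's zeroeth-subband reconstruction: the first
 * coefficient is stored as is, the rest of the top row is predicted from the
 * left neighbour, the left column from the top one, and everything else from
 * top and left (the exact predictor, an average here, is my guess). */
static void reconstruct_dc(int *plane, int w, int h, int stride,
                           const int *delta /* w*h prediction residuals */)
{
    plane[0] = delta[0];                                        /* first coefficient */
    for (int x = 1; x < w; x++)                                 /* top row           */
        plane[x] = plane[x - 1] + delta[x];
    for (int y = 1; y < h; y++) {
        plane[y * stride] = plane[(y - 1) * stride] + delta[y * w]; /* left column   */
        for (int x = 1; x < w; x++) {                           /* the rest          */
            int pred = (plane[y * stride + x - 1] + plane[(y - 1) * stride + x]) / 2;
            plane[y * stride + x] = pred + delta[y * w + x];
        }
    }
}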

Life is still boring though.

Some Thoughts on Reuniting

April 28th, 2016

Before I move to the point I’d like to give some historical examples based on countries.

Ukraine

Well, as you remember, in 1917-1918 there were several Ukrainian republics, the most known being the Ukrainian People’s Republic, the West Ukrainian People’s Republic and the Ukrainian Soviet Socialist Republic. There were some other small states, like the anarchist republic, but they are not relevant here.

So, the Ukrainian People’s Republic and the West Ukrainian People’s Republic willingly united in 1919 and that day is a national holiday now (later the country was, obviously, occupied by Soviet Russia, Poland, Romania and Czechoslovakia). But why did that union happen? Because people wanted it and there had been a dream of a united Ukraine for ages.

Germanies

You should’ve learned about it at school (or witnessed it if you’re old enough). Why did the unification happen? Because people from both sides wanted it and the Soviet Union could not prevent it any more.

Moldova and Romania

These countries share a common history, have the same language, and people like the idea of a single country. While unification has not happened yet, it might happen even in this century.

Chinese Republics

Here the situation is funnier. The People’s Republic of China doesn’t recognize the Republic of China, yet they somehow co-exist and probably in the distant future they will be one again. Why? Because the PRC is changing and it’s not what it was during Chairman Mao’s times.

Korea

Here the situation is even funnier. There are two governments who each think they are the only True Upstream Korean state; it’s just that half of it is still occupied. And while there are constant talks about reunification, neither state really wants it. One country has suffered under homebrew Socialism (just look up what ‘juche’ means) for too long, so it will take an enormous amount of time and money to make both parts equal (even funnier if you consider that before the 1960s North Korea was the industrially developed part and South Korea was an underdeveloped agrarian region). The Germanies had it easier (as a person paying Solidaritätszuschlag I know that). So will the reunion ever happen? I wouldn’t bet on it.


And now, to our favourite projects.

From time to time somebody outside the projects or from the FFmpeg side asks about reunification. There are talks about it at VDDs. And yet there are no results. Have you noticed that I mentioned no such talks initiated by Libav? Why? Probably because Libav does not want to merge back. And there you have it—reunification cannot happen peacefully because you don’t have a majority on both sides wanting it.

And that raises two questions: why FFmpeg wants reunification and why Libav doesn’t want it (or as a single question—what prevents it).

It seems that for some reason not clear to me FFmpeg keeps merging all the stuff from Libav (feel free to enlighten me; otherwise FFmpeg developers themselves might forget why and it’ll turn into a tradition), and having both projects together would solve two problems: the need for merges and the lack of skilled developers (that’s always the issue).

What does Libav gain from merging? Relief from constant merges? Unlikely, since they are not being done there. More developers? That’s nice, but the Libav project seems to be happy as it is. A return to the known brand and to distributions? See above and here.

Let’s assume the projects decided to play nice out of nowhere and please the people who want them to reunite. What would happen then? Multiple discussions about the development process (which led to the split in the first place), including but not limited to: the review process (relaxed and not applicable to some people, or mandatory for any change), code standards (especially formatting), what features to have in the united tree (flat history or merges, one native decoder for a certain format or two, use a code snippet the way it was done in FFmpeg or in Libav). And at this stage it will all start to fall apart again.

So there you have it: a clash of different development ideologies and more benefits for one side than the other. Also it’s rather hard to force people to work on a project they don’t like (and now at least they can choose).

And since this discussion cannot avoid certain names, here it is: I believe that Carl Eugen Hoyos deserves to be the next FFmpeg leader. Obviously my opinion doesn’t matter there and I could not convince anybody at VDD’15, but I firmly believe it. He’s the one with passion for the project, he cares about codec support (even for fringe formats), he likes to follow guidelines, he respects Michael and is unlikely to go and ruin what he created. And at VDD he looked kinda like the most responsible adult too, so he can be the face of the project. Again, this is merely my opinion that won’t change anything.

Sincerely yours, NihAV project developer (it’s still vapourware, thanks for asking).