How the codecs should emerge (hint: without .ebuilds)

So it has come to this: some recent events and discussions made me write this post.

How do I imagine the perfect process for new codecs? It's a rather simple model: you have some places where ideas and enthusiasts swarm, and from their work and the selection of the best ideas a new candidate codec is born.

There are such places for all codec types: audio enthusiasts can find testers at Hydrogenaudio, video enthusiasts can talk at Doom9, general and image compression people seem to be present at encode.ru. To a first approximation it works as expected: people propose ideas, test new compression programs, report benchmarks and suggest improvements. What can go wrong there? Just one thing: people making software incompatible with anything else (custom containers or archive formats) and trying to push it on everybody. After you invent a format, make sure it works in some standard environment: for compressors that usually means a single-file compression mode (.tar.xz seems to be more popular than .7z even though they use the same LZMA algorithm); for codecs it should be a standard container (even Matroska would do). And document the format too, properly, instead of the usual "bug off" level.
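To make the algorithm-versus-container point concrete, here is a minimal Python sketch using the standard lzma module: it wraps the very same LZMA payload in the modern .xz container and in the legacy .lzma ("alone") container. The .lzma format merely stands in for the "custom wrapper" case here, since the standard library has no .7z writer; the payload data is made up for illustration.

    # Same compression algorithm, two different containers.
    import lzma

    data = b"some payload worth compressing" * 1000

    xz_blob    = lzma.compress(data, format=lzma.FORMAT_XZ)     # .xz framing
    alone_blob = lzma.compress(data, format=lzma.FORMAT_ALONE)  # legacy .lzma framing

    # The LZMA algorithm is identical, so both round-trip to the same data;
    # only the framing (headers, checksums, metadata) differs.
    assert lzma.decompress(xz_blob) == data
    assert lzma.decompress(alone_blob) == data
    print(len(xz_blob), len(alone_blob))

The point of the sketch: the container is a separate design decision from the compression algorithm, and picking a standard one costs you nothing while making your work usable elsewhere.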

There are standardised codecs that undergo a similar process: various companies or researchers submit their work, a base for a new standard is chosen, and new proposals try to improve it. Then companies start to push their patented shit there, and that's where the system goes wrong (QMF in MPEG Audio Layer III, anyone?). It's no better when some company tries to push its product as a standard without any evaluation (and thus we get the wonderful line of SMPTE VC-x codecs, for instance).

And there’s OggXiph. This is again a community that designs codecs mostly because they can and pushes them mostly because they’re Free™ and OpenSource™ and they mostly suck otherwise: Ogg format is for streaming not good for anything, most people still don’t know that it’s Ogg/FLAC because it was developed outside (and has horrible raw stream format), Speex has no readable specification and easier understood with disassembling the library rather than reading source code, Theora is an outdated enterprise grade code, Opus has its issues (but it’s rather good, one cannot deny that), Daala will probably never happen.

And what do I see in the recent news? The Alliance for Open Media plans to release the first draft of its codec soon, and it is:

  • hosted on baidusource.com;
  • for now, just libvpx with some name changes;
  • everything else about it screams Baidu too.

If it looks like Duck, produces codecs like Duck and has the same source code as Duck, then it probably is DuckOn2Baidu.

At least in the old times there was some competition of ideas in codecs, so one could choose between different codecs giving good results, and in some cases they were available for various ecosystems too (e.g. Indeo was present in AVI and MOV, ClearVideo managed to get into AVI, MOV and RM). Now it's just a foam of lossless codecs that even their authors will have forgotten about by next year, and one or two companies pushing their stuff on everybody. And that makes me sad.

2 Responses to “How the codecs should emerge (hint: without .ebuilds)”

  1. 1 says:

    Tip: repeating the same nonsense about Ogg doesn't make it true.

  2. Kostya says:

    Tip: making statements like that with no proof doesn't make them convincing at all.

    I know the author of the linked article, I know his work, and I know he had first-hand experience with the mentioned formats (including writing demuxers for them); I also have information from other sources that confirms his claims. And yet I should believe some anonymous guy who offered no proof? I'll pass, thanks.