The first time I encountered his work (unknowingly at the time) was when I found out that INSTALL.EXE from a Sierra quest was packed with LZEXE. And a couple of years later, while looking at all the packages of some Mandrake Linux distribution, I encountered a package called ffmpeg that could transcode various newer formats (it was also the time when I got my first DivX 3 movie rip, and the only players I had could play MPEG-1 at best). And I kept encountering his name in random places…
I think there are enough articles written about his work in general, and they give enough material to prove that he’s a rare genius whose works changed different fields of computing. But before I move on to describing his role in FFmpeg, I want to mention that he seems to follow a certain path: most of his projects are related to each other (mostly compression and emulation), and smaller projects turn out to be parts of something bigger that may change the industry (e.g. in 2001 he won the International Obfuscated C Code Contest with a small C compiler, which gave way to TCC, which gave way to QEMU, which along with KQEMU essentially created the modern opensource hypervisor landscape).
Now let’s move to a project of his that takes place somewhere between his 2000 IOCCC winning entry that calculates an FFT and his image compression experiments (BPG and that 2018 IOCCC winning entry).
As the repository history says, Fabrice created FFmpeg in late 2000, but the actual work on it started in the second half of 2001. Even though it was spotted and adopted by MPlayer almost immediately, Fabrice remained the principal developer until October 2001, when other developers took over (we’ll talk about them in the upcoming posts). I also suspect that Juanjo (or pulento), who was active until 2002, may be yet another pseudonym of his (remember, he started the project as Gerard Lantau, probably in order not to get sued). He distanced himself from the project in 2003, with some commits introducing DVD subtitles in 2005 and a single random patch in 2007.
This project of his changed the multimedia landscape both by making a bold move to create something that supported state-of-the-art (at the time) video formats and by creating a common place for opensource developers to share their work. Before that, each player had implemented (or copied) its own decoders for various formats (also, is there anybody not named Mike who remembers the xine-codec-devel mailing list?); afterwards most of them started to rely on libavcodec for their decoding needs.
Having said that, I should move to discussing the less stellar aspects of his work and style. As mentioned above, Fabrice won the International Obfuscated C Code Contest three times, and you need a certain mindset to create code like that, one which creeps into your other work. So while his implementation of FFmpeg was fast, it was not very nice to debug or refactor, especially if you’re not Fabrice (and MPlayer developers made it even worse). The other thing is that he’s not good at selecting successors to run his projects. You may think that the FFmpeg–libav split was an exception, but the TCC community seems to have gone down the same route (it’s just that fewer people care about it), so I rather wonder how QEMU escaped this fate (Update: apparently it did not, but it went without such drama). In either case, the good he’s done to the world outweighs the deficiencies, and I mention them mostly to remind people that no person is perfect.
The terrible organization, duplication, and generally bad quality of FFmpeg’s codebase is why I created OVIA.
The main difference is, I only support lossless codecs.
From a quick glance at it, you intend to support baseline JPEG and AVC, which are lossy. So if you change the description to “only codecs I care about” it would fit the NihAV description as well.
The splendid organization, longevity, robustness and limitless functionality of the FFmpeg codebase are the reasons why it’s still actively developed. Other projects are very rudimentary, limited and toy-like products, so I do not develop on them.
I fear your friend Nicolas George won’t agree with you on either of those points. IMO it’s more of a “too big to die” status.
And of course you’re free to not develop for any project, especially if you don’t care about “I contributed to the project before it became famous” status.
For all of humanity remaining on the planet, it is imperative to work on improving products and extending functionality and quality rather than picking fights and heavily fragmenting projects into very similar solutions, thus wasting precious resources more or less in vain over the long term, as people come and go and less popular projects die too early.
You can never be sure the approach you take is the only right one even if the project is large. That’s why having an alternative or two is good even if it wastes resources: in the long run a different approach may work better. To give an example, from your point of view Ljudevit Gaj wasted resources and you should use a completely different script in your country.
The example is flawed – Gaj was never mainstream at that time, it was more influence/corruption driven – but I see the point you are trying to make.
> and MPlayer developers made it ever worse
In fairness I would claim that quite a few also worked on making it better! 🙂
And any project of significant size has its code quality issues; by the time I worked on it, I would claim it was one of the better ones despite all the flaws.
True, for example Diego came from MPlayer as well. But e.g. mencoder.c is not an inspiring example, and considering that the initial FFmpeg from 2001 looked much nicer than in 2004, I think you know which project is to blame…

QEMU wasn’t spared from the forking syndrome. It was forked as qemu-kvm because Fabrice didn’t want to support virtualization (KQEMU still used emulation for supervisor mode). On the emulation side, Linaro and Xilinx had a substantial number of patches for emulation of OMAP3.
However IBM (Anthony Liguori, who was a KVM developer and a QEMU committer), Red Hat and Linaro independently decided not to fork. IBM and Red Hat steered QEMU more towards a model of maintainer subtrees instead of adding people with commit access, and Linaro joined a couple of years later. Xilinx is not a huge contributor upstream but they still use it a lot.
Ultimately qemu-kvm’s functionality was merged back thanks to work by Red Hat and Siemens, and Linaro is probably the #1 contributor to QEMU these days. The difference probably was that QEMU’s forks were not hostile, as they constantly merged back from upstream.
Thank you very much for the story. Luca told me that actually there was a split in QEMU related to KVM (yet it was rather civil) but I didn’t know the details.