Archive for February, 2010

It was not the codec you’re looking for

Friday, February 12th, 2010

There is only one thing that may taint the joy of REing yet another codec: realizing that most of the samples you want to decode are encoded with other codecs.

While the recent Indeo 5 decoder addition allows playing many files, I found out that I have even more samples encoded with Indeo 4. And even though I have a Bink Video decoder, it looks like I don’t have many samples for it. But there are many other games with custom codecs worth REing.

Yet it’s not as bad as it sounds. M$ Video 1, Cinepak, Smacker and Sierra VMD seem to cover most of the samples I’m interested in. Luckily for me, there are still many codecs left to RE that I have some interest in. Another guy has fulfilled his dream of being able to watch movie trailers in QuickTime format, so there’s almost no new work from him.

P.S. After I’d published that “looking for a job” post, I got many proposals, but for some reason they are mostly for the USA, and some people are asking if I’d consider Australia too (BTW, the answer is no, it’s too warm a place for me and I plainly can’t work in such conditions). Either I want something unrealistic (i.e. a job in Europe) or it’s Murphy’s Law in action.

All codec roads lead to FFmpeg

Saturday, February 6th, 2010

This is written mainly as a response to some flamewars.

All codecs may be divided into two categories: mature codecs and developing codecs. In the first case we have a frozen bitstream format and not many enhancements to the codebase supporting that codec. In the second case we have a codec that may still change its bitstream format and (what is quite important) its encoder features.

FFmpeg itself went that way: from a highly experimental H.26x encoder and decoder to a rather stable set of almost all decoders available around, plus several encoders. Since it has coding rules, conventions and an existing framework, it is a very convenient place to implement decoders: you can reuse a lot of code optimised for many platforms (so you don’t have to care about DCT speed, for example), and users don’t have to worry about adding a new decoding interface or new lines in the configure script, since it’s all handled inside libavcodec. And the “NIH syndrome” also gives a benefit here: you don’t have to worry about additional libraries (and the original codec devs will have their codec specs tested as well).
You know the other advantages of this approach too.
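As an illustration of how little glue such a decoder needs, here is a minimal sketch of a decoder stub as it roughly looked in libavcodec at the time; the “foo” codec, FooContext and the callbacks are purely hypothetical, and the exact field names and signatures differ between libavcodec versions.

/*
 * A minimal sketch of a hypothetical decoder stub for libavcodec of that era.
 * The codec name "foo", FooContext and the callbacks are invented for
 * illustration; exact struct fields and signatures vary between versions.
 */
#include "avcodec.h"

typedef struct FooContext {
    int frame_count; /* whatever per-stream state the decoder needs */
} FooContext;

static av_cold int foo_decode_init(AVCodecContext *avctx)
{
    avctx->pix_fmt = PIX_FMT_YUV420P; /* declare the output pixel format */
    return 0;
}

static int foo_decode_frame(AVCodecContext *avctx, void *data,
                            int *data_size, AVPacket *avpkt)
{
    /* Parse avpkt->data / avpkt->size here, fill the AVFrame passed
     * through 'data' and set *data_size once a picture is ready. */
    *data_size = 0;     /* no picture produced in this stub */
    return avpkt->size; /* report how many input bytes were consumed */
}

AVCodec foo_decoder = {
    .name           = "foo",
    .type           = CODEC_TYPE_VIDEO,
    .id             = CODEC_ID_NONE,   /* a real codec gets its own id */
    .priv_data_size = sizeof(FooContext),
    .init           = foo_decode_init,
    .decode         = foo_decode_frame,
};

Apart from that, the decoder just has to be registered in allcodecs.c and listed in the Makefile, following the same pattern as every other decoder, which is exactly the convenience described above.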

At the same time those features make FFmpeg a bad place for still-evolving encoders, since they are not likely to fit into the existing framework so easily. This tension can best be seen in our interaction with a certain encoder. They constantly modify that encoder, so the existing FFmpeg options and presets are not good for it and it’s hard to tell how well it will work. Now let’s see what happens if the x264 code gets merged into FFmpeg. It will put a rather harsh constraint on x264 developers, because it’s hard to tell which change breaks other codecs (changes behaviour, whatever) or vice versa. The same applies to codec-specific features (like a muxer using some encoder information, think H.264+MPEG-TS).

On the other hand, it is much easier to incorporate into FFmpeg an encoder that isn’t changing so much: some compromises have to be made on the common interface, some parts get replaced with standard FFmpeg routines, and voilà!

I think that’s the reason we have a lot of decoders and not so many lossy encoders (especially not so many lossy encoders with good quality) in last N years. And it’s the reason why encoder should be originated as standalone projects and merged when they are stable. I’d also like to note that FFmpeg has standing issues with providing better framework for non-H.261 based codecs and descendants (where is codec-independent rate control and motion estimation?), maybe this affected Snow development as well. Anyway, let’s live and see how all these things will be resolved.