Common design principles
I’d been participating in FFmpeg and then Libav development for about ten years, and I’ve touched many parts of its codebase except for libavfilter and libavresample, so I know what I dislike about its design.
Enumerations. Maybe people like them, but I think it’s much better to have a list of string identifiers instead. You still specify a codec, format or protocol by name on the command line, so why should the code carry those bulky and incompatible enumerations? It would be more convenient for a library user to work with a string identifier: you try to find the format handler for a given name, and if it’s not there or its support is disabled, then no luck (of course VideoLAN prefers enums, but that’s their problem).
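A minimal sketch of how that could look in C; the Decoder struct, the registry array and find_decoder() are hypothetical names for illustration, not an existing API:

```c
#include <stddef.h>
#include <string.h>

/* Hypothetical sketch: a decoder is looked up by its string name instead of
 * a global enum, so adding or disabling a codec never changes a public header. */
typedef struct Decoder {
    const char *name;                 /* e.g. "h264", "vp9", "aac" */
    int (*decode)(const void *in, size_t in_len, void *out, size_t *out_len);
} Decoder;

/* Registry filled at build time; disabled codecs are simply not listed. */
static const Decoder *registry[] = { /* &h264_decoder, &aac_decoder, ... */ NULL };

const Decoder *find_decoder(const char *name)
{
    for (size_t i = 0; registry[i]; i++)
        if (!strcmp(registry[i]->name, name))
            return registry[i];
    return NULL;                      /* not compiled in: no luck */
}
```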
Large pointless structures. AVCodecContext and AVFrame are good examples of that (especially the old versions). They lug around many members that apply only to a very limited subset of video codecs and to nothing else. A much better approach, in my opinion, would be to have substructures with the minimal information needed for all audio/video/subtitle data (both in the frame and in the context) and to put the rest into a dictionary (maybe as subobjects, like motion information or rate-control information structures).
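A rough sketch of such a split, with made-up names (Frame, AudioInfo, VideoInfo, Dict) purely to illustrate the idea of a minimal core plus a dictionary for everything codec-specific:

```c
#include <stdint.h>
#include <stddef.h>

/* Hypothetical layout: only the fields every frame needs live in the struct,
 * everything codec-specific goes into a generic key/value dictionary. */
typedef struct DictEntry { const char *key; void *value; } DictEntry;
typedef struct Dict      { DictEntry *entries; size_t count; } Dict;

typedef struct AudioInfo { int sample_rate, channels, sample_format; } AudioInfo;
typedef struct VideoInfo { int width, height, pixel_format; } VideoInfo;

typedef struct Frame {
    int64_t  pts;
    uint8_t *data[4];
    size_t   linesize[4];
    union {                 /* minimal per-type information */
        AudioInfo audio;
        VideoInfo video;
    } info;
    Dict extra;             /* motion vectors, rate-control stats, etc. */
} Frame;
```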
API variations. The current approach is to shoehorn everything into a specific structure. My opinion is that public functions should take input that is as flexible (or as simple) as possible and do the same with their output. For example, why have avcodec_decode_video2(), avcodec_decode_audio4() and avcodec_decode_subtitle2() if a single function is enough? You feed in bytes and you obtain bytes, no matter what you actually do (encode, decode, filter or pass through). Anything optional should be passed as optional, in a dictionary for example.
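Something along these lines, again with hypothetical names, is what a single bytes-in/bytes-out entry point might look like:

```c
#include <stdint.h>
#include <stddef.h>

typedef struct Dict    Dict;      /* generic key/value dictionary, as above */
typedef struct Context Context;   /* whatever state the operation needs     */

/* Hypothetical unified entry point: the same signature serves encoding,
 * decoding, filtering or plain pass-through; everything optional travels
 * in the options dictionary instead of in extra parameters or struct fields. */
int process(Context *ctx,
            const uint8_t *in,  size_t in_len,
            uint8_t       *out, size_t *out_len,
            Dict *options);

/* e.g. process(decoder, packet, packet_len, buffer, &buffer_len, NULL); */
```

Whether ctx wraps an encoder, a decoder or a filter, the caller’s side stays the same.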
Various stuff. Parsing, probing, timestamp handling. All of these need to be reinvented, because it’s hard to imagine them being much worse than they are now or were a couple of years ago.
I’d also like to have some small building blocks for codecs. In libavcodec many video decoders were forced to be built around MpegEncContext, and no one likes that structure (except one guy who even named a video player after it, but then again he doesn’t want to disclose his real name…). I prefer more independent decoders that somehow reuse the same methods (e.g. this codec needs this frame management, that motion compensation). How to implement it, whether with templates and macros or with function pointers for codec-specific functions (like block decoding), is yet to be conceived. Hopefully not as boost::codec::video::block_decoder.
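One possible shape for the function-pointer variant, sketched with invented names (nothing here exists in libavcodec): shared code owns frame management and calls back into codec-specific hooks.

```c
#include <stdint.h>

/* Hypothetical building blocks: generic frame management and motion
 * compensation code calls back into codec-specific hooks instead of every
 * decoder being welded to one giant context like MpegEncContext. */
typedef struct BlockDecoderOps {
    int  (*decode_block)(void *codec_ctx, int mb_x, int mb_y, int16_t *coeffs);
    void (*mc_block)(void *codec_ctx, uint8_t *dst, const uint8_t *ref,
                     int stride, int mv_x, int mv_y);
} BlockDecoderOps;

typedef struct GenericDecoder {
    void *codec_ctx;               /* codec's private state             */
    const BlockDecoderOps *ops;    /* plugged-in codec-specific pieces  */
    /* shared frame management, reference handling, etc. would live here */
} GenericDecoder;
```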
To be continued eventually…