Archive for June, 2017

NihAV — Progress Report

Sunday, June 25th, 2017

Obviously it moves very slowly: I spend most of my time on work, sleep, cooking and travelling around. Plus it was too hot to think or do anything productive.

Anyway, I’ve completed the IMC/IAC decoder for NihAV. In case you’ve forgotten or didn’t care to find out at all, the names stand for Intel Music Coder and Indeo Audio software, with IAC being a slightly upgraded version of IMC that allows stereo and has tables calculated for every supported sample rate instead of a set precalculated for 22 kHz. And despite what you might think it is a rather complex audio codec that took the route of D*lby AC-3, G.722.1/RealAudio Cooker and CELT, i.e. the parametric bit allocation codecs. It’s the kind of audio codec that I dislike less than speech codecs but more than the rest, because such codecs have a large and complex function that calculates how many bits/values should be spent on each individual coefficient or subband. In the IMC/IAC case it gets even worse since the codec uses floating point numbers, so the results are somewhat unstable between implementations and platforms (a bit more on that later). Oh, and this codec has I- and P-frames, since some blocks are coded as independent and others are coded using information from the previous block.

Rust does not have much to do with C, so you cannot simply copy-paste code and expect it to work, and it’s against the principles of the project anyway. Side note: the only annoying Rust feature so far is array initialisation; I’d like to be able to fill an array in a loop before using it, without initialising its contents to some default value (which I can’t do for some types) or resorting to mem::uninitialized() and ptr::write(). Anyway, I had to implement my own version of the code, so it’s structured a bit differently, has different names, uses a bitstream reader in MSB16LE mode instead of block swapping, and decodes most files I could test without errors unlike libavcodec, so it’s NIH all the way!
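To illustrate the complaint (a minimal standalone sketch, not NihAV code): the safe route is to pay for a default initialisation and then overwrite every element in a loop, and for types without a cheap default value even that is not available.

fn main() {
    // the safe route: default-initialise first, then fill in a loop
    let mut tab = [0.0f32; 16];
    for (i, el) in tab.iter_mut().enumerate() {
        *el = (i as f32).cos(); // stand-in for the real table computation
    }
    assert_eq!(tab[0], 1.0);
    // the alternative mentioned above is to build the array with
    // mem::uninitialized() and ptr::write(), which is unsafe and easy to get wrong
}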

I wasted time mostly on validating my code against the binary specification, so this version actually decodes most files as intended while libavcodec fails to do that. To describe the problem briefly, it all comes from the same place: the codec first produces a bit allocation for all the bits still available, then determines how to read the flags for skipping coefficients in some bands, reads those flags and adjusts the bit allocation by the number of bits freed by this operation. The problem is that the bit allocation may go wrong, so the skip flags end up taking more bits than the coefficients that would have been coded otherwise; the decoder does not adjust the bit allocation for that case (the specification is not supposed to do that either) and reads more bits than the block contains. Of the thirty-something IMC and IAC in AVI samples only one still fails for me, because during bit allocation the wrong band gets selected for coefficient length decreasing. And the reason is a difference in the fourth or fifth digit after the decimal point in one array of constants, which makes the wrong value the minimum (and thus selected for coefficient length decreasing). Since it takes several minutes with gdb+mplayer2 to get information at this point (at about the 10-second position in a 14-second audio stream) I decided not to dig further.

Also I had to write other pieces of code, like a split-radix FFT, a byte writer, and a WAV dumper that accepts audio packets and writes them using the provided ByteWriter.
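For reference, this is roughly what such a dumper has to emit before the audio data. It is a self-contained sketch written against std::io::Write rather than the actual NihAV ByteWriter, so none of the names below are the project’s API:

use std::io::{self, Write};

// a plain integer PCM WAV header, 44 bytes in total
fn write_wav_header<W: Write>(w: &mut W, channels: u16, srate: u32,
                              bits: u16, data_size: u32) -> io::Result<()> {
    let block_align = channels * (bits / 8);
    let byte_rate = srate * u32::from(block_align);
    w.write_all(b"RIFF")?;
    w.write_all(&(36 + data_size).to_le_bytes())?; // size of the rest of the file
    w.write_all(b"WAVEfmt ")?;
    w.write_all(&16u32.to_le_bytes())?;            // "fmt " chunk length for plain PCM
    w.write_all(&1u16.to_le_bytes())?;             // format tag: integer PCM
    w.write_all(&channels.to_le_bytes())?;
    w.write_all(&srate.to_le_bytes())?;
    w.write_all(&byte_rate.to_le_bytes())?;
    w.write_all(&block_align.to_le_bytes())?;
    w.write_all(&bits.to_le_bytes())?;
    w.write_all(b"data")?;
    w.write_all(&data_size.to_le_bytes())?;        // decoded packet data follows
    Ok(())
}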

P.S. Nanobenchmarks ahoy: decoding the longest IMC stream that I had (a bit more than two minutes) takes 0.124s with avconv and 0.09s with nihav-tool. The actual decoding functions take about the same time, though the Rust implementation is still faster by a couple of per cent and my FFT implementation is slower (but on the other hoof it’s called for every frame since it decodes that file without errors).

P.P.S. So next is Indeo 4/5 with all the wonderful features like scalable decoding, B-frames and transparency (which reminds me that Libav and ScummVM had a competition over who would be the last to implement proper transparency support for Indeo 4; now they both might win). And then I’d probably go back to implementing the features I wanted: being able to tell the demuxer to discard and not demux certain streams, better stream reporting from the demuxer, seeking and decoder reset, frame reordering functionality, maybe WAV support too. And then maybe back to decoders. I want to have several codec families fully implemented, like RAD (Smacker, Bink and Bink2), Duck/On2 (TM1, TM-RT, TM2, TM2X, TM VP3, VP4, VP5, AVC, VP6 and VP7) and RealMedia (again). But I’m not in a hurry.

P.P.P.S. I’m not going to publish all the source code, but bits of it may be either posted when relevant or leaked to rust-av, whose developer(s) has (have) shown some interest, so enquire there.

NihAV — A New Decoder

Saturday, June 10th, 2017

After a lot of procrastination I’ve finally more or less completed the decoder for I.263 (the Intel version of H.263) in NihAV.

It can decode I-, P- and PB-frames quite fine (though B-frames have some artefacts) and deblock them too (except B-frames because I’m too lazy for that). Let’s have a look at the overall structure of the decoder.

Obviously I’ve tried to make it modular without exceeding the needs of an H.263 decoder (i.e. I’m not going to extend the code to make it work with MPEG-2 part 2 and the like, though some code might be reused), so it’s been split into several modules. Here’s a walk through all the modules and a review of their functionality.

NihAV — Format Detection

Sunday, June 4th, 2017

So I’ve decided to implement container format detection for NihAV. This is a work in progress and I’m pretty sure I’ll change it later, but it should do for now.

The main principles are quite simple: formats are detected both by extension and by contents, and there’s a score reflecting which kind of match it was:

pub enum DetectionScore {
    No,
    ExtensionMatches,
    MagicMatches,
}

I don’t see why a format should not be detected properly just because the demuxer for it is disabled or not implemented at all. So in NihAV there’s a specific detect module that offers just one function:

pub fn detect_format(name: &str, src: &mut ByteReader) -> Option<(&'static str, DetectionScore)>;

It takes the input file name and the source stream reader, then tries to determine whether some format matches, and returns the format name and detection score on success (or nothing otherwise). I might add probing for individual formats later if I feel like it.
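A hypothetical call could look like this; only detect_format() itself is quoted above, and the way the reader gets constructed here is my guess rather than the actual NihAV API:

use std::fs::File;

// hypothetical usage; FileReader/ByteReader construction is assumed, not quoted
let mut file = File::open("sample.avi")?;
let mut fr = FileReader::new_read(&mut file);
let mut br = ByteReader::new(&mut fr);
match detect_format("sample.avi", &mut br) {
    Some((dmx_name, _score)) => println!("this looks like '{}'", dmx_name),
    None                     => println!("no known format detected"),
}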

Before I explain how detection works let me quote the source of the detection array (in the hope that it will explain a lot by itself):

const DETECTORS: &[DetectConditions] = &[
    DetectConditions {
        demux_name: "avi",
        extensions: ".avi",
        conditions: &[CheckItem{offs: 0,
                                cond: &CC::Or(&CC::Str(b"RIFF"),
                                              &CC::Str(b"ON2 ")) },
                      CheckItem{offs: 8,
                                cond: &CC::Or(&CC::Or(&CC::Str(b"AVI LIST"),
                                                      &CC::Str(b"AVIXLIST")),
                                              &CC::Str(b"ON2fLIST")) },
                     ]
    },
    DetectConditions {
        demux_name: "gdv",
        extensions: ".gdv",
        conditions: &[CheckItem{offs: 0,
                                cond: &CC::Eq(Arg::U32LE(0x29111994))}],
    },
];

So how is a format detected? First the name is matched to see whether one of the listed extensions fits, then the file contents are checked for markers inside. These checks are descriptions like “check that at offset X there’s data of type <type> that is equal to/less than/greater than Y”. You can also specify several alternative checks for the same offset, and there’s a range check condition too.

This way I can describe most sane formats, like “if at offset 1080 you have the tag M.K. then it’s a ProTracker module” or “if it starts with BM, the 16-bit LE value here is less than this and here it’s in the range 1-16 then this must be BMP”.
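For instance, the BMP example could be written as an entry like the one below. This is purely illustrative and not taken from the real table; only CC::Str, CC::Eq, CC::Or and Arg::U32LE appear in the quoted source, so Arg::U16LE and the range variant name are my guesses:

    DetectConditions {
        demux_name: "bmp",
        extensions: ".bmp",
        conditions: &[CheckItem{offs: 0,
                                cond: &CC::Str(b"BM") },
                      CheckItem{offs: 26,   // number of planes must be 1
                                cond: &CC::Eq(Arg::U16LE(1)) },
                      CheckItem{offs: 28,   // bits per pixel in a sane range
                                cond: &CC::InRange(Arg::U16LE(1), Arg::U16LE(16)) },
                     ]
    },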

One might wonder how well it would work on MP3s renamed to “.acm” (IIRC one game did that). I’ll reveal the secret: it won’t work at all. Dealing with raw streams is outside the scope of the format detector, because a raw stream is not a container format. You can create a raw stream demuxer and then try all possible chunkers to see which one fits, but that is stuff for the upper layer (maybe it will be implemented there, inside the input stream handling function, eventually). NihAV is not a place for automagic things.

#chemicalexperiments — Cheese Cakes

Saturday, June 3rd, 2017

This is a rather controversial topic because different countries recognize different kinds of cheese, let alone what can be made out of it, so what bears the name “cheese cake/pie” in one country might not be recognized as such in another.

So, cheese. Depending on the country you have either one or two categories of cheese recognized: the so-called cottage cheese (or Quark/kvarg in the Germanic-language countries) and the rest of the hard or semi-hard products made of milk. There’s also Italy, where some cheeses (like mozzarella, provolone or scamorza) are considered to be pasta, but that’s Italy and it doesn’t deserve a second mention in this post.

Cottage cheese can also be divided into two categories: grainy and homogeneous mass. The first kind is more common in Eastern Europe (I’ve seen it in Ukraine, Czechia and Hungary, for example; it can also be found in Germany, but only runny and in rather small packaging), the second kind is more common in Germany.

The conventional hard or semi-hard cheese can be made into a pie, usually by grating it, mixing it with cream and eggs or with sour cream, and baking.

And of course there’s the USA, where what they call cheesecake is made (if you believe Wickedpedia) from either cream cheese (i.e. a product where the cheese-making process was terminated halfway) or ricotta (made from whey instead of milk, so not a cheese either).

Now, let’s look at real cheese cakes/pies I’ve encountered so far or even made myself:

  • Ukraine — there’s a traditional Ukrainian dish, сирники (syrnyky): patties made from grainy cottage cheese mixed with semolina or millet and flour, and then fried. Those I like and approve of;
  • Germany — there are two similar variations of what is called Käsekuchen (literally cheese cake). In both cases it’s mostly Quark (homogeneous cottage cheese) mixed with semolina and baked; in one case it’s also made more cake-like by mixing in milk and starch and adding pieces of tangerine. This variation I bake myself from time to time, and it goes even better with a bit of sour cream (Schmand) or gräddfil on top;
  • Switzerland — there they have Chäschueche (essentially Käsekuchen pronounced in Swiss German), which is obviously nothing like its German counterpart. Instead it’s a small tart made from local Chäs (semi-hard, semi-sticky Swiss cheese with a stinky rind) that’s rather savoury instead of sweet. I’ve tried them once, found them edible but nothing spectacular;
  • Sweden — this country has ostkaka (literally cheese cake), which can be described as an interesting cheese that was too good to wait for, so it was baked instead of ripening all the way. Obviously I buy it when possible and eat it with lingon jam; there’s an especially good version available in the Jul season;
  • Sweden — there’s not enough of it! Sweden also offers västerbottensostpaj (or simply västerbottenpaj), which is a quiche-like pie with a filling made from the best cheese in the world (from Burträsk, obviously) combined with eggs and cream (I should try gräddfil instead) and baked. I enjoy it both in Sweden and sometimes bake it myself (when I have The Cheese) because it’s worth it.

And at the end, several fun facts:

  • the German name for cottage cheese (Quark) is most likely the one that got into Finnegans Wake, from which it was later borrowed for a certain physics term (though physicists playing with stringed models refuse to acknowledge that concept);
  • in Czechia grainy cottage cheese (tvaroh) is sold in pressed triangles; if you wrap a cabbage leaf around one you can troll Japanophiles that it’s local onigiri (like I did once);
  • in Sweden they actually have different names for grainy cottage cheese (called “cottage cheese”) and the homogeneous one (called “kvarg”);
  • and in Ukraine it’s all simply called “cheese” (maybe because hard cheese was not common in Ukraine, with only a hard cheese-like product sold in Soviet times);
  • another fun fact from Ukraine: cottage cheese sold on markets by individuals varies in units depending on the region: in some places it’s sold by weight, in some by volume (using standard half-litre jars, for example), in some by saucers (i.e. how much of it you can put on a saucer) and in others by the amount yielded from 3 litres of milk.

Okay, back to doing nothing.

NihAV — Concept and Principles

Thursday, June 1st, 2017

Looks like I’m going to repeat the same things over and over in every NihAV-related post, so I’d better sum them up, and when people ask why some decision was made like that I can point them here.

So, let’s start with what NihAV IS. NihAV is a project started by me and me alone, with the following goals:

  • design a multimedia framework from the ground up in the way I see fit (hence the NIH in the name);
  • do that without any burden of legacy (should be obvious why);
  • implement real working code both to test the concepts and to keep me interested in continuing the project (it gets boring pretty quickly when you design and write code and it still does not do anything visible at all);
  • ignore bullshit cases like interlaced H.264 (the project is written by me and for myself and I’ll do fine without it, thank you very much);
  • let me understand Rust better (it’s not that important but a nice bonus nevertheless).

Now what NihAV is NOT and is NOT going to be:

  • a full-stack multimedia framework (i.e. one that lacks only user input handling and audio/video output to become a media player too; more about that below);
  • a transcoder for all your needs (first, I hardly care about my own needs; second, a transcoder belongs elsewhere);
  • supporting things just because they’re standard (you can leave your broadcasting shit to yourself, including but not limited to MXF, interlacing and private streams in MPEG-TS);
  • designed for the most convenient usage by the end user (e.g. in frame management I already output dummy frames that merely signal there was no change from the previous frame; also, frame reordering will be implemented outside decoders);
  • have other FFeatures just because some other project has them;
  • depend on many other crates (that’s the way of NIH!);
  • have hacks to support some very special cases (I’m not going to be paid for e.g. fixing the AVI demuxer to support some file produced by a broken AVI writer anyway).

What it might become is a foundation for higher-level multimedia data management, which in turn can be either a library for building a transcoder/player or used directly in such tools. IMO libav* has suffered exactly from features that should be kept in the transcoder creeping into the libraries; the whole libavdevice is an example of that. Obviously it takes some burden off library users (including transcoding tool developers), but IMO a library should be a rather finished piece with clearly defined functionality, not a collection of code snippets the developers decided to reuse or share with the world. Just build another layer (not a wrapper, a functional layer!) on top of it.

For similar reasons I’m not going to hide serious functionality in utility code or duplicate it in codecs. In NihAV frames will be output in the same order as received, and reordering for display will be done in a specific frame reorderer (if needed); the same goes for filling in missing timestamps. A dummy frame that merely tells to repeat the previous frame is already used in the GDV decoder:

    let mut frm = NAFrame::new_from_pkt(pkt, self.info.clone(), NABufferType::None);
    frm.set_keyframe(false);
    frm.set_frame_type(FrameType::Skip);

Some things do not belong in NihAV because they are either too low-level (like protocols) or too high-level (subtitle rendering, stream handling for e.g. transcoding or playback, playlist support). Some of them deserve to be made into separate library(ies) later, others should be implemented by the end user. Again, IMO libav* suffers from exactly this mix of low- and medium-level stuff that feels too low-level and not low-level enough at the same time (just look how much code the ffmpeg or avconv tools have). The same goes for hardware-accelerated decoding, where the library should just demux frame data and parse its headers, with the rest being up to the hwaccel chain in the end application; instead, lazy users prefer libavcodec to try all possible hwaccels on the frame and fall back to multithreaded software decoding automatically if required. And preferably all other processing in e.g. libavfilter should be done using the custom hwaccel format too. Since I’m all for this approach (…NOT), NihAV will recognize that the frame uses some hwaccel format and that’s all. It’s up to the upper layer to build a custom processing chain.

I hope the domain for NihAV is clear: it will take ByteIO input, demux data using it (into packets or elementary stream chunks; if you want those in packet format then use a parser), optionally fill in timestamp information, decode frames, and reorder them into display order if requested, with a similar approach for writing data. Anything else will belong to other crates (and they might appear in the future too). But for now this is enough for me.

P.S. If I wanted to have multimedia player I’d write one that can take MP4/FLAC/WV for input and decode AAC/FLAC/WavPack plus feed H.264 to VAAPI. I know my hardware and my content, others can write their own players.

P.P.S. If you want multimedia framework written in Rust for wide userbase just wait until rust-av is ready.