A Bit About Legendary German Animation

August 5th, 2017

If you talk about German films as a foreigner, you might know some good ones, like the influential Fritz Lang movies (for Frau im Mond they hired Hermann Oberth himself, and as a result the depiction of space travel looks much more realistic than in modern Hollywood flicks) or the Gojko Mitić adventure features (the Eastern Westerns). But if you’re not from Germany, what German cartoons do you know? It looks like the only German cartoons that got any widespread attention are those from Dingo Pictures.

Dingo Pictures is a company located in the Taunus that produced about thirty Hess(l)ich cartoons in the second half of the 1990s. Some of those were completely unique and some were ripped off by D*sney and Don Bluth earlier.

Dingo Pictures has their own unique and easily recognisable style. But before I explain it, here are the eponymous animals in one of their cartoons:

So, Dingo Pictures was a pioneer, combining computer-drawn animation with hand-drawn 2D backgrounds (watercolours, no less!). Also, like the father of anime, Osamu Tezuka, the company had a cast of actors appearing in every cartoon.

For example:
The Bear (he often changes scenes and complains about everything)

Goldie (named after the Austrian book Bambi) and Wuschel (the squirrel); Ringo the Hare is not pictured here

And here’s the star of all Dingo Pictures cartoons, the one and only Wabuu:
Wabuu!

In case you didn’t know, Wabuu was so popular that he got his own cartoon, a title song (which can be heard in several other cartoons too) and even audio books! Even now the DVD with his own cartoon costs at least twenty euros on Amazon and about ten used (for comparison, you can buy almost any other used Dingo Pictures DVD for one eurocent).

Anyway, we were talking about the style. It’s hard to express in words what makes Dingo Pictures cartoons so charming, but I think the phrases “record-mending animation quality”, “copy-pasting everything”, “reusing the same scenes in other cartoons”, “more padding than Star Trek: The Motionless Picture” and “dull voice acting” should do.

Here are some shots from one of their longer films, King of the Animals (or Lion King for short); don’t mind the quality, I tried to be lazy:

Title card. One of the best ones, honestly.

Every animal is uniquely redrawn.

The titular king.

Nice backgrounds.

One of the moles (as you can guess from its look, this mole is Italian).

The story is very typical: the lion rules the jungle, which is full of monkeys, hippos, crocodiles and vultures (plus hares, squirrels and bears—the bear picture above is from this cartoon). One day a monkey finds diamonds, but the animals decide not to mine them for fear of humans coming. With the birth of his son the lion king loses interest in ruling, and the black panther seizes power with cheating and false promises and exiles the king. Later, with the help of the snake, the vultures and the bear, the panther is defeated. If you think you’ve heard this story elsewhere, don’t worry—it has unexpected twists in it.

And in the end we have scenes like this:
The vultures asked for a computer with a phone and modem in return for their help.

The Black Panther is defeated!

BTW the snake enjoys reading books and quotes Shakespeare, Goethe, Karl Marx and Gorbachev. If you find that strange, how did you get past the fact that the panther’s name is Bocassa?

How can one not enjoy masterpieces like this! Oh, and every time I hear sirens I remember the duck from Animal Football—that’s how much their art has touched me.

There’s a sequel, simply called King of the Animals: The Second Part, but it’s of lesser quality IMO.

The other noteworthy cartoons are Aladdin (the genie there is a famous actor who is not Robin Williams), Animal Football (where they copy-pasted all their animal characters and then some), Bremen Musicians (it has a filmed live narrator, not just animation), The Case for Mouse Police (it simply needs to be seen to be believed) and of course Wabuu.

P.S. For some reason the DVDs are distributed by P*wer Channel GmbH and don’t mention the original creator anywhere except in the video itself. They are that modest.

P.P.S. Honestly, I don’t think I’ve heard of any other German cartoons. But these ones have reviews on BaidUTube channels of people from countries like Canada and Sweden (the latter reviews are in Swedish, of course; actually, the Wabuu song sounds even better in Swedish than in German).

#chemicalexperiments — Lasagne

August 5th, 2017

Let me start with a bit of history.

Normal people don’t care much about how they eat their pasta—they simply cook it, add whatever they have at hand (even mayo probably, or nothing at all) and eat it. Italians are different: they select the pasta sauce first and then decide what pasta will go well with it. In the case of meat sauce (or ragù, as the locals call it) Italians decided for some reason that wide flat pasta would suit it best. So they competed over who could use the wider noodles, and the guy who simply took whole plates won. But it was a bit inconvenient to cook those and then mix them with the sauce, so they switched to oven-baking the whole thing in the sauce instead. And that’s how lasagne was born (probably; Italians have a completely different story to tell, but they always do).

Since I’d better avoid meat entirely, I decided to cook my own version with various components (over several tries, too), and here’s my short summary:

  • it’s better to use thick sauces;
  • tomato sauce is a definite must, it adds flavour;
  • cheese sauce is good mostly for the lowest layer (to lay the lasagne plates on) and for pouring on top;
  • ricotta and Quark make fine layers too, you can even mix them with some vegetables;
  • sliced boiled eggs would make a nice addition to a layer with tomato sauce;
  • mozzarella is better avoided since it will result in hard chewy chunks contrasting with the texture of the rest of the dish.

Overall, it’s a nice dish, would bake again.

Rust: Optimising Decoder Experience

August 3rd, 2017

Okay, I’ve made some changes, so hopefully the server will withstand the curiosity of more than two people if it goes like last time.

So, after implementing Indeo 4/5 decoders for NihAV I nano-benchmarked them, and my decoder turned out to be about twice as slow as libavcodec. And since neither has SIMD optimisations, they should be comparable.
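
(The nano-benchmarking here is nothing fancy—just wall-clock timing around the decoding loop, roughly like the sketch below; the loop body is a placeholder, not the actual NihAV API.)

use std::time::Instant;

fn main() {
    let start = Instant::now();
    // ... open the demuxer, loop over the packets and feed them to the
    // decoder -- placeholder for the actual NihAV calls ...
    let elapsed = start.elapsed();
    println!("decoded in {}.{:03} s",
             elapsed.as_secs(), elapsed.subsec_nanos() / 1_000_000);
}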

The tested file was 00186002.avi — an Indeo 4 sample with the scalability feature (i.e. luma is split into four bands and a Haar wavelet composes the output plane) and a duration over ten minutes. The results are given in Linux perf sample counts, which should be representative enough.

avconv — 13.4 seconds, 10K cycles. About 24% is spent in luma plane recombination (with the Haar wavelet), about 40% in bitstream decoding, and the rest mostly in transforms and motion compensation.

nihav-tool — 31.6 seconds, 20K cycles. 30% is spent in luma plane recombination, 48% in bitstream decoding, 11% in motion compensation, and the rest is mostly transforms. Or in samples: recombination — 9900 (against 3300 in libavcodec); bitstream decoding (a dirty estimate, it includes some inlined DSP functions) — 15800 against 5600; motion compensation — 3500 against 1700; transforms — 1300 against 1500 (they are not equivalent though: my code only transforms the block, and the output costs are hidden in bitstream decoding). Overall, my code is consistently worse. Is there any way to optimise it a bit?

Rust: Not So Great for Codec Implementing

July 31st, 2017

Disclaimer: obviously it’s my opinion, feel free to prove me wrong or just ignore it.

Now that I should qualify for zoidberg (the slang name for a lowly Rust programmer who lives somewhere in a dumpster and is also completely ignored—a perfect definition for me), I want to express my thoughts about the programming experience with Rust. After all, NihAV was restarted to find out how modern languages fare for my favourite task, and there was about one language promising enough. So here’s a short rant about the aspects of this programming language that I found good and not so good.

Good things

  • Modern language features: standard library containers, generics, units and their visibility, etc. And at least it looks like Rust won’t degrade into a metaprogramming language any time soon (that’s left for the upcoming Rust+=1 programming language);
  • Reasonable encapsulation: I mean both (sub)modules organisation and the fact that functions can be implemented just for some structure;
  • Powerful enums that can act both as a plain C-style set of values and as tagged objects: e.g. the standard Result enum has two values—Ok(result) and Err(error)—where result and error are two different user-defined types, so the returned value can hold either while staying the same type (Result);
  • More helpful error messages (e.g. it tries to suggest a correction for a mistyped variable name or explains an error in a bit more detail). Sure, Real Programmers™ don’t need that, but it’s still nice;
  • No need for dependency resolving: you can have stuff in one module referencing stuff in another module and vice versa at the same time; same for not having to declare things before using them;
  • Traits (standard interfaces for objects) and the fact that operations are implemented as specific traits (i.e. if you need a + b to work with your custom object you implement std::ops::Add for it). It’s also nice to be able to extend the functionality of some object by implementing another trait for it: e.g. my bitstream reader is defined in one place, but in another module I made a trait for reading codebooks with it, so I can invoke let val = bitread.read_codebook(&cb)?; later (see the sketch after this list).
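
To illustrate the last two points, here’s a minimal sketch with made-up names (this is the shape of the thing, not the actual NihAV API): a tagged-enum result type plus an extension trait bolted onto a reader defined elsewhere.

#[derive(Debug)]
pub enum DecoderError { ShortData, InvalidData }

// same shape as the standard Result, just with a fixed error type
pub type DecoderResult<T> = Result<T, DecoderError>;

pub struct Codebook; // codes and values would live here

pub struct BitReader { pos: usize }

impl BitReader {
    pub fn read(&mut self, _bits: u8) -> DecoderResult<u32> {
        self.pos += 1; // placeholder for real bit reading
        Ok(0)
    }
}

// lives in another module in NihAV and extends BitReader
// without touching its original definition
pub trait CodebookReader {
    fn read_codebook(&mut self, cb: &Codebook) -> DecoderResult<u32>;
}

impl CodebookReader for BitReader {
    fn read_codebook(&mut self, _cb: &Codebook) -> DecoderResult<u32> {
        self.read(8) // a real implementation walks the code tree
    }
}

fn main() -> DecoderResult<()> {
    let mut bitread = BitReader { pos: 0 };
    let cb = Codebook;
    let _val = bitread.read_codebook(&cb)?;
    Ok(())
}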

Unfortunately, it’s not all rosy and peachy; Rust has some things that irritate me. Some of them follow from the advantages (i.e. you pay for many features with compilation time) and others come from language design or implementation complexity.

Irritating things that can probably be fixed

  • Compilation time is too long IMO. While similar code in Libav recompiles in less than a second, NihAV (in its test configuration) builds in about ten seconds, and any wait above five seconds is annoying. I understand why it is so and I hope it will be improved in the future, but for now it’s irritating;
  • And, on a similar note, benchmarks. While the overall built-in testing capabilities of Rust are good (file that under good things too), the fact that benchmarking is available only in limbo nightly Rust is annoying;
  • No control over allocation. On one hoof I like that I don’t have to worry about it, on the other hoof I’d like to have the ability to handle it;
  • Poor primitive type functionality. If you claim that Rust is a systems programming language then you should care more about primitive types than just relying on the as keyword. If you care about systems programming and safety, you’d have at least one or two functions to convert a type into a smaller one (e.g. i16/u16 -> u8) and/or to check whether the result fits. That’s one of the main annoyances when writing codecs: you often have to convert a result into a byte with range clipping (see the sketch after this list);
  • The macro system is lacking. It’s great for code, but if you want to use macros for a more compact data representation—tough luck. For example, Indeo 3 codebooks have sequences like (a,b), (-a,-b), (b,a), (-b,-a) which would be nice to shorten with a macro, but the best solution I’ve seen in Rust was to declare the whole array in a macro, using token tree manipulation for proper submacro expansion. And I fear it might be a similar story with implementing motion compensation functions, where macros are used to generate the required functions for specific block sizes and operations (simple put or average). I’ve managed to work around it a bit in one case with lambdas, but that might not work so well for more complex motion compensation functions;
  • Also, tuple assignments. I’d like to be able to assign multiple variables from a tuple, but that’s not possible now. And maybe it would be nice to be able to declare several variables with one let;
  • There are many cases where the compiler could do things automatically. For example, I can’t take a pointer to a const, but if I declare another const as a pointer to the first one it works fine; in my opinion the compiler should be able to generate such an intermediate constant by itself when needed. Same for function calls—why does bitread.seek(bitread.tell() - 42); fail the borrow check while let pos = bitread.tell() - 42; bitread.seek(pos); doesn’t?
  • Borrow checker and arrays. Oh, borrow checker and arrays.
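
As an aside to the primitive types point above, here’s the kind of helper I end up writing by hand every time (a trivial sketch, nothing NihAV-specific):

// clipping conversion spelled out by hand since there's no
// standard saturating i16 -> u8 conversion to lean on
fn clip8(val: i16) -> u8 {
    if val < 0 { 0 } else if val > 255 { 255 } else { val as u8 }
}

fn main() {
    assert_eq!(clip8(-5), 0);
    assert_eq!(clip8(300), 255);
    assert_eq!(clip8(42), 42);
}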

The borrow checker and arrays are probably the main showstopper for implementing complex video codecs in Rust effectively. Rust is anti-FORTRAN in the sense that FORTRAN was all about arrays and could operate on them safely, while Rust safely prevents you from operating on arrays.

Video codecs usually operate on planes, and there you’d like to work with different chunks of the frame buffer (or plane) at the same time. Rust does not allow you to mutably borrow parts of the same array even when it should be completely safe, like let mut a = &mut arr[0..pivot]; let mut b = &mut arr[pivot..];. Don’t tell me about ChunksMut—it does not allow you to work with both chunks simultaneously. And don’t tell me about the Bytes crate—it should not be a separate crate, it should be core language functionality. As a result I have to resort to using indices inside the frame buffer and Rc<RefCell<...>> for the frames themselves, and can only dream about being able to invoke mem::swap(&mut arr[idx1], &mut arr[idx2]);.

Update: it turns out there’s slice::split_at_mut(), which does some of the things I want; thanks, Tomas, for pointing it out.
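
For reference, a quick sketch of what it allows—two mutable views into one buffer at the same time (and plain element swapping turns out to be covered by slice::swap()):

fn main() {
    let mut arr = [1u8, 2, 3, 4, 0, 0, 0, 0];
    let pivot = 4;
    {
        // two non-overlapping mutable slices of the same array
        let (a, b) = arr.split_at_mut(pivot);
        b.copy_from_slice(a); // both halves usable simultaneously
    }
    assert_eq!(arr, [1, 2, 3, 4, 1, 2, 3, 4]);
    arr.swap(0, 7); // the element case I wanted mem::swap() for
    assert_eq!(arr[0], 4);
}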

And it gets even more annoying when I try to initialise an array of codebooks for further use. The codebook structure does not implement Clone because there’s no good reason for it to be cloned or copied around, but when I initialise an array of them I cannot simply declare it and fill the contents in a loop; I have to resort to unsafe { arr = mem::uninitialized(); for i in 0..arr.len() { ptr::write(&mut arr[i], Codebook::new(...)); } }. I know that if creating a new element fails the compiler can’t ensure that only the already initialised elements get dropped, but it’s still a case of the compiler not being smart enough yet. A certain somebody had an idea of using generators to initialise arrays, but I’m not sure even that will be implemented any time soon.
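
Spelled out as a complete example, that workaround looks roughly like this (period Rust: mem::uninitialized() has since been deprecated in favour of MaybeUninit):

use std::mem;
use std::ptr;

// stand-in for the real codebook type; note there is no Clone
struct Codebook { id: u32 }

impl Codebook {
    fn new(id: u32) -> Self { Codebook { id } }
}

fn main() {
    let mut arr: [Codebook; 8];
    unsafe {
        arr = mem::uninitialized();
        for i in 0..8 {
            ptr::write(&mut arr[i], Codebook::new(i as u32));
        }
    }
    // if Codebook::new() panicked halfway, the compiler could not tell
    // which elements were initialised -- hence the unsafe block
    assert_eq!(arr[7].id, 7);
}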

And speaking of cloning, why does the compiler refuse to derive the Clone trait for a structure that has a pointer to a function?

And that’s why C is still the best language for systems programming—it still lets you do what you mean (the problem is that most programmers don’t really know what they mean) without many magical incantations. Sure, it’s very good to have many common errors eliminated by design, but when you can’t do basic things in a simple way, what is it good for?

Annoying things that cannot be fixed

  • The type keyword. Since it’s a keyword it can’t be used as a variable name, and many objects have a type, you know. It’s not always reasonable to give a longer name or rewrite the code using an enum. A similar story with ref, but I hardly ever need it for a variable name and ref_<something> works even better. Still, it would have been better if the language designers had picked typedef instead of type;
  • Not being able to combine if let with other conditions (those nested conditions tend to accumulate rather fast);
  • Sometimes I fear that compilation time belongs to this category too.

Overall, Rust is not that bad and I’ll keep developing NihAV in it, but keep in mind that it’s still far from perfect (maybe about as far as C, just in a different direction).

P.S. I also find the phrase “rewrite in Rust” quite stupid. Rust is significantly different from other languages, especially C, so while “Real Programmers can write a FORTRAN program in any language”, it’s better to use the new language’s features to redesign the interfaces and the overall design instead of translating the same mistakes from the old code. That’s why NihAV will lurch where somebody might have stepped before, but not necessarily along the existing roads.

NihAV — Some News

July 30th, 2017

So, despite work, heat, travels and overall laziness, I’ve managed to complete a more or less full-featured Indeo 4 and 5 decoder. That means my decoder handles Indeo 4 and 5 files with all known frame types (including B-frames) and transforms (except DCT, because there are no known samples using it) and even transparency!

Here are two random samples from Civilization II and Starship Titanic decoded and dumped as PGM (click for full size):

I’m not going to share the code for transparency plane decoding—it’s very simple (just RLE) and the binary specification is easy to read. The only gotchas are that it’s decoded as a contiguous tile aligned to a width multiple of 32 (e.g. the first sample is 332 pixels wide but the transparency tile is 352 pixels) and that the dirty rectangles provided in the band header are just a hint for the end user, not something used in decoding.
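
The width alignment itself is the usual round-up bit trick; a trivial sketch:

// round the plane width up to a multiple of 32 to get the
// transparency tile width (e.g. 332 -> 352)
fn tile_width(width: usize) -> usize {
    (width + 31) & !31
}

fn main() {
    assert_eq!(tile_width(332), 352);
}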

This decoder was written mostly so that I could understand Indeo better, and what can I say: Indeo 4/5 is essentially one codec with some features fit for the more advanced codecs of a later era. While the only things it reuses from previous frames are pixels and the band transform mode, it can reuse decoded quantisers and motion vectors from the first band for the chroma bands, and for luma bands 1-3 in scalability mode too. It has variable block sizes (4×4, 8×8 and 8×8 in a 16×16 macroblock) with various selectable transforms and scans (you can have 2D, row or column Slant, Haar or (theoretically) DCT, and scans can be diagonal, horizontal or vertical too). And there are several frame types as well: normal I-, P- and B-frames, droppable I- and P-frames, and a droppable P-frame sequence (i.e. P-frames that reference the previous frame of the same type or a normal I/P-frame). Had it had proper stereo support, it’d still be as hot as ITU H.EVC.

Internally Indeo 4 and 5 differ in small details: Indeo 4 has more frame types (like B-frames and droppable I-frames) while Indeo 5 introduced the droppable P-frame sequence; picture and band headers differ between versions, but (macro)block information and the actual content decoding are the same (Indeo 5 does slightly trickier stuff with macroblock quantisers but that’s all). Indeo 4 also has transparency information and different plane reconstruction (a Haar wavelet instead of the 5/7 one used in Indeo 5). So my decoder ended up split into several modules reflecting the differences: indeo4.rs and indeo5.rs for codec-specific functions, ivi.rs for common structures and types (e.g. picture header, frame type and such), ividsp.rs for transforms and motion compensation, and ivibr.rs for the actual decoding functions.

As with the Intel H.263 decoder, the Indeo 4/5 decoders provide implementations of IndeoXParser that parse the picture header, band headers and macroblock information, and also recombine the plane if it was coded as scalable. As a result they don’t store much: just the codebooks used in decoding and, for Indeo 5, the common picture information that is transmitted only in I-frames (in other words, GOP info).

As a result, here’s what the Indeo 4 main decoding function looks like:

    fn decode(&mut self, pkt: &NAPacket) -> DecoderResult<NAFrameRef> {
        // wrap the packet data into a bitstream reader in LE mode
        let src = pkt.get_buffer();
        let mut br = BitReader::new(src.as_slice(), src.len(), BitReaderMode::LE);

        // the Indeo 4 parser implements IndeoXParser (see below);
        // the common decoder core drives it to produce a frame buffer
        let mut ip = Indeo4Parser::new();
        let bufinfo = self.dec.decode_frame(&mut ip, &mut br)?;
        let mut frm = NAFrame::new_from_pkt(pkt, self.info.clone(), bufinfo);
        frm.set_keyframe(self.dec.is_intra());
        frm.set_frame_type(self.dec.get_frame_type());
        Ok(Rc::new(RefCell::new(frm)))
    }

with the actual interface for the parser being

pub trait IndeoXParser {
    fn decode_picture_header(&mut self, br: &mut BitReader) -> DecoderResult<PictureHeader>;
    fn decode_band_header(&mut self, br: &mut BitReader, pic_hdr: &PictureHeader, plane: usize, band: usize) -> DecoderResult<BandHeader>;
    fn decode_mb_info(&mut self, br: &mut BitReader, pic_hdr: &PictureHeader, band_hdr: &BandHeader, tile: &mut IVITile, ref_tile: Option<Ref<IVITile>>, mv_scale: u8) -> DecoderResult<()>;
    fn recombine_plane(&mut self, src: &[i16], sstride: usize, dst: &mut [u8], dstride: usize, w: usize, h: usize);
}

And the nano-benchmarks:
the longest Indeo 4 file I have around (00186002.avi) — nihav-tool 20 sec, avconv 9 sec plus lots of error messages;
Mask of Eternity opening (Indeo 5) — nihav-tool 8.1 sec, avconv 4.1 sec;
Return to Krondor intro (Indeo 5) — nihav-tool 5.8 sec, avconv 2.9 sec.
For other files it’s also consistently about two times slower, but whatever—I was not trying to make it fast, I was trying to make it work.

The next post should be either about the things that irritate me in Rust and make it not so good for codec implementing or about cooking.

A Bit about Airlines

July 25th, 2017

I did not want to have personal rants in my restarted blog but sometimes material just comes and presents itself.

As some of you might know, I prefer travelling by rail, yet sometimes I travel by plane because it’s faster. Most of those flights are semiannual trips to Sweden and an occasional flight to elenril-city. And here’s the list of unpleasant things that have happened to me with flights:

  • Planes more than an hour late (for technical reasons) — two Lufthansa flights from Arlanda;
  • A plane late just because — Aerosvit, once;
  • Baggage not loaded onto the plane — Cimber Sterling (aka the Danish Aerosvit);
  • Flights cancelled because of strikes — once SAS and once Lufthansa;
  • A flight cancelled because of plane malfunction — Lufthansa, once;
  • A flight cancelled because they didn’t want to wait for the passengers — Lufthansa, once (yup, people were waiting at the gate but they decided to skip boarding entirely and send the plane away without passengers);
  • A flight where I could not check in — Lufthansa, once.

To repeat myself: most of the flights I take during the year are with SAS to/from Sweden, though sometimes a segment is operated by LH. So far the return trips to Frankfurt with LH were mostly okay except for some delays, but the last “flight” was something different.

I booked a flight FRA-PRG-FRA. The flight to Praha was cancelled because the plane arrived in Frankfurt at least half an hour later than expected, and after another hour it was decided that it was not in good enough shape to fly again. Okay. They could not find a replacement plane and rebooked me onto a flight at 22:15. Fine, but it turned out I could reach Praha by train faster and cheaper (twice as cheap, actually), so I decided not to wait.

Then the time for the return flight came and I could not check in at all because they had modified something (that’s the message: “Cannot check in to your flight because of modifications, refer to Lufthansa counter.” And there’s no LH representative there. And if you can withstand their call centre, you’re a much better person than I am). So it was another train back to Germany (which also broke down in the middle of nowhere, but at least that was resolved in an hour and a half). Maybe it’s because I selected the Cattle Lowcostish fare (Economy but without checked baggage or seat selection) instead of the usual one, but at least with SAS, when I wasn’t able to take a flight to Arlanda (because of a Frankfurt Airport staff strike), I still had no problems flying back from there.

Call me picky (and I shan’t argue, I am picky) but I expect better statistics, because the most irritating cases keep happening with a certain company that I don’t fly with often and that’s comparable with SAS in quantity (but, sadly, not quality).

And that means I’ll avoid it in the future, even if that means not being able to reach some places by plane in reasonable time. There are still trains for me.

P.S. This rant is just to vent off my anger and frustration from the recent experience. And it should make me remember not to take Air Allemagne flights ever again.
P.P.S. Hopefully the next post will be more technical.

NihAV — Progress Report

June 25th, 2017

Obviously it moves very slowly: I spend most of my time on work, sleep, cooking and travelling around. Plus it was too hot to think or do anything productive.

Anyway, I’ve completed the IMC/IAC decoder for NihAV. In case you’ve forgotten or didn’t care to find out, the names stand for Intel Music Coder and Indeo Audio Coder, with IAC being a slightly upgraded version of IMC that allows stereo and has tables calculated for every supported sample rate instead of a set precalculated for 22kHz. And despite what you might think, it’s a rather complex audio codec that took the route of D*lby AC-3, G.722.1/RealAudio Cooker and CELT—parametric bit allocation codecs. It’s the kind of audio codec that I dislike less than speech codecs but more than the rest, because they have a large and complex function that calculates how many bits/values should be spent on each individual coefficient or subband. In the IMC/IAC case it gets even worse since the codec uses floating point, so the results are somewhat unstable between implementations and platforms (a bit more on that later). Oh, and this codec has I- and P-frames, since some blocks are coded as independent and others using information from the previous block.

Rust does not have much in common with C, so you cannot simply copy-paste code and expect it to work—and it’s against the principles of the project anyway. Side note: the only really annoying Rust feature so far is array initialisation; I’d like to be able to fill an array in a loop before using it, without initialising its contents to some default value (which I can’t do for some types) or resorting to mem::uninitialized() and ptr::write(). Anyway, I had to implement my own version of the code, so it’s structured a bit differently, has different names, uses the bitstream reader in MSB16LE mode instead of block swapping, and decodes most files I could test without errors, unlike libavcodec—so it’s NIH all the way!

I wasted time mostly on validating my code against the binary specification, so this version actually decodes most files as intended while libavcodec fails to do that. To describe the problem briefly, it all comes from the same place: the codec first produces a bit allocation for all the bits still available, then determines how to read the flags for skipping coefficients in some bands, reads those flags, and adjusts the bit allocation by the number of bits freed by this operation. The problem is that the bit allocation may go wrong, so that the skip flags take more bits than the coefficients that would otherwise be coded; the decoder then fails to adjust the bit allocation for that case (it’s not supposed to, according to the specification) and reads more bits than the block contains. Of the thirty-something IMC and IAC in AVI samples, only one still fails for me, because bit allocation selects the wrong band for coefficient length decreasing. And the reason is a difference in the fourth or fifth digit after the decimal point in one array of constants, which makes the wrong value the minimum (and thus selected for coefficient length decreasing). Since it takes several minutes with gdb+mplayer2 to get information at that point (at about the 10-second position in 14 seconds of audio), I decided not to dig further.

Also I had to write other pieces of code, like a split-radix FFT, a byte writer, and a WAV dumper that accepts audio packets and writes them with the provided ByteWriter.
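
The WAV dumper is nothing special either; here’s a sketch of just the header writing, with a trivial stand-in byte writer (the real NihAV ByteWriter is more general than this):

use std::fs::File;
use std::io::Write;

// trivial stand-in; the real NihAV ByteWriter differs
struct ByteWriter { buf: Vec<u8> }

impl ByteWriter {
    fn new() -> Self { ByteWriter { buf: Vec::new() } }
    fn write_buf(&mut self, data: &[u8]) { self.buf.extend_from_slice(data); }
    fn write_u16le(&mut self, val: u16) { self.write_buf(&val.to_le_bytes()); }
    fn write_u32le(&mut self, val: u32) { self.write_buf(&val.to_le_bytes()); }
}

// plain PCM WAV header: RIFF header, "fmt " chunk, "data" chunk
fn write_wav_header(bw: &mut ByteWriter, channels: u16, rate: u32, bits: u16, data_len: u32) {
    let block_align = channels * (bits / 8);
    bw.write_buf(b"RIFF");
    bw.write_u32le(36 + data_len); // size of everything after this field
    bw.write_buf(b"WAVE");
    bw.write_buf(b"fmt ");
    bw.write_u32le(16); // PCM format chunk size
    bw.write_u16le(1);  // PCM format tag
    bw.write_u16le(channels);
    bw.write_u32le(rate);
    bw.write_u32le(rate * u32::from(block_align)); // byte rate
    bw.write_u16le(block_align);
    bw.write_u16le(bits);
    bw.write_buf(b"data");
    bw.write_u32le(data_len); // the audio data follows this field
}

fn main() -> std::io::Result<()> {
    let mut bw = ByteWriter::new();
    write_wav_header(&mut bw, 2, 22050, 16, 0);
    File::create("out.wav")?.write_all(&bw.buf)
}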

P.S. Nano-benchmarks ahoy: decoding the longest IMC stream I have (a bit more than two minutes) takes 0.124 s with avconv and 0.09 s with nihav-tool. The actual decoding functions take about the same time, though the Rust implementation is still faster by a couple of percent, and my FFT implementation is slower (but on the other hoof it’s called for every frame, since it decodes that file without errors).

P.P.S. So next is Indeo 4/5 with all its wonderful features like scalable decoding, B-frames and transparency (which reminds me that Libav and ScummVM had a competition over who would be the last to implement proper transparency support for Indeo 4—now they both might win). Then I’d probably go back to implementing the features I wanted: being able to tell the demuxer to discard and not demux certain streams, better stream reporting from the demuxer, seeking and decoder reset, frame reordering functionality, maybe WAV support too. And then maybe back to decoders. I want to have several codec families fully implemented, like RAD (Smacker, Bink and Bink2), Duck/On2 (TM1, TM-RT, TM2, TM2X, TM VP3, VP4, VP5, AVC, VP6 and VP7) and RealMedia (again). But I’m not in a hurry.

P.P.P.S. I’m not going to publish all the source code, but bits of it may be either posted when relevant or leaked to rust-av, whose developer(s) has(have) shown some interest—so enquire there.

NihAV — A New Decoder

June 10th, 2017

After a lot of procrastination I’ve finally, more or less, completed the decoder for I.263 (the Intel version of H.263) in NihAV.

It can decode I-, P- and PB-frames quite fine (though B-frames have some artefacts) and deblock them too (except B-frames because I’m too lazy for that). Let’s have a look at the overall structure of the decoder.

Obviously I’ve tried to make it modular without exceeding the needs of an H.263 decoder (i.e. I’m not going to extend the code to handle MPEG-2 part 2 and the like, though some code might be reused), so it’s been split into several modules. Here’s a walk through all the modules and a review of their functionality.

NihAV — Format Detection

June 4th, 2017

So I’ve decided to implement container format detection for NihAV. This is a work in progress and I’m pretty sure I’ll change it later, but it should do for now.

The main principles are quite simple: formats are detected by extension and by content, so there’s a score for it:

pub enum DetectionScore {
    No,
    ExtensionMatches,
    MagicMatches,
}

I don’t see why a format should not be detected properly even if its demuxer is disabled or not implemented at all. So NihAV has a dedicated detect module that offers just one function:

pub fn detect_format(name: &str, src: &mut ByteReader) -> Option<(&'static str, DetectionScore)>;

It takes the input filename and a source stream reader, tries to determine whether some format matches, and returns the format name and detection score on success (or nothing otherwise). I might add probing for individual formats later if I feel like it.

Before I explain how detection works, let me quote the source of the detection array (in the hope that it will explain a lot by itself):

const DETECTORS: &[DetectConditions] = &[
    DetectConditions {
        demux_name: "avi",
        extensions: ".avi",
        conditions: &[CheckItem{offs: 0,
                                cond: &CC::Or(&CC::Str(b"RIFF"),
                                              &CC::Str(b"ON2 ")) },
                      CheckItem{offs: 8,
                                cond: &CC::Or(&CC::Or(&CC::Str(b"AVI LIST"),
                                                      &CC::Str(b"AVIXLIST")),
                                              &CC::Str(b"ON2fLIST")) },
                     ]
    },
    DetectConditions {
        demux_name: "gdv",
        extensions: ".gdv",
        conditions: &[CheckItem{offs: 0,
                                cond: &CC::Eq(Arg::U32LE(0x29111994))}],
    },
];

So what’s the way to detect a format? First the name is matched to see whether one of the listed extensions fits, then the file contents are checked for the markers inside. These checks are descriptions like “check that at offset X there’s data of type <type> that (equals/is less than/is greater than) Y”. You can also specify several alternative checks for the same offset, and there’s a range check condition too.
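
To make that concrete, here’s a self-contained toy version of such a check, with the scoring applied (not the actual NihAV code, just the principle):

#[derive(Debug, PartialEq)]
enum Score { No, ExtensionMatches, MagicMatches }

fn detect_avi(name: &str, data: &[u8]) -> Score {
    let mut score = Score::No;
    // extension check gives the weaker score
    if name.ends_with(".avi") {
        score = Score::ExtensionMatches;
    }
    // a content match overrides it
    if data.len() >= 12 && &data[0..4] == b"RIFF" && &data[8..12] == b"AVI " {
        score = Score::MagicMatches;
    }
    score
}

fn main() {
    assert_eq!(detect_avi("test.avi", b"RIFF\x00\x00\x00\x00AVI LIST"), Score::MagicMatches);
    assert_eq!(detect_avi("test.avi", b"junk"), Score::ExtensionMatches);
}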

This way I can describe most sane formats, like “if at offset 1024 there’s the tag M.K. then it’s a ProTracker module” or “if it starts with BM and the 16-bit LE value here is less than this and here it’s in range 1-16 then this must be BMP”.
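
That BMP description would translate into an entry along these lines (a sketch: only the Or, Str and Eq conditions appear in the snippet above, so the InRange name here is my guess at the range check):

DetectConditions {
    demux_name: "bmp",
    extensions: ".bmp",
    conditions: &[CheckItem{offs: 0,
                            cond: &CC::Str(b"BM") },
                  CheckItem{offs: 28, // bits per pixel
                            cond: &CC::InRange(Arg::U16LE(1), Arg::U16LE(16)) },
                 ],
},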

One might wonder how well it would work on MP3s renamed to “.acm” (IIRC one game did that). I’ll reveal the secret: it won’t work at all. Dealing with raw streams is outside the format detector’s job, because a raw stream is not a container format. You can create a raw stream demuxer and then try all possible chunkers to see which one fits, but that’s stuff for an upper layer (maybe it will be implemented there, inside the input stream handling function, eventually). NihAV is not a place for automagic things.

#chemicalexperiments — Cheese Cakes

June 3rd, 2017

This is a rather controversial topic because different countries recognise different kinds of cheese, let alone what can be made out of it, so what bears the name “cheese cake/pie” in one country might not be recognised as such in another.

So, cheese. Depending on the country you have either one or two categories of cheese recognised: the so-called cottage cheese (or Quark/kvarg in Germanic-language countries) and the rest of the hard or semi-hard products made of milk. There’s also Italy, where some cheeses (like mozzarella, provolone or scamorza) are considered to be pasta, but that’s Italy and it doesn’t deserve a second mention in this post.

Cottage cheese can also be divided into two categories: grainy and homogeneous mass. The first kind is more common in Eastern Europe (I’ve seen it in Ukraine, Czechia and Hungary for example; it can also be found in Germany, but only in rather small packaging and runny), the second kind is more common in Germany.

A conventional hard or semi-hard cheese is usually made into a pie by grating it, mixing it with cream and eggs or sour cream, and baking.

And of course there’s the USA, where what they call cheesecake is made (if you believe Wickedpedia) from either cream cheese (i.e. a product where the cheese-making process was terminated halfway) or ricotta (made from whey instead of milk, so not a cheese either).

Now, let’s look at the real cheese cakes/pies I’ve encountered so far or even made myself:

  • Ukraine — there’s a traditional Ukrainian dish, сирники (syrnyky): patties made from grainy cottage cheese mixed with semolina or millet and flour, and fried. Those I like and approve of;
  • Germany — there are two similar variations of what is called Käsekuchen (literally “cheese cake”). In both cases it’s mostly Quark (homogeneous cottage cheese) mixed with semolina and baked; in one case it’s also made more cake-like by mixing in milk and starch and adding pieces of tangerine. This variation I bake myself from time to time; it goes even better with a bit of sour cream (Schmand) or gräddfil on top;
  • Switzerland — there they have Chäschueche (essentially Käsekuchen pronounced in Swiss German), which is obviously nothing like its German counterpart. Instead it’s a small tart made from local Chäs (semi-hard, semi-sticky Swiss cheese with a stinky rind) that’s rather savoury instead of sweet. I’ve tried them once and found them edible but nothing spectacular;
  • Sweden — this country has ostkaka (literally “cheese cake”), which can be described as an interesting cheese that was too good to wait for, so it was baked instead of being left to ripen all the way. Obviously I buy it when possible and eat it with lingonberry jam; there’s an especially good version available in the Jul season;
  • Sweden — there’s not enough of it! Sweden also offers västerbottensostpaj (or simply västerbottenpaj), a quiche-like pie with a filling made from the best cheese in the world (from Burträsk, obviously) combined with eggs and cream (I should try gräddfil instead) and baked. I enjoy it both in Sweden and sometimes bake it myself (when I have The Cheese) because it’s worth it.

And at the end, several fun facts:

  • the German name for cottage cheese (Quark) is most likely the word that got into Finnegans Wake, from which it was later borrowed for a certain physics term (though physicists playing with stringed models refuse to acknowledge that);
  • in Czechia grainy cottage cheese (tvaroh) is sold in pressed triangles; if you wrap a cabbage leaf around one you can troll Japanophiles that it’s the local onigiri (like I did once);
  • in Sweden they actually have different names for grainy cottage cheese (called “cottage cheese”) and the homogeneous kind (called “kvarg”);
  • in Ukraine it’s all simply called “cheese” (maybe because hard cheese was not common there, only a hard cheese-like product sold in Soviet times);
  • and another fun fact from Ukraine—cottage cheese sold at markets by individuals is measured differently depending on the region: in some places it’s sold by weight, in others by volume (using standard half-litre jars, for example), in some by the saucer (i.e. how much of it you can put on a saucer) and in others by the amount yielded from three litres of milk.

Okay, back to doing nothing.