An Impression on Rhaetian Railways

April 25th, 2018

Since I don’t have enough time to visit the proper country, I went to a bad substitute of Sweden that’s much more accessible: Switzerland (it should be obvious why I cannot call it a poor or cheap substitute). Since it happened on Easter (April 1-2), the environment resembled Sweden: snow, mountains, deer and log sheds. And of course I could ride trains in new locations!

The Rhaetian Railway is a narrow-gauge railway system in the canton of Graubünden (whose symbol uncannily resembles the one from Gävle), a fractal part of Switzerland occupying its south-east corner (fractal in the sense that the canton’s shape looks almost exactly like the shape of the whole of Switzerland). Trains run through picturesque scenery past places with dreadful names like Fhtagn (or Ftan in Swiss-Cthüelsch) or SaaS (they really have a station with that name!), going up into the mountains (in the 1-2 km above sea level range), and I spent a couple of days travelling around.

But while the scenery is okay, the railways are some unholy mix of Berlin S-Bahn, Czech and German railways:

  • There are German ICEs running there all the way to Chur (so I could travel home without any transfers);
  • The tracks are curvy and the trains are as slow as in Czechia (i.e. no matter where you go, it will take you at least an hour or two to get there);
  • Prices are like in Czechia too, except they use Swiss Francs instead of Czech Koruna, but the numbers are about the same (so it seems I can ride an ICE cheaper, faster and over a much longer distance than with the RhB);
  • Another thing like in Czechia: buying a ticket with a card involves a 1.5€ surcharge. No such thing in Sweden;
  • Narrow-gauge trains are a weird mix themselves: the locomotive can be at the front of the train, at the end (maybe), in the middle (very common), or a typical EMU can simply be coupled with a number of conventional rail carriages (I’m not sure I’ve seen that anywhere else);
  • Weird station names: I can understand when you name a station after two places at once like Reichenau-Tamins (that’s common in Germany too), or even when you name it after the same place twice like Disentis/Mustér (it’s the Confoederatio Helvetica, the natives can’t agree on a single name for anything), but Tavanasa-Breil/Brigels is definitely too much (it’s a station between those two mentioned earlier, BTW);
  • It’s afraid of snow: after even an insignificant amount of snow they stop running on some routes: during my stay the trains on the Pontresina-Tirano and Disentis/Mustér-Andermatt routes were cancelled for an indefinite number of days. In Germany trains are more punctual: if they are late, they’re late by dozens of minutes, not days. And if something bad happens and trains can’t run some route for days, you can see information everywhere, including how to get around and such. No such thing in Switzerland;
  • And another thing that’s taken from German S-Bahn is timetables and tickets. This requires a separate rant.

Overall, the FFS or RhB is not very friendly to a traveller: you should have a definite idea of where you are going, when (at which time and such) and how (i.e. where to transfer) if you want to buy a ticket. For example, I was at the station Chur-West and wanted to go to Scuol-Tarasp. The ticket vending machine offered me three options: via Samedan, via Chur-Samedan (i.e. go first to Chur main station, from there to Samedan and then to Scuol) or via Vereina. The last option is actually a tunnel, not a station name!

In Germany, when you travel with long-distance trains, you actually choose one of the provided connection possibilities (e.g. InterCity from A to B, RegioBahn from B to C and ICE from C to D, or InterCity from A to E and then from E to D), or you can use the provided route-planning functionality even if you don’t buy a ticket. SBB ticket machines simply allow you to buy a ticket from A to B, maybe with a cryptic route midpoint, and that’s all! That’s exactly how German ticket vending machines for regional transport work. And there’s yet another point of annoyance: Swiss rail timetables fail to include the arrival time at the final destination, so if you care about it (like I sometimes do) you have to find it out via other means. It’s plain stupid.

Oh, and the snow-related problem: when you buy a ticket you can’t be sure the train will actually run, because the only cryptic warning I got was the ticket machine saying my ticket would be valid in the April 1st-April 9th period (and much later in the train too). In Germany the machine actually shows warnings when there’s some problem with a train or it’s cancelled entirely (since you can use the ticket later). I actually had a situation where one segment of my travel was served by a train that broke down and I had to take another train later instead. So it feels like you should rather use a smartphone and buy the ticket online, where you can see the actual route and warnings (and probably use bahn.de instead of cff.ch where possible, too).

Overall, travelling with the Rhaetian Railway was a pleasant and exciting experience in some aspects (i.e. when I was inside the train) and a confusing and frustrating one in others (i.e. when I actually tried to buy a ticket). They also boast how some parts of the system are the third railway among UNESCO World Heritage railways (the second after India, I guess) and how picturesque some parts are (they are almost as interesting as the Sauschwänzlebahn indeed), but as I’ve seen it all there’s no reason to return there (and a reliable source says there are better places in Switzerland to wait out heat waves too).

#chemicalexperiments Dough and Pancakes

April 14th, 2018

Since I don’t have any urges to work on NihAV at this moment (big surprise, I know) I’ll talk about cooking instead.

Since I don’t know how to cook and never had any kind of culinary education, I divide dough into three main categories: puffy (the one that expands while baking), non-puffy (the one that keeps about the same volume) and runny (usually used for pancakes but we’ll talk about them later).

Non-puffy dough is the easiest to make: just mix flour and water (take either boiling water or very cold water for good results). It is ideal for simple filled dishes like вареники or Karelian rice pasties (I made both and shall probably make them again). The next level is the so-called shortcrust pastry, which is used for pies, quiches and such. Here you usually mix flour with some fat (called shortening).

And there we have a variety of what to use for shortening:

  • classical recipes use butter—I’ve cooked stuff using it and it works fine except that it takes too much butter to my liking;
  • French people obviously prefer margarine (since it’s their invention)—I see no reason to try it;
  • Brits prefer some weird animal fat called suet—I feel queasy just thinking about it so it gets definitive no from me;
  • USians use chemically processed vegetable shortening; I’ve tried it once: I ordered a can of Crisco shortening, followed the recipe for pie crust, and the result was bad. I’d stick to the other two recipes listed here. Fun fact: while searching for it on Amazon, most offers were from sex shops where it’s apparently offered as a lubricant. I can see why: that stuff is sticky and slick and not fit for baking. Also, since one of the sellers offered it along with various sweets (and what passes for them in the USA), I ordered some of those and tried them; I was not impressed by that stuff either;
  • and finally there’s the German variant that I find very good, called Öl-Quark-Teig (dough made from oil and quark, i.e. lean homogeneous cottage cheese). You mix flour with several spoonfuls of oil (you can choose a different oil for different flavouring, of course, which is a nice feature) and Magerquark, and that’s all! You can add an egg and/or baking powder too, but it’s fine as is.

Puffy dough is the trickiest one—the puffiness comes from bubbles in the dough and it takes extra effort to do that. The easiest way is simply to add baking powder (or baking soda reacting with vinegar) to the dough, the other conventional ways are to prepare yeast (cultured or uncultured, either way it takes time and some effort) or make bubbles from eggs which requires some skill that I lack (so I stick to baking powder). There are two recipes that work for me: mixing flour, eggs, butter and sugar (aka the usual cake mix) or öl-quark dough with sugar, egg and baking powder.

Runny dough (is it called batter?) can be made by mixing flour with a lot of liquid and some eggs and then used to make pancakes. Since it’s the only thing I’ve done with it so far let’s talk about them.

There are several kinds of pancakes that I know and tried so far:

  • French-style thin pancakes (aka crêpes) that are better eaten fresh with something rolled in;
  • Dutch laughably small pancakes (that have a name almost like an Australian word for gay—probably the words have the same origin);
  • common pancakes—thicker than crêpe, plain, good to eat with something on top or with some filling rolled in;
  • slightly thicker pancakes with something embedded in them (like bits of ham).

And of course Sweden has a nice, wide range of pancake varieties: ordinary pancakes, pancakes with bits of ham, pancakes with potatoes (I tried those and approve) and pancakes for people like me who can’t do anything right with their hands (including flipping pancakes)—ugnspannkaka, i.e. pancakes baked in the oven. Obviously that one is much thicker than the rest, but it’s easy to make (even I baked some) and it can embed various stuff too, which makes it interesting (bits of ham, fish or even fruit). Also this way you’re more likely to end up with rectangular pancakes, which I find to be a nicer and more versatile shape than the usual round ones.

I forgot to mention one local thing: in Baden-Württemberg they have plain pancakes shredded into thin strips, dried, and then added to soups when serving. It’s called Flädle and you can buy it in every local supermarket (even Aldi). It’s a nice addition to a soup IMO.

Okay, now back to doing anything but coding.

Rust in multimedia: unwieldy features

March 18th, 2018

Today I wanted to talk about two features that are quite important for multimedia decoding but are quite inconvenient in the current state.

First, macros. I know that macros in Rust are both very powerful and quite flexible, but they are hard to use for data definition, and I’ve ranted about it before. The problem is that quite often you have tables with some internal structure that would benefit from macro substitutions: if you have a codebook constructed from entries following patterns like a, b, -a, -b and a, b, a, -b, -a, b, -a, -b, it would be easier and less error-prone to represent them as e.g. FLIP2#(a, b) and FLIP4#(a, b) inside the data definition. The problem is that macro_rules! does not allow you to do that easily, since a macro is supposed to expand into valid statements (i.e. code or full data definitions). Of course you can work around it by making a set of macros to define the whole array and some bits inside it, but that’s what makes it unwieldy. And that’s why I believe there should be another macro substitution mechanism, maybe named macro#, that would work just on data; it would be much easier to use in this particular case.
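To show what I mean (flip4 is my invented name and the table is a toy one): a macro_rules! macro must expand to a complete expression, so it can define the whole mirrored array at once, but it cannot splice its elements into the middle of a larger table literal.

```rust
// A sketch of the workaround described above: since macro_rules! must
// expand to a complete expression, the macro has to produce the whole
// array; you cannot splice "a, b, a, -b, ..." into a larger table
// literal. The name flip4 is mine, not from any real codec table.
macro_rules! flip4 {
    ($a:expr, $b:expr) => {
        [$a, $b, $a, -$b, -$a, $b, -$a, -$b]
    };
}

fn main() {
    let tab: [i32; 8] = flip4!(1, 2);
    assert_eq!(tab, [1, 2, 1, -2, -1, 2, -1, -2]);
    println!("{:?}", tab);
}
```

This works fine when a whole table follows one pattern; the unwieldiness appears as soon as a table mixes patterned runs with plain values.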

The second issue is assembly integration. Despite Rust being fast and such, it’s still better to write small critical functions in assembly. And obviously it would be better if Cargo supported including assembler files in a crate. You can point out there’s stdsimd for using the power of SIMD without much hassle. I can point out that compiler-generated code is still far from perfect even with intrinsics, and assembly is still better; supporting querying SIMD capabilities via a standard package is good, though. And you can point out that there’s already a special crate for supporting various files with various compilers/assemblers. I’d say that it’s a bit too generic, but at least it can serve as a base for what I need. Again, there’s a more or less standard way to deal with assembly files, so making a common standard is not hard.

And in the unlikely case somebody reads this and asks why I don’t form an RFC—from what I heard it involves proposing code as well and I don’t want to study the compiler nor waste days compiling it.

NihAV: Aten’t Dead

March 3rd, 2018

Surprisingly, there’s still some life in NihAV and even some progress from time to time.

So I’ve debugged RealVideo 2 decoding and verified the B-part of PB-frame reconstruction in the Intel.263 decoder against the binary specification. Mind you, the latter is not likely to ever be supported by libavcodec. First, it’s a fringe feature for extremely old video codecs nobody cares about any more and, second, unlike in later codecs, the B-part is stored along with the P-frame data (i.e. first you have the macroblock header for the P- or I-macroblock, then the macroblock header for the B-macroblock, then the macroblock coefficients for the P-part and then the macroblock coefficients for the B-part). Other codecs simply pack a B-frame along with its reference frame, but here the data is interleaved. I added some support for skipping the B-part to the libavcodec H.263 decoder (exactly nine years ago!), but decoding two frames in parallel would require some serious hacking of the infamous MpegEncContext-using core, so it’s very unlikely to happen.

And the directions for the near future still include RealVideo 3/4 and all the RealAudio codecs. Fun fact: two of those are patent-free now: ATSC A/52 aka DNET and AAC-LC (but probably not the SBR extension used in the racp version). So if you implement them now you can flip a middle finger to both D*lby and the Ferkel-herzen-Gesellschaft, since new decoders can’t be covered by patent licenses. Not that I cared about it before.

Chiariglione Is Right

February 7th, 2018

I guess everybody else has already reacted to his post about the MPEG crisis, so I can do that as well. So, $postname: most people just don’t understand his outlook. If you interpret his words from his point of view, it’s clear he’s right about most things.

ClearVideo: Somewhat Working!

February 3rd, 2018

So I’ve finally written a decoder for ClearVideo in NihAV and it works semi-decently.

Here’s the twentieth frame of basketball.avi from the usual sample repository. Only the first frame was an intra-frame; the rest are coded with just the transforms (aka “copy a block from elsewhere and change its brightness level if needed”).

As you can see, there are still serious glitches in the decoding, especially at the bottom and right edges, but it’s a moving scene and most of it is still good. The standard “talking head” sample from the same place decodes perfectly, and the RealMedia sample is decoded acceptably too.

Many samples are decoded quite fine, and it’s amazing how such a simple method (it does not code the residue, unlike other video codecs with interframes!) still achieves good results at a reasonable (for that time) bitrate.

Hopefully there are not too many bugs left to fix in my implementation, so I can finally move on to RealVideo 3 and 4. And then probably to the audio codecs before RealVideo 6 (aka RealMedia HD), because that one needs REing work for the details (and maybe wider acceptance). So much stuff to procrastinate on!

Update: I did MV clipping wrong, now it works just fine except for some rare glitches in one RealMedia file.

ClearVideo: Some Progress!

January 21st, 2018

I don’t know whether it’s Sweden in general or just proper Swedish Trocadero but I’ve managed to clarify some things in ClearVideo codec.

One of the main problems is that the binary specifications are full of cruft: thunks for (almost) every function in newer versions (annoying) and generic containers with everything included (so you have lists whose elements carry the actual payload as different kinds of classes; it was so annoying that I managed to figure it all out only this week). Plus, the codec has several different ways of coding information depending on various flags in the extradata. Anyway, complaining about obscure and annoying binary specifications is fun but gains nothing, so let’s move on to the actual new information and the clarified old information.

The codec has two modes: intra frames coded à la JPEG and inter frames coded with fractal transforms (and nothing else). A fractal frame is split into tiles of predefined size (that information is stored in the extradata), and those tiles may be split into smaller blocks recursively. The information for one block consists of the plane number, flags (most likely showing whether it should be split further), a bias value (to be added to the transformed block) and a motion vector (a byte per component). This information is coded with static codebooks, and which ones are used depends on the coding version and context (one set for version 1, another for version 2 and a completely different single codebook for version 6). The codebooks are stored in the resources of the decoder wrapper, the same as the DCT coefficient tables.
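Purely as an illustration (the names are mine and the split arity is my assumption, not something taken from the binary specification), the recursive block structure described above could be modelled like this:

```rust
// Hypothetical model of the recursive tile splitting described above.
// The exact number of children per split is my assumption for
// illustration; the real codec may differ.
enum Block {
    Split(Vec<Block>),
    Leaf {
        plane: u8,    // plane number
        flags: u8,    // e.g. whether to split further
        bias: i16,    // value added to the transformed block
        mv: (i8, i8), // motion vector, a byte per component
    },
}

// Walk the tree and count the leaf blocks that carry actual payload.
fn count_leaves(b: &Block) -> usize {
    match b {
        Block::Split(children) => children.iter().map(count_leaves).sum(),
        Block::Leaf { .. } => 1,
    }
}

fn main() {
    let leaf = |bias| Block::Leaf { plane: 0, flags: 0, bias, mv: (0, 0) };
    let tile = Block::Split(vec![
        leaf(1),
        Block::Split(vec![leaf(2), leaf(3)]),
    ]);
    assert_eq!(count_leaves(&tile), 3);
}
```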

Now, the extradata. After the copywrong string it actually contains the information used in the decoding: picture size (again), flags, version, tile sizes and such. The fun thing is that this information is stored as 32-bit little-endian words for AVI, but as big-endian words for RealMedia and probably MOV.
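As a small illustration of what that means for a parser (the helper name is mine), the same extradata word has to be read with a different byte order depending on the container:

```rust
// Reading a 32-bit extradata word whose byte order depends on the
// container: little-endian for AVI, big-endian for RealMedia (and
// probably MOV). The helper name is mine.
fn read_u32(buf: &[u8], big_endian: bool) -> u32 {
    if big_endian {
        ((buf[0] as u32) << 24) | ((buf[1] as u32) << 16)
            | ((buf[2] as u32) << 8) | (buf[3] as u32)
    } else {
        (buf[0] as u32) | ((buf[1] as u32) << 8)
            | ((buf[2] as u32) << 16) | ((buf[3] as u32) << 24)
    }
}

fn main() {
    let data = [0x01, 0x02, 0x03, 0x04];
    assert_eq!(read_u32(&data, false), 0x0403_0201); // AVI-style
    assert_eq!(read_u32(&data, true), 0x0102_0304);  // RealMedia-style
}
```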

And the tables. There are two tables: CVLHUFF (a single codebook definition) and HUFF (many codebooks). Both have a similar format: first a byte array of code lengths, then a 16-bit array of the actual codewords (or you can reconstruct them from the code lengths the usual way: the shortest code is all zeroes and after that they increase), and finally a 16-bit array of symbols (just bytes in the case of 0x53 chunks in HUFF). The multiple-codebook definition has an 8-byte header followed by codebook chunks in the form [id byte][32-bit length in symbols][actual data]; there are only 4 possible ID bytes (0xFF for an empty table, 0x53 for a single byte per symbol, the rest as described above). Those IDs correspond to the tables used to code the 16-bit bias value, the motion values (as a pair of bytes with a possible escape value) and the 8-bit flags value.
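The “usual way” mentioned above is canonical code assignment; here is a minimal sketch of it (the function name is mine, and it assumes every symbol has a nonzero code length):

```rust
// Minimal canonical codeword reconstruction as described above: the
// shortest code is all zeroes and each following code is the previous
// one plus one, shifted left whenever the code length grows. Assumes
// every symbol has a nonzero length; the function name is mine.
fn reconstruct_codes(lens: &[u8]) -> Vec<u16> {
    let mut order: Vec<usize> = (0..lens.len()).collect();
    order.sort_by_key(|&i| lens[i]); // stable: equal lengths keep symbol order
    let mut codes = vec![0u16; lens.len()];
    let mut code = 0u16;
    let mut prev_len = lens[order[0]];
    for &i in &order {
        code <<= lens[i] - prev_len; // widen the code when the length grows
        prev_len = lens[i];
        codes[i] = code;
        code += 1;
    }
    codes
}

fn main() {
    // code lengths 1, 2, 2 should yield the codewords 0, 10, 11
    assert_eq!(reconstruct_codes(&[1, 2, 2]), vec![0b0, 0b10, 0b11]);
}
```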

So, overall structure is more or less clear, underlying details can be verified with some debugging, and I hope to make ClearVideo decoder for NihAV this year. RMHD is still waiting 😉

Popular Swedish Bus Routes

January 9th, 2018

Sweden has a lot of local bus routes and every region (or län) has its own most popular bus route:

  • for Stockholm and Örebro län it’s “Ej i trafik” (something like “not participating in public transit service”, “trafik” in Swedish often means both [car] traffic and public transport service);
  • for Södermanland it’s “Är ej i trafik” (“Is not in service”);
  • in Östergötland it’s “Tyvärr, ej i tjänst” (“Sorry, not in service”).

The joke is that while there are many numbered bus routes (hundreds in Stockholm län), the regulations make bus drivers rest after completing a route, so quite often a bus arrives at the end station, unloads all its passengers, changes its route number to the one above and drives away; then, obviously, another bus (or the same one, after the driver has rested) comes to pick up the passengers. Since I almost never travel by bus in Germany (we have trams here, after all), most of my bus trips have happened in Ukraine and Sweden—and those countries differ in their approach to drivers indeed.

Another interesting thing is the variety of buses: in Stockholm län you have buses running on trunk lines (quite often articulated and always painted blue) and ordinary buses (always red); some buses are double-deckers, like on route 676 (Stockholm-Norrtälje), and some coaches are double-deckers too (I still fondly remember travelling on the top floor of one from Luleå to Sundsvall; no fond memories of Ukrainian bus trips, though). And in Norrland they still have skvaders (aka buses with an additional cargo compartment). Also, buses in Stockholm län quite often have USB chargers for every seat and even WiFi: everything for passenger comfort.

It’s quite interesting that some bus routes are operated by two buses: for example, if I wanted to get from Bromma to Portugal (a place on Adelsö island near Stockholm) I’d take bus 312, which goes to Sjöangen; there I’d step out and get into a new bus 312 waiting there, while the previous one goes off to rest. It also travels on a ferry, which I like for some reason too.

So there’s something interesting about Swedish buses after all. But railways are still much better (more comfort, higher speeds, fewer problems from car traffic etc.) and definitely more awesome (I witnessed a rail bus pushing a fallen fir off the tracks less than a week ago; try finding an ordinary bus doing that). But it’s still nice to know that Sweden has good things besides people, trains, food, drinks and nature.

P.S. This seems to have gone a bit further than just describing how popular bus routes differ in various Swedish regions. Hopefully my upcoming NowABitClearerVideo post would go the same way.

Dingo Pictures Works: Early Years

December 8th, 2017

Well, I intended to end my review but I was reminded that there are even more Dingo Pictures works that I’ve missed. So let’s look at those.

Rust: Annoyance-Driven Design

December 3rd, 2017

I’ve finally made NihAV decode RealVideo 2 content, including B-frames (there are still 4 video codecs to support (and I don’t have any samples for RMHD) plus all the audio codecs, so it’s a long way to go), and so I have some more words to say about Rust and my experience with it.

To me it looks like most decisions on decomposition in Rust are consequences of the annoyance of doing things any other way. Too-large structures mean you have to either pass too many arguments into new() or fill the structure with some defaults (and I’m pretty sure that #[derive(Default)] won’t save you with more complex types) and initialise it to sane values later. As a result it’s easier to split everything into smaller structures, which are (at least subjectively) much easier to handle, especially if you reference them as Option<YourStruct>. Modules and imports, on the other hoof, are more annoying to manage, since you have to take care of proper dependencies, visibility and imports; as a result I find it easier to import all stuff from all modules and just comment out currently unused imports (because I still can’t bring myself to make it all a single mega-module). And now for the even higher level: crates. Yes, I’m going to beat that undead horse again.
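To make that concrete, here is a toy sketch of the decomposition pattern described above (every name in it is invented for this post; none of it is actual NihAV code): the optional sub-state goes behind Option, so new() stays small and the field gets filled in later.

```rust
// Toy illustration of the decomposition described above: rather than a
// huge struct with a many-argument new(), optional state lives behind
// Option and is initialised only when actually needed. All names here
// are invented for this sketch.
#[derive(Default)]
struct BFrameState {
    motion_vectors: Vec<(i16, i16)>,
}

struct Decoder {
    width: usize,
    height: usize,
    b_state: Option<BFrameState>, // only some codecs need this
}

impl Decoder {
    fn new(width: usize, height: usize) -> Self {
        Decoder { width, height, b_state: None }
    }
    fn enable_b_frames(&mut self) {
        self.b_state = Some(BFrameState::default());
    }
}

fn main() {
    let mut dec = Decoder::new(320, 240);
    assert_eq!((dec.width, dec.height), (320, 240));
    assert!(dec.b_state.is_none());
    dec.enable_b_frames();
    assert!(dec.b_state.is_some());
}
```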

First of all, I’m aware of the incremental building enabled in nocturnal Rust, but I’m not going to use nightly for rather obvious reasons (mostly because I’m not here to experiment with all the potential bells and whistles of the language but rather with what it can offer right out of the box and how it suits my needs). So, the compilation times are horrible: when I change a single non-public function it rebuilds the whole crate (which is the supposed behaviour, I know) and it takes 15 seconds to do that. Obviously that’s laughable for people doing “serious” projects, but it’s a basic fact that humans expect a response (any response) within about five seconds of an action or they get impatient. As a result, instead of one crate with optional features (in my case decoders and demuxers) I’d rather have several smaller crates, and that creates new issues too. There’s the obvious npm.js kind of issue of making packages for every small thing, so your program ends up with more package dependencies than a modern Linux distribution. But there’s also the issue of package splitting: I’d like to split my code into packages that each encompass a certain family of features (e.g. nihav-core for common stuff, nihav-avi for the AVI demuxer, nihav-indeo for all Indeo codecs (audio and video) and nihav-realmedia for the RealMedia demuxer and related codecs); then some of them may depend on some common package (like the common H.263 core for the Intel I.263 and RealVideo 1 and 2 decoders) but probably with different features requested (one of them does not need B-frame support, another does not need PB-frame support). Since I don’t know quantum cargodynamics, I don’t know how it will all be resolved. So it will either end in dead code or code duplication (in an additional crate too, I suppose).

My theory is that the people behind Rust are biased by their development environment. In other words, you don’t care much about compilation times when you have to build browsers (or compilers) on a daily basis. My main development machine, meanwhile, is a laptop I bought in 2010 with 8GB of RAM (which I believed to be future-proof). So the Rust language designers might either have beefy machines that make compilation fast or be conditioned to long development cycles. I know that back in the day “start compiling the Linux kernel and go make some coffee to pass the 45 minutes of compilation time” was quite common, but I guess it’s Jevons’ paradox all over again: the more computing power there is, the more of it is wasted on compilation. Like modern C++ or single-header libraries: you actually have to compile a very large corpus of code as a single file. Back in the day, my laptop with 64MB of RAM spent most of its time compiling libavcodec/dsputil.c (a monstrous file full of templates that old FFmpeg developers might remember even today), so I had to install more RAM to make the compilation time reasonable. The solution was to split the file instead of upgrading the machines of every developer, but nowadays that would be seen as a ridiculous solution.

And now, documentation. I find it rather poor (but that’s common with programming languages). If I know more or less what feature I want, I can find it in the standard documentation (if I don’t, I’ll complain about non-overlapping multiple &mut [range] borrows not working instead of using slice.split_at_mut(), and I did), but it does not really tell me what I should be looking for in the first place. I call it Excel complexity: in Excel there’s probably a function that does anything you want, but it’s much easier to reimplement it yourself than to look up in the documentation what it’s called and what its less obvious parameters are. And even if you combine both The Rust Programming Language, Second Edition and Rust By Example, you still won’t get it right. Now that Rust aspires to be a JavaScript replacement, it should take an example from it too: provide an extensive overview of how to do things in it instead of showcasing features. IMO there are two chapters in TRPLv2, 11 and 12, that are close to that ideal: they talk about testing and about making a console program. In other words, good practical tasks that one would like to achieve with Rust (not so many people care about features per se; they want something done with a language: build a multi-threaded application, parse a Web server reply, make an efficient number cruncher, etc.). I can rant more about how it should be organised, but nobody reads documentation, including me.

There’s still the annoyance with tuples as such, too: why can’t I declare let foo, bar; if baz { foo = 4; bar = 2; } else { foo = bar = 0; } and instead have to use two separate lets? Why can’t I have let (foo, bar); if baz { (foo, bar) = (4, 2); } else { (foo, bar) = (0, 0); } either? As a result, while named tuples are there, I end up using only unnamed tuples.
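For the record, the one form that does compile on stable Rust is binding the whole tuple from the if-expression itself, which is the workaround this usually turns into:

```rust
// The workaround that does compile: bind the whole tuple from an
// if-expression instead of assigning into pre-declared names.
fn main() {
    let baz = true;
    let (foo, bar) = if baz { (4, 2) } else { (0, 0) };
    assert_eq!((foo, bar), (4, 2));
}
```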

So while Rust offers some nice things, the way it shapes development is not very nice. And this also explains why C was, and still is, so popular: it does not enforce any particular behaviour on you (except in recent editions, when the standard and compilers suddenly started to care about arithmetic and bit operations being non-portable; you might make your own CPU that does not use two’s complement arithmetic, after all), there is no enforced coding style, you can compile code in any order you like and interface with almost anything without special tools or wrappers. And the freedom it offered, along with its effectiveness, is what is often lacking in more modern languages (the saddest thing is that it’s traded not for memory safety but rather for sacks of syntactic sugar).

Anyway, I’ll keep experimenting and we’ll see how things will turn out. In either case I should start thinking about splitting NihAV into several crates, registering codecs and such. Too much work, too many opportunities to procrastinate!