Archive for the ‘Useless Rants’ Category

(Semi) interactive movies — a less explored multimedia niche

Tuesday, December 2nd, 2025

Here’s something worth a reminder: something I’m not going to tackle myself, and something nobody else is likely to pick up either. I’m talking about an intermediate form between static videos and interactive games, which I call (semi) interactive movies.

The best-known representative of those is Flash. Some of you who are old enough may remember it as a source of annoyance in the form of extremely animated ad banners; some not so old may remember Flash games; younger people may remember it as an annoying web player that every video-hosting site required until they all got rid of it (reportedly at the whim of one man). But there was a time when Flash was used for its original purpose: creating animations (often bundled with a stand-alone player) or slightly interactive animations (like a blender with several buttons you can press, each playing a different animation; it was gory and disgusting, so it’s better to watch something nice and cute like Happy Tree Friends instead).

Of course Flash is reasonably well supported by open-source software, to the point that you can even run it in the browser without installing a plug-in. But there were more such systems, and not all of them came from the same company.

Macintosh had MacroMind Director, which allowed creating interactive applications à la HyperCard. Later it was ported to Windows as well and was used to create more or less portable games, activity centres and such. Eventually it was bought up along with its sibling projects and compressed into a mud-brick, but meanwhile it managed to spawn several variations. There were “Gaffed Director movies” (usually bundled with a stand-alone player), there were RIFF Multimedia Movies (in .mmm files), and I’m pretty sure there were more flavours of it too. Rather recently ScummVM started to support some games made with Director (not an easy task, considering the different engine revisions and the fact that it often relied on platform-specific plug-ins for extended functions like playing videos or CD audio—up to decrypting data). So if you want support for the formats I mentioned, the best course would probably be to write a new player based on the ScummVM code-base (it’s probably the best foundation to start from, while ScummVM itself is for playing games, not this kind of content).

Another similar system I’m aware of is Brilliant Digital Entertainment’s Multipath Movies. I haven’t played any of them, but from the description it sounds a lot like those DVD menu-based games. The engine seems to be under one megabyte, and its data is self-contained (one large .bhf file for the content plus auxiliary .map and .nav files—a striking difference from Director with its .cxt and .dxr files referencing each other and all sorts of other external resources).

There are so many old neglected formats that deserve to be preserved for posterity; here I hope to have reminded people of at least some of them.

P.S. Why won’t I implement it myself? Exactly because it’s not the kind of multimedia I work with. Mostly I write a decoder, and then I can do whatever I want with the decoded output. Here one has to mind the inner engine workings instead: interpreting the engine script, synchronising output composed mostly of sprites, handling potential user input and such, with hardly any work spent on figuring out how the video is compressed. So if you want to be the next open-source software hero—well, here’s a good challenge for you.
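To give an idea of what such a player involves, here is a toy sketch in Rust of the kind of main loop I mean: a script interpreter driving animation output and branching on user input. All the opcodes and structures are invented for illustration; no real Flash or Director engine looks like this.

    // Hypothetical opcodes for a (semi) interactive movie script.
    enum Op {
        Play { animation: &'static str, frames: u32 }, // play a clip
        Choice(Vec<(&'static str, usize)>),            // (button label, jump target)
        Jump(usize),                                   // unconditional jump
        End,
    }

    // Stub for real input handling; always picks the first button.
    fn get_user_choice(options: &[(&'static str, usize)]) -> usize {
        options[0].1
    }

    fn run(script: &[Op]) {
        let mut pc = 0;
        while pc < script.len() {
            match &script[pc] {
                Op::Play { animation, frames } => {
                    // A real player would composite sprites and sync audio here.
                    println!("playing '{}' for {} frames", animation, frames);
                    pc += 1;
                }
                Op::Choice(options) => pc = get_user_choice(options),
                Op::Jump(target) => pc = *target,
                Op::End => break,
            }
        }
    }

    fn main() {
        let script = [
            Op::Play { animation: "intro", frames: 100 },       // 0
            Op::Choice(vec![("press button", 2), ("quit", 4)]), // 1
            Op::Play { animation: "blender", frames: 50 },      // 2
            Op::Jump(4),                                        // 3
            Op::End,                                            // 4
        ];
        run(&script);
    }

A real implementation would replace the stubs with an actual renderer, audio synchronisation and event handling; that is where all the hard work hides.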

History versus historical claims

Sunday, November 30th, 2025

Occasionally I watch videos about old games and gaming history (because there might be an old adventure game I have not played, or an old game with a custom video format I haven’t looked at). And sometimes I see claims about somebody doing stuff first which are not true—usually because the competition is gone or only a small group of fanboys cares, so there’s nobody to disprove those claims. One such example would be the cult status of the ZX Spectrum in the late USSR and post-Soviet countries: it was essentially the only microcomputer that enthusiasts could afford (except for i8080/i8086 clones that were even worse), and as a result it was praised for its uniqueness, revolutionary innovations and such; by the time the USSR dissolved, the IBM PC had already won the market, and systems from e.g. Commodore were virtually unknown. IMO the ZX Spectrum had two advantages: partial compatibility with the i8080 thanks to its Zilog Z80 CPU, and a chipset cheap and simple enough that even the USSR could clone and produce it. The rest was subpar or shitty (especially its graphics, which rivalled CGA in being eye-gougingly bad)—but hardly anybody had experience even with Bulgarian Apple II clones, let alone something better. So this rant is dedicated to similar situations with both computer hardware and software.

Let’s start with hardware.

For starters, a lot of people believe that the CPU was invented by Intel, while in reality it’s not just TI that beat them to it; there are many other predecessors as well. See this article for more details and arguments. But considering how many CPU companies have come and gone since the 1970s, it’s no wonder people forget about the no-longer-existing competitors. Similarly, if you know what the DEC Alpha did back in the day, the Pentium’s achievements do not look that impressive.

Similarly, Nvidia claims to have invented the GPU—with a definition tailored to point at their own product. Who cares about SGI, 3dfx, or even Argonaut Games with their Super FX chip? Apparently people still remember it, but for how long…

And speaking about game consoles, here’s a nice video pointing out some things widely believed to have been invented by N*nt*nd* while they were not.

Let’s move to software, and games specifically. There’s an often-repeated claim about Lucasfilm Games being a pioneer in the adventure-game genre by introducing a scripting engine to drive it. To me that is a perfect example of a claim people believe because the others were too modest to advertise the facts. While ADVENT(URE) was written in FORTRAN, its re-imagining Zork was later re-written in ZIL—a custom language for its own virtual machine, designed specially for text adventures. What’s more, Ken Williams designed a custom engine for his wife’s game (which was the first graphical adventure game, BTW), and later (Sierra) On-Line adventure games (and sometimes other kinds of games too) ran on some customisable scripting engine (ADL was replaced with AGI, AGI was replaced with SCI, and SCI remained in use until the company was effectively killed). Similarly, this post about The Secret of Monkey Island has this passage: “Back then, the randomly generated forest was cutting edge technology. Disk space was at a premium.” One could even get the impression it was something special invented by them—until you read this post from Al Lowe, which describes how he had done it in a game released a year prior to SoMI. And I guess the same approach had been re-invented by console game creators even earlier.

And of course I can’t go past multimedia. I wrote a post about Opus and how similar the CELT design is to G.722.1, especially to its version in RealAudio (mind you, it’s still innovative, just not as much as you’d expect); I have not explored it further, but the overall Opus design resembles USAC a lot, and I don’t remember hearing anything explaining that “coincidence”.

Another thing is that the only open-source multimedia player that ever came really close to “plays it all” was released last century and was called XAnim. I mentioned it before, and it deserves to be mentioned again (and again). It had things you still don’t have in e.g. VLC: frame stepping forward and backward, and support for most of the formats of the day (I suppose a good deal of them were reverse engineered by Mark Podlipec himself; I still sometimes find, to my surprise, that some obscure format like DL was supported by it). And for certain formats he could not support in open-source form, he actually managed to negotiate and release closed-source plugins. For a very long time it served as the source of decoders for projects like MPlayer and FFmpeg. Even more, its Indeo decoder plugins were often the only way to decode Indeo 4 and 5 on non-x86 Unix systems. After looking at all that, the achievements of other open-source multimedia players do not look that impressive. And yet its site is gone and it never got a Wickedpedia page…

Moving on, there’s an “advanced” russian operating system developed by one guy which sounds revolutionary (flat address space encompassing both RAM and disk storage, persistence, inherent security achieved by allowing only interactions between objects). You may believe it unless you actually know a bit of computing history and start asking questions like “why do this persistence and object interaction sound almost exactly like the Smalltalk environment?” or “why does this idea of a really flat memory space sound almost exactly like that OS from old IBM mainframes?”. The question “why did he decide to remove all mentions of KeyKOS from the documentation?” does not require an answer IMO.

And to finish, let’s talk about a certain display system. Back in the day some people were dissatisfied with X11 and decided to make a modern replacement. That windowing system was supposed to use hardware acceleration where possible, an object graph to manage input from different clients (and isolate them from each other), industry-standard ways of passing messages and such. Sounds a lot like Wayland? But no, I was talking about Berlin (it appears to be memory-holed, and the rather generic name does not help searches either). The sad thing is that one of its developers hasn’t learned anything from it and later created a programming language with a too generic name—see the other repositories of that guy I linked if you still have no idea what I’m talking about.

Why did I write this post? Definitely not to create a “top N innovative things that were invented elsewhere much earlier” list. And it’s not to shame people (let alone companies—those have no shame by definition) either. I was just trying to remind the world that you should take even widely-known claims with a grain of salt, since history is written by the winners, and the voices of those who lost the competition often get forgotten or remain unheard. So maybe next time you hear about how great, innovative and even revolutionary something is, think about what proof there is for such a claim besides the claimant’s own words.

FFpropaganda

Saturday, November 1st, 2025

Recently this tweet was brought to my attention (by somebody else who saw it and found it hilarious). While I agree with the message (I also find those OMGsecurity people annoying and rather counter-productive), the premise is bullshit, namely these two lines:

Arguably the most brilliant engineer in FFmpeg left because of this. He reverse engineered dozens of codecs by hand as a volunteer.

So I’ll try to explain what’s wrong with that claim, and with the FFmpeg twitter account (or FFaccount for short) in general.
(more…)

Why wavelets are a bad choice for image coding

Thursday, October 30th, 2025

I’ve been teasing it for too long and finally the time for this post has come. I really have a reason to believe that wavelets are not the best choice for image compression, so here are my arguments.
(more…)

The most horrifying town in Germany

Sunday, October 26th, 2025

I never had a reason to care about Halloween, and nowadays real-world news is scarier than any imagined horrors. Yet I remembered one of my old trips, so why not mention one curious fact from it.

There’s a town somewhere between the Frankfurt and Frankfurt-lowcost airports named Oppenheim. People coming from there are obviously known as Oppenheimers, as are their descendants and people marrying their descendants, like the famous Lillian Oppenheimer (I know her as the person who popularised origami, but apparently she was the mother of several well-known mathematicians as well) and some guy who claimed to have become death, the destroyer of worlds.

But the town itself is more sinister, even if you disregard its catacombs. There’s the Frankensteiner Hof—or the residence of Frankenstein (maybe a descendant moved to Bavaria and got famous there). As for real monsters, just around the corner from that landmark they have a street named simply Zuckerberg—no suffix for street or alley at all, just Zuckerberg.

It’s much better on the other side of Hessen—in Bavarian Miltenberg they have a plainly named Hauptstraße (meaning simply “main street”), and parallel to it runs Mainstraße, specially for the foreigners who don’t understand German.

A small rant about compression

Wednesday, October 8th, 2025

The recent news about OpenZL made me think about some tangential issue.

The approach by itself is nothing new, really: a lot of archivers include a pre-processing step for the data (I don’t know if there are earlier examples, but de-interleaving or delta-coding floating-point data might be only slightly younger than the geo file in the Calgary Corpus, LZX translates call addresses into absolute offsets for better compression, and so on); more advanced archivers implement flexible processing steps (e.g. RAR had its own custom VM for pre-processing data, essentially a cut-down 8086 instruction set and a security nightmare, and ZPAQ allows defining compression steps for data-specific compression that won’t require a new decoder—in other words, something very similar to OpenZL). There’s nothing wrong with the approach, and it’s probably useful outside, say, genomic data compression; it just raises two questions: what is the currently accepted trade-off between compression and resources, and what would be a good candidate for an open-source archiver?
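As an illustration of how such a pre-processing step works, here is a minimal Rust sketch of plain delta coding: a reversible transform that turns slowly-changing data into runs of small values, which whatever general-purpose compressor runs afterwards can squeeze much better. This shows the general idea only, not how OpenZL or any particular archiver implements it.

    // Forward transform: replace each byte with its difference from the
    // previous one (wrapping, so the transform is always reversible).
    fn delta_encode(data: &mut [u8]) {
        let mut prev = 0u8;
        for b in data.iter_mut() {
            let cur = *b;
            *b = cur.wrapping_sub(prev);
            prev = cur;
        }
    }

    // Inverse transform: a running sum restores the original bytes.
    fn delta_decode(data: &mut [u8]) {
        let mut prev = 0u8;
        for b in data.iter_mut() {
            prev = prev.wrapping_add(*b);
            *b = prev;
        }
    }

    fn main() {
        // A slowly rising ramp: all input bytes differ, all deltas are 1,
        // so the compressor after this step sees one long trivial run.
        let mut buf: Vec<u8> = (0u8..64).collect();
        let orig = buf.clone();
        delta_encode(&mut buf);
        delta_decode(&mut buf);
        assert_eq!(buf, orig);
        println!("round trip ok");
    }

De-interleaving float fields or rewriting call addresses follows the same pattern: a cheap, lossless transform layered in front of the entropy coder.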

The first question is obvious: as time goes by, the available CPU power and RAM grow along with the amounts of data to compress. Back in the day gzip was the gold standard, while bzip2 was something that ate too much RAM and worked rather slowly. A bit later .tar.bz2 started to replace .tgz for, say, distribution tarballs. Nowadays it’s .tar.xz or .tar.zst, which makes me wonder whether that is really the sweet spot for now, or whether things will move towards adopting a compression scheme that’s slower but offers a better compression ratio.

The second question follows from the first one: what would be a good candidate, specifically for open-source applications? If you look around, there are not so many of those. You can divide the existing formats (not to be confused with their implementations) into several (sometimes overlapping) categories:

  • proprietary formats with an official open-source decoder at best (like RAR) or only an unofficial reverse-engineered one (e.g. the RAD mythical-sea-creature formats and LZNIB);
  • open-source compression libraries targeting fast compression (LZO, LZ4, FLZ, LZF and so on);
  • old open-source compressors (compress, gzip, bzip2, zip);
  • various programs trying to bank on a well-known name while not being related to it (bzip3, and really anything with “zip” in its name);
  • state-of-the-art compressors that require insane amounts of CPU and RAM (anything PAQ-based, NNCP);
  • corporate-controlled open-source formats (brotli, Zstandard).

The question is what would be a good candidate for the next de-facto compression standard. The current widespread formats are good since they’re easy to implement and there are many independent implementations in various languages, but how much can we trust the next generation—the one with flexible input pre-processing? (The third question would be whether that’s really the design approach mainstream compression formats will take.)

For instance, I have nothing against LZMA, but considering that its author is russian, how much can we trust that he won’t be visited by FAPSI representatives and forced to make changes to the LZMA3 design that would make Jia Tan green with envy? As for the formats coming from corporations, are you really going to rely on their goodwill? I think the story of LZW should serve as a warning.

The only reassuring thing is that it is still rather easy to design a new compression scheme and even achieve a decent compression ratio and performance (unlike training a neural network or even designing a video codec to rival H.265), so good candidates are likely to appear sooner or later.

“AI” is not there to help you

Thursday, October 2nd, 2025

I’m not writing this post to convince anybody; I’m writing it mostly to formulate my thoughts, so that I can refer to it later saying “called it”.

First of all, what do I have against AI, and why is the first word of the title in quotes? Not much, actually; it’s just that what gets hyped as AI nowadays is far from it—hence the quotes. It can do something, and sometimes it can do it well—but in general it is far from being intelligence.

IMO it’s more accurate to call it artificial managers, since it does what your typical manager does: spew completely meaningless bullshit, take your work and reword it in corporate-speak, plagiarise somebody’s work and take credit for it. Also, maybe it’s acceptable for a typical USian never to learn anything, but normally a human is expected to keep learning and re-evaluating things throughout their whole life. Of course I’m no AI scientist (and so my opinion does not matter), but I believe that a proper AI should have two feedback loops: an inner loop that controls what is being done, and an outer loop that adjusts knowledge based on new experience. The inner feedback loop means that while executing a task you try to understand what you got and how it relates to the goal, and then adjust what you’re doing if necessary. It’s like the famous joke about the difference between physicists and mathematicians asked to boil a kettle that is already full and on the stove: the physicist will simply light a match and start the fire, while the mathematician will take the kettle off the stove and pour the water out, thus reducing the task to a well-known one. The outer feedback loop means learning from experience. For example, LLMs apparently still make the same mistake as small children when answering which is larger, 4.9 or 4.71; unlike small children, they don’t learn from it, so next time they will give the same answer or make the same mistake on some other pair of numbers. I reckon implementing both loops is feasible, even if the inner loop would require a magnitude more resources (for reverse engineering its own output, calculating some metric of deviation from the goal, and re-doing the work if needed); the outer loop is much worse, since it would mean going over the knowledge base (model weights, whatever) and adjusting it (by reinforcing some parts and demoting or even deleting others).
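Here is a crude Rust sketch of what I mean by the inner loop: produce a result, measure how far it is from the goal, and redo the work until it is close enough or the attempt budget runs out. The generate() and deviation() functions are stand-ins for whatever a real system would do.

    // Hypothetical generator that happens to get closer to the goal with
    // each attempt; a real system would feed the measured deviation back in.
    fn generate(attempt: u32) -> f64 {
        100.0 / (attempt + 1) as f64
    }

    // How far the produced result is from what was actually asked for.
    fn deviation(result: f64, goal: f64) -> f64 {
        (result - goal).abs()
    }

    fn inner_loop(goal: f64, tolerance: f64, max_attempts: u32) -> Option<f64> {
        for attempt in 0..max_attempts {
            let result = generate(attempt);
            if deviation(result, goal) <= tolerance {
                return Some(result); // good enough, stop re-doing the work
            }
        }
        None // give up; learning from this failure is the outer loop's job
    }

    fn main() {
        match inner_loop(0.0, 1.0, 1000) {
            Some(r) => println!("converged to {}", r),
            None => println!("gave up, time to adjust the knowledge base"),
        }
    }

The None case is exactly where the outer loop should kick in and adjust the knowledge base, which is the part current systems lack.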

So if I believe it can be improved, why do I claim it’s not helpful? What I’m saying is that while in its current state it still may be useful to you, it is not being developed to make your life easier. It should be obvious that developing such a system takes an enormous effort—all the input data to collect and process, let alone the R&D and controlling the learning—so it’s something that can be done only by a large community or a large company (the latter often stealing the results of the former). And companies do things not to advance human well-being but rather to get profit, “dishonestly, if we can; honestly if we must” (bonus points for recognising which sketch this quote is from). I consider the current situation to be a kind of arms race: somebody managed to convince somebody that AI will be the ultimate solution, so the company that gets the first practical solution will gain an extreme advantage over its competitors—thus the current multi-billion budgets are spent mostly out of fear of missing out.

What follows from the fact that AI is being developed by large companies in pursuit of commercial interests? Only that its goal is not to provide a free service but rather to return the investments and make a profit. And the profit from replacing an expensive workforce is much higher (and more real) than what you might get from just offering some service to random users (especially if you do it for free). Hence the apt observation that “AI” takes over creative (i.e. highly-paid) work instead of house chores, while people would rather have it the other way round.

As a result, if things go the way the companies developing AI want, a lot of people will become rather superfluous. There will be no need for developers, and no need for people doing menial tasks like giving out information, performing moderation and such (we can observe that even now, to a large extent). There will be no reason for those free-to-play games either, as the non-paying players there exist only to create a background for the whales (called so because they spend insane amounts of money on the game). Essentially the whole world will be like the Web of Bullshit, with people being rather a nuisance.

Of course this is just an attempt to model how events will develop based on incomplete data. Yet I remain an optimist and expect humanity to drive itself into an early grave before AI poses any serious threat.

MVI2: done

Thursday, August 14th, 2025

I’m almost done with Motion Pixels at last. Of course I skipped implementing some exotic modes, but at least the files I could find play fine and don’t complain about missing modes. I just need to put the finishing touches on it and commit it all, probably on Saturday.

The next post should be dedicated to the intricate details of the codec(s) and a comparison with its better-known competitors of similar design (TrueMotion 1/RT/2/2X and Indeo 2/3), but for now all I need to say is that frames may be coded in several modes (RGB, or YUV with one chroma sample per 2×1, 2×2, 4×2 or 4×4 block), some parts may use low-resolution delta coding (with its own peculiarities depending on the line number and sampling mode); and since that was not enough, they added a smooth delta coding mode (which also has its own peculiarities in the low-resolution coding mode). And of course there’s a single-field coding mode. And some features seem to be duplicated using different flags. Since I’ve not found any samples for most of them, I simply implemented the basic modes, the 4×4 YUV mode with lowres, and all YUV modes with optional lowres and smooth delta coding (since MovieCD samples seem to exercise them all).
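For those wondering what those sampling modes mean in practice, here is a loose Rust sketch: one chroma value is stored per W×H block of luma samples, so reconstructing a pixel means indexing the chroma plane at block granularity. This merely illustrates the general idea and is not the actual MVI2 bitstream layout.

    #[derive(Copy, Clone)]
    enum ChromaMode {
        C2x1, // one chroma sample per 2x1 luma block
        C2x2,
        C4x2,
        C4x4,
    }

    impl ChromaMode {
        fn block_dims(self) -> (usize, usize) {
            match self {
                ChromaMode::C2x1 => (2, 1),
                ChromaMode::C2x2 => (2, 2),
                ChromaMode::C4x2 => (4, 2),
                ChromaMode::C4x4 => (4, 4),
            }
        }

        // Index of the chroma sample used by the luma pixel at (x, y).
        fn chroma_index(self, x: usize, y: usize, chroma_stride: usize) -> usize {
            let (bw, bh) = self.block_dims();
            (y / bh) * chroma_stride + (x / bw)
        }
    }

    fn main() {
        let mode = ChromaMode::C4x4;
        // For a 16-pixel-wide frame in 4x4 mode the chroma plane is 4 wide.
        let stride = 16 / mode.block_dims().0;
        // Pixel (7, 5) falls into chroma block (1, 1), i.e. sample 5.
        assert_eq!(mode.chroma_index(7, 5, stride), 5);
        println!("pixel (7,5) -> chroma sample {}", mode.chroma_index(7, 5, stride));
    }

The coarser the block, the fewer chroma samples get stored, which is exactly the bitrate/quality trade-off those modes offer.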

The best part is that nobody cares. NihAV can’t be interfaced with MPlayer easily, and discmaster.textfiles.com is not likely to change anything (e.g. the files here are recognised as aviAudio type despite having a video track that nihav-encoder can decode just fine; or BATH06.MOV—no link since it’s the only one in the database—which can be converted with the same tool but isn’t even recognised as QT MOV; so I don’t expect that MVI1/2 files will get a video track either). And I never cared about the codec much myself, not having any content coded with it for starters.

Anyway, with this waste of time over, what’s next? While searching for the samples I found a couple of other MVI formats that may be good candidates for na_game_tool. There is a lot of janitorial work for NihAV as well (for example, I want to rewrite the AVI demuxer—it’s one of the first pieces of code I implemented for the project, and now I see that some things could’ve been done differently and better). And I’ve finally decided on a name for the new tool: na_eofdec (NihAV exotic/obscure formats decoder). Now all that is left is to RE and implement enough formats for a release of both of those tools.

Don’t expect any of this to happen soon though; I am lazy and work on it only when I’m in the mood. For example, this post might’ve been about why wavelet compression for video (and images) inherently sucks—but I still haven’t got into a proper mood for writing that one.

Strata, or yet another reason for not living in the USA

Tuesday, July 8th, 2025

Disclaimer: this post is not about politics at all, but rather about my personal reason.

Here’s the story that finally made me realise why the USA is not for me (besides many other reasons that have something to do with my tastes; and don’t label them sour grapes—I got some job offers from there back in the day, yet I rejected them in favour of Europe).

I like to spend weekends and holidays travelling around. Before 2020 I liked to travel somewhere far away and then around that distant point; now I take only local trips (for health considerations). Last Sunday I decided to visit Seligenstadt, but thanks to Deutsche Bahn I missed the connection and had to wait for almost an hour in Hanau. Since I had nothing better to do, I decided to take a walk there, and was shocked.

The town turned out to be not merely Grimm (being the birthplace of the famous brothers) but also grim and soulless. Essentially all it had to offer in terms of historical buildings was its town hall and a church (maybe I could have discovered more, but probably not in the town centre, and definitely not in the time I had). You don’t need to read any documents to guess that Hanau was heavily bombed during WWII and that nobody bothered to reconstruct it (it’s much more important to have a bank and an airport, after all).

And then I saw Seligenstadt, which is the complete opposite, with a preserved historic centre next to a more-than-millennium-old monastery. That’s what made me realise that I can’t live comfortably in a place without history. And the USA is exactly a country that is rather poor in that respect (compared even to Mexico). For comparison, I consider my home city rather young—and yet it was founded before the New England royal colonies were chartered, let alone before the USA appeared on the maps as a country. That is why I subconsciously liked Europe; probably other things I like about Europe (like the food) also have roots in its rich historical soil. Similarly, probably a lot of the things I dislike about the USA come from its lack of historical soil (again, like the food).

Of course other people don’t care about such things, which means less competition for them from my side and vice versa.

P.S. In case it was not obvious, this post’s name comes from the early Pratchett novel Strata, where one of the ideas was that newly terraformed worlds also included specially-crafted fossils in the different geological strata—because humanity does not feel right living on a planet without history. That turned out to be true, at least for me.

News and foam

Friday, July 4th, 2025

…the knowledge of certain principles easily compensates the lack of knowledge of certain facts.

Claude Adrien Helvétius, De l’esprit (1758)

Today I want to rant about a different aspect of the world. There is a constant stream of what is called news every minute, but if you take a closer look, most of those pieces are not worthy of any attention. That is why I distinguish news—pieces of information about events that affected something—from foam—derivative pieces that bring no useful information, take up more volume than the original news, and are quite often used to obscure the original source. If you suspect that this applies to other produced content as well (like “X reacts to Y” videos), you may be right.

Anyway, suppose there is some planned event X. Usually the reports related to it will go like this:

  1. X is going to happen!
  2. random unrelated person comments on X;
  3. famous self-proclaimed expert estimates impact of X;
  4. X is scheduled to happen in next couple of days;
  5. X happens;
  6. the administration comments upon X success/failure/whatever;
  7. random unrelated person comments on X;
  8. random tangentially related person comments on X;
  9. aftermath of X.

It should be obvious that #5 is the real piece of news, with #1 and #4 having some importance (and #9 too—but only in the case when the event had unforeseen consequences). The rest is just filler for the news feed (those ad views won’t generate themselves, you know). It may also be done to keep up interest in the topic itself, but then it’s propaganda and not really news.

The statement in the epigraph can be applied to the news as well: if you know how things normally go, you don’t need to follow the news. Here’s a typical example for the news I care about: russia commits its umpteenth war crime (the fact by itself is no news; it’s the circumstances that make it news); the Ukrainian president / minister of foreign affairs / other official condemns it (that’s what they always do, so it’s no news); some European official expresses condolences (still no news); russia celebrates and boasts about hitting an important military target (which almost every time is civilian infrastructure—a tenement house, a post office, a hospital and such; but russians lying is no news either); the USian administration tries its best to ignore the fact that russians did it (if you haven’t spotted the pattern: still no news). There may be some follow-up pieces of actual news eventually (rescuers finishing the operation, new bodies being discovered, some victims of the attack dying at the hospital, a local mourning day being declared if the victim count is too high) but they do not add much to the picture.

Similarly, news from the USA is rather irrelevant if you know a couple of things that have happened recently: USians elected a chaotic president who decided that it’s time to cash in on all the goodwill the USA has been building since the 1940s; his favourite tool is tariffs; his team consists mostly of people picked for their loyalty, not their intellect; and after the events of 2020 he decided that the system of checks and balances hinders him and should be dismantled. So every time I see something about his administration violating the law with no repercussions, members of it proving themselves incompetent with no consequences, or tariffs being declared and/or imposed on some random country and then waived again—those are things to be expected. Even the split of the two greatest lovers on Earth was only a question of time—and when it happened, it was a real piece of news, unlike what they wrote about each other on their own social networks (that may be of interest to future historians and current stand-up comedians though). And if you remember the phrase “Will no one rid me of this turbulent priest?”, then subordinates acting without an explicit presidential order is no news for you either.

Similarly, global EU news is mostly non-existent if you remember that important decisions require consensus—and that there’s Hungary (and Slovakia from time to time) using its veto power to extract benefits (from both the EU and russia, and occasionally China). And of course member countries not willing to spend money on infrastructure and defence are no news either.

In conclusion I want to say that while thinking hurts, it can still save you time. Sometimes important news does happen, but mostly you don’t even need to scan the news headlines that thoroughly.