Archive for October, 2025

A pair^Wtrio of exotic formats

Tuesday, October 14th, 2025

If it looks like I’m not doing anything, that’s about right. Nevertheless, I’d like to discuss two exotic formats that I intend to write decoders for.

The first one is unlike most of the video codecs I’ve seen so far. For starters, it uses fractal compression. That is not surprising since it comes from Iterated Systems; and unlike the later ClearVideo, it really is a fractal codec. From what I see, it works exactly like the textbook example of fractal compression: split the frame into small fixed-size blocks, search for a domain block, apply a simple affine transform to a scaled-down version of it plus brightness scaling, and output the result. There are additional possible operations like leaving a block unchanged or reading raw data for it. Since this works only on greyscale data, the frame is stored in YUV420 format with the planes coded sequentially. Unfortunately, since the binary specification is a mixed 16/32-bit VfW driver that Ghidra can’t decompile properly, work on it proceeds at a glacial pace.
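To make the textbook scheme above more concrete, here is a minimal sketch in Rust of decoding one range block: copy a domain block from the reference plane, downscale it 2× by averaging, then apply brightness scaling and an offset. All names, block sizes and parameters are illustrative assumptions, not taken from the actual bitstream.

```rust
/// Hypothetical textbook fractal-decoding step: map an 8x8 domain block
/// onto a 4x4 range block via 2x averaging plus brightness scale/offset.
/// (Sketch only; the real codec's parameters and layout may differ.)
fn decode_range_block(
    plane: &[u8],                   // reference plane, row-major
    stride: usize,                  // bytes per row
    dom_x: usize, dom_y: usize,     // top-left of the 8x8 domain block
    scale_num: i32, scale_den: i32, // brightness scale as a fraction
    offset: i32,                    // brightness offset
) -> [u8; 16] {                     // resulting 4x4 range block
    let mut out = [0u8; 16];
    for y in 0..4 {
        for x in 0..4 {
            // average a 2x2 cell of the domain block (the 2x downscale)
            let base = (dom_y + y * 2) * stride + dom_x + x * 2;
            let sum = plane[base] as i32 + plane[base + 1] as i32
                    + plane[base + stride] as i32 + plane[base + stride + 1] as i32;
            let avg = (sum + 2) >> 2;
            // brightness scaling plus offset, clipped to the 8-bit range
            let val = avg * scale_num / scale_den + offset;
            out[y * 4 + x] = val.clamp(0, 255) as u8;
        }
    }
    out
}
```

A real decoder would additionally select among the simple affine transforms (flips/rotations of the domain block) and handle the “skip” and “raw” block opcodes, but the core per-block arithmetic looks roughly like this.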

The other codec is like the previous one, but it has its own container format and a DOS player. It comes from TMM—not The Multimedia Mike but rather the company known for the RLE-based PH Video format. I don’t see any mention of Iterated Systems in the binary specification, but considering how similar this FRAC codec is to theirs (it uses the same bitstream format with the same opcode meanings, and even the same assembly instructions), I expect they licensed it from Iterated Systems.

So hopefully when I actually finish it I’ll have two decoders for the price of one.

Update: while refreshing my knowledge of fractal compression, I discovered in the Wickedpedia article on it that two companies claimed to have got an exclusive license for the fractal compression algorithm from Iterated Systems—TMM and Dimension. The latter licensed it to Spectrum Holobyte to be used for FMV. And what do you know, that explains why FVF is named the way it is and why its video bitstream syntax is the same as in the other two (the code seems to be the same too). So I guess I’ll end up with almost the same decoder (but with different containers) in NihAV, na_game_tool and na_eofdec.

A small rant about compression

Wednesday, October 8th, 2025

The recent news about OpenZL made me think about some tangential issue.

The approach by itself is nothing new really: a lot of archivers include a pre-processing step for the data. I don’t know if there are earlier examples, but de-interleaving or delta-coding floating-point data might be only slightly younger than the geo file in the Calgary Corpus, LZX translates call addresses into absolute offsets for better compression, and so on. More advanced archivers implement flexible processing steps: RAR had its own custom VM for pre-processing data (essentially a cut-down 8086 instruction set and a security nightmare), and ZPAQ allows one to define compression steps for data-specific compression that won’t require a new decoder—in other words, something very similar to OpenZL. There’s nothing wrong with the approach, and it’s probably useful outside, say, genomic data compression; it’s just that it raises two questions: what is the currently accepted trade-off between compression and resources, and what would be a good candidate for an open-source archiver?
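As a small illustration of why such pre-processing helps, here is a hypothetical delta-coding step in Rust: slowly-varying samples turn into long runs of small values that a generic compressor handles far better, and the transform is trivially reversible, so the decoder needs no knowledge of the data beyond “undo the delta step”.

```rust
/// Delta-encode a sequence: each output value is the difference from the
/// previous input value (the first one is kept as-is, diffed against 0).
fn delta_encode(data: &[i32]) -> Vec<i32> {
    let mut prev = 0;
    data.iter().map(|&v| { let d = v - prev; prev = v; d }).collect()
}

/// Inverse transform: running sum over the deltas restores the original data.
fn delta_decode(deltas: &[i32]) -> Vec<i32> {
    let mut acc = 0;
    deltas.iter().map(|&d| { acc += d; acc }).collect()
}
```

Feeding something like sensor readings or audio samples through `delta_encode` before a general-purpose compressor is the simplest instance of the idea; schemes like OpenZL or ZPAQ generalise it into a whole configurable pipeline.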

The first question is obvious: as time goes by, the available CPU power and RAM grow along with the amounts of data to compress. Back in the day gzip was the gold standard, and bzip2 was something that ate too much RAM and worked rather slowly. A bit later .tar.bz2 started to replace .tgz for, say, distribution tarballs. Nowadays it’s .tar.xz or .tar.zst, which makes me wonder if that’s really the sweet spot for now or if things will move towards adopting a compression scheme that’s slower but offers a better compression ratio.

The second question follows from the first one: what would be a good candidate, specifically for open-source applications? If you look around, there are not that many of those. You can divide the existing formats (don’t confuse them with implementations) into several (sometimes overlapping) categories:

  • proprietary formats with an official open-source decoder at best (like RAR) or unofficial reverse-engineered one (e.g. RAD mythical sea creatures formats and LZNIB);
  • open-source compression libraries targeting fast compression (LZO, LZ4, FLZ, LZF, etc, etc);
  • old open-source compressors (compress, gzip, bzip2, zip);
  • various programs trying to bank on a well-known name while not being related to it (bzip3, and anything with “zip” in its name really);
  • state-of-the-art compressors that require insane amounts of CPU and RAM (anything PAQ-based, NNCP);
  • corporate-controlled open-source formats (brotli, Zstandard).

The question is what would be a good candidate for the next de-facto compression standard. The current widespread formats are good since they’re easy to implement and there are many independent implementations in various languages, but how much can we trust the next generation—the one with flexible input pre-processing? (The third question would be whether that’s really the design approach mainstream compression formats will take.)

For instance, I have nothing against LZMA, but considering that its author is russian, how much can we trust that he won’t be visited by FAPSI representatives and forced to make changes to the LZMA3 design that would make Jia Tan green with envy? As for the formats coming from corporations, are you really going to rely on their goodwill? I think the story with LZW should serve as a warning.

The only reassuring thing is that it is still rather easy to design a new compression scheme and even achieve a decent compression ratio and performance (unlike training a neural network or even designing a video codec to rival H.265), so good candidates are likely to appear sooner or later.

“AI” is not there to help you

Thursday, October 2nd, 2025

I’m not writing this post to convince anybody; I’m writing it mostly to formulate my thoughts and so I can refer to it later saying “called it”.

First of all, what do I have against AI and why is the first word of the title in quotes? Not much, actually; it’s just that what gets hyped as AI nowadays is far from it—hence the quotes. It can do something, and sometimes it can even do it well, but in general it is far from being intelligent.

IMO it’s more accurate to call it artificial managers, since they do what your typical manager does: spew completely meaningless bullshit, take your work and reword it in corporate-speak, plagiarise somebody’s work and take credit for it. Also, maybe it’s acceptable for a typical USian to never learn anything, but normally a human is expected to keep learning and re-evaluating things throughout their whole life. Of course I’m no AI scientist (and so my opinion does not matter), but I believe that proper AI should have two feedback loops: an inner loop that controls what is being done, and an outer loop that adjusts knowledge based on new experience.

The inner feedback loop means that while executing a task you try to understand what you got and how it relates to the goal, and then adjust what you’re doing if necessary. It’s like the famous joke about the difference between physicists and mathematicians asked to boil a kettle that is already full and on the stove: the physicist will simply light a match and start the fire, while the mathematician will take the kettle off the stove and pour the water out, thus reducing the task to a well-known one.

The outer feedback loop means learning from experience. For example, LLMs apparently still make the same mistake as small children when asked which number is larger, 4.9 or 4.71; unlike small children, they don’t learn from it, so next time they will give the same answer or make the same mistake on some other numbers. I reckon implementing both loops is feasible, even if the inner loop would require an order of magnitude more resources (for reverse engineering its own output, calculating some metric of deviation from the goal and re-doing the work if needed); the outer loop is much worse, since it would mean going over the knowledge base (model weights, whatever) and adjusting it (by reinforcing some parts and demoting or even deleting others).

So if I believe it can be improved, why do I claim it’s not helpful? What I’m saying is that while in its current state it may still be useful to you, it is not being developed to make your life easier. It should be obvious that developing such a system takes an enormous effort—all the input data to collect and process, let alone R&D and learning control—so it’s something that can be done only by a large community or a large company (the latter often stealing the results of the former). And companies do things not to advance human well-being but to make a profit, “dishonestly, if we can; honestly if we must” (bonus points for recognising which sketch this quote is from). I consider the current situation to be a kind of arms race: somebody managed to convince somebody that AI will be the ultimate solution, so the company that gets the first practical solution will gain an extreme advantage over its competitors—thus the current multi-billion budgets are spent mostly out of fear of missing out.

What follows from the fact that AI is being developed by large companies in pursuit of commercial interests? Only that its goal is not to provide a free service but rather to return the investments and make a profit. And the profit from replacing an expensive workforce is much higher (and more real) than what you might get from merely offering some service to random users (especially if you do it for free). Hence the apt observation that “AI” takes over creative (i.e. highly-paid) work instead of house chores, while people would rather have it the other way round.

As a result, if things go the way the companies developing AI want, a lot of people will become rather superfluous. There will be no need for developers; there will be no need for people doing menial tasks like giving out information, performing moderation and such (we can observe that even now to a large extent). There will be no reason for those free-to-play games either, as non-paying players are there just to create a background for whales (called so because they spend insane amounts of money on the game). Essentially, the whole world will be like the Web of Bullshit, with people being rather a nuisance.

Of course, this is just an attempt to model how events will develop based on incomplete data. Yet I remain an optimist and expect humanity to drive itself into an early grave before AI poses any serious threat.