Archive for the ‘Useless Rants’ Category

Things I want in Ghidra

Wednesday, April 14th, 2021

Ghidra, as you should know already, is a disassembler and decompiler for unprofessional folks who can’t cough up a couple thousand dollars and pass a background check to buy the disassembler and decompiler the experts use. And here’s a list of things that would make Ghidra better or much better for me. I know it’s opensource but I’d rather not touch large codebases written in Java (or any codebases written in Java, really). Disclaimer: at the time of writing I’m still using Ghidra 9.2 even though 9.2.2 has been available for months, but I doubt any of the things mentioned here have been implemented already.

First of all, there’s a bug with x86 disassembly: while rep movsX is recognized and works fine, some of the code I’ve seen uses repne movsX, which is treated as a plain movsX even by the disassembler. Initially I was confused by this bug and only disassembling the instruction bytes with ndisasm proved that it’s not the compiler (or assembly author) missing a prefix but rather Ghidra ignoring it entirely. Some other rarely seen instructions that involve an FPU operation and addressing with segment registers (e.g. ES, FS or GS; if you have no idea what those are for, be thankful) and explicit offsets are disassembled incorrectly, consuming one byte more than required (and thus making the following instruction disassemble incorrectly as well).
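
To illustrate the repne bug: the two instructions differ only in the prefix byte, which is trivial to see in raw bytes. A minimal Rust sketch (the byte values are the standard x86 encodings; the helper function itself is made up for illustration):

    fn describe(code: &[u8]) -> &'static str {
        // 0xF3 is the REP prefix, 0xF2 is REPNE, 0xA5 is the MOVSD string op
        match code {
            [0xF3, 0xA5, ..] => "rep movsd",
            [0xF2, 0xA5, ..] => "repne movsd", // what Ghidra 9.2 shows as plain movsd
            [0xA5, ..] => "movsd",
            _ => "something else",
        }
    }

    fn main() {
        // the byte pair that ndisasm decodes correctly and Ghidra does not
        assert_eq!(describe(&[0xF2, 0xA5]), "repne movsd");
    }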

And speaking of assembly, the current program text search is nice since you can search inside a specific part of an instruction (e.g. MOVSD.REP in the instruction name or 0x800 in its operand) but it would be even better to have a more generic search by regular expression. Quite often I want to locate a specific instruction doing e.g. a shift left by five. The problem is that there are many shift instructions and even more instructions with an operand having 5 somewhere in it, and since I don’t know the exact operand register, searching for shl eax, 5 first, then shl ebx, 5 and all the way to shl dh, 5 is tedious. The same can be said about dumping the listing and searching there; that works as intended, but it’s an extra step.
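
Here’s a sketch of the kind of search I mean, assuming the listing has been dumped to text and using the regex crate (the listing lines are made up for illustration):

    use regex::Regex;

    fn main() {
        // hypothetical lines from a dumped listing
        let listing = [
            "00401000 c1 e0 05  SHL  EAX,0x5",
            "00401003 c1 e3 04  SHL  EBX,0x4",
            "00401006 c1 e6 05  SHL  ESI,0x5",
        ];
        // any shift-left by five, whatever the register operand is
        let re = Regex::new(r"SHL\s+\w+,0x5\b").unwrap();
        for line in listing.iter().filter(|l| re.is_match(l)) {
            println!("{}", line);
        }
    }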

Besides the issues above and the idiosyncratic x86 assembly syntax (which does not bother me much though), I have nothing else to complain about in the disassembler, so let’s move on to the decompiler issues.

I suspect that decompiler output is not stored permanently, but it would be nice to mark some function as being “decompiled, it’s fine, do not touch it, I mean it”. Judging by the complaints of others, the process of a function being decompiled again and again, even when you change something not related to it in any way, annoys more people than just me. So it would be nice to mark some large decompiled function as permanently decompiled so it’s not re-decompiled on every subsequent visit.

And speaking of functions, it would be nice if functors (aka function pointers) were supported properly instead of just detecting that a variable is a pointer to a function. When arguments are passed on the stack, the decompiler can usually detect that; but when arguments are passed in registers you have to trace the registers and their values by hand. Of course you’d need a monstrous syntax to specify a type for e.g. a function that accepts three arguments in designated registers, but I can still wish for it, can’t I?

Another thing I often wish for is being able to tell the decompiler that after a certain point a variable is no longer valid and it should treat subsequent uses of that register or stack slot as a new variable. A very common example: the first argument (usually some context pointer) is moved to a register (or is passed in a register already), some fields are read from the context, the context value is stored to some local variable, and then the initial register is reused, for example as a loop counter. That gets decompiled to something monstrous like for (ctx = (Context*)0; ctx < (Context*)42; ctx = (Context*)((int)ctx + 1)) { ... } and it also screws up the types for variables involved in the same expression as this loop counter.

Of course not all of these things can be easily implemented, and maybe some of them would require architectural changes. But I prefer to cherish my ignorance of Ghidra’s internal details and just point out what I’d find good to have in principle.

I feel old

Sunday, April 4th, 2021

Probably it’s exactly when you start complaining about things changing that you realize you’ve become old. And as you can guess, now it’s my turn to complain about things changing.

The last tipping point for me was when I tried to update rustc from version 1.33 that I’d been using. I wanted to do it mostly for three things: atomic types beside bool and usize (for use in the NihAV video player that should be written eventually), replacing the now-deprecated mem::uninitialized(), and being able to switch to a newer version of the SDL2 bindings for the future video player (because the old version depends on about 25 more crates than the newer ones).
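
For reference, this is roughly what replacing mem::uninitialized() looks like; a minimal sketch with a made-up buffer-filling example (mem::uninitialized() has been deprecated since Rust 1.39 in favour of MaybeUninit):

    use std::mem::MaybeUninit;

    fn read_header() -> [u8; 16] {
        // old way: let mut buf: [u8; 16] = unsafe { std::mem::uninitialized() };
        let mut buf = MaybeUninit::<[u8; 16]>::uninit();
        let ptr = buf.as_mut_ptr() as *mut u8;
        for i in 0..16 {
            // every byte must be written before assume_init(),
            // otherwise reading the array is undefined behaviour
            unsafe { ptr.add(i).write(i as u8) };
        }
        unsafe { buf.assume_init() }
    }

    fn main() {
        assert_eq!(read_header()[3], 3);
    }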

So I tried the latest-and-greatest rustc 1.51 and ran cargo clippy to see what has changed in the meantime. And besides seeing that there’s now a special macro matches!() as a convenient shorthand for checking that an object is some variant(s), I could not see anything useful because the linter spammed everything with “selected name is bad, you should change NASomething into NaSomething“. Which looks ugly, and dropping the NA prefix entirely may make names too generic and collide with the standard ones. And before you tell me: I’m aware of the lint suppressor flag and that there’s a fix that makes it not complain about public names. Still, I find this brain-damaged and thus I’ve settled on rustc 1.46 for now.
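
In case you also jumped several releases and missed it, here’s what that matches!() shorthand looks like (the enum is made up for illustration):

    #[allow(dead_code)]
    enum Frame {
        Video { pts: u64 },
        Audio { pts: u64 },
        Skip,
    }

    fn has_timestamp(frame: &Frame) -> bool {
        // before matches!() this required a full match statement
        // with a catch-all arm returning false
        matches!(frame, Frame::Video { .. } | Frame::Audio { .. })
    }

    fn main() {
        assert!(has_timestamp(&Frame::Video { pts: 0 }));
        assert!(!has_timestamp(&Frame::Skip));
    }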

Then there’s an even funnier thing. I’ve discovered that crates.io (the site for navigating Rust packages) has stopped working in Firefox 52 (docs.rs still works fine, for now). Probably some new JavaScript feature (used somewhere inside a rather large bundle) makes the scripts fail to parse, and the site simply gives you a cryptic message “Sorry, it looks like we were not able to load the page.” without explaining anything, leaving you to find out on your own that the problem is supposedly on your side.

My friend Luca actually wasted some time creating an issue for that and got the obvious reply that the browser is outdated and this is expected. For the record, GitHub at least prints a message that the browser version is too old and not supported (yet the main functionality still works), and the same is true for the WordPress instance where I’m typing this post.

So why don’t I update this four-year-old browser? Because it’s painful. The browser is tied to the Linux distribution, so either I compile it myself (I tried that once with an earlier Firefox, ran out of disk space and almost out of swap space too, so I’m not eager to compile any browser more complex than elinks) or I update the distribution, which is even more painful because of the drastic changes in software.

I’m not talking about desktop environments per se, even if they expose the problem. I don’t care much about how the windows look or where the launch menu is located. But I do care about the programs I’m used to no longer being part of the distribution or changing their interface in a radical way. A simple example: for various reasons I like to have several input layouts and methods (for the English, Ukrainian, Russian and Japanese languages). On Ubuntu 12.04 I have keyboard layout switching between English and Ukrainian layouts (the latter can also produce Russian or Belarusian letters with AltGr) plus mozc to input Japanese (previously it was Anthy, and who knows what the default Japanese IME will be in 2024). On Ubuntu 18.04 (which I use on a different machine) you can’t set it up the same way: the Ukrainian layout does not have AltGr support, and mozc by default outputs Latin characters with no setting or key combination to make it output kana instead (of course I can simply press the kana mode key on my Japanese keyboard, but that means connecting an external keyboard just for that). And even if my fingers are the wurst, sausage fingers hardly hitting the correct key on the first try, I still want to perform keyboard actions using the keyboard.

The sad thing is that I somewhat understand why this happens. Web sites get bloated and do not work with older browsers because of the browser war (no plural, you know The Browser) and the shiny features developers are eager to try (and testing on older versions takes time and effort better spent elsewhere, if people remember about it at all). Some programs need to be updated because of security issues (e.g. I upgraded from Ubuntu 10.04 mainly because of TLS troubles leading to programs not being able to open half of the sites over HTTPS). Some programs get replaced because the maintainer left and there’s nobody to step in. Some interfaces need to be adapted to the new reality (while Debian on Nokia smartphones is not so popular now, Ubuntu on tablets seems to be the next popular thing). And there’s GNOME and freedesktop.org, which seem to be the main sources of disruptions. I can’t explain the logic by which they change things, but thanks to their vision the mount point for network shares in the next release will probably be different from the two previous places (and when you’re still using the command line to access files on those shares like I do, this feels annoying).

The sad thing is that all other non-toy OSes have the same problems. Windows users might still remember the Windows Vista GUI affectionately and like how Windows 10 changes GUI elements from time to time. *BSD seems to be more stable but they still support GNOME. macOS users might still be ashamed of their muscle memory failing them when an OS update decided to reverse the scroll direction. This is also why the Linux kernel is still precious: it still cares about its interfaces being backward compatible so the userspace can rely on them, while you can’t say the same about glibc 2.16.

And here we have it. As I said in the beginning, this whining about programs and environments changing constantly mostly tells you that I’ve become old. On the other hand, if you think the “move fast and break things” motto is a good idea, remember who coined it and think again.

Why I don’t consider Monkey Island games to be the best adventure games

Monday, March 15th, 2021

Disclaimer: I try to write posts mostly about multimedia stuff but some things have been nagging me for a long time. So I write this to get rid of that nagging and move along with my lack of life.

First of all, I need to say that even if I grew up in a land that mostly favoured Sierra games, I don’t think poorly of the Monkey Island series (the first three games for sure). The setting is interesting and fun, the graphics were good (for the first three games at least), the music is memorable (even if the iMUSE system was under-appreciated), and the characters became part of the culture. I’ve played them countless times (MI1 in both floppy and CD versions) and will probably play them again soon. I agree that those are very good games, but there’s one thing about them that prevents me from agreeing with the majority that MI 1 and MI 2 are among the best adventure games of all time.

Videos in Russian quests – FLIC and Indeo

Friday, February 19th, 2021

It should be obvious from the fact that ex-USSR countries usually have “quest games” or simply “quests” instead of adventure games that Sierra titles were favoured way more than anything else (while Germany seems to be more of a Lucas Arts land). But unlike with either of those companies, the history of Russian quest games is much more limited both in time and quantity.

There are just a couple of games that can be called Russian quests. They appeared in 1997-1998 and are remembered dearly to this day. Almost everything that came later has been forgotten, and for good reasons too.

First I’d like to give a short review of the games:

Hopefully final stupid question

Saturday, February 13th, 2021

Since my previous question got a satisfying answer (even two), maybe I’ll get some answers for this question too. It’s not been bugging me, but it’s been with me for a long time.

First, the background. What we have:

  • there’s a significant demand for video editing software from all those content creators on various video hosting platforms;
  • there are some commercial products that get mentioned rather often (usually with complaints about their stability);
  • there are some freeware or shareware products (less mentioned, but you can see them in paid advertisements) that are built on opensource technologies (many of them include libavcodec in some form, one even turned MEncoder into a .dll; I checked these programs specifically to see if they have some interesting decoders bundled with them, no luck so far);
  • there are some open-source video editors as well, like Cinelerra, Kdenlive or LiVES, that keep existing in obscurity (AviSynth or Blender are more popular but they’re a different class of programs);
  • most of the building blocks for input/output format support or filters are already there (and if not, I’m pretty sure Paul B. Mahol can add more on request);
  • there’s a certain traffic-cone-shaped foundation (maybe VLC secretly stands for cône de Lübeck video) that is known for making an opensource video player that the majority of people have heard of and use, on various platforms too;
  • beside the player (which can also stream and do other things) they have supported other projects (like x264 or dav1d).

So the question (or rather a tree of related questions) is: why can’t those people organise and make a popular open-source video editor? Are they not interested in a product with fewer than ten million potential users? Are there any fundamental issues in creating a video editor that I’ve not seen (I remember VLMC, which did not even look like a serious attempt)? Or maybe there are some ideological reasons preventing its creation?

To me it looks like it’s possible to build an opensource video editor: it would use mostly the same blocks as a video player, there should be some demand for it, and making a popular open-source product is clearly possible. So why has it not been done, what am I missing in the picture?

…and silence will be the answer.

Another stupid question

Wednesday, February 10th, 2021

Here’s another thing that has been bothering me for some time. As we all know, RDBMSes more complex than SQLite have data management that is more complex than simply writing all data into one file sequentially, plus you have database users belonging to different roles with different permissions on different databases. And you’re supposed to connect to it via the usual networking mechanism even if you’re running it on the same machine. And the advanced features include multi-machine interaction like sharding, replication and such.

So the question is: if this all looks like a standalone operating system, then why do enterprise-grade RDBMSes run as applications instead of being their own OSes built on some kernel to manage CPU cores, storage and network interfaces? And in the age of containers we don’t really need an app version even for legacy setups, just replace the driver (or merely its settings).

But maybe, like the last time when I wondered why GPUs do not have raytracing, I’ll learn that it’s already a trend and I’m just oblivious to it.

Ghidra and DPMI

Sunday, January 24th, 2021

When trying to look inside an old game you might have a problem if the executable is a 32-bit one running with some DOS extender.

Most people do not know (and those who know probably want to forget it) that beside the usual 16-bit COMs and EXEs (which can have several segments and sometimes an overlay too, i.e. an external file for loading code or data from disk) you can have 32-bit DOS applications with a very special interface to access so-called protected mode. Essentially you have two executables bundled together: the actual 32-bit executable in its own format and the loader for it (or a stub that calls an external loader, usually named dos4gw.exe). Kinda like the PE format but more flexible.

So what’s the problem with loading them? For starters, Ghidra does not recognize the extended format and loads just the loader (an external LX loader plugin did not help either; yet when I tried to load some ActiveX components for XVD decoding, their format was detected as LX). Another problem is that some extenders (like the one from CauseWay) also compress the payload. Luckily there’s the DOS/32A extender, which ships with a utility to extract the so-called linear executable; you can’t run it but you can study it.

And this is not hard but somewhat tedious since (at least in my cases) it can be loaded only as a raw image. From the LX header you can extract the position of the actual segment data, the segment sizes and their base addresses; then you can use that information to create custom memory segments for them. Now you have some (usually two) segments and one problem: the segments do not address the full virtual address but rather an offset, e.g. if you have a code segment starting at 0x10000 and a data segment starting at 0x90000, you might have mov eax, [0x4bc] which really should be mov eax, [0x904bc]. That is why the first hundred kilobytes or more of an LX file are dedicated to so-called “fixups” that tell you how you should correct the addresses at specific places in the loaded data (or code). Luckily Ghidra allows patching the addresses in the instructions and re-assigning segment base addresses to have fewer places to patch (and calls/jumps almost always use relative offsets, so they’re not an issue).
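
To make the fixup idea more concrete, here’s a minimal Rust sketch of the rebasing, assuming the simplest possible case: a flat list of offsets where a 32-bit little-endian address gets the target segment base added (the real LX fixup records are considerably more involved than this):

    use std::convert::TryInto;

    struct Fixup {
        src_off: usize,   // where in the raw image the address to patch lives
        target_base: u32, // base address of the segment it should point into
    }

    fn apply_fixups(image: &mut [u8], fixups: &[Fixup]) {
        for f in fixups {
            let raw: [u8; 4] = image[f.src_off..f.src_off + 4].try_into().unwrap();
            // e.g. 0x4BC in a data reference becomes 0x904BC
            let fixed = f.target_base + u32::from_le_bytes(raw);
            image[f.src_off..f.src_off + 4].copy_from_slice(&fixed.to_le_bytes());
        }
    }

    fn main() {
        // mov eax, [0x4bc] with the data segment based at 0x90000
        let mut image = vec![0xA1, 0xBC, 0x04, 0x00, 0x00];
        apply_fixups(&mut image, &[Fixup { src_off: 1, target_base: 0x90000 }]);
        let expected = 0x904BCu32.to_le_bytes();
        assert_eq!(&image[1..5], &expected[..]);
    }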

And how do you find those wrong addresses for a specific data item to patch? A memory search for the low word part of the address works just fine.
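
That trick works because the fixed and unfixed addresses share the same low 16 bits (0x904bc and 0x4bc both end in 0x04bc), so searching for that word in little-endian byte order finds the candidate spots. A trivial sketch of such a search (this is what Ghidra’s memory search does for you):

    fn find_low_word(image: &[u8], addr: u32) -> Vec<usize> {
        let needle = (addr as u16).to_le_bytes();
        image.windows(2)
             .enumerate()
             .filter(|&(_, w)| w == &needle[..])
             .map(|(pos, _)| pos)
             .collect()
    }

    fn main() {
        // mov eax, [0x904bc]; the same search also finds the unfixed [0x4bc]
        let image = [0xA1, 0xBC, 0x04, 0x09, 0x00];
        assert_eq!(find_low_word(&image, 0x904BC), vec![1]);
    }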

Overall, maybe it is not the most convenient way to deal with this format but it works well enough for me. Who knows, maybe in the future I’ll look at more games for formats not yet covered. At least now I know how to do that.

Bink going from RAD to Epic

Friday, January 8th, 2021

So in the recent news I read about Epic Games acquiring RAD Game Tools and I guess I have to say something about that.

RAD was an example of a good niche company: it has been providing game makers with essential tools for more than a quarter of a century while offering new things too. Some people might remember their Miles Sound System and the Smacker format from DOS times, some have heard about Bink video, others care about their other tools or the recent general-purpose data compression library that even got hardware decoding support on PS5. And one of the under-appreciated things is that the developers are free to publish their research, so you can actually read how their things were developed and improved. If that does not convince you of their competence, I don’t know what can. (Side note: considering that you usually get useless whitepapers that evade describing how things work, the posts from Charles or Fabian are especially outstanding.)

Since I’m no expert in business matters and lack insider knowledge, I don’t know if it’s a good or a bad step for the company and its products. Nevertheless I wish them good luck and a prosperous and interesting future, even if we have Electronic Arts to show us an example of what happens when a small company gets bought by a large game developer and publisher.

P.S. I would be grateful if they filled in the missing details about Bink2 video, but this is unlikely to happen, so probably somebody who cares enough about it should finish the reverse engineering.

NihAV: optimisation potential

Sunday, December 13th, 2020

Today I can say what I’ve wasted about two months on: it was an H.264 decoder. For now it’s the only entry in the nihav-itu crate but I might add G.7xx decoders there, or even the standard H.263 decoder in addition to all those decoders based on it.

Performance-wise it is not very good, about 2.5-3x slower than the libavcodec one without SIMD optimisations on random BaidUTube 720p videos, but I’ve not tried to make it the fastest one and prefer clarity over micro-optimisations. Still, it has a lot of optimisation potential, as the title says. I suspect that even simply making the motion interpolation functions work on constant-size blocks would make it significantly faster, let alone adding SIMD. In either case it is fast enough to decode 720p at 2x realtime on my laptop, so if I ever finish a proper video player I can use it to watch content beside game cutscenes and a few exotic files.
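
A sketch of what the constant-size-block idea could look like (the names are made up for illustration, not NihAV’s actual API): with the width as a const generic parameter the compiler knows the copy length at compile time and can unroll or vectorise the inner loop. Const generics are stable since rustc 1.51; on older compilers one function per block size achieves the same:

    fn copy_block<const W: usize>(dst: &mut [u8], dstride: usize,
                                  src: &[u8], sstride: usize, h: usize) {
        for y in 0..h {
            // the compile-time W turns this into a fixed-size copy
            dst[y * dstride..y * dstride + W]
                .copy_from_slice(&src[y * sstride..y * sstride + W]);
        }
    }

    fn main() {
        let src = [1u8; 16 * 16];
        let mut dst = [0u8; 16 * 16];
        // one monomorphised instance per H.264 block width (4, 8 or 16)
        copy_block::<8>(&mut dst, 16, &src, 16, 8);
        assert_eq!(dst[7], 1);
        assert_eq!(dst[8], 0);
    }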

As for the features, it’s limited but it should be able to play conventional files just fine, plus some limited subset of the High profile (just 8-bit 4:2:0 YUV without custom scaling lists). A lot of features that I don’t care about were ignored (proper loop filtering across slice edges: nope; weighted prediction: maybe later; high bit depth or different chroma subsampling support: quite unlikely; interlaced formats: no, on principle).

While developing that decoder I also got better knowledge of H.264 internals, for which I’m not that grateful, but that’s to be expected from a codec designed by a committee with features being added to it afterwards.

In either case, hopefully I won’t be bored enough to do those optimisations unless I have to, so the potential will remain just potential and I’ll do some more interesting stuff instead. And there’s always Settlers II as the ultimate time consumer 😉

Hamburger as the symbol of modern IT terminology

Wednesday, November 25th, 2020

As anybody knows, this American dish of non-American origin is named after the Hamburger Frikadelle, which means “(minced meat) patty from Hamburg”. And because Americans are known for their deep knowledge of other languages, somebody decided that the first syllable is a separate word, and so words like cheeseburger and simply burger were born (you can call it the American wasei-eigo if you like). Anyway, the same process of maiming words and giving them new meanings happens in IT as well, irritating those few who still remember the original word and its meaning.