Deceivingly simple

There are several approaches to designing software architecture that may seem reasonable at a quick glance but prove to be rather bad in the long run. One of those is the monolith design (where the software is an interconnected mess in which you cannot isolate any part without breaking half of it) and its variation, the panelák: a monolith that looks like it has a modular structure, but in reality those modules are often connected by undocumented interfaces and removing one of them may break the rest. Just look at systemd, a modern init system that for some reason has to reimplement various previously stand-alone services (like network management) as its own components in order to function properly.

But recently I’ve noticed another, rather opposite, design pattern that nevertheless leads to an equally bad outcome: having a simple but rather useless core that anybody can implement but nobody will be satisfied with, which leads to a lot of (often competing) extensions that rarely get supported by the majority, ending up with a mess you can’t rely upon. I think the appropriate metaphor here would be amoebas: something with a small core and an amorphous body, cloning itself, with the resulting clones possibly ending up very different from each other over time (and detrimental to your health).

The first example would be Wayland. It’s just a protocol that allows applications to draw graphical data on a screen by sending messages to a compositor, so it’s not that hard to implement. And when the core protocol was found to lack certain functionality (like screen sharing or window positioning), various desktop environments simply started introducing their own extensions to work around the gaps left by the core design. As a result, you can never be sure whether the feature you want will work. To be fair, the X11 protocol has similar problems with extensions, but since everybody uses roughly the same single implementation it’s less of an issue there.

Another example is lossless audio compression in general. It’s too easy to create a working lossless audio codec (just predict the next sample and code the difference; there are several conventional ways to do each step, so you really only need to tune some parameters instead of inventing anything), so such codecs appear every year if not more frequently (and those are only the ones I’m aware of). I vaguely remember a post on HydrogenAudio telling the story of how the popular lossless audio formats came to life; to put it shortly, each preceding one had some Fatal Flaw™: closed source code, lack of Windows builds, lack of decoder plugins for some popular players and so on. Of course FLAC, Monkey’s Audio and WavPack have different internal designs, but they represent the three main groups of audio codecs quite well (fast codecs built on LPC and universal integer codes, high-compression codecs using long adaptive filters with arithmetic coding, and codecs packing many various features into one package, respectively).
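
To give an idea of how little is needed, here’s a minimal sketch in Python of that “predict the next sample and code the difference” scheme: a fixed second-order predictor plus Rice coding of the residuals. The predictor order, the Rice parameter and all the names are my illustrative choices, not taken from any existing codec.

def zigzag(v):
    # map a signed residual to an unsigned value so it can be Rice-coded
    return v * 2 if v >= 0 else -v * 2 - 1

def rice_encode(value, k):
    # Rice code: quotient in unary, remainder in k bits (returned as a bit string)
    u = zigzag(value)
    q, r = u >> k, u & ((1 << k) - 1)
    return "0" * q + "1" + format(r, "0{}b".format(k))

def encode(samples, k=2):
    # store the first two samples verbatim (16 bits each), then code the
    # residuals of a fixed second-order predictor: p[i] = 2*s[i-1] - s[i-2]
    bits = "".join(format(s & 0xFFFF, "016b") for s in samples[:2])
    for i in range(2, len(samples)):
        pred = 2 * samples[i - 1] - samples[i - 2]
        bits += rice_encode(samples[i] - pred, k)
    return bits

samples = [0, 3, 7, 12, 18, 23, 27, 30, 31, 30, 27, 23, 18, 12, 7, 3]
coded = encode(samples)
print(len(samples), "samples ->", len(coded), "bits (raw:", 16 * len(samples), "bits)")

A real codec would at least pick the predictor and the Rice parameter adaptively per block, but the skeleton fits on one screen and the rest is mostly parameter tuning.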

Then there’s LISP (or should it be pronounced LIthP?). In theory it’s a simple language where everything is a list, including the program itself. In practice, the fact that such a thing as Common Lisp exists should hint at the existence of many dialects of the language. And there’s a myriad of them indeed, from Scheme and Guile to Clojure and whatever “original language with S-expressions do not steal” experiments appear every day. The problem is the same: the language is so easy to implement that textbooks may include an (eval ...) implementation (at least the one I read a long time ago had it), which is essentially the language interpreter written in the language itself (and if one story does not lie, LISP was initially translated to machine code by hand, so this allowed the developers to get the first implementation of it almost for free). And then different groups of people had different ideas about what to add or how some operations should be named, and here we are, in a maze of dozens of LISP dialects, all alike but slightly dissimilar (just read this introduction for EMACS LISP if you don’t believe me).
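
For comparison, here’s roughly what that textbook (eval ...) exercise boils down to, written as a Python sketch over nested lists instead of real S-expressions. It’s deliberately tiny, handles only a few special forms and doesn’t correspond to any particular dialect.

import operator

GLOBAL_ENV = {"+": operator.add, "-": operator.sub, "*": operator.mul, "<": operator.lt}

def evaluate(x, env=GLOBAL_ENV):
    if isinstance(x, str):                       # symbol: variable lookup
        return env[x]
    if not isinstance(x, list):                  # number: evaluates to itself
        return x
    op, args = x[0], x[1:]
    if op == "quote":                            # (quote expr)
        return args[0]
    if op == "if":                               # (if test then else)
        test, then, alt = args
        return evaluate(then if evaluate(test, env) else alt, env)
    if op == "define":                           # (define name expr)
        env[args[0]] = evaluate(args[1], env)
        return None
    if op == "lambda":                           # (lambda (params) body)
        params, body = args
        return lambda *vals: evaluate(body, dict(env, **dict(zip(params, vals))))
    proc = evaluate(op, env)                     # ordinary function application
    return proc(*(evaluate(a, env) for a in args))

evaluate(["define", "fact",
          ["lambda", ["n"],
           ["if", ["<", "n", 2], 1,
            ["*", "n", ["fact", ["-", "n", 1]]]]]])
print(evaluate(["fact", 5]))                     # prints 120

Add a reader for the parentheses, a couple more special forms and some naming preferences of your own, and you’ve got dialect number N+1, which is exactly how the mess grows.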

And finally, RISC-V. It’s not a software product but it demonstrates the same approach: a simple core that any team with some hardware design experience can implement with ease, but in order to be used for anything more serious than a microcontroller it needs to implement some extensions. Of course it’s not as bad, since there’s a central authority codifying those extensions, but there’s still a zoo of RISC-V implementations with different sets of extensions present (and in some cases with different versions of the same extension, but I’ve ranted about the V extension more than enough times), so you can never be sure whether a program compiled for one flavour of RISC-V CPU will run on another.
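
As a toy illustration of that last point, here’s a hypothetical Python check of whether the extensions a binary needs are covered by what a core’s ISA string advertises. Real ISA strings also carry version numbers and multi-letter Z*/X* extensions (which is where much of the actual pain lives), so treat this as a simplification.

def extensions(isa):
    # extract the single-letter extensions from a simplified rv32/rv64 ISA string,
    # ignoring versions and multi-letter suffixes for the sake of the example
    assert isa[:4] in ("rv32", "rv64")
    return set(isa[4:].split("_")[0])

def can_run(required, core_isa):
    missing = required - extensions(core_isa)
    if missing:
        print(core_isa, "lacks:", ",".join(sorted(missing)))
    return not missing

needed = set("imafdcv")   # a binary built for rv64imafdc plus the V extension
for core in ("rv64imafdcv", "rv64imafdc", "rv64imac"):
    print(core, "->", "should run" if can_run(needed, core) else "will not run")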


As you can see, this design (anti)pattern is neither new nor limited to a small niche. There are two ways to prevent it, a realistic one and an unrealistic one: you can either think about the design beforehand in order to make it future-proof at least to some degree and address at least some of the requirements that may arise in the future (from the initial lack of some features), while keeping the architecture of the project simple and clear but not too simple in a one-size-fits-all fashion; or you can exert control over the specification and the implementation, since this issue can’t arise without several independent implementations. The latter is obviously the realistic one, even if there’s a risk that your design starts competing with itself (see Python 2 vs. Python 3).

In either case, I’m simply pointing out the problem; it’s up to you how to deal with it.

2 Responses to “Deceivingly simple”

  1. DunkinDonuts says:

    TL;DR: the world is an imperfect place to be. Sure it is. And then this article woefully lacks examples of good designs, or any hints on how to get on the right track. Just saying “everything sucks” doesn’t really help much.

    And then, it’s the overall set of properties that matters. Say X11 “has fewer problems” on the compatibility level? Maybe. But on the other hand it ended up with an UTTERLY AWFUL implementation, which is almost irreplaceable and unmaintainable, and its code is a smouldering pile of horrible legacy, scarier than death itself. It came to the point where no coder dares to venture into this code, so it has been left to rot, almost abandoned. There’s simply no future left for code like this. So while it is “less of a problem” on one dimension, it’s an utterly fatal flaw on another. That’s what is killing X11 here and now, and Wayland is different in that at least there are some living coders who are willing to sort the problems out. This alone counts as a huge advantage. X11 has tons of long-standing issues and quirks, and about ZERO people who would like to chew on them. That’s where the project as a whole hits a roadblock and is no longer viable at all.

  2. Kostya says:

    You mostly miss the point of my rant. It’s not about whether the X11 protocol is better and deserves to live (the issues with the protocol design, implementation and extensions are real), and it’s not about X11 vs Wayland either; it’s about one specific issue: a core design so simple that anybody can implement and fork it in a slightly incompatible way.

    I’m pretty sure that most of the problems Wayland has can be resolved, but I fear that they will be resolved in slightly different and incompatible ways in each implementation. And that’s the consequence of the original design flaw (which is not Wayland-specific at all). We’ll see how it all settles out.
