On the Origin of Bloatware

This is inspired both by a private discussion on why modern computing is so complex and by my migration from Ubuntu 12.04 LTS to systemd 20.04 LTS.

Since I’ve finally moved from my not-quite-ten-year-old operating system to something more modern, I’ve noticed that things became noticeably slower (not irritatingly slower, but slower nevertheless), except for Firefox (probably not thanks to JS engine improvements but rather because now-supported APIs execute natively instead of through polyfills). And while trying various desktop environments before settling on Cinnamon I was horrified by how bloated and unusable (to me) they are. My friends complain that modern technology demands more effort to maintain because of its complexity and weird interdependencies, even though it’s supposed to make your life easier. So why is it like that?

For a keen reader the title of this post contains the answer; for the rest I’ll elaborate on it below.

Of course it’s the evolution of programs. They grow, gain more features and interfaces, and become more bloated as a result. Sometimes the opposite process happens too (programs losing features and occasionally even getting split into several parts).

Here’s what I claim and will try to support with examples:

  • programs are tied to the environment;
  • programs change over time, partly because of the environment;
  • programs tend to grow by collecting new features;
  • there’s not much you can do about it.

Programs are tied to the environment

By the environment I mean not just the operating system but also the other programs you run, the physical machines, and the users. Let’s start from the highest level and go deeper.

Users define programs because they run those programs according to their needs (hopefully). A quick example: back in the day, programs for downloading a whole website to keep an offline copy were rather popular. But they fell out of favour even before sites became dynamic; broadband access simply made them far less useful. Similarly, who remembers external download managers that supported resuming interrupted downloads? Again, hardly any reason to care about that nowadays.

Machines define input/output and thus the user interface of programs. Plus, various hardware limitations define how operating systems work and what programs can or can’t do.

If you work on a remote terminal connected to a mainframe via dial-up, you probably won’t want to wait minutes for a picture to render (let alone the fact that your terminal might not be powerful enough to render anything but text, or might be a typewriter-and-printer combination). So you get a TUI with short command names and equally short option names.

If you have a PC you can afford a graphical user interface (depending on the machine, of course), but you’d still use a keyboard and/or mouse to interact with it.

Now look at a modern smartphone or tablet: the keyboard is non-existent or hardly usable and there is no mouse, but there’s a touchscreen you can use for input. So the same program might look native in one environment and completely unusable in another. Back in the day I tried compiling code with gcc directly on WinCE, where you had to invoke the compiler and linker by typing commands on an on-screen keyboard on a 4″ 240×320 screen (or was it a later 480×640 PDA? I don’t remember, but it was no fun in either case).

As for hardware limitations, besides the famous “640K ought to be enough for anybody” you need to consider how well multitasking works (TSRs in DOS were fun to use and even more fun to write) and the whole approach to memory. We all remember those operating systems, from IBM and from Dmitry Zavalishin, that push for uniform addressing of all resources no matter where they are located (because the CPU can address a memory space large enough to afford that).

Now about interfaces. Usually a program does not run alone; you expect it to interact with other programs, either on the local machine or over a network connection. Especially if you consider websites to be programs as well.

Programs change over time

The change happens for various reasons: it may be a change in the environment or maybe some developer’s whim, but all programs change nevertheless.

Web “apps” change fast; system utilities usually change less often, but they still do. For example, GNU tar has been adding support for various external compressors besides compress and gzip over the years (and I vaguely remember that before 2000 you used tar xIf to extract a .tar.bz2 instead of the modern tar xjf). Similarly, I doubt that the original Unix System V cp had exactly the same functionality as the modern one.

Returning to the user interface concept, modern user interfaces are shaped by the widespread availability of graphics accelerators. So while old GUIs drew into a bitmap, newer ones compose GL textures or their analogues (that’s probably everything these days except old dinosaurs still using X11 and custom OSes that don’t care much about graphics).

Programs tend to grow

While the previous claims should be obvious, this one might be harder to defend. I’m not going to claim “survival of the featurest”, but I believe programs collect features to adapt to the existing environment and to survive against their competitors.

To adapt Parkinson’s First Law, programs grow to consume all available resources. There’s always some universal law describing it: previously it was Greenspun’s Tenth Rule, now it’s Zawinski’s Law, tomorrow it may be something else, but the principle will stand. I still remember people joking back in the day about which suite would become an OS first, ACDSee or Nero. It turned out to be Chrome.

Besides that, features also seem to get added on the “because I can” principle.

And the most important thing: features are rarely removed. Partly it’s because of the mindset (I once participated in a project where removing any feature, no matter how obsolete, was met with resistance from the project leader), partly because you need to support legacy software (reading formats from old versions of your own program or interacting with some old third-party program), and partly because your product is often judged by the number of features it offers. Of course sometimes features do get removed, like FTP and RSS support in Firefox, but at the same time it gained even more other features. I’m afraid, though, that the whole browser will eventually be dropped, as it only draws money away from what is really important (paying salaries to the top management).

There’s not much you can do about it

In my opinion, software becoming more and more bloated is a natural process. And since software does not exist by itself but rather runs in an environment, what ends up in a program is greatly influenced by that environment.

Removing features from already existing software is likely to be met with resistance, both because it’s a feature and because somebody is using it (or might use it). Plus, it’s often simply too hard to remove a feature because the software has become too complex.

I said at the very beginning that programs evolve. And as with evolution itself, they often combine already existing elements (or interfaces) in new ways. For example, you have had a program to combine files into a single archive (namely tar) since 1979 and a program to compress single files (namely compress) since 1984, and only a year later ARC appeared, doing both compression and packing several files into one simultaneously. But even though ZIP exists, that does not mean .tar.gz is going away. Or take the dreaded node.js, created from an already existing engine for running JavaScript and the already existing concept of a “language on rails”.
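
As an illustration of that kind of combination, here’s a minimal sketch in Rust (the directory name is made up, and it assumes the usual tar and gzip binaries are installed): no new archiving or compression code, just two existing programs glued together with a pipe, which is all a .tar.gz has ever been.

    use std::fs::File;
    use std::process::{Command, Stdio};

    // Compose two pre-existing tools instead of writing a new archiver:
    // tar packs the files, gzip compresses the resulting stream.
    fn main() -> std::io::Result<()> {
        let mut tar = Command::new("tar")
            .args(["cf", "-", "some_dir"]) // write the archive to stdout
            .stdout(Stdio::piped())
            .spawn()?;
        let archive = tar.stdout.take().expect("stdout was piped");

        let mut gzip = Command::new("gzip")
            .stdin(Stdio::from(archive))              // compress tar's output...
            .stdout(File::create("some_dir.tar.gz")?) // ...into the usual file
            .spawn()?;

        tar.wait()?;
        gzip.wait()?;
        Ok(())
    }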

The downside of evolution is that existing elements end up used in unintended ways, and even a seemingly simple operation may be implemented in a very convoluted way (go ahead and read how the blood clotting process works when you cut your finger).

Of course you can rewrite part of it for a clearer design, or even rewrite the whole application from scratch. But there you face an almost insurmountable problem: complexity. With features accumulating and depending on each other, you may end up with a design that scares people away with the sheer number of components alone; for the braver ones there’s also the number of connections between those components. And that’s not mentioning that you can’t spend ages preparing the next release and hope to still be relevant. There are rare exceptions like Xvid, but they are exceptions.

Let’s take a browser as an example. It has to follow certain standards, and if you write one you have three options: use external libraries for some of the functionality (thus not controlling their bloat), limit the browser to a very small subset of the standards (thus ensuring that a wide audience can’t use it), or spend a huge amount of time and effort implementing it all. I can’t find the link, but a couple of years ago somebody counted how many words the HTML5 specification and the specifications it links to (CSS, SVG and such) contain, and it turned out to be tens of millions of words: impossible to read in a sane amount of time, let alone comprehend and use to implement something.

Another example close to my heart is video codec design. Modern mainstream video codecs are not that different in design from much older ones; they just have more ways of processing data that are applied to the input in some combination (those ways are even called tools). And a lot of them are not conceptually new: they were proposed and rejected previously because they took too many resources by the standards of the time, or they are simply more effective on larger video frames (we have come a long way from QCIF MPEG-1 clips to 4K video). As a result you get an increase in video decoder complexity. A couple of years ago I heard a claim (and it turned out to be true) that dav1d, a decoder for the AV1 format and nothing else, contains more assembly code (mostly for DSP routines) than the whole of libavcodec, the library for decoding hundreds of various video and audio formats.
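
To show what those tools look like, here’s a rough sketch in Rust (all the names and the set of modes are made up for illustration, not taken from any real standard): the core is still the same predict-then-add-residue loop it was decades ago, each stage just keeps growing more switchable modes.

    // Why decoders grow: the design stays "for each block: predict,
    // inverse-transform the residue, add it", but every generation
    // adds more selectable tools to each stage. All names are made up.
    enum Prediction {
        Dc,                            // the old way: one average value
        Directional(u8),               // newer: dozens of angular modes
        MotionComp { mv: (i16, i16) }, // inter: copy from a reference frame
    }

    enum Transform {
        Dct4x4,
        Dct8x8,   // more sizes over time...
        Adst4x4,  // ...more transform types...
        Identity, // ...including "no transform at all"
    }

    struct Block {
        pred: Prediction,
        xform: Transform,
        coeffs: Vec<i32>,
    }

    fn decode_block(blk: &Block, out: &mut [i16]) {
        // stage 1: prediction, one tool out of an ever-growing set
        match blk.pred {
            Prediction::Dc => { /* fill the block with an average */ }
            Prediction::Directional(_angle) => { /* extrapolate pixels */ }
            Prediction::MotionComp { .. } => { /* fetch from a reference */ }
        }
        // stage 2: residue, another per-block tool choice
        match blk.xform {
            Transform::Dct4x4 | Transform::Dct8x8 => { /* inverse DCT */ }
            Transform::Adst4x4 => { /* inverse ADST */ }
            Transform::Identity => { /* coefficients are the residue */ }
        }
        // stage 3: add the residue to the prediction (plus loop
        // filters and the like in real codecs)
        let _ = (out, &blk.coeffs);
    }

    fn main() {
        let blk = Block {
            pred: Prediction::Dc,
            xform: Transform::Dct4x4,
            coeffs: vec![0; 16],
        };
        let mut out = [0i16; 16];
        decode_block(&blk, &mut out);
    }

Each new generation mostly adds more variants to those match arms, plus hand-optimised routines for every one of them, and that is where both the decoder complexity and all that assembly come from.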

So just enjoy the progress in microelectronics and see how Moore’s Law battles Gates’s Law.

P.S. Somebody who knows my work may ask how NihAV fits into this. I can say that it’s a project for my own research, so it can be very limited in features (like not supporting WebMKV formats) since it’s not intended to replace anything else. Yet it has already accumulated some bloat and will get more in the future (if I keep working on it). That is a natural process after all.
