Why are modern software applications so terrible?
A lot of modern software is really quite bad.
You know what I’m talking about. It’s most true of the software you, as a human, interact with directly: applications. Modern applications are buggy, and lock up or crash all the time; the time since your last incident can probably be measured in hours. They are also enormous and slow. The hardware they run on is dramatically more powerful than it was a quarter century ago, yet for the most part, apps consume similar percentages of CPU cycles and RAM, and feel more or less as sluggish as ever.
How did we get here?
Two things sell software to Jane Q. Public: being available now, and having all the features she needs. Stability and efficiency, arguably at least as vital to a good user experience, are simply lower priorities in the slightly irrational mind of the consumer. The industry focuses accordingly.
To get to market faster, we lower the bar for what constitutes a minimum viable product to the point where many products are truly atrocious at launch. Now that we have the internet, we can defer the development of core functionality into post-release patches and beyond. It’s somehow become acceptable to ship something broken and roll out fixes over weeks or months (or never). Users are wholly accustomed to it, apparently holding software to a uniquely low standard among the products they purchase. We also depend on an increasing number of frameworks, libraries, and runtime environments to build software faster, and we pay for them with immense bloat.
To advertise more features, to be everything to as many people as possible, developers invariably prioritize building new features over fixing existing ones. From an engineering perspective, it seems obvious that core functions should work correctly before any effort goes toward additional ones, yet this is not at all reflected in how most projects work through their issue backlogs. New features also pull in new dependencies. It’s great to leverage a third party’s upkeep and improvements, but doing so increases our overall exposure to third-party bugs, and those added layers of abstraction are the likely source of the most stubborn bugs in the application itself.
Agile practices have indisputably enabled us to create more of what people want, faster. With proper discipline, the resulting software should, in theory, also be stable and efficient, since those are objectively important user experience goals; Agile even emphasizes building correct, complete features, backed by rigorous testing. In practice, however, a team’s priorities and output are more easily hijacked by market pressure from those slightly irrational actors than they would be under more rigid traditional engineering methodologies, which makes stable, efficient software that much harder to achieve.
Do stability and efficiency sell? In the long term, yes. If we want to build software that people have a good experience using, and are therefore inclined to continue using (i.e. purchasing new versions of, or subscribing to), we should in fact focus on building a reliable, lightweight product that does its core function well. We’ve all witnessed that oft-repeated cautionary tale: a once-dominant application balloons into a bloated, buggy mess and loses its market share to a fresh competitor. The consumer does become rational again, once the dust settles.