Niklaus Wirth was right and that is a problem (bowero.nl)
200 points by bowero | 2020-07-31 | 198 comments




Seems to me the software features interact in a kind of network effect, so that the amount of resources required grows faster than the amount of functionality.

For the most part, the output object/exe size should be the realm of the tooling, not the programmer's concern -- a dependency-walking package manager, linker or loader -- and in dynamic languages, unused code should never be JITed. Of course, surrounding issues such as choosing bad or incompatible libraries, etc., are another matter.
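As a rough illustration of what the tooling can already do when asked: with GCC or Clang plus GNU ld, per-function sections let the linker discard anything that is never referenced (a sketch; the file and function names are made up):

    /* deadcode.c -- unused() is never referenced, so the toolchain can drop it
     * if built with:
     *   cc -Os -ffunction-sections -fdata-sections deadcode.c -Wl,--gc-sections -o app
     */
    #include <stdio.h>

    void unused(void)                /* compiled into its own section...        */
    {
        puts("never called, never kept");
    }

    int main(void)                   /* ...and only referenced sections survive */
    {
        puts("only this code path ends up in the final binary");
        return 0;
    }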

There’s no way a linker, no matter how smart, can give us back the object size and performance of the ‘90s.

For starters, there's 64-bit. Pointers were half the size in the '90s (a quarter, if you go back to 16-bit code).

Then, there is Unicode. All programs need ICU (http://userguide.icu-project.org/icudata). Even if you dynamically link it, many of its symbols (or entry point IDs) end up in your executable.

Unicode isn’t an exception, though. Every library choice you make adds a bit more in size and takes a bit more in performance than the solution from the ‘90s would have.

For example, the moment you decide to use json, you get its entire feature set (arbitrarily nested arrays, a multi-line string parser, ability to read fields in arbitrary order, etc), even if all you need to do is pass two integers and get one back.

A parser generator that generates code for the json subset you need would help here, but would mean extra work for the programmer and the overhead typically isn’t that large, so why bother? It all adds up, though.
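To make the gap concrete, here is a rough sketch of the hand-rolled alternative: a reader for exactly {"a":<int>,"b":<int>} that replies with one integer. The field names and framing are made up for illustration, and it deliberately supports nothing else -- which is the whole point.

    #include <stdio.h>
    #include <stdlib.h>
    #include <string.h>

    /* Minimal sketch: accept only {"a":<int>,"b":<int>} -- hypothetical field
     * names, no nesting, no strings, no escapes. A general JSON library would
     * accept (and pay for) far more than this. */
    static int parse_pair(const char *s, long *a, long *b)
    {
        const char *p = strstr(s, "\"a\"");
        const char *q = strstr(s, "\"b\"");
        if (!p || !q) return -1;

        p = strchr(p, ':');              /* skip to the value after "a":        */
        q = strchr(q, ':');              /* ...and after "b":                   */
        if (!p || !q) return -1;

        *a = strtol(p + 1, NULL, 10);
        *b = strtol(q + 1, NULL, 10);
        return 0;
    }

    int main(void)
    {
        long a, b;
        if (parse_pair("{\"a\": 2, \"b\": 40}", &a, &b) == 0)
            printf("{\"sum\": %ld}\n", a + b);   /* send one integer back */
        return 0;
    }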

Even if you can’t remove some code, you still could optimize memory layout to move code you expect will rarely run into separate code pages so that it likely will never be mapped into memory, but that’s serious work.

And of course there’s all that argument checking/buffer overflow protection people do nowadays that ‘wasn’t necessary’ in the ‘90s.


The article clearly focuses on size growth due to the use of libraries.

As for pointer size, personally I agree -- most processes can get by just fine with their own 32-bit address space, so I'm not sure why we need to double the working-set size of all pointer-based data-structures.


> so I'm not sure why we need to double the working-set size of all pointer-based data-structures.

If your data structures can fit in a 32-bit address space, you can just place them in an arena w/ 32-bit indexes. You do need to use a custom allocator for every element of that data structure, but other than that it ought to be feasible. Link/pointer-based data structures should be used with caution anyway, due to their poor cache performance.
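A minimal sketch of that arena-plus-indexes idea (names made up): nodes live in one contiguous array and each link is a 32-bit index rather than a 64-bit pointer.

    #include <stdint.h>
    #include <stdio.h>

    #define ARENA_CAP 1024          /* all nodes live in one contiguous arena */
    #define NIL UINT32_MAX          /* sentinel "null" index                  */

    /* Hypothetical node type: a 32-bit index replaces a 64-bit next pointer. */
    typedef struct {
        int      value;
        uint32_t next;              /* index into the arena, not a pointer    */
    } Node;

    static Node     arena[ARENA_CAP];
    static uint32_t arena_used = 0; /* bump allocator: the "custom allocator" */

    static uint32_t node_new(int value, uint32_t next)
    {
        uint32_t i = arena_used++;  /* no bounds check in this sketch         */
        arena[i].value = value;
        arena[i].next  = next;
        return i;
    }

    int main(void)
    {
        /* build the list 1 -> 2 -> 3 using indices only */
        uint32_t head = node_new(1, node_new(2, node_new(3, NIL)));

        for (uint32_t i = head; i != NIL; i = arena[i].next)
            printf("%d\n", arena[i].value);
        return 0;
    }

Half the per-link overhead, and traversal stays inside one contiguous block, which also helps the cache.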


I was expecting Moore's Law to give us a renaissance in algorithmic thinking, but The Cloud has shown me I was wrong. First, we're going to have to fully explore Amdahl's law.
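Concretely, Amdahl's law says that with a parallel fraction p and N workers the best possible speedup is 1 / ((1 - p) + p/N). A toy sketch (numbers are mine):

    #include <stdio.h>

    /* Amdahl's law: best-case speedup on n workers when a fraction p of the
     * work parallelizes perfectly and the remaining (1 - p) stays serial. */
    static double amdahl(double p, double n)
    {
        return 1.0 / ((1.0 - p) + p / n);
    }

    int main(void)
    {
        printf("p=0.95, n=64:  %.1fx\n", amdahl(0.95, 64.0));  /* ~15.4x     */
        printf("p=0.95, n=inf: %.1fx\n", 1.0 / (1.0 - 0.95));  /* 20.0x cap  */
        return 0;
    }

Even 95%-parallel work never beats 20x, no matter how many machines you add.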

Eventually every problem goes to log n time, best case. The log n factor shows up over and over, from constraints on on-chip cache to communication delays to adding numbers of arbitrary size. We make a lot of problems look like they are O(1) until the moment they don't fit into one machine word, one cache line, one cache, into memory, onto one machine, onto one switch, into one rack, into one room, into one data center, into one zip code, onto one planet.

If we can't solve the problem for all customers, we dodge, pick a smaller more lucrative problem that only works for a subset of our customers, and then pretend like we can solve any problem we want to, we just didn't want to solve that problem.


Herb Sutter had a wonderful talk on the constant-factor plague. As much as I like Clojure and similar convenience and algorithmic beauty, I can appreciate the devil-is-in-the-details much more since watching that video (forgot the title, sorry).

Could you please dig it up? I'm not sure how to find it based on what you posted and would love to watch it.

Was it this one (30 minutes in)?

https://channel9.msdn.com/Events/Build/2014/2-661

Probably one of my all time favorite performance talks.


First half doesn't ring a bell, but cache talk and object sea vs contiguous was part of what I remember

I went looking for the talk and all I could find was this paper: https://www.researchgate.net/publication/337590358_Inflation...

I'd like to watch the actual talk. Hope you could try remembering the name.


maybe this one https://www.youtube.com/watch?v=L7zSU9HI-6I .. damn memory (sic)

> First, we're going to have to fully explore Amdahl's law.

Funny! Depressing :(


I feel like we had this debate a couple months ago. Someone posted that the computer at his local library could search and display data on available books super fast. (I think it was a twitter thread). The interface was programmed in the 90s or something. And then they complained about software today.

My reply was that nowadays you can, at home, search for a book on a specific interlibrary system, find which specific libraries have it, download and check out an e-copy, find out when it's due back if you still wanted the hard one - AND have someone go hold it.

It's just not apples to apples anymore. I don't care how fast you can scan a text file in software written in the '90s.


There is no reason you can't have the connectivity without the unnecessary complexity of modern computing.

Alternative question: Why do I need to download 50 GB of book index to search for that one title and that index can't even do a full text search?

That complexity is Google (or your favourite search engine) running a datacenter indexing exabytes of data so that you can search it in the blink of an eye. Yes, it takes 600ms now instead of 50ms - but that's like complaining that your new eco car engine can't even properly run on aftermarket lamp petroleum.


What does Google's datacenter for a book index have to do with editing a file or rendering a button? That doesn't seem relevant at all to Wirth's law and isn't a justification for the increase in abstraction that has made things slower.

Why does any of that require a slow, bloated UI? The UI is largely what people mean when they say software is slow and bloated.

Considering Wirth was apparently having that debate back in 1995, and even then it was far from a new idea, I doubt we'll solve it any time soon.

Software will just keep getting bloatier and bloatier and hardware more and more powerful to cope with the software inefficiencies (what? you think hitting hardware limits will solve this? nah, we'll just put more cpus in there so that software can be slow in parallel and of course train users to think slow software is normal).


That said it's a bit weird to download a webpage of several megabytes that allows you to input a search that takes seconds to return a result, just to do the equivalent of a text search on a text file that's likely smaller than the webpage itself.

This isn't true for all search engines, but it is true for quite a few of them.


So, having multiple tools with more capabilities makes up for being worse at a given task?

The argument got rather confused at the end. The todo list has 13,000 dependencies precisely because the NPM community follows the advice here to create many small libraries. So is that supposed to be a good or bad thing?

Yeah... not sure where they were going there. The many small libraries become a problem when said libs go out of sync; I mean, that is where dependency hell bites.

After 40 years I've decided that dependencies are the root of all evil. There is something to be said for curated libraries and frameworks.

That depends on who is doing the supposing.

It's the fanout of frameworks, not the utility libraries

"1995 was the year in which programmers stopped thinking about the quality of their programs"

Ok Boomer


And the trend is not slowing. That said some people like low fat computing.

Another thing: computing is over, at least in its previous-era form. It's not bringing dreams anymore; it will probably turn into a ubiquitous, invisible form where intrinsic details such as resource usage won't matter.


I think you might be a little pessimistic here - IoT has a wealth of fruit to bear, from:

low power always-on devices with long range radio (LoRa), few resources (32k ram), the security constraints of securing every device, dealing with terabytes of log data in the cloud, ML at the edge (Kendryte K210), open-source firmware including radio (DASH7 firmware), open-source hardware (and open-source FPGA tooling) to create custom hardware designs (hardware implementation of algorithms), formal verification of OS and protocols, etc etc.

Even a handful of innovation in any of the above would be groundbreaking.


I'm a bit jaded and at the same time not much.

These are all very advanced low level technology subjects (some of which I like a lot btw).

What Wirth said only concerns a tiny fraction of the people on earth; the rest will stop using computers just like they stopped using desktops and just stream/talk on smartphones. If you ask most users, they'll probably root for whatever Electron app they use over frugal but powerful programs. For the layman, computing will fade and become like roads. And I believe they never really needed nor liked computers; it was just a 20-year period where it was thought to be a technological wonder to have one in your home.


I think a lot of this is governed by Conway's Adage: the software reflects the social systems that built it. If you look at the larger software ecosystem, it reflects the larger society that built it. The priorities and social customs of communities are leading to the "bloat." It's hard to say whether that is a growth on society's part or that software is simply still catching up with society.

Back when a ton of software was written by graduate students... it was small (because no time), fast-ish (because small), and buggy as all heck (because no time).

There are a few issues with this post. It starts off great but falls down hard.

For one, PHP did not start out object oriented. Nor did it come with an IDE. Sure, it was dynamically typed (unlike Java). If I recall, it was 1994 too but I can’t be sure.

It was only PHP4 that added an abomination of a class system that motivated the current object system in PHP5. In fact, the problems with PHP have more to do with that pattern: creating abominations that motivate the development of something that isn’t a total abomination (often whilst retaining said abomination for a while).

And, as others have said, small “libraries” is exactly why the todo app needs 13k NPM packages.

I say “libraries” but you can’t call them that. I mean where’s the analogy? A library of one function? Behave


There are many complaints but there doesn't seem to be a real movement to make/use/cultivate small and fast software for all purposes.

I'd join.

Fragmented parts could be suckless.org, old cheap thinkpads from ebay, fast Linux distros, unix command line tools, retrocomputing, raspberry pi; all things with communities and fans who like a certain quality, simplicity and the good old days.


check out the suckless suite of software. it's neat :)

Suckless is neat, but it's stuck in the past, in a way. Using modern programming languages and native frameworks is the way to go. But the philosophy is exactly what's required.

There is also handmade.network. Maybe that's something for you.

https://handmade.network/manifesto


Someone could write a manifesto. I hear clock cycles are bad for the environment. And what if we want to run that software on our spaceship on the way to A Centauri?

There is an analogous problem in city design, wherein it was discovered that it was empirically not possible to get the average commute down below a certain time, because as more roads were built people moved further away from town. If you add more lanes to your roads, you don't get a faster commute, you get more sprawl.

People spend time making software less bloated, when it's the number one problem they have. When hardware speed is taking care of making that only the #2 or #3 problem they have, then they will work on whatever the #1 problem is, meanwhile adding more software bloat.

When Moore's Law once and for all stops, due to some law of physics reason, then software bloat will become a priority. Until then, other things are.


Yeah, you are describing Braess's Paradox: https://en.wikipedia.org/wiki/Braess%27s_paradox

It is a form of Induced Demand: https://en.wikipedia.org/wiki/Induced_demand


Meh, I think this whole argument is wrong. Software "bloat" is fine.

I'm reminded of this [1] article comparing Breguet and Carson numbers. You have ever cheaper hardware. The marginal utility of any given unit of new, equally serviceable computing power is going to be less than the previous one. So eventually, if something can be 2% better (more functionality, etc.) at 10x the bloat, it is rational to accept that. When you have lots of something you become less efficient in how you use it. If you're surprised by that, go and read some economics.

[1] https://www.cringely.com/2011/05/19/google-at-carsons-speed/


It comes with a cost. If not money, then environmental footprint. Bitcoin is a good example. Or a toxic deadland of computer and mobile scrap. Just because companies think it's cheap and blind developers think it's acceptable to bloat.

I do see this as a gap/opportunity in a lot of existing markets. Huge players who dominate these areas are doing so with these giant, slow, unwieldy web apps. Look at how Figma managed to take over in the design market by creating a lighter/faster product than what everyone else was offering. I wholly endorse any other teams that want to use Wasm to kick stagnant old web apps off their thrones.

Surely Figma actually took over that market by 1) making a web-based version of a category of software that was primarily desktop-based, and 2) giving it away for free? I don't think performance has much to do with it.

This article is the problem.

It's the same freaking fallacy I see again and again on here - It's simple, easy to understand, and dead fucking wrong.

The vast majority of the complexity you're dealing with in modern computing comes from three sources, in order of impact:

1. Networking. It turns out there are real and hard limits on how fast we can pass data around over copper wires. Fiber is better, but you just literally cannot move faster than light, and latency is a big deal when the items you're accessing and changing live somewhere else (and basically everything of value does live somewhere else, or thinks you are "somewhere else").

2. Security. This is a direct consequence of number 1. When everything is connected, everything is connected. You can't just lock the lab door and call it a day now.

3. Compatibility. This is a direct consequence of both 1 and 2. Value is a consequence of compatibility (this is why a good chunk of you on here still support IE, even though you don't want to). We have more devices, of more kinds than ever before. We have more people, with more use cases than ever before. There is value in being as compatible as possible with all those devices and people. All those devices are connected in ways designed to keep them compatible, but also secure. It turns out this is not an easy task.

If you'd like to go wank off over how fast your pre-network, unsecured, unsupported, inaccessible and manually configured systems are, be my guest (oh, and I hope you read English...). The rest of us will continue to produce items of value.


No, it's the other way around. Advances in computer hardware have allowed the use of more inefficient programming languages, allowing more inexperienced and unskilled programmers to create programs, leading to more resource-hungry programs. When there are few resource constraints, the only real constraint becomes developer time.

I don't see how having IDEs implemented in browsers has anything to do with security, the speed of light or compatibility. It's just the lack of constraints allowed by advances in computer hardware.

Most software is written with no performance considerations in mind at first and the performance issues are addressed only when they become visible. However, if there is abundant memory available, why bother?


> I don't see how having IDEs implemented in browsers has anything to do with security, the speed of light or compatibility. It's just the lack of constraints allowed by advances in computer hardware.

This isn't a compatibility issue? We've seen about 8-16 branches of the write-once, run-everywhere tree over the past 25 years; I'm not sure how that isn't seen as a constraint on programmers. JWT, Swing, Web, Cordova, Qt, React/<web front-end> Native, Xamarin, Electron, Flutter and even quirky ones like Toga have all attempted to solve this problem. The only unifying thread has been that managers follow greedy algorithms and choose the lowest common denominator platforms as possible. Qt, the Java tools and Xamarin at least can't be lumped into the inefficient-language bucket, though the UX is just awful. Other than hardware drivers, it's hard to think of a clearer example of compatibility constraints.


> JWT, Swing, Web, Cordova, React/<web front-end> Native, Xamarin, Electron, Flutter and even quirky ones like Toga have all attempted to solve this problem, and

... and for the most part they have. You can write your app right now and the only thing you need to worry about is screen size. If you use bootstrap, even this is mostly solved. Your app is accessible on Windows, Linux and Mac; Chromebooks and Tablets; iPhones, Android and even the one Symbian user. Of course it's not perfect yet, there are edge cases and you cannot do everything, but let's not act like things have gotten worse.

> The only unifying thread has been that managers follow greedy algorithms and choose the lowest common denominator platforms as possible.

Yes, I agree. But for nearly every use case, it's good enough. Take HN as an example: Does it need anything more?

Of course, if you need access to specific hardware, you'll have to go deeper. But if you do not, it would simply be you taking the lowest common denominator. And I'd argue that the framework probably did a more thorough search.


I basically agree, with the caveat that I'd still prefer a world where our write-once-run-everywhere lowest common denominator at least required native widgets and an ability to integrate new platform-specific capabilities at the expense of writing a small amount of native code, rather than barfing up web UI/UX at users (e.g. the execrable MS Teams).

While there is certainly more complexity in modern software, it does not necessarily need to translate to increased memory, CPU usage and increased latency for the user. Are you saying that this increase in software complexity definitely increases these requirements?

Java is definitely not a good example of a memory-efficient language when compared to its non-GC alternatives.

It all comes down to economics: software is written as inefficiently as possible as long as it does its job and is not hindered by this, and this is actually the crux of "Wirth's law".


The “Java bloat” mostly has nothing to do with the language itself, but is caused by the ecosystem around it. On one hand you have overly abstract frameworks and on the other you have inexperienced programmers who don't understand how such frameworks are supposed to be used and write code that actively fights against the framework...

"allowing more inexperienced and unskilled programmers to create programs"

You realize that this is a good thing, right?


Until you ask those inexperienced and unskilled people to do something important.

Good for them, or for me?

For all of us.

None of the above require software to be as slow as it often is.

Windows being able to run Win32 applications isn't what makes Windows Calc slow, nor is anything else you mentioned. You do not need a supercomputer to download currency conversion rates (which, btw, is bloat by itself and could be handled by a dedicated program instead of being shoved into calc). And even the ability to download a CSV or whatever from a remote server via HTTPS (all you'd really need to make such an update) is something the OS can provide to every application and keep secure for everyone. (And you know what, this sort of functionality is something Windows has provided ever since Win98 -- but how many people use it vs bundling their own?)
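For reference, a minimal sketch of leaning on the HTTP stack the OS already ships (WinInet here; the URL and agent string are placeholders, and error handling is omitted):

    #include <windows.h>
    #include <wininet.h>
    #include <stdio.h>
    /* link against wininet.lib; a sketch only, not production code */

    int main(void)
    {
        char  buf[4096];
        DWORD got = 0;

        /* use the HTTP(S) stack the OS ships instead of bundling one per app */
        HINTERNET net = InternetOpenA("calc-rates" /* placeholder agent */,
                                      INTERNET_OPEN_TYPE_PRECONFIG, NULL, NULL, 0);
        HINTERNET url = InternetOpenUrlA(net, "https://example.com/rates.csv",
                                         NULL, 0, INTERNET_FLAG_SECURE, 0);

        while (InternetReadFile(url, buf, sizeof buf, &got) && got > 0)
            fwrite(buf, 1, got, stdout);    /* stream the rates CSV to stdout */

        InternetCloseHandle(url);
        InternetCloseHandle(net);
        return 0;
    }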


Calc is my favourite example too. Win+R+calc+enter and start typing, it used to work no matter how fast I was. Not anymore, I sit waiting at its convenience.

Android is pretty inconsistent too, the same series of steps are instant or hang the UI for 5 seconds depending on the current moon phase or something I haven't worked out yet.


The Windows 10 calculator even has a loading screen. The UI is hideously large. It's like a joke of what "modern" software has become, except that it's actually serious.

Not sure how any of this explains why a text editor can't fit in some countably finite number of kilobytes.

Compatibility might contribute?

Your list is interesting because of how it overlaps with yet reframes mine (from http://akkartik.name/about):

A. Backwards compatibility considerations. Early mistakes in the design of an interface are often perpetuated indefinitely. Supporting them takes code. Projects that add many new features also accumulate many missteps. Over time the weight of these past adaptations starts to prevent future adaptation.

B. Churn in personnel. If a project lasts long enough, early contributors eventually leave and are replaced by new ones. The new ones have holes in their knowledge of the codebase, all the different facilities provided, the reasons why design decisions were made just so. Peter Naur pointed out back in 1985 (http://akkartik.name/naur.pdf) the odd fact that no matter how much documentation we write, we can't seem to help newcomers understand our programs without talking to the original authors. In-person interactive conversations tend to be a precious resource; there are only so many of them newcomers can have before they need to start contributing to a project, and there's only so much bandwidth the old hands have to review changes for unnecessary complexity or over-engineering. Personnel churn is a lossy process; every generation of programmers on a project tends to know less about it, and to be less in control of it.

C. Vestigial features. Even after accounting for compatibility considerations, projects past a certain age often have features that can be removed. However, such simplification rarely happens because of the risk of regressions. We forget precisely why we did what we did, and that forces us to choose between reintroducing regressions or continuing to cargo-cult old solutions long after they've become unnecessary.

---

I can't really rebut anything you say. I'm going to keep it on my radar as I go about my project. It's currently pre-network, unsecured and inaccessible. And will always be manually configured, for reasons described in the link. But it's supported, for what that's worth.


The speed of a signal in wire or fiber is largely irrelevant. Most of your latency is taken going up and down your ginormous software stack. Take a look at what is required for wire-speed processing.

Security is improved by reducing the number of disparate components.


Networking isn't the bottleneck. As John Carmack quipped, "I can send an IP packet to Europe faster than I can send a pixel to the screen. How f’d up is that?"

Usually if there is latency caused by networking in an application, it's unneeded roundtrips caused by inefficient programmers or inefficient layers in the software stack.

It's insane how much overhead there is.


I am completely baffled by this quip.

At the speed of light in a vacuum, New York to London is about 19ms. That's the physical limit of what a network could ever hope to accomplish, but in reality, in fiber, it's apparently about 28ms.

At 60fps we need to present a new set of pixels to the screen every 16ms.

So in the time that a packet goes in one direction to Europe we could have presented almost two full frames of pixels.
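Spelling the arithmetic out, assuming a rough 5,570 km New York-London great-circle distance and light in fiber at about two-thirds of c:

    #include <stdio.h>

    int main(void)
    {
        const double km_ny_london = 5570.0;     /* rough great-circle distance */
        const double c_vacuum     = 299792.0;   /* km/s                        */
        const double c_fiber      = c_vacuum * 2.0 / 3.0;  /* ~0.67c in glass  */
        const double frame_ms     = 1000.0 / 60.0;         /* 60 fps budget    */

        printf("one-way, vacuum: %.1f ms\n", km_ny_london / c_vacuum * 1000.0); /* ~18.6 */
        printf("one-way, fiber:  %.1f ms\n", km_ny_london / c_fiber  * 1000.0); /* ~27.9 */
        printf("frames in that time: %.1f\n",
               (km_ny_london / c_fiber * 1000.0) / frame_ms);                   /* ~1.7  */
        return 0;
    }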

Even on the rather slow old SoC in the Google Nest Home Hub platform that I work on, we're able to do quite a bit of pixel crunching in that 16ms. Even with code written in JavaScript, or Dart. Enough to make our users mostly happy.

John Carmack is much smarter than me, so I can't believe he meant this literally, or it's been taken out of context.

The network is definitely a bottleneck.


You haven't even tried to take the shocking amount of latency introduced by the hardware and the software stack into consideration, and it's what prompted Carmack to make his comment.

A new set of pixels every 16ms is about throughput and we do have that. But the latency today is worse than in the VGA era.

If you have a completely black screen and want to draw a tiny white circle in the middle of it, it will take your processor or GPU less than a microsecond to change the bytes. Less than 16 milliseconds later (an average of 8ms, actually) the updated bytes will be flowing through the HDMI wires and into the monitor. There they will be stored into a local buffer. If there is a mismatch between the image format and the LCD panel then it will probably be copied to a second buffer. Some DMA hardware will then send the bytes to the drivers for the LCD and the light going through those pixels will change. All that can easily add up to 50ms or more.


Here’s his reasoning: https://superuser.com/a/419167

“ The bad performance on the Sony is due to poor software engineering. Some TV features, like motion interpolation, require buffering at least one frame, and may benefit from more. Other features, like floating menus, format conversions, content protection, and so on, could be implemented in a streaming manner, but the easy way out is to just buffer between each subsystem, which can pile up to a half dozen frames in some systems.”

What he’s talking about is (in his opinion) unnecessary buffering that causes a delay in the pixel actually appearing on screen.

He blames the driver and the display’s internal software, so his argument could be made out to support OP, but I think the situation is a bit more complex than Wirth’s law here.


Well, yes, I suspected it was something like this.

I'm well aware of this kind of bloating (guess I should have said something in my comment to avoid the downvotes...) but it still doesn't support the OP's comment.

Network latency is not only high, but there's literally nothing that can be done about it -- because of the speed of light!

(I am somewhat lucky to work on something where we can optimize away much of the crap you're talking about here, as we own the whole package.)


The person you initially negatively responded to said "networking is not the bottleneck," and if it's possible to have a meaningful negative reaction to that, it might involve asking "the bottleneck in what system?" I think he's right, but it's a blanket statement and it's fair to ask for more context.

More context: typical network latency is good enough that video games rendered on a remote server are becoming practical, or at least salable. "Network latency is high" is a vague enough statement that it could mean anything, but if being able to render video games remotely and stream the output to the client doesn't make you reconsider, I question what you would ever consider network latency that's not too high.

The kicker with these games, that perhaps speaks to the original, crazy post by horsawlarway, is that it's normal for a TV set and set of controls to introduce a lot more latency than the network connection itself: the network is not the bottleneck. There's a good excuse for the latency involved in networking, rooted in physics, but this is not true for the hardware and the software stack.


Well, yes, I recently worked on the video receiver component for Stadia on Chromecast. So I know a little about these things.

Perhaps you can see why that makes your comments all the more baffling? It's understandable that you might view the network as a UI bottleneck since you were working on an application that relies maximally on low-latency networking, but you must realize how unusual that is, and how fast typical networking actually is in order to make your work possible at all. (and to the original point, how lame an excuse network latency is for those who can't manage to cobble together a fast implementation of a much, much simpler application)

It's just that a round trip to Europe from North America (esp. western North America) is actually ... an eternity -- it could easily drift into 100ms -- and not one I can optimize to get rid of. Whereas I can work my way down the software stack, find bottlenecks and deal with them.

Yes I can't control what TV manufacturers do, that is a wild card. But the quote taken out of context is more than a little inflammatory -- the network has a hard physical limit that the local device does not.

FWIW I'm just as dissatisfied with software bloat as the next person. Retro computing is one of my hobbies, and the latency measurements there are something to be envious of.

ChromeOS generally does better in latency measurements than other platforms; much effort was made there, much of it by people I know.


You have presented almost 2 full frames of pixels in 28ms, but how many frames out of date were they when they made it to a human's retina?

That's just not true.

1. Networking isn't the number one source of complexity, since local apps can be bloated, slow, buggy, and complex.

2. Security is not a consequence of networking. A non-networked system can be insecure, and a networked system can be secure. Security is more normally viewed in the context of confidentiality, integrity, and availability, along with the concepts of authentication, authorization, and accountability.

3. Compatibility is not a consequence of either networking or security.

Your comment makes it seem that you believe complexity comes from having to adhere to constraints. That attitude is the issue! Not wanting to adhere to constraints is why software is bloated, complex, full of thousands of components, slow, etc.


It still seems useful to consider how security and compatibility add complexity to our software stacks, even if the causal chain of networking -> security -> compatibility isn't right.

Regarding 3, I think part of the problem of ecosystem fragmentation has to do with large tech companies finding their favorite niche and being reluctant to compete with each other.

I have a theory that this has something to do with incentives: competition can be good for the company that is able to grab another's territory, but on average it's usually a loss. That's especially true from the perspective of shareholders, most of whom don't just hold one stock but many. Some competition is necessary, but too much is inefficient. If you own stock in both Apple and Google, you're happy with them each having their own group of loyal phone buyers. Spending more money than necessary to try to attract the other company's customers to switch is almost a pure loss from the point of view of the shareholder. Companies might not explicitly collude to divide markets (which would be against anti-trust law), but they're still subject to the disapproval of their shareholders if they rock the boat too much. Sometimes I think it makes more sense to think of all publicly-traded companies as being a sort of loosely-structured single corporation rather than truly separate competing entities.

So we have all these different software ecosystems that all these different businesses have built their respective walled gardens around, but software developers suffer because they can't just write to one platform and have their application be portable. Instead, they need a totally different application if they want to be accessible to desktop PC users, Android, iPhone, the web, tablets, servers, enterprise customers, HPC, cloud, game consoles and so on. Sure those users all have different needs, but surely they could have quite a bit more cross-platform consistency and common tools and standards than what we have now.


Ah yes let's pine for the days when ~men were men~programmers were programmers and we coded real quality, unlike this current trend...

...and furry little creatures from Alpha Centauri were real furry little creatures from Alpha Centauri.

"In economics, the Jevons paradox (/'d??v?nz/; sometimes Jevons effect) occurs when technological progress or government policy increases the efficiency with which a resource is used (reducing the amount necessary for any one use), but the rate of consumption of that resource rises due to increasing demand.[1]"

https://en.wikipedia.org/wiki/Jevons_paradox


See modern IRC clients that take 500MB of memory on the desktop. Compare with OrCAD Capture, a mid-'90s schematic capture program that would run acceptably on a 486 with 16MB of memory.

To be clear, the "efficiency with which a resource is used" in this case can be represented roughly by dollar per unit compute. As that value goes down, you would expect software "waste" to go up. That's how the Jevons Paradox applies here.

The referenced AppSignal post about 13,000 dependencies for a todo list is conflating dependencies required for the build tooling with dependencies that are actually bundled in to the web app.

It's also hilarious that right after referencing the dependency problem in the JS ecosystem the OP then goes and advocates splitting up your library into a bunch of mini libraries. That's exactly how we got into this mess in the first place.

Also, computer speed has definitely gotten faster. There was a time period during which this was not the case, but particularly the switch to SSDs resulted in a massive jump in computer responsiveness. (The previous jump happened when we no longer needed to wait for dial up to establish a connection)


Responsiveness in terms of keypress latency has gotten much worse over time.

you should include a reference: https://danluu.com/input-lag/

Well, that is an awesome read, thanks!

For a more succinct tldr of the same thing:

https://mobile.twitter.com/id_aa_carmack/status/193480622533...


that was one of the most enraging non-political things I've read in years

In my experience, what the switch to SSDs resulted in was Windows 10 becoming practically unusable on a mechanical HDD. Comparing how long the OS needs to start up, launch applications, etc. on my laptop with Win10 vs a much older laptop with Win8 (the first and last time Microsoft bothered to optimize Windows in the last decade) is night and day, and that older laptop originally came with Vista!

When I first installed Win8 on that laptop I was so impressed I even took a video[0] of it (and yes, it seems like they kept that performance in 8, though sadly by 8.1 it became slower).

[0] https://www.youtube.com/watch?v=Ti3LQHXZ0Qg


The same happens on Macs. Starting Safari on an HD-based Mac is painfully slow.

At Microcenter you can buy a house-brand 128GB SSD for $58, boot from that, and store all your other files on a regular D:\ hard drive. That is how I set up my son's Windows 10 gaming rig.

Just know you can run a full Linux desktop experience on top of twenty-year-old spinning rust just fine. Micro$oft dropped the ball big time if one needs an SSD just to use an OS.

It needs an SSD to boot fast. Otherwise it takes a few minutes instead of seconds.

Windows 10 is for newer hardware anyway, it runs slow on older hardware.


You are completely missing their point. "For newer hardware" means slower. I remember how Windows 7 was so well-designed that it actually ran faster on all of my old XP machines than XP did. It was a step up in performance, not a step down.

In 2018 Microsoft disbanded the Windows team and moved engineering efforts to its cloud and AI teams. Windows 10 is on life support by engineers who aren't intimately familiar with its codebase. Performance and usability will only degrade in this situation and the specifications of newer hardware don't excuse constant deterioration in performance.


All non Azure MS products are legacy now.

Windows 10 is probably slow on HDDs because it's updating a load of UWP apps in the background, resmon.exe disk tab will show what's going on.


Ironically, Mark Russinovich, the author of Regmon, is now responsible for Azure

It was a re-org, those engineering teams just moved to be under CAI or ED. The people are still there.

The people are still there, but there's no organizational impetus to always use them, plus they have other duties now. The team being disbanded means a loss in product tightness and quality control.

That doesn't make any sense to me. Are you saying you work on one of these teams?

they optimized for current hardware, sacrificing some performance on outdated hardware.

I'd rather Windows used my SSD to the fullest than have it make concessions to spinning disks.


I'm not sure this is the right take, as I don't believe it's a matter of choosing to optimize for SSD vs. HDD's. Disks got faster, so the bar for baseline acceptable performance got higher.

How is Windows using SSD to its fullest? All it seems to be doing is taking advantage of the greater speed to perform more I/O that it didn't do before which negates any performance improvements the SSD brings.

Yeah, my main PC is all SSDs. I might replace my laptop's mechanical disk with an SSD but I use it very rarely. If anything, I use my retro(ish) PC more often than my laptop, so I'm thinking I might buy a cheap SSD for that instead :-P.

There are countries where you can live on that $58 for 2 weeks (or more). Maybe not everyone has the money to buy an SSD.

I agree with the core point of the article, but I agree with your points as well. The conclusion of the article is definitely wrong.

I recently buddied up with someone who has a CS degree to try and make a desktop app, as a side project. I asked if he is familiar with MariaDB and whether or not he's used C++ ODBC.

Our discussion quickly turned into choosing libraries / existing code. He uses Spring and Hibernate and was bewildered by the concept of tying columns to application variables "manually". After I told him I have some CentOS servers where we could put a shared database, he bought a Windows Server because he "needs a GUI". He added, "Ideally, after it's setup, we'll never even have to remote in to it." For the actual desktop application, he wants to use Electron.

It seems there are two distinct branches of computing emerging - one where performance matters, and one where it doesn't. Performance will always matter in places like the stock market or on IBM mainframes. In the consumer-facing world, all that seems to matter is perceived performance. Slack and Discord seem fast when you watch how quickly a new message pops up on your laptop screen after you sent it on your phone. They seem egregiously slow when you open up task manager and see just how much overhead the Chromium-based engine adds -- but most people won't care.

Applications made for "consumers" tend to be made like a cheap car - corners are cut, the end result isn't pretty, but things in the category are what makes up 90% of ordinary use cases. Slack isn't meant to be open on your work machine while you're compiling code in the same way a Prius isn't meant to chauffeur top-level executives. It doesn't mean that the Prius is bad or unimportant - it will do far more for more people than the entire lineup of many luxury car brands.

But I am damn sure that I'd rather be engineering a Bentley than a Prius.

Edit: In the metaphor, luxury and performance are sacrificed, not efficiency. I probably should've used Fiat Chrysler, but unlike Fiat Chrysler, consumer-facing software has its place. I just don't enjoy working on it myself.


> but most people won't care.

Charity Majors said about reliability: "Nines don’t matter if users aren’t happy."

I'd propose an alternative view here: "Bloat doesn't matter if users are happy."


Actually, one of my desktop products received a lot of praise from customers who (aside from the nice features and responsiveness) also loved how a single exe of a few MB, with no dependencies, does so much more, and faster, than the 1.5 GB download from the competitor.

My other current product, custom made for a customer in C++, is a business server that receives various commands over HTTP, does lots of calculations and delivers data that are used to generate a report (I wrote a report generator in JavaScript that runs in a browser). The whole thing is again a single exe that needs nothing but a connection to a database. The exe itself is around 1MB, versus again gigabytes for the old mess they were using before, and it is more than 100-fold more performant. The customer was simply shocked when they saw the result. They did not think it was possible.


Users are happy until they see better :)

Today bloat doesn't matter; tomorrow you're a fat dinosaur.


Sure, but... that confirms this view.

This view is more akin to working class parents telling their children to become a builder because “people have always been building”.

You should not outsource your main product. And if your product is a chat application that is supposed to be fast and always on the background, then you’re outsourcing your main area of expertise.


I have to say I don't really get the analogy with Bentley vs Prius. Engineering the Prius seems like it's both more challenging and rewarding. Constraints are tighter and expectations of reliability are higher, and efficiencies are more important - in fact, the Prius is all about efficiency.

(Also, the point about someone "with a CS degree" - it's completely irrelevant that they had a CS degree, AFAICT. It mostly comes down to their prior experience, and they clearly didn't have a lot of it. That's mostly orthogonal to education.

I also wouldn't pick C++ for a CRUD app, not without very specific requirements. I'd choose C# if there's no existing web app, but if there's an existing web app, or there's going to be one, Electron makes a lot of sense.)


> "It seems there are two distinct branches of computing emerging - one where performance matters, and one where it doesn't."

The unfortunate corollary being that the prevalence of developers who treat it as if it doesn't makes it darned hard to find one who treats it as if it does when you need one.

> "Applications made for "consumers" tend to be made like a cheap car"

Not really a great analogy. The Prius has a reputation for quality and reliability; it's often seen used in taxi fleets, where those qualities are a must. Luxury cars often have high maintenance requirements and poor reliability since cost is not a concern to their owners.


> Luxury cars often have high maintenance requirements and poor reliability since cost is not a concern to their owners.

This is absolutely a nit and does not disqualify your overarching statement at all, but generally these attributes are due to the constraints of technology-- for example, fitting large, high performance engines necessarily increases the frequency of maintenance intervals as well as the difficulty (and therefore, cost) of maintenance. If luxury car manufacturers could make cars with the handling and performance characteristics of their flagship models but with the maintenance costs and intervals of a Toyota Corolla, they absolutely would.


This tends not to be why luxury models are unreliable in practice.

Luxury models are used by the manufacturer as a proving ground for the tech they're going to filter down to the mass market models next. Luxury model customers are relatively price-insensitive but want to be able to show off shiny features, so they get the exclusive new tech that's not been tried at scale yet. Thus the unreliability. It's not the technical constraints, it's the maturity of the technology.


The main reason Priuses are used as cabs is local regulations, i.e. airports not allowing non-hybrid cabs, etc. For passengers, the experience is sub-standard compared to regular cabs.

What’s substandard about a Prius? They’re fantastic cars.

Dual Core and Quad core were also massive improvements in performance.

Good post. I would add that since Moore's law is over, I think the long-term prospects for crap like electron and discord are not good. Future phones and laptops will (almost certainly) never be orders of magnitude more powerful for regular workloads, so assuming that people will want to do more and more with these devices, it seems that software that burns millions of cpu cycles and gigabytes of ram just to put some pixels on the screen and send some http packets will eventually get squeezed out of the marketplace.

Unfortunately, most messaging apps now (Teams, Slack, Signal come to mind) build their UI with Electron.

You're right about a lot, but I really want to argue against the idea that "people don't care." They do care. They care when their phone dies after 2 years and they have to buy a new one because updates have rendered it unusable. They care when the phone/PC/laptop gets hot with only a few tabs open. Polls seem to show people generally care about climate change, and as little energy as PCs use (relative to transportation and electricity generation for things like heating), they probably would care about this too. Users just don't understand that things could be better, especially if they're younger and don't remember Windows 2000 or DOS, for example. On the other hand, every older person I know curses the newfangled things, even if there are some new connectivity capabilities in modern machines.

And the thing is that connectivity has little to do with the layers of abstraction needed to, idk, render a button on a screen. Even developers, as your anecdote shows, lack the creativity to imagine better things! Christ, if that doesn't show something is wrong, then what will?


I’m not even that old (early 30s) and I curse at 99% of the tech I use. Especially tech that we use to build other tech.

I started my career as a software engineer writing C, then C++ working on desktop software. Things made sense back then, and that was just 10 years ago.

I’ve tried contributing features to two Electron-based apps. I gave up on both. I just can’t make sense of it.

I’ve sat down countless times over the last decade telling myself I’m going to teach myself this damn frontend web stuff. I gave up every time. It’s ridiculous.

Young folks who have only been exposed to modern garbage have no idea how good it was before the web took over.

I often wonder how this came to be. How we went from well-documented, efficient, sound APIs and libraries to the monstrosity that we have now. I have nothing to offer though.


There's another type of consumer-facing computing where performance matters: embedded computing on small, weak devices.

These are also very cheap devices, but they rarely have the resources to run something as bloated as an Electron app.


> It seems there are two distinct branches of computing emerging - one where performance matters, and one where it doesn't.

The thing is - there is no need to address performance if it does not matter. That would lead you to Knuth:

"The real problem is that programmers have spent far too much time worrying about efficiency in the wrong places and at the wrong times; premature optimization is the root of all evil (or at least most of it) in programming." - Knuth

So I think there is actually a dial: if performance matters, you turn it one way and optimize as necessary; if it doesn't, go on to the next project that is fighting to get out of your head.


Please do not take this (Knuth's statement) out of context. It is necessary to read the whole text to understand this sentence. It has nothing to do with whether performance matters.

But sometimes performance is dependent upon your framework, so before you start you will need to know your quantity structure (the data volumes you expect).

I wish slack seemed fast for me. It's like typing into molasses.

>Slack isn't meant to be open on your work machine while you're compiling code

Meant by whom? Because I certainly keep Slack open while I am compiling. And it seems to me that it is meant to be kept open in the background while you do other stuff.


Sure there are two types (or more?) of dependencies, but at the end of the day the AppSignal app requires those +13,000 dependencies in order to work properly (first to build it, and then to run it). That's horrendous.

How many dependencies do we need to put food on the table? Perform a surgery? Or make an optically flat piece of glass?

I believe all pieces of technology have similar dependency graphs.


You might be missing the point. A hospital in which surgeries are performed should hopefully already contain all dependencies required for a surgery. So when a new surgery is starting, the hospital staff should not need to run out and get more supplies.

In the same way, the "dependencies" required to build my hardware aren't counted in that number. What counts is the extra stuff that isn't already part of the running system.


I had a totes badass response with [9] footnotes and backstory, an arc, with pull quotes and a heartfelt ending. I pulled it to post on medium for my sweet side hustle, you don't deserve my prose.

Take your "quotes" and use them to run NixOS on OpenBSD.


> The referenced AppSignal post about 13,000 dependencies for a todo list is conflating dependencies required for the build tooling with dependencies that are actually bundled in to the web app.

The point is that a dependency is a dependency, and making software needlessly complex in any way is bad.

I've run into this a lot with open source stuff (not even JavaScript, just native code): I want to fix/modify something, and I know it's open-source, so it should be easy, right? After looking at the sprawling dependency tree, and the effort it would take to set up a build environment --- which might not even give me the same output as the original binaries, which are working perfectly fine except for the part I want to change --- I decide I'll just find the locations with a debugger and patch the binary directly.

tl;dr: the complaint is equally valid when applied to build tools, which are themselves also software and becoming bloated.


"Effort to set up a build environment" is pretty much exclusively a problem with C and C++ code. Pretty much every other language has standard build tooling. e.g. with JavaScript you can 'npm install' and you're set.

Unless of course the developer uses yarn or whatever npm alternative is popular this week. The last time I tried building an electron app, it just spit pages of dependency errors at me and I gave up.

Back in the '70s I worked on machines with head-per-track disks. SSDs begin to bring that responsiveness back.

I guess it's more of a natural selection process - the software house which can deliver a good enough working piece will (nearly) always beat the one which adds another two years of development time (and cost) to make the app a bit snappier. Ask Lotus Notes and Netscape.

I don't want to say that performance does not matter at all - it does - but with hardware being as cheap as it is and developer time and time-to-market being as expensive, optimizing that last 500ms and 200MB out simply is not going to be worth it.

And let's not disregard the expense of performance optimization - you'll not only need a reference to benchmark and test against, but also spend a lot of time debugging and writing very platform-specific code with tons of edge cases. It's not like saving 2GB of memory comes for free.


Multiply that 500ms and 200mb by more than a few thousand users and you are talking about real time and money.

Everything seems to have been optimized for the enterprise market.


That's why I've mentioned time to market. Take GitHub as an example: It's neither the fastest/most lightweight git hosting page (that would probably be Gogs/Gitea) nor the most feature-rich (GitLab is far more advanced, as far as I can tell). They are where they are because they've made a viable product first. Same with slack: Electron is not a good solution, but it was a desktop app in 5 minutes - can't beat that.

I'm not trying to say performance doesn't matter - I'm using lower-powered devices myself - but development time is also a big factor for b2c.


That's why Chuck Moore, the creator of Forth, used to say that he can create any piece of software with 99% less code than a commercial version. He always designs software thinking about how to keep it as close as possible to the hardware and removing useless abstractions.

Sidenote: Niklaus Wirth designed Oberon, one of the coolest operating systems I’ve ever seen. You can run it on an emulator. It’s basically a graphical OS, which sounds like an oxymoron, until you realize that the programming language itself is also graphical.

It’s so unknown that it’s shocking. Imagine designing an entire OS that was used by dozens of people, and no one knows about it. http://worrydream.com/refs/Wirth%20-%20Project%20Oberon.pdf


What gets me is part of the '95 quote about computing in the 70s

"About 25 years ago, an interactive text editor could be designed with as little as 8,000 bytes of storage"

Such a text editor likely couldn't handle lowercase in English, let alone any other Latin-script language, let alone CJKV or bidi. The bloat in software of '95 and the present day is real, but there is no real effort to make an apples-to-apples comparison of what our expectations of software are, and that massively weakens the argument.

Parallel arguments can clearly be made for compilers etc.


The main question is what should you do with all that application code space other than make the functionality of your core application richer? (adding Unicode is a perfect example - do that and for English language use, the app might seem identical but it's become much richer).

What use would one million, highly functional, stand-alone 8K programs be? What about a hundred thousand 80K programs?

I'd love wholly new categories of software to just pop up. I think about what those might be. But it seems like actually they appear quite seldom. So what else is there to do but throw code at what people use every day, for marginal improvements in existing functionality.


Thought provoking questions for sure!

Some of the improvements are marginal, but not all. From a developer perspective, moving from editors that support everything from syntax highlighting to IntelliSense, refactoring, etc., to an editor that has none of that is quite a shock.


That text editor seems to have been enough for Wirth to create the Pascal and Oberon languages. VI came out in 1976. Even today, lots of people use VI or some derivative of it.

Even then, a text editor isn't necessarily a necessity. I've been reading a lot of IBM 1130 code (for example Guy Steele's LISP, and Chuck Moore's Forth).

There was no text editor. The code for these was entered on punched cards or tape. Somebody typed those cards from sources on paper, maybe on a bunch of K26-5994 forms


http://www.texteditors.org/cgi-bin/wiki.pl?TinyEditors

A lot of these are less than 8K, none of them fails to handle lowercase, and I bet the majority of them will be fine with "high CP437" bytes (so other Latin languages).


Of course, but editors written in 1970 (25 years before the Wirth quote in question) often couldn't handle lowercase (and neither could early versions of Pascal), frequently because platform support was limited or missing.

Much of the capability of a modern tiny editor comes from the environment in which it is running; we just expect more from an editor now, and the developer expects more from their operating system.


“1995 was the year in which programmers stopped thinking about the quality of their programs.”

There are some valid points in this article, but seriously?


Actually it was the year that managers stopped hiring programmers who know how to do quality control and focused on doing it faster, with more bugs, so it ships sooner. If you look at some of the MS-DOS programs, they ran with few bugs, but Windows 95 ran them in MS-DOS mode. Later on, Windows programs needed updates and bug fixes to make them work faster, and every version of MS Office fixed bugs but ran slower even as CPUs and hard drives got faster. They didn't focus on quality any more, just shipping with bugs and fixing them later on.

"Languages like these made programming a lot easier and took many things out of the programmer’s hands. They were object-oriented and came with things as an IDE and garbage collection.

This meant that programmers had fewer things to worry about"

Yeah, right. Those lazy programmers.

It's obviously become easier to build complex software, but software is now required to be much more complex. There are still many things to worry about (actually far more than previously, I'd say), but they're not the same things.


> About 25 years ago, an interactive text editor could be designed with as little as 8,000 bytes of storage.

That would be in 1970, but my guess is that "ed" would be a hard sell today.

There is plenty of bloat to go around these days and I think we could do a lot more to address that. But we've all got too much skin in the web game to own up to the embarrassing fact that a chat program that's basically IRC with pictures feels like glue on a 2.4 GHz, multi-core CPU.

With that out of the way, we shouldn't get silly, either. Every actually useful feature added will increase complexity and resource usage. I like split-window, code-folding, auto-indented, word-completing, syntax highlighted multiple document editing more than I like saving a few fractions of a percent of my hard drive space.


Maybe ed would be a difficult sell today, but vi came out in 1976, the essence of which is still in use.

What we'd recognize as vi was first called "ex" - and since it was more featureful than ed, it was also a larger program [0]

The essence of BSD is also still in use, but I'm willing to bet that once this essence - much like that of ex - has been expanded into something we'd consider a capable system today, it's also going to require a lot more resources.

[0] https://minnie.tuhs.org/cgi-bin/utree.pl?file=2BSD/src/ex/RE...


> more than I like saving a few fractions of a percent of my hard drive space.

Hard disk space is seldom the actual issue. Instead, it's bandwidth used (too expensive to download over a mobile connection, or maybe not even feasible to download over a low-quality connection), or memory requirements (can't reliably use Slack + a spreadsheet + Photoshop at the same time), or power consumption (laptop out of battery in 1 hour).

Do you like split-window, code-folding, etc etc so much that you can't download it while traveling in a rural area, have to close it so you can run Photoshop, and have to carry a spare battery so you can use it for the entirety of a 3-hour plane ride?


> bandwidth

Why would I download a text editor every time I wanted to use it? Even VS Code is stored locally, ready to be used off-line. But let's not pick an Electron app for _everything_ we want to do: last time I checked, Notepad++ does all of what I listed, and more.

> memory requirements

Yes, this can be a problem and is a sure sign of bloat. Meanwhile, I think we're bad at picking our software: why do we just sit back and accept stuff like this? I've got 8 gigs of RAM in my dirt cheap home laptop and I can run Gimp, GNumeric, Firefox and watch a movie at the same time just fine, with plenty of RAM left to spare. For professional use, requirements are and have always been higher: hence the $15k workstations of yesteryear.

I think we do a bad job of promoting native software, which would likely result in higher productivity and lower hardware costs, probably because doing so would put our own cushy web coding jobs on the line, too.

> battery life

There are still some hard limits we have to take into account. Even if I for some reason did have to work with both programming and Photoshop on a 3 hour plane ride (I luckily do not), I don't think the answer to my problems would be to switch to "ed".


>> bandwidth

>Why would I download a text editor every time I wanted to use it?

To give a serious answer: the web is literally that. And I suspect the success of the web is in a major way due to Windows' lack of a streamlined package manager, as the installation ("loading") of a webpage takes about a second, whereas installing an average Windows program can take several minutes and involves clicking "Next" several times.

Fast install times help with discoverability: if you can click a link and "install" a browser in under a second, it becomes pretty trivial to try out ten browsers in under two minutes, which encourages experimenting. Plus, you'll have a reasonable expectation that you won't have to spend time cleaning up the cruft left by unwanted IDEs you don't plan to use.

The install-time issue also applies to updates. Web pages mostly don't have an "updating..." loading popup like, e.g., Steam has every time there's an update.

Note: I am NOT advocating we all switch to web browsers. I AM advocating that we try to make our package managers a couple of orders of magnitude faster.


I am very seldom in a situation where I must download my IDE while traveling in a rural area, run it with Photoshop simultaneously, or use it for the entirety of a 3-hour plane ride.

While multi-hundred-megabyte text editors consuming double digit CPU to render some text are definitely a sign of inefficiencies _somewhere_, I value any marginal productivity benefits from these additional features over (possibly significant!) usability in very resource constrained situations.


That text editor quote really bugs me. Sure, my Atari 800 had a word processor that came on an 8K cartridge. I couldn't type Chinese, Japanese, Korean, Russian, or Arabic into it. I could only type ASCII. Just the ability to do that alone would likely entail many many megabytes of code.

For one, there is no text-based system like those old machines that handles all that, so I have to switch to graphics. Just the font alone for all of those languages will be multi-megabytes, and I'll need multi-megabytes more for space to rasterize some portion of those fonts. Rasterizing just 4 or 5 glyphs is more work than that entire 8K word processor had to do on its 40x24 text screen.
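A rough back-of-envelope with made-up figures makes the scale concrete (every number below is an assumption for illustration, not a measurement):

    # Hypothetical sizing: a CJK font with ~27,000 glyph outlines at a few hundred
    # bytes each, plus an RGBA bitmap cache for a couple thousand rasterized glyphs.
    glyphs = 27_000
    bytes_per_outline = 300                      # assumed average outline size
    font_mb = glyphs * bytes_per_outline / 1e6   # roughly 8 MB just for one font's outlines

    cached = 2_000                               # glyphs kept rasterized at once
    cache_mb = cached * 32 * 32 * 4 / 1e6        # 32x32 px, 4 bytes/px -> roughly 8 MB
    print(f"outlines ~{font_mb:.0f} MB, raster cache ~{cache_mb:.0f} MB")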

Then for each language I'll need multi-megabytes for handling the input methods. The input methods are complex and will likely need a windowing system to display their various options and UX so add more code for that.

The point being that we need the complexity. That 8k editor wasn't useful in the same sense as our stuff today. I don't know a good analogy. It's like complaining that people use CNC machines today and at one point got by with a hammer and chisel. I'm not going back.


I think actual bloat is better measured by comparing the size and speed of a program with a few of its previous versions and looking at the number of new features. Outlook strikes me as a suitable candidate: I can't think of anyone using it any differently today than, say, 10 years ago.

I even think comparing with something that is 20 years old today is more interesting than comparing a 1995 IDE to a line editor written for teletypes.
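A quick, hypothetical way to collect that kind of baseline (the program path below is a placeholder; point it at each version you want to compare):

    # Crude bloat probe: on-disk size plus time to execute a trivial invocation.
    import os, subprocess, time

    program = "/usr/bin/nano"        # placeholder; substitute the binary under test
    size_mb = os.path.getsize(program) / 1e6

    start = time.perf_counter()
    subprocess.run([program, "--version"], capture_output=True)
    elapsed_ms = (time.perf_counter() - start) * 1000
    print(f"{program}: {size_mb:.1f} MB on disk, {elapsed_ms:.0f} ms for --version")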


> Outlook strikes me as a suitable candidate

The web version of Outlook is a great example --- on a computer only a year old, it often lags on every keypress when writing an email.

YouTube's redesign ("Polymer") is another example, where the new site is much slower than before, despite not really increasing in features.


> Just the ability to do that alone would likely entail many many megabytes of code.

> Just the font alone for all of those languages will be multi-megabytes, and I'll need multi-megabytes more for space to rasterize some portion of those fonts.

> Then for each language I'll need multi-megabytes for handling the input methods.

Those statements clearly show your lack of awareness of what things were really like 40 years ago. They had CJK input and output (https://en.wikipedia.org/wiki/Cangjie_input_method was invented in 1976, for example) on the systems of the time, and that certainly did not entail "megabytes of code".

What it did entail, however, was a certain amount of skill, creativity, and an appreciation for efficiency and detail that led to being able to do it with the hardware of the time, skills which are unfortunately a rarity today. Instead, we are drowning in a sea of programmers who think the simplest of tasks somehow requires orders of magnitude more resources than were available decades ago, when the reality is that there existed software at the time able to do those tasks perfectly well and at a decent speed.

> The point being that we need the complexity.

The point is precisely that we don't.


While parent's "many many megabytes" might be an overstatement, today's editors are expected to display any number of languages and alphabets simultaneously, using user-configurable, scalable, variable-width fonts that render in a variety of different resolutions with sub-pixel smoothing. Things like that add to the complexity of both the OS and the application itself.

All that "bloat" enables us to do (much) more with fewer developer hours.

The balance is the same as ever, developer time vs. compute power. As compute power gets ever cheaper, and developer time still costs the same, we put bigger burdens on the machine to save developer time.

In other words, simple economics.


I don't think that developer time is the only metric here.

Yes, every project can get to market or to the next release with a startup company's speed, but you make a trade-off and generally get "startup quality" that way. That is the actual point.

This industry is so fast-moving that the costs associated with that level of quality are often paid later by other people. There are plenty of examples of this.


Software speed is a usability feature, and usability is a subset of economic value.

It turns out, people are perfectly happy to put up with slightly slower software for all of the other benefits we get from modern software: rapid development, rapid deployability, ease of code comprehension, and more.

When users complain about slow software enough to buy a different product (in whatever variation of "buy" that may be), then it becomes a high-priority thing to fix. Software today is precisely as fast as it needs to be, and no more, typically.

To me, this article is just the software engineer's version of Grandpa complaining that things were better in the old days; he's ignoring the reasons why things changed.


>people are perfectly happy to put up with slightly slower software for all of the other benefits we get from modern software

I don't think this is true. I think this is more of a 'boiling frog' situation. Increase the temperature one degree at a time and it won't jump out of the pot.

Every individual piece of software slowly eating up more resources is something the user barely notices or doesn't even attribute to the software in question, but give it a few years and everything grinds to a halt, and people very much dislike this, hence the infamous 'Windows rot' that everyone has suffered from at one point of their lives.


> 1995 was the year in which programmers stopped thinking about the quality of their programs.

Oh, please. There are some good points in the article but that hyperbole was unnecessary. I was programming in 1995, and nobody stopped thinking about the quality of their software.


Proliferation of hyperboles has gone out of control. It's hard to read the news, watch YouTube videos, or even listen to normal people speak in daily life without writers, content creators, and everyone else falling into this very annoying pattern. I think the problem is worse among our American cousins than elsewhere in the English-speaking world.

> Proliferation of hyperboles has gone out of control,

:-)


I've been programming since 1965 and have read Dijkstra and Niklaus Wirth. And these days I break software at people's request. There is in fact much less concern about and emphasis on quality these days.

It is also interesting to note that Knuth does not do libraries.


Can you elaborate on the Knuth thing?

Reading his TAoCP you can see it in action.

The wonderful NY Times article https://www.nytimes.com/2018/12/17/science/donald-knuth-comp... talks about this, with a quote from Norvig:

In those early days, he worked close to the machine, writing “in the raw,” tinkering with the zeros and ones.

“Knuth made it clear that the system could actually be understood all the way down to the machine code level,” said Dr. Norvig. Nowadays, of course, with algorithms masterminding (and undermining) our very existence, the average programmer no longer has time to manipulate the binary muck, and works instead with hierarchies of abstraction, layers upon layers of code — and often with chains of code borrowed from code libraries. But an elite class of engineers occasionally still does the deep dive.

“Here at Google, sometimes we just throw stuff together,” Dr. Norvig said, during a meeting of the Google Trips team, in Mountain View, Calif. “But other times, if you’re serving billions of users, it’s important to do that efficiently. A 10-per-cent improvement in efficiency can work out to billions of dollars, and in order to get that last level of efficiency, you have to understand what’s going on all the way down.”


I'd say it's probably more a factor of the _requirements_ of the software getting harder. Pre-1995, most developers were writing apps running on a desktop that wasn't even multitasking. Post-1995, you have preemptive multitasking, TCP/IP, GUIs, multi-platform support, search indexes, etc. Most don't have the brain power to be experts in all those fields, hence you have to pick where you can excel in quality.

Sorry, I don't buy that. In 1970, I was writing multi-tasking assembler-language code for real-time collection of medical EKG data, feeding what we today would call an expert system or even AI to generate an English-language cardiology report and send it back to the hospital, first via paper tape carried over to ASR 33s, later by dialout (where each character generated an interrupt). We gave ten-minute turnaround.

I cringe today when programmers struggle with async processes in C or other languages, or with async/await. We (Michael Whinihan and I) developed a dead-simple pattern using co-routines that vastly simplified interrupt-driven programming. It is as if folks these days haven't read https://www.amazon.com/Operating-Principles-Prentice-Hall-Au...
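(A minimal Python sketch of the general co-routine idea, purely illustrative and unrelated to the original assembler implementation: a handler keeps its state between "interrupts" instead of being chopped into callbacks.)

    # Hypothetical illustration: a generator-based co-routine consumes one character
    # per simulated interrupt and keeps its parsing state across suspensions.
    def line_handler():
        buffer = []
        while True:
            ch = yield                       # suspend until the next "interrupt"
            buffer.append(ch)
            if ch == "\n":
                print("record:", "".join(buffer).strip())
                buffer.clear()

    handler = line_handler()
    next(handler)                            # prime the co-routine to its first yield
    for ch in "HR 88 bpm, sinus rhythm\n":   # each send() stands in for one interrupt
        handler.send(ch)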

I did a significant fraction of the work to build this, and I wasn't the smartest guy in the outfit either. (Probably the most smart-aleck.) Check out one of the team members: https://en.wikipedia.org/wiki/Fibonacci_nim.

None of us were experts when we started this. We figured all the parts out and made it work, reliably. We had on the order of a small integer number of hours of outages per year. This was before Tandem Computers was born.

Now serving the medical community puts some pretty stringent requirements on what you build and how you operate it.


> I’ve been programming since 1965, read Dijkstra, Niklaus Wirth.

Well sir, you have me by ten years, and believe me I respect that extra decade. That said, I think you're falling into the trap many of us greybeards tumble into as we get older: the things we cared about and focused on become, in our minds, the right and true perspective that has been lost or disregarded by younger practitioners. It's really not very different from "When I was your age I walked to school every day, uphill, both ways!" Or maybe it's just me.

Yes, there are far fewer programmers hand optimizing assembly code, and yes every programmer now cobbles together applications by reusing code written by other programmers, some of which is very good, and some of which is not. But if programmers were still spending their time optimizing low level loops and eschewing any code that they did not personally write and verify the beauty of, the world we have today would not exist. Instead of zooming with my family in the middle of a pandemic we'd be exchanging emails, or posting on a BBS to say "Hi!" There's obviously still a need and role for people who like to work at that level, and that kind of engineering remains fascinating (one of the reasons I love to read the linux kernel mailing list), but I don't bemoan the rise of high level languages, libraries, package management ecosystems, frameworks and the like. That stuff has given us the world we have today, and a few warts notwithstanding I still like it much better than the one we had.


Whippersnapper. I grew up in Montana back when winters were severe. I did walk to school—about 25 feet as my parents drove me.

None of what you seem to think about my rant represents my position. Read my other note in this thread about the NYT article concerning Knuth and Norvig's commentary. There is a time today for a total deep dive. There is skill involved in doing this, and wisdom in knowing when to do it.

There are folks who write with minimal libraries, cf. qmail.

What seems to be completely missing from today's discourse about programming is something Dijkstra said about interrupts. Paraphrasing: "If you don't see the code on the page in front of you, you will make mistakes."

Take a look at modern Java. Levels of abstraction in use require serious deep dive to truly know what is going on. There is a famous Node package issue where code that wasn’t even on your computer crashed a swath of applications.

Quality in the context of the article means the code is pleasing to read, doesn't crash unexpectedly, and doesn't have side effects that you may only discover when Brian Krebs emails you about your customer's data ending up in some remote online flea market.


Well if you have to pick a line, a point in time where the industry crossed a threshold, I'd say the author picked a good one.

There were people who didn't care about software quality in 1960 and there will be people who do in 2060, so no matter what year he chose, a lot of people would dismiss any chosen year as hyperbole.

As time moves forward, and more "healthy" ecosystems grow up around languages (like unwanted mold or fungus always does) software is going to get slower and slower and slower. And saying that it's a problem will always be an "old man is shouting nonsense again" moment to most young developers.


It's like they never actually ran Windows 95 and had the experience of people accepting your operating system crashing daily. It was much more reliable than Windows 3.1, after all (and, to be fair to MS, it was also more reliable than Mac OS System 7 and ran faster than OS/2).

There's always been good software and bad software.


Windows in particular made terrible software acceptable. Windows really was an utterly unforgivable PoS compared to DR's GEM + TOS on the Atari and AmigaOS (multitasking!) on the Amiga, both of which were already around in the mid-'80s.

It really took incredible hubris to sell Win95 as a huge step forward for the industry when it was so hugely inferior in quality to an OS that had been hacked together by a small team of best-in-world hardware and software designers ten years earlier.

But it did succeed in one area - which was doing a great job of lowering consumer expectations and making the most appalling carelessness and incompetence seem like a boon from the tech gods.


Incentives are often a big part of the problem. In many cases, people are writing software that will run on someone else's hardware. This applies to basically all front-end web development, all mobile apps, and all software licensed to someone else.

That means the programmer (or their employer) doesn't pay for the electricity it uses (or battery it drains), the RAM it allocates, the disk space it wastes, or the hardware upgrades necessary to make it run acceptably.

Why conserve a resource that you're not paying for? Especially if you have to expend your own resources to do it.

I'm not defending crappy software (nor, apparently, offering a solution), but if a programmer's personal sense of honor is the only weapon in this fight, then it's not a great formula for winning the fight.


Another solution would be to use proper, statically and strictly typed programming languages with AOT compilation. Rust, Swift, Kotlin: these are among the best production-level languages that can produce both native code and WebAssembly. There are other, similar languages as well. The key point: when the compiler knows more about your intentions (e.g., via a strict and sound type system), it can optimize better, especially with LTO or PGO.

One only has to look at the demoscene to realise that limitations inspire creativity and efficiency. It's just a form of art, but gives a glimpse at what computers are capable of, if only we would try to use them more efficiently. I think award-winning 4k or 64k demos should be required watching for every software developer. Here are some personal favourites which have also appeared on HN:

https://news.ycombinator.com/item?id=11848097

https://news.ycombinator.com/item?id=14409210

https://www.youtube.com/watch?v=Y3n3c_8Nn2Y



> 1995 was the year in which programmers stopped thinking about the quality of their programs.

-yawn-

Prove it.

The number of books and articles about the quality of software engineering published in the past 25 years certainly seems to prove we care a great deal about code quality. Probably more than we ever did prior to 1995, when Pascal, BASIC, FORTRAN, COBOL, and self-modifying assembly code were being taught in colleges.

As for the point of the article, that hardware doesn't accelerate as fast as programs using it, same challenge: prove it.

This article is all handwaving and speculation with no data to support any of its claims.


The central conceit being that we are all doomed as software becomes more bloated and less efficient while processing power remains at its current levels?

While Moore's law may be over in terms of number of transistors per chip, we are still seeing growth in the number of cores per CPU, not to mention faster memory and the death of spindle storage.

IMO, software will start leaning more heavily towards parallelism (if it hasn't already; building a threaded or asynchronous application is no longer an arcane dark art). Couple that with the surge in adoption of distributed architectures (yes, microservices) and the emergence of languages that encourage pragmatism and efficiency (à la Go and Rust), and I think we'll be okay for the next while. Can't speak to Electron, though (gross).


>A good way to start would be to split up libraries. Instead of creating one big library that does everything you could ever possibly need, just create many libraries. Your god-like library could still exist, but solely as a wrapper.

This isn't it. It's literally what is creating the problem. Small libraries mean duplication, a lack of shared abstractions, and dependency hell. The reason garbage collection was such a huge win is that, before it, every C library shipped with its own completely different way of managing memory, and that's a nightmare.

We need bigger libraries providing better abstractions that are reused throughout and written to allow composition and dead code elimination ("tree shaking" for the JS crowd) to work well, so you can opt in to the functionality you need and opt out of the functionality you don't.

Communities working on big libraries can actually do good release planning, backwards compatibility, and timely deprecations. React is an example of a library that does a really good job of this. All the smaller libraries in the JS ecosystem are the cause of the pain in modern JS development.


I think we should continually fold new libraries into the language. Basically "batteries included", collapse the dependency tree.

I found that the standard Python library has enormous functionality that makes it possible to solve many problems in a self-contained fashion.

This sort of solution will give you a big toolbox spread out in front of you. You will be less likely to re-invent the wheel. Many eyes on the standard library may lead to optimizations used by everyone.

And it makes it possible to share effectively. You can talk about functions with others using common terms. You can share your code and it will work in other environments. Education can teach in an unambiguous fashion.
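As a small, made-up illustration of that self-contained style (every module used ships with CPython; the data and task are invented for the example):

    # Parse some CSV, stash it in an in-memory SQLite table, and emit JSON stats,
    # using only standard-library modules.
    import csv, io, json, sqlite3, statistics

    raw = "name,score\nada,92\nniklaus,95\ndon,90\n"
    rows = list(csv.DictReader(io.StringIO(raw)))

    db = sqlite3.connect(":memory:")
    db.execute("CREATE TABLE scores (name TEXT, score INTEGER)")
    db.executemany("INSERT INTO scores VALUES (?, ?)",
                   [(r["name"], int(r["score"])) for r in rows])

    mean = statistics.mean(int(r["score"]) for r in rows)
    print(json.dumps({"count": len(rows), "mean_score": mean}))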


It’s true that lots of software is slower than it should be. It’s also true that software does a lot more than it used to. What’s not true is that abstraction is the reason for slow software, at least not on the order of magnitude that the article claims. Even in the worst case, the cost of language level abstractions is well outstripped by advances in processing power.

I’m not actually sure we really even have “slow” software. At least not relative to how much that software can do. Latency is a different story.


He was right in 1995 and he still is. Bucky, as he was called at Stanford, is one of the founding generation of computer science who is still alive. Being 4 years older than Knuth, he might even be the oldest one.

Modern software is doing orders of magnitude more than the software of the past. The feature set of a word processor in 1980 is a rounding error compared to the 2020 word processor’s print dialog.

The OP’s viewpoint is akin to claiming that aviation has not advanced beyond Kitty Hawk because modern aircraft are simply wasting fuel when they take to the skies. It doesn’t take into account the huge differences in the modern computing environment.

The computers of the recent past weren’t even powerful enough to encrypt a modern TLS session. Does that mean using all that power to encrypt the session today is “bloat”? Of course not. One person’s bloat is another person’s feature. Note that sometimes that feature is that the software can actually exist and be maintained on your preferred platform.

If you think about it, it would be an utterly bleak future if we were just running the exact same software stack on faster and faster hardware. That would represent stagnation, not progress.


I think you'd be surprised how few extra features Word in Office 365 has compared to WordStar 2000, considering the forty-year gap. And also how poorly designed, inelegant, and, yes, pointlessly bloated Word feels in comparison.

The biggest differences are screen size - full document vs a few lines of text - and support for colours and outlining. Also, PDFs, which weren't a thing in the early 80s.

But the core features in WS2000 are a lot more than a "rounding error" compared to Word.


Sniff, sniff. What's that I smell?

Gatekeeping bullshit? The scent is unmistakable.


>"Wirth’s law is an adage on computer performance which states that software is getting slower more rapidly than hardware is becoming faster.

https://en.wikipedia.org/wiki/Wirth%27s_law"

[...]

>"Niklaus Wirth, the designer of Pascal, wrote an article in 1995:

About 25 years ago, an interactive text editor could be designed with as little as 8,000 bytes of storage. (Modern program editors request 100 times that much!) An operating system had to manage with 8,000 bytes, and a compiler had to fit into 32 Kbytes, whereas their modern descendants require megabytes. Has all this inflated software become any faster? On the contrary. Were it not for a thousand times faster hardware, modern software would be utterly unusable.

Niklaus Wirth – A Plea for Lean Software"

[...]

>"Time pressure is probably the foremost reason behind the emergence of bulky software.

Niklaus Wirth – A Plea for Lean Software

And while that was true back in 1995, that is no longer the most important factor. We now have to deal with a much bigger problem: abstraction. Developers never built things from scratch, and that has never been a problem, but now they have also become lazy."

[...]

"The problem does not seem that big, but try to grasp what is happening here. In another tutorial that Nikola wrote, he built a simple todo-list. It works in your browser with HTML and Javascript. How many dependencies did he use? 13,000.

These numbers are insane, but this problem will only keep increasing. As new, very useful libraries keep being built, the number of dependencies per project will keep growing as well.

That means that the problem Niklaus was warning us about in 1995, only gets bigger over time."

Related: "The Law Of Leaky Abstractions" by Joel Spolsky: https://www.joelonsoftware.com/2002/11/11/the-law-of-leaky-a...


There are a lot of articles of this type that conclude that "we", the developers, need to fix this problem. But if you read the original article by Prof. Wirth, it is not up to "us".

It is the reality of the software industry. Time-to-market is the only rule, and it dictates how we write software: Agile, Scrum, don't reinvent, deploy early, fix it later, technical debt.

There are many of us who care about the quality of our work. But in performance reviews, the only thing that matters is how many features you shipped. You cannot demonstrate the quality of your work in your performance review, much less the fact that the software you wrote is going to be bug-free for the next 10 years.


Yeah, you will always keep writing crappy software while your hardware will also keep wiping your ass

I think this post has some validity to its main points, but I personally think it verges on "real programmers do X" territory and reeks of rose-tinted nostalgia.
