> The problem with this saying is that many people wrongly interpret it as “early optimization is the root of all evil.” In fact, writing fast software from the start can be hugely beneficial.
We don't have to speculate; the paper is online[1], and he is very explicit about what he meant.
The paper is, in fact, an ode to optimisation and the necessity of optimisation, including, very specifically, micro-optimisation. The "root of all evil..." part is a "yes, but". You actually have to leave out much of the actual sentence in order to strip it of its meaning:
"We should forget about small efficiencies, say about 97% of the time, premature optimization is the root of all evil."
If that wasn't clear, the very next sentence is as follows:
"Yet we should not pass up our opportunities in that critical 3%"
A little later in the paper:
"The conventional wisdom [..] calls for ignoring efficiency in the small; but I believe this is simply an overreaction [..] In established engineering disciplines a 12% improvement, easily obtained, is never considered marginal; and I believe the same viewpoint should prevail in software engineering."
> I assume given the target audience for the advice, the audience is assumed to know that you can't ignore performance
That's why it's bad advice! Perhaps I should rephrase: it's not incorrect advice (to the right target audience), but it most definitely is inadvisable advice.
Who is the audience for the famous "premature optimisation" quote? Maybe originally it was "people who know you shouldn't ignore performance", but it definitely isn't now.
Easily misinterpreted advice is bad advice in my book.
> The metaphor we use for optimization is “low hanging fruit” which no orchard owner would ever do.
This is just over-extending the metaphor.
The term existed long before software was a thing, and refers simply to grabbing something that's easy. That's it.
The same goes for tree-shaking. The author does the same thing - over-extends the metaphor. Tree shaking simply means giving everything a jiggle and seeing what comes loose. It's easy to understand and shouldn't be read into any more than that.
> It's perfectly valid to trade off correctness for gains elsewhere.
Agreed, but no-one is disputing that, right? The problem is that very often saying correctness was given up in order to gain velocity and simplicity is simply a lie. Incorrectness can easily lead to slow development and tons of complexity.
It’s only fun when the trade-off is actually a trade-off.
> Performance is never a justification for anything if you haven’t measured it
A "justification" implies that there is a compromise being made in the name of performance. It's very reasonable, in the first iteration of a program, to choose to disregard performance concerns in favor of readability, which is exactly the context in which they framed it.
When you make the big initial decisions based on readability, that informs all the rest of your choices in working on that piece of code. If you skip a few steps ahead because e.g. "we're making some extra database calls here, so we should grab all the data we need the first time instead", you'll find it much more difficult to optimize your way back to the most readable code possible.
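The "extra database calls" example can be sketched with a toy in-memory "database" (everything here is hypothetical; QUERY_COUNT just counts round trips). The readable version issues one query per user, while the pre-optimized version batches everything up front and changes the shape of the calling code:

```python
# Hypothetical schema: two users, each with some order IDs.
QUERY_COUNT = 0
USERS = {1: "alice", 2: "bob"}
ORDERS = {1: [100, 101], 2: [102]}

def fetch_orders(user_id):
    """One query per user -- readable, but N+1 when called in a loop."""
    global QUERY_COUNT
    QUERY_COUNT += 1
    return ORDERS[user_id]

def fetch_all_orders(user_ids):
    """One batched query -- fewer round trips, but callers now juggle a mapping."""
    global QUERY_COUNT
    QUERY_COUNT += 1
    return {uid: ORDERS[uid] for uid in user_ids}

# Readable version: one query per user.
QUERY_COUNT = 0
per_user = [fetch_orders(uid) for uid in USERS]
assert QUERY_COUNT == 2

# Batched version: a single query, at the cost of restructured call sites.
QUERY_COUNT = 0
batched = fetch_all_orders(list(USERS))
assert QUERY_COUNT == 1
```

The point is not that batching is wrong, only that committing to it early bakes the batched shape into every caller before you know whether the round trips matter.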
Is it the only way to write software? Of course not. And it may make more sense for the systems and frameworks they use than for yours. But I for one admire their willingness to have a point of view and a principle when it comes to writing code. I don't think it's madness. And I don't think you're wrong either. I just don't think you would have been a good fit for their team. By putting their philosophy out there, they help make that clear. And likewise! I don't begrudge your philosophy, and I think it's often correct. But please don't accuse others of drama and clickbait for describing strong opinions.
> Is the author expected to convince anyone who doesn't already agree with him?
Not really. If I wanted to convince skeptics, I would have to make it much longer and more precise, and then it would be a very different kind of thing. Maybe I'll do that some other day.
The connections between the sections are fairly obscure, I admit. But I think many "universal truths of software development" are instances of more general problems to do with collective decision-making and local optimisation processes, which also produce the problems the last section mentions. I don't have a solution, but in the meantime I'd like to avoid making things worse.
>I'm not sure how you argue against what seems like pretty objective data.
I commend you for crystallizing the argument which I think is indeed representative of what the industry believes about itself and many others reject.
Virtually everything in computing, or the real world, for that matter, is solving some sort of optimization problem, explicitly or implicitly.
I'd paraphrase your comments as: Facebook solves an optimization problem, they make billions of dollars doing it, so it is objectively the optimum.
There are two objections that I think capture most of the opposition.
- Real optimizers can't promise a true optimum every time; they sometimes converge on a local optimum. This local optimum may not be within any particular distance from the global optimum either.
- "Fewer things you just scroll past" is not necessarily what the *user* wants to optimize. It is not in fact what I want to optimize. I'm not sure if I made it clear, but the *existence* of things that I don't actually read is important contextual information. So is the relative order, the timing, the source...
These points are really abstract and general - virtually anything can have these two issues - they're not optimizing the right thing, and they're not finding the global optimum.
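The first objection can be demonstrated in a few lines. Here is a plain gradient-descent optimizer (a stand-in for any hill-climbing process, on a made-up objective with two valleys of different depths); it converges to whichever valley is nearest, not the best one:

```python
def f(x):
    # Two minima: a shallow one near x = +1, a deeper one near x = -1.
    return (x * x - 1) ** 2 + 0.3 * x

def grad(x):
    # Derivative of f, used for the descent steps.
    return 4 * x * (x * x - 1) + 0.3

def descend(x, lr=0.05, steps=500):
    # Basic gradient descent: follow the slope downhill from x.
    for _ in range(steps):
        x -= lr * grad(x)
    return x

left = descend(-0.8)   # settles near the global minimum (around -1.04)
right = descend(0.8)   # settles in the local minimum (around +0.96)

# Both runs "converged", but one answer is strictly worse.
assert f(left) < f(right)
```

Swap in a feed-ranking objective for `f` and the same caveat applies: converging is not the same as finding the optimum.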
I think non technical people sometimes get angry because they intuitively feel that people claim objectivity without justification, but I think it's possible to provide a bloodless, abstract, logical, and specific critique more suitable for a software engineer's mentality, and that's what I tried to do above.
[One other general issue I thought of - optimizers can have the wrong constraints; everything in life normally has rules and limits on how you can pursue a goal. The ends don't necessarily justify the means. So this is another thing that is not covered by simply saying the optimizer produces the optimum]
> I guess my argument is about letting the perfect be the enemy of the good
I see this as more when you're creating something. Like, if you're writing software and it's good enough, then don't keep going when you have other pressing matters. Essentially this saying is about the Pareto principle.
But when you already have things invented and you're just selecting among them, this saying doesn't apply. We have 3 options: option 1 sucks, option 2 is meh, option 3 is pretty good. Which do you choose? It is pretty obvious that you should choose option 3. There are no other conditions on this problem. You don't have to spend more resources to get option 3. Option 3 is just objectively better, so why not pick it?
And if you think I'm exaggerating, I'll note that IRV is a 6% VSE improvement on plurality. STAR and RP are both 15% improvements on plurality. This is part of why people are frustrated. There are options that are objectively better, so why pick a worse one? And it isn't like they differ in "betterness" by small amounts; we're talking about a pretty big difference!
You're calling 6% "vastly better" so doesn't that make 15% "astronomically better?"
I read it as the opposite: he's calling out the educated programmers' arrogance for thinking that they have already figured things out because they know which algorithms are theoretically optimal, and that they don't need to bother looking at the actual use-case.
And he's not putting himself above falling into that trap either:
> So why are you, and I, still doing it wrong?
Also, this is an article that was published in an ACM magazine, under the header of "The Bike Shed" - which I would assume is a "provocative debate"-type of column. Keep both that target audience and intent behind the article in mind when evaluating the overall tone of it.
> People seem eager to find some magic, half-hearted shortcut to reaching certain goals. There really aren't any. The best advice is often the kind you don't want to hear.
Often yes, and in this case probably especially so, but I'd like to make a general comment here as I hear this sentiment repeated in many other situations: in general, this is not true. The entire technological progress of humanity is based on this simple assumption: there must be a better / faster / magic way to do this, even a half-hearted shortcut, so that we do not have to work much to get what we want. That's why we invented... pretty much everything. We can call it "looking for half-hearted shortcuts" or "optimization", but it's the same process.
> As part of that stack, we chose MongoDB + Mongoose ORM because the combination presented least overhead and allowed us to ship quality features quickly. As Sir Tony Hoare states, “premature optimization is the root of all evil,” and there was certainly no need for further optimization at the time.
I find it interesting that one sentence claims they made an optimal choice for feature delivery speed, and the next one that they rationalized it as "non optimal as prescribed by Hoare". Never mind the fact that, three sentences down from the original quote, Hoare said "It is often a mistake to make a priori judgments about what parts of a program are really critical, since the universal experience of programmers who have been using measurement tools has been that their intuitive guesses fail."
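"Using measurement tools" can be as cheap as a throwaway `timeit` comparison of two equivalent implementations. The functions below are illustrative, and the timings will vary by machine and interpreter; the point is that only the measurement, not intuition, tells you whether a difference matters:

```python
import timeit

def concat_loop(n):
    # Builds a string with repeated += in a loop.
    s = ""
    for i in range(n):
        s += str(i)
    return s

def concat_join(n):
    # Builds the same string with a single join.
    return "".join(str(i) for i in range(n))

# Same output either way -- so correctness is not what's at stake.
assert concat_loop(100) == concat_join(100)

# Measure both; whether the gap matters depends on your workload.
loop_t = timeit.timeit(lambda: concat_loop(1000), number=200)
join_t = timeit.timeit(lambda: concat_join(1000), number=200)
```

A measurement like this takes a minute to write and replaces an a priori judgment with data, which is exactly what the rest of the quote asks for.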
> Surprisingly, the root cause of bad software has less to do with specific engineering choices, and more to do with how development projects are managed. The worst software projects often proceed in a very particular way:
> The project owners start out wanting to build a specific solution and never explicitly identify the problem they are trying to solve. ...
At this point, it looks like the article will reveal specific techniques for problem identification. Instead, it wraps this nugget in a lasagna of other stuff (hiring good developers, software reuse, the value of iteration), without explicitly keeping the main idea in the spotlight at all times.
Take the first sentences in the section "Reusing Software Lets You Build Good Things Quickly":
> Software is easy to copy. At a mechanical level, lines of code can literally be copied and pasted onto another computer. ...
By the time the author has finished talking about open source and cloud computing, it's easy to have forgotten the promise the article seemed to make: teaching you how to identify the problem to be solved.
The section returns to this idea in the last paragraph, but by then it's too little too late:
> You cannot make technological progress if all your time is spent on rebuilding existing technology. Software engineering is about building automated systems, and one of the first things that gets automated away is routine software engineering work. The point is to understand what the right systems to reuse are, how to customise them to fit your unique requirements, and fixing novel problems discovered along the way.
I would re-write this section by starting with a sentence that clearly states the goal - something like:
"Paradoxically, identifying a software problem will require your team to write software. But the software you write early will be quite different than the software you put into production. Your first software iteration will be a guess, more or less, designed to elicit feedback from your target audience, and will be deliberately built in great haste. Later iterations will solve the real problem you uncover and will emphasize quality. Still, you cannot make technical progress, particularly at the crucial fact-gathering stage, if all your time is spent on rebuilding existing technology. Fortunately, there are two powerful sources of prefabricated software you can draw from: open source and cloud computing."
The remainder of the section would then give specific examples, and skip the weirdly simpleminded introductory talk.
More problematically, though, the article lacks an overview of the process the author will be teaching. Its absence makes the remaining discussion even harder to follow. I'll admit to guessing the author's intent for the section above.
Unfortunately, the entire article is structured so as to prevent the main message ("find the problem first") from getting through. As a result, the reader is left without any specific action to take today. S/he might feel good after having read the article, but won't be able to turn the author's clear experience with the topic into something that prevents more bad software from entering the world.
>This leads one of the major cognitive biases in software development where people speak of tradeoffs without establishing they are Pareto Optimal, and wrongly arriving at the conclusion “nothing more can be done without a huge effort”.
Well, part of the idea is to do the "pareto optimal" work not just implementation-wise, but also in checking whether the tradeoffs are pareto optimal (pareto-ception). Thus, avoid "analysis paralysis"
> This author starts with the implicit premise that change is bad and all conclusions are the result of that.
Where in the article do you draw this conclusion from?
Change isn't bad. The article doesn't say change is bad. Change introduces complexities. And unclear or ill-defined objectives (aka, requirements) make change more likely.
This is addressed in the third paragraph:
> The problem with this saying is that many people wrongly interpret it as “early optimization is the root of all evil.” In fact, writing fast software from the start can be hugely beneficial.