
"The whole discipline Rachel writes about is clearly intended for mature, scaled operations where outages..."

That is not true. RotB is describing software inefficiencies that we learnt to do without more than 20 years ago.

Because the Python hackers that built all these tools did not pay attention, when they built new tools they recreated the old problems.

Worik's 23.6918th rule of creativity: It is easier to write than read.




I don't think you guys are saying different things here.

The article in this case describes a bunch of common processes/optimisations/features that we have learnt are critical for running software effectively and efficiently. The author does this because the audience she writes for is, as the previous comment puts it, in "mature, scaled operations where outages..." etc.


> Critical thinking is required to come up with a solution to a unique, never solved before, problem.

1. Perhaps I'm being slightly too liberal with my use of the word rote. But by "rote programming", I mean the sort that requires no understanding of anything beyond CS1 concepts, the domain, and your toolset. The programs are very simple and sometimes hacky, but get the job done (e.g. "Move all those files over there and then watermark them using this snippet I found on SO and then add them to the database so they show up on the website"; see the sketch at the end of this comment).

2. Even for many professional software engineers, most of the critical thinking is often about the domain as opposed to the programming. Most people aren't (or shouldn't be) coming up with truly unique algorithms/approaches; rather, they are mapping new problems onto tried-and-true techniques, usually already implemented as libraries.

So it's possible to solve new problems (in the sense of your domain) with rote programming (of the sort described above), if all the critical thinking is about modeling the domain as opposed to the software.

Of course, every once in a while, the "tried and true" technique is a piece of CS that only an educated engineer has heard about/knows how to correctly use. But my point was that there are lots of problems you can solve without getting to this level.
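
To make that concrete, here's a minimal sketch of the kind of rote script described above (the paths, table name, and watermark text are hypothetical, and it assumes Pillow is installed):

    import shutil
    import sqlite3
    from pathlib import Path

    from PIL import Image, ImageDraw  # Pillow, for the "watermark" step

    SRC = Path("/tmp/incoming")    # hypothetical source directory
    DST = Path("/var/www/assets")  # hypothetical web-visible destination
    DB = "site.db"                 # hypothetical database the website reads from
                                   # (assumes an "images" table already exists)

    def watermark(path: Path) -> None:
        """Stamp a simple text watermark onto the image in place."""
        img = Image.open(path).convert("RGB")
        ImageDraw.Draw(img).text((10, 10), "example.com", fill="white")
        img.save(path)

    conn = sqlite3.connect(DB)
    for src in SRC.glob("*.jpg"):
        dst = DST / src.name
        shutil.move(str(src), str(dst))  # "move all those files over there"
        watermark(dst)                   # "watermark them"
        conn.execute("INSERT INTO images (filename) VALUES (?)", (dst.name,))
    conn.commit()
    conn.close()

Nothing in it needs more than CS1-level loops and function calls plus a bit of domain knowledge, which is exactly the point.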


> The core principle of computer-science is that we need to live with legacy, not abandon it.

I feel like the author misunderstands what a "core principle" is. This is, at most, a bitter lesson learned from several decades of experience. It's not a principle; at best it's a conclusion drawn from multiple stories of getting rid of legacy systems, and even as that it is highly dubious.


The main theme of the article was: should I start with stable old tech or go to unstable new tech right away?

First the author links to an article called "The Fallacies of Enterprise Computing", which I honestly didn't read, but it's about debunking the fallacy that "new technology is always better than old technology". Then he uses the case of python3 vs python2 to make the case that a lot of people worked successfully with python2 while python3 was unproven, and finishes with a non sequitur of a question: so why should stable old tech be recommended and used [in the face of unstable new technology]?

He argues in favour of old tech because: 1. new tech will be buggy, 2. you can't wait for the new version to be stable, 3. you will have to rewrite your project for a new version some day anyway, and 4. technical debt is inevitable and not that bad.

His conclusion is to develop with an old stable version and upgrade to a new version when the new version gets stable.

Except for the first point, the one about the bugs, I'm not sure that points 2, 3 and 4 even help his case. In fact, I could easily use them to argue in favour of starting a project with unstable new tech right away.

I might be a dumb guy and I might not have entirely understood the author's point, but I don't like this kind of article. To me it looks like an incoherent assemblage of catchphrases crafted with the intent of being provocative rather than having any real content. Props to the author for putting in the effort of writing an article, but that's all the positive I can say about it; the rest sounds like platitudes.


I found the example given very contrived and the whole article slightly disingenuous because of it.

Brooks was talking about building long-living applications serving complex business requirements. The article talks about a one off script with no ongoing life. It's not even really "software" at all in the terms Brooks was describing.

It's like someone describing absolute limits on fuel efficiency for vehicles, and then I get on a scooter, coast down a hill, jump off at the end so that it crashes into a wall and explodes, and use that example to claim they are wrong.


This essay badly misinterprets Brooks' original essay. Brooks was talking specifically about the problem of building new systems from scratch; he was speaking about programming productivity. This essay is talking about operational tasks. Of course operational tasks can be sped up with improvements to software and hardware; that's the whole point of writing new software and developing new hardware.

Furthermore, Brooks doesn't actually argue that there is no way to tackle essential complexity when writing new software. What he says is that there is no single technique that will speed up the process of writing code across the board; that is, there is no silver bullet. However, he does make allowance for techniques that allow us to break up complex systems into smaller systems which can then be solved independently. He specifically mentions object-oriented programming as a tool which makes it easier for programmers to write software as components, though CPAN would have been a better example. He just doesn't allow that OOP by itself would yield an order-of-magnitude improvement (something which I think experience has borne out).

The author of this essay points out that a couple of operational tasks which involve programming were easier for them than they would have been 34 years ago, because faster hardware now exists and because software that already takes care of a subset of the problem now exists when it didn't before. Partially, this is confirmation of Brooks' idea that breaking down problems is the way to go long term; but more critically, it's a failure on the author's part to realize that they're speaking of productivity in their role as the user of a computer system, whereas Brooks is speaking about productivity in designing new software products.

edit:

I think the fundamental root of the essay's problem is that the author seems to have missed what Brooks was referring to when he discussed accidental vs essential complexity. Brooks specifically pointed out that he was referencing Aristotle when he used those terms, and in this case essential doesn't mean unavoidable, but rather part of the essential character of the problem itself. A more modern terminology would be intrinsic vs extrinsic. Complexity is extrinsic to programming if it comes from something other than the nature of the problem itself. If my computer is too slow to allow me to write a program, that's probably not due to the nature of the program itself (except indirectly) and so represents a form of accidental complexity. Feature requirements, however, represent essential complexity. According to Brooks, you can manage essential complexity by dividing the problem up or avoiding it altogether, but it's not something that will go away simply with better tooling or a better methodology (unless that better tooling replaces software that you otherwise would have written).


It's obviously untrue for 'any two technologies'. And I don't know about you but I like to assume that if the author's 'real message' was 'don't rewrite working code from scratch' he'd have simply said that. Are you suggesting he put the effort into summarizing his particular experience while having some secret and entirely different message? What makes you believe that?

> What is at risk by not allowing developers to "learn from mistakes" is autonomy. Stripping developers of their autonomy is the primary cause of poor performance, not an inability to execute so-called "best practices"

I've seen a lot of the opposite. Yes, coding is a design practice, but I've had to clean up a lot of messes resulting from just plain bad design because nobody involved - generally ~25 year olds with very little experience out of school - knew that there were lessons from the past they could learn about what designs would and wouldn't work.

I agree with you that programming is an endeavor that benefits from experience, and wish that people would realize that means they can learn from the experience of others. Sure, intuition is involved too, but one common thing I've seen in shitty code I've had to salvage is that people often don't apply their intuition to "how could this code fail" or "how easy will this be to modify in the future"?

That said... taking a look at this book... I don't see much in the description or table of contents that would teach those folks whose work I'm decrying above much useful about writing good software. It has sections on reliability, project estimation, and development methodology as separate things - plus a lot of non-software-design stuff. But to me, the flow is different - estimation, reliability, and delivery will all suffer if you don't have the right fundamental design skills. You can't get much better at any of those without some deeper underlying changes.

It seems to have a lot of discussion of studies adjacent to software-related things, but I'm not sold on them saying much meaningful about software design.


> Mistake 1: Switch from DRY to premature optimization.

Fallacy: False Dichotomy and No True Scotsman.

"Things are either DRY or premature optimization and can't be both"

"No TRUE application of DRY would ever be a premature optimization"


> Who is the author arguing against?

This is addressed in the third paragraph:

> The problem with this saying is that many people wrongly interpret it as “early optimization is the root of all evil.” In fact, writing fast software from the start can be hugely beneficial.


Author may have left out the part where his co-workers said, "no one does it this way [because it's usually a bad approach using this technology, and your particular use-case isn't some special case where over-engineered and brittle inheritance is actually a good idea.]"

> In "Out of the Tar Pit", Mosely and Marks argue that today there is so much incidental complexity (from poorly designed or ill-fitting tools) slowing down our work, that removing it would result in those 10x gains Brooks argues can't be achieved in NSB.

Just to clarify for others, Brooks' paper does not say that there will not or cannot be 10x (order of magnitude) improvements in programming.

He made two principal statements in his paper:

1. That there will not be a 2x improvement in programming every 2 years (comparable to Moore's Law, under which transistor density roughly doubled every 18-24 months). That isn't to say there won't be 2x improvements in some 2-year periods, but there will not be consistent 2x improvements every 2 years (see the rough compounding arithmetic at the end of this comment).

2. That within a decade of 1987 (that is, a period ending in 1997) there would not be a 10x (order of magnitude) improvement in programming (reliability, productivity, etc.) from any single thing.

So trying to refute Brooks' 2nd assertion by looking at changes post-1997, which is what lots of people try to do, is an absurdity, as it ignores what he actually said and the context in which he said it.
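
For a sense of scale (my arithmetic, not Brooks'), a sustained Moore's-Law-style rate compounds to far more than the one-time 10x that the 2nd assertion rules out, which is why the two claims are worth keeping separate:

    # Rough compounding arithmetic (illustrative only, not from Brooks' paper):
    # a sustained 2x gain every 2 years would blow well past 10x over a decade.
    years = 10
    doubling_period_years = 2
    sustained_gain = 2 ** (years / doubling_period_years)
    print(f"2x every {doubling_period_years} years for {years} years = {sustained_gain:.0f}x")
    # -> 32x, versus the single-technique 10x-in-a-decade that Brooks said would not happen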


I think she inadvertently made a different point, which is that even experienced developers sometimes misunderstand the problem and make mistakes.

Or an even better argument: you don't need to actually understand the problem to fix it; often you accidentally fix the problem just by using a different approach.


Might be wrong, but I feel like you may have a vested interest and hence read the article as more of an attack than it really was.

To me it reads more like "I understood the problem space better during the rewrite, so I could solve things in more appropriate ways with the tools at hand."


I read the article.

It’s bad advice.

Here’s a famous paper on why:

http://www.sympoetic.net/Managing_Complexity/complexity_file...


"This article has a good premise, but poor execution."

Makes sense, but I take the view that the premise is flawed. It's one of those "oh, that seems so obvious" type premises that describes the skin of the onion but falls apart once you pull back the layers. That seems to happen a lot when dealing with complex systems. The fun part of this one is that they don't seem to realize they are dealing with a complex system. The illustrating pictures even show some cracks, and the author skims over them.


> Surprisingly, the root cause of bad software has less to do with specific engineering choices, and more to do with how development projects are managed. The worst software projects often proceed in a very particular way:

> The project owners start out wanting to build a specific solution and never explicitly identify the problem they are trying to solve. ...

At this point, it looks like the article will reveal specific techniques for problem identification. Instead, it wraps this nugget in a lasagna of other stuff (hiring good developers, software reuse, the value of iteration), without explicitly keeping the main idea in the spotlight at all times.

Take the first sentences in the section "Reusing Software Lets You Build Good Things Quickly":

> Software is easy to copy. At a mechanical level, lines of code can literally be copied and pasted onto another computer. ...

By the time the author has finished talking about open source and cloud computing, it's easy to have forgotten the promise the article seemed to make: teaching you how to identify the problem to be solved.

The section returns to this idea in the last paragraph, but by then it's too little too late:

> You cannot make technological progress if all your time is spent on rebuilding existing technology. Software engineering is about building automated systems, and one of the first things that gets automated away is routine software engineering work. The point is to understand what the right systems to reuse are, how to customise them to fit your unique requirements, and fixing novel problems discovered along the way.

I would re-write this section by starting with a sentence that clearly states the goal - something like:

"Paradoxically, identifying a software problem will require your team to write software. But the software you write early will be quite different than the software you put into production. Your first software iteration will be a guess, more or less, designed to elicit feedback from your target audience and will deliberately built in great haste. Later iterations will solve the real problem you uncover and will emphasize quality. Still, you cannot make technical progress, particularly at the crucial fact-gathering stage, if all your time is spent on rebuilding existing technology. Fortunately, there are two powerful sources of prefabricated software you can draw from: open source and cloud computing."

The remainder of the section would then give specific examples, and skip the weirdly simpleminded introductory talk.

More problematically, though, the article lacks an overview of the process the author will be teaching, and its absence makes the remaining discussion even harder to follow. I'll admit to guessing the author's intent for the section above.

Unfortunately, the entire article is structured so as to prevent the main message ("find the problem first") from getting through. As a result, the reader is left without any specific action to take today. S/he might feel good after having read the article, but won't be able to turn the author's clear experience with the topic into something that prevents more bad software from entering the world.


> you should never abstract away code before you see the third duplication" has little to do with the article, and I'm also really not sure it's good advice

Absolutes like that are rarely good advice.


> Design the process from the bottom up

Bottom-up IT system reimplementations fail much more often than incremental ones.

Bottom-up reimplementations of operating human systems that also involve big-bang IT implementations are even worse, compared to incremental evolutions that mix incremental automation with incremental process improvement.

And it usually only takes one high-profile failure of a big-bang implementation to derail a broad reimplementation process, whereas the occasional incremental setback on an incremental improvement process is rarely politically significant to the overall process.

So, no, I disagree with your recommendation that this should instead have been done with a more waterfall-style approach.

