You possibly don't know much about civil engineering, so let me start by saying that every project in civil engineering stands on its own; it is the knowledge and the practices that get recycled from one project to the next, and on a higher level from one generation of civil engineers to the next.
So it is extremely unlikely that civil engineering will be done twice for an exactly identical project. But no civil engineer is going to make his own table of bolt tensile or shear strength just to scratch an itch.
Then we come to software. In software the same problems appear over and over again, and we have hundreds of possible solutions for some problems that are all almost (but not quite, of course, due to sloppy design and lack of standardization) interchangeable.
Think web frameworks. They are themselves an attempt to abstract away some elements that are common, but in the meantime there are almost as many web frameworks as there are things that they were trying to abstract away to begin with (and all of them fail on one or more of the details). Or programming languages, another area in which we have re-invented the wheel so many times that the number of programming languages (1000+) will at some point probably exceed the number of human languages (about 6500 in active use, but about 2000 of those have < 1000 speakers).
> You possibly don't know much about civil engineering
I don't, so maybe it's not the best example.
But take houses for example: it's not uncommon to see a dozen or more identical houses being built. After you've done 11, number 12 is not going to be much different.
But the foundations might be slightly different to account for terrain differences. And that will not cause the civil engineers involved to re-define what a bulldozer or backhoe can or should do and how it should work, nor will it change the process they will use to make that foundation.
In fact, if the houses really are identical, the house sitting on top of the foundation will likely be the same project, just like copies of a piece of software; in other words, from an engineering perspective the job, as far as the house is concerned, was done with #1 even if you want more than one house.
It's all about the tools, processes, materials, soil knowledge and so on that are being employed in order to solve the problem, not about what is actually built.
In the software equivalent there would be a new build process with associated terminology and tool obsolescence for almost every engineering project. Imagine workers coming to the job one morning to find that the tools they used the day before have now been declared obsolete en masse and that all their technology is slightly different, incompatible and untested. Then, a few weeks later at another job site, it would be the same story all over again. In the meantime, the engineers would be re-inventing engineering from first principles for every fifth job or so.
In other professions that would be called madness.
Software is a tool. What you do with it is an outcome.
A bulldozer is a tool. The house is the outcome.
So, we already invented a bulldozer (some piece of software) and it's trivial to duplicate it and reuse it all over the world. But when we need a new type of machine it's harder because, well, it's new.
In the software engineering world the goal is generally not to replicate something like, e.g., a house. It's to create a legitimately new tool. And since there are no real capital investments required (unlike building new tools in the physical world), and folks often value their free time at a weirdly low marginal rate, you end up with tool proliferation in a way not seen in the physical world, which makes it easy to think that we're building houses when in fact we're inventing the bulldozer.
More likely: to create a minor variation on an existing tool. There have been relatively few instances of legitimate new tool creation, but many hundreds (and probably thousands) of instances of re-creating slight variations on the same tools.
For instance: build tools, programming languages, libraries (95% duplicate functionality with some other library, and of course an inconsistent interface) and so on.
> And since there are no real capital investments required (unlike building new tools in the physical world)
Software requires enormous amounts of capital, in part due to all this re-invention. We even have a name for it: NIH (Not Invented Here), and we have a term for what happens to a software project that is a few years old (technical debt).
Our tools and our processes are ill-equipped to deal with the challenge, and what is supposed to be building houses more often than not ends up with people re-inventing the bulldozer.
Now there are times when that is the right thing to do, but most of the time it isn't.
Slashdot, Reddit, and HN are different tools. On the surface they seem similar, but so do a Phillips-head screwdriver and a Frearson, yet they really are different tools.
They're different end products, but they are not different tools.
Conceivably they could have been built with the same toolchain, but instead they were all built with totally different tools (Perl, Lisp/Python, Arc) respectively.
Because in engineering there really are such requirements. In software we rarely have a good reason to start yet another 'from scratch' new hip thing that is so much better than that old thing (where 'old' is likely less than 5 years old). Technology cycles are now so short that libraries and frameworks don't even bother pretending to have a life-span longer than the stuff that will be built upon them.
And those engineers with their 'hundreds or even thousands of kinds of screws' very much favor standardization. Lots of work goes into attempting to create families of compatible tools and consumables, and typically a change like that will result in decades of stability in the fastener industry. It is quite rare for something revolutionary to happen; most of the changes are incremental and logical.
If you attempted to come up with a completely new thread or screw head, there would have to be a reason better than 'I don't like the other screws' if you expect to gain any acceptance of your shiny new tech.
I'm trying to imagine a world where every other week the whole of engineering would be upended, everybody would have to totally re-train, and we'd discard everything we learned process-wise over the lifetime of the industry.
That would be the rough equivalent of what we do in software. Ok, maybe not every two weeks, but we might actually get there; life cycles are getting ever shorter.
So if you stepped back 5,000-10,000 years in engineering, you'd notice that civil engineers... basically did exactly that.
You're just being extremely unfair in comparing a field with ~10,000 years of development with one that has like... 100.
Everything you've called out as unreasonable is pretty much what every field, ever, has done when it first became a discipline, and your specific criticisms border on absurd.
Programming languages, for instance, are like materials: of course there are thousands; most of them are meant for research purposes, and the hundred or so in actual use each represent different tradeoffs in base construction strategy. No different from the dozens of kinds of wood and concrete used by civil engineers.
Similarly, engineering of civil projects has some notable massive overruns on budget and complete design failures in even recent history. Their projects are bespoke and often use novel ideas that don't work out in practice.
The main "difference" is that you're comparing a high-assurance subset of one side to the general practice of the other, which is naturally quite unfair.
Want to compare the fields in full? I bet I can find a dozen bad handyman repair jobs for every bad JS framework or library.
But we already have engineering as an example of how to do something like this right, there is no reason to go through another 5000 or 10000 years to figure all this out once more.
And the handyman and the engineer have very little in common.
The reason you can name those examples is that they are the exceptions. If you tried to do the same for software, the list would likely exceed HN's capacity to store it.
You could say the same about construction projects that are 30+% over budget. Kitchen renovation IMO actually has more in common with software development than home building. Homes are generally built for generic people, kitchens are renovated to meet specific needs.
Civil engineering had thousands of years of development when Galloping Gertie happened. Why would their methodology work in other, different engineering fields when it fairly routinely doesn't work in its own? I mean, if you look at things like project failure percentage (and cost!) over time spent developing the field, you probably have software winning.
Software is developing into an engineering profession at an astonishingly fast pace. It's just currently at the point where it's differentiating between tradesmen and engineers, and without that clear distinction, it's hard to compare apples to apples between fields.
> And the handyman and the engineer have very little in common.
Why not? You're lampooning software development for the fact that a lot of not-quite-professionals churn a lot of product onto the market with mixed results. The damage done to house integrity all over the world by questionable repairmen estimating the engineering impact of the changes they make is comparable.
My experience with high-assurance software is that it's similarly well constructed and engineered to high-assurance civil engineering, eg, major bridges. Failures happen, but are sporadic rather than regular.
"My experience with high-assurance software is that it's similarly well constructed and engineered to high-assurance civil engineering, eg, major bridges. Failures happen, but are sporadic rather than regular."
Where did you experience high-assurance software, and is there a report on how it was made? There are very few here who even know what that means, although I've been working to change that over the past year or so. I'm always collecting more from that tiny field for any lessons learned that could be shared.
I worked in control firmware and systems middleware/OSes for chemical processing equipment (and related control systems). None of the super fancy, huge plants; more single room sized processing pipelines for R&D uses. That said, the bigger chambers were like 0.2m^3 and operated at 2500PSI @ 100C, so you'd definitely know if one catastrophically failed.
We didn't necessarily develop a lot of process in-house, because our senior engineers had backgrounds with Boeing and/or NASA, which both had extensive ideas about how to design reliable systems.
If I were summarizing, there's only really two points that cover about 90% of what you need to do for high assurance software:
1. Realize almost all bugs are because of politics and economics, not technological or engineering faults per se. That is, we make choices about how we set up our culture and corporate system that incentivize people to create and hide bugs, while also failing to incentivize others to help fix those bugs. The first step in combating bugs must be to change the fundamental incentives that create them. In particular, a focus on the success or failure of a team as a whole. Development is a communal activity, and the entire team either succeeds or fails as a unit. Someone else committed a bug (and it got merged all the way to deployment!) that brought down the system? It's because you didn't provide the support, teamwork, and engineering help your peer needed to succeed. What can you do to help them succeed next time? After all, everyone is human and makes mistakes. What's important is that the people are interlocked in layers, where one person can catch another's mistake (without punishing the person who made it, because that just incentivizes them to hide mistakes!) and help fix it before the code reaches client machines. Successes may be individual, but failure is always the system's fault, never an individual's.
2. Almost all technical bugs, in any field from abstract ones like mathematics to physical ones like carpentry, are because of leaky abstractions and implicit assumptions. Be explicit. About every possible detail. And if you think you're being too explicit, you probably forgot 80% of the cases. Ever seen the average house blueprint? It puts software engineering design to shame, easily. If you want to build something that runs as reliably as a highly tuned, expertly designed engine, you can't start with anything less than as detailed a specification as they use. Be explicit.
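To make point 2 concrete, here's a minimal, hypothetical Python sketch (the names and the pressure figure are invented for illustration, loosely echoing the processing-chamber example) contrasting a function that rests on implicit assumptions with one that states and checks them up front:

```python
MAX_RATED_PSI = 2500  # hypothetical rated operating pressure of a chamber

# Implicit version: silently assumes the reading is numeric, in PSI,
# non-negative, and below the rated maximum. Each unstated assumption
# is a latent bug waiting for the input that violates it.
def headroom_implicit(reading):
    return MAX_RATED_PSI - reading

# Explicit version: every assumption is written down and checked, so a
# violated assumption fails loudly at the boundary instead of silently
# corrupting whatever logic runs downstream.
def headroom(reading_psi: float) -> float:
    """Remaining pressure headroom in PSI.

    Assumes: reading is numeric, in PSI, from a calibrated sensor,
    non-negative, and at or below the rated maximum.
    """
    if not isinstance(reading_psi, (int, float)):
        raise TypeError("reading must be numeric, in PSI")
    if reading_psi < 0:
        raise ValueError("negative pressure reading: suspect sensor fault")
    if reading_psi > MAX_RATED_PSI:
        raise ValueError("reading exceeds rated pressure: shut down")
    return MAX_RATED_PSI - reading_psi
```

The explicit version is longer and more tedious to write, which is part of why this kind of discipline costs more up front.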
Once you get to the point where you're working in genuine engineering teams rather than as individual engineers on a team, and you have explicit, detailed specifications, the technologies to actually convert that reliably into software that runs stably are pretty straightforward.
The reason we don't see this all the time is simply that it's expensive: the politics require a lot of redundancy of time spent (eg, code has to be read several times by several people); the explicitness requires a lot more upfront planning and effort in documentation, which requires more time invested per unit of actual coding; etc.
Of course, much like we have building codes for houses and larger structures, I think it's perfectly fair to expect minimal standards from software engineers. (Especially now that, eg, IoT botnets are DoSing things.)
Appreciate your write-up. It all sounds great. I'd say the tech to go from detailed specs to reliable or secure systems isn't necessarily straight-forward except in the easy cases. It can take some serious work by smart people, esp if it's formal verification or spotting unknowns. We can easily get 90% of the way there, though, without the hard stuff just by people giving a shit and doing stuff like you said.
Of course, there's Cleanroom Software Engineering, which does a lot of what you said without as much formal stuff as Praxis's method. The thing to remember in these conversations is to point out to the other party that what you recommend, and what these methods did, knocks out tons of debugging and maintenance costs. Since both phases cost labor, with maintenance fixes costing multiples of development, there were many cases where Cleanroom projects actually cost less, since they knocked out huge issues early on. You can't bank on that being normal, with it often being 20-30% or so more. It was around the same or less cost in many case studies, though, due to knocking out problems earlier in development.
Btw, I found a nice write-up on Cleanroom without tons of math or formality with case studies on college students if you're interested in references like that:
I like dropping them on people in regular development in these discussions when they say you can't engineer software, it takes ridiculous skill, or it would cost what NASA's team spends (rolls eyes). Eye opening experience for some.
> I'd say the tech to go from detailed specs to reliable or secure systems isn't necessarily straight-forward except in the easy cases. It can take some serious work by smart people, esp if it's formal verification or spotting unknowns.
I was being a little facetious. Of course the technical work is highly complex -- some of the brightest minds of our time work on DARPA-related projects on foundational mathematics related to automated theorem proving. (Looking at you, HoTT crew and related projects!) Why is DARPA funding automated theorem proving? Because they want to create high-assurance software for government infrastructure to counter the threat of AI-based cyberwarfare, and our current mathematical techniques don't pass muster.
However, we've made tremendous progress on that problem in the span of around 100 years. By contrast, the problem of how to not incentivize your workers to do a shitty job, causing you problems later on... has been with us, I think it's safe to say, for thousands of years. (And it too, has attracted some of the brightest minds over the years.)
So in a relative way, the technical aspects are "straight-forward" to address, compared to the underlying political problem. And the more technical methods only add some assurance that you've correctly implemented the spec as defined, not that you're doing the right thing. It's certainly good to, eg, check that you're using total functions or not misassigning memory, but it's not magic. So while they catch a lot of dangerous problems, they can also lead to false confidence that other classes of problems don't exist.
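As a tiny illustration of the "total functions" point (hypothetical Python, not from the original discussion): a partial function is undefined for some inputs of its declared type, while a total one returns a defined result for all of them. Totality eliminates one class of crash, but says nothing about whether the spec itself is right, which is exactly the false-confidence trap:

```python
# Partial: undefined (raises ZeroDivisionError) for the empty list, so any
# caller who forgets that case carries a latent crash.
def mean_partial(values):
    return sum(values) / len(values)

# Total over "list of numbers": every input, including [], maps to a
# defined result, so the empty case is handled by construction. Note that
# totality alone doesn't tell you whether returning None (rather than,
# say, 0.0 or an error) is what the specification actually wanted.
def mean_total(values):
    if not values:
        return None
    return sum(values) / len(values)
```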
> they say you can't engineer software, it takes ridiculous skill, or it would cost what NASA's team spends
It mostly just requires that we consistently act with discipline and professionalism, which are both tiring compared to not doing them, so by and large, people just don't bother. I know I don't when I can get away with not doing it (even if I know that's a bad habit).
"Imagine workers coming to the job one morning to find that the tools they used the day before have now en-masse been declared obsolete and all their technology is slightly different, incompatible and un-tested."
This is one reason software is eating the world. It's actually possible to build new tools to increase productivity and start using them in a very short period of time. Sure, most of the tools are crap or marginal improvements at best. But with so many new ones being developed constantly, occasionally one offering real benefits shows up and every developer can benefit almost immediately.
I also think you vastly overstate the amount of "NIH" and vastly understate the massive amount of actual reuse.
How many developers use Apple's UIKit to build apps? Yes, a small number reject some out of the box components, but I'm sure many more developers just use what Apple provides as the building blocks of their application. Very few people write their own networking stack or HTTP library. And as much as newer programming languages get massive hype, the same languages tend to dominate the Tiobe rankings year after year (or whatever popularity metric you prefer).
In other words, focusing on what gets posted to Hacker News probably doesn't reflect the experience of the median developer.
How is the number of web frameworks any different from the number of possible building facades? Even if you decide you want a glass facade, there are dozens of companies that can build that for you, and they've all re-invented essentially the same thing.
The same can be said for nearly everything else. Insulation systems, HVAC, lighting, interlocking brick, drainage...
But you would expect your house to stand upright, all the fixtures to work (and work in an expected way), the roof not to leak, the house to be solid, and so on. And if it is not, you'd expect the producer to put a warranty on their product.
Choices made are made for aesthetic or engineering reasons or cost or some other constraint, rarely because the tech is 'fancy', 'new' or has been invented by the crew building your house. Bricks will be laid on plumb walls, the roof will be strong enough to support the expected load and so on and if any of it fails you will be surprised.
Note that most of this is stuff that has already been engineered elsewhere and is combined in some novel way. For the most part all of the materials can be used together, are standardized as much as possible, and your average build crew will be able to move on to the next job without having to re-learn their whole knowledge base because what they did last week is now so '2015' that they are essentially useless unless they get with the times.
Not all software developers consider something from 2015 'old' and immediately jump to the 'fancy new' stuff that came out this week. See the discussion on "Happiness is a Boring Stack"[0] from yesterday.
There are crappy new building materials that don't work (which you often don't figure out until years later when you have to replace your roof or windows or siding or doors), and there are bad crews that do crappy jobs.
This is really no different from software, though I'd argue it's easier for crappy developers to continue working. This partly has to do with the fact that ugly-but-working code in your website has little impact on your typical business owner, while bricks that are crooked and unevenly spaced -- even though they're perfectly functional -- are highly visible to everyone.
The bricklayer that does a functional-but-ugly job gets a bad reputation immediately. The website developer that does the same thing isn't found out until much later when you have to modify the code.
> This is really no different from software, though I'd argue it's easier for crappy developers to continue working. This partly has to do with the fact that ugly-but-working code in your website has little impact on your typical business owner, while bricks that are crooked and unevenly spaced -- even though they're perfectly functional -- are highly visible to everyone.
But in the wonderful world of e-commerce those unevenly spaced bricks are roughly the equivalent of the gap a skilled hacker needs to enter your store or raid your db.
Also, it is probably important to make the distinction between engineers and contractors: engineers design stuff, contractors execute the designs.
In the software world we used to have these people called systems analysts; they would be roughly the equivalent of engineers, whereas the programmers were more comparable to bricklayers.
Then for a while we had 'analyst-programmers', and now the whole analyst bit has disappeared. This is a pity; I think it was a very valuable role, worthy of being an independent discipline, because there were people who were good at one of these aspects of the work but rarely really good at both.
In software, programmers are the engineers, and the compiler is the contractor. The software equivalent of brick laying was automated away long, long time ago.
A compiler is merely a power tool, not the contractor; anything it does you could do by hand, but slower. We've tried (several times) to create the software equivalent of bricklaying, aka 4th-generation languages, but to date they have all failed to attract mainstream attention, mostly because they simply don't work well:
> A compiler is merely a power tool, not the contractor; anything it does you could do by hand, but slower.
Only in a sense in which you could do all the things a contractor does yourself, but slower. In both cases, you'd need to first learn what the compiler/contractor is doing. If anything, the compiler is a powertool that automates away the contractor completely.
I dislike comparisons of software to construction and civil engineering anyway. The two seem nothing like software. They have nowhere near enough (literally) moving parts to reflect the way software works. The closest thing to a comparable discipline that comes to my mind would be designing and building jet engines.
My title is actually "Senior Programmer/Analyst". But, I think your observation is correct that it's not as common to have a title like mine anymore.
It's strictly due to the historical development of our field. Computer systems were limited to large organizations in the early days. VMS, Unix, Windows, and less expensive hardware allowed smaller, less well-funded organizations to jump into the game.
In internal IT departments, software developers have become the "jack-of-all-trades, master of none" people. We fill the analyst, developer, operations, and support roles with little to no training in any area. IMO, part of the reason software systems are so fragile nowadays is this "generalist" mentality. Mind you, there are plenty of other factors that are, IMO, mostly social.
All that said, there is a tendency to think of software development like assembly line manufacturing. There was a Dyson vacuum ad that I loved. The supposed owner of the company showed the vacuum in operation and then talked about the 239 (not sure of the exact number) versions that came before it. Once the company got it right on the 240th model, mass-assembling the vacuums was quick. Building a model, letting users work with it, integrating their feedback into a new model, this embodies software development. And it's expensive along at least one of two dimensions: money or time.
In all fairness, if someone throws a brick through your window you'll have police at the scene in relatively short order, and most buildings don't need to worry about their doors being proofed against C4. On the other hand, if your software project gets hacked or DDoS'd you have no recourse unless the attackers are incredibly sloppy or you're a mixture of wealthy and influential.
I do think software engineering as a profession should take notes from other engineering disciplines - but to compare them apples to apples is a touch unfair when our discipline is much newer, routinely attacked by bad actors (fer teh lulz, no less), and we're regularly updating our toolset to accommodate for changes in a much more rapidly-evolving landscape.