
I think you slightly missed the point there. The video is of a bridge designed to span a highway that was to be expanded from 2x4 to 2x5 lanes + an emergency lane on both sides with minimal interruption to existing road and rail traffic.

The decision was made to construct the bridge off to the side of the road on a separate spot, then to create a special purpose roadway across the road to the location where the bridge would be placed. And finally, to move the bridge from the construction site to the final destination in one piece.

The whole thing was conceived and executed according to plan, including a < 24 hour closure of the road and one week of interruption on the rail line (it was not possible for many reasons to have the track installed prior to moving the bridge).

That is engineering. In the 'move fast and break stuff' world that would be 'move fast and kill people'.

If this were software we'd be looking at a multi-year software project with a few hundred programmers delivered on-time, within the budget and working flawlessly on the day of delivery.

I have yet to see such a project. But as you say, 'Amazing!', the fact that you make it seem like this is 'no big deal' is exactly what is so good about it: you fully expected it to work, didn't you?

The trick, apparently, is to make the complex and impressive stuff look boring. I really wish we could make software look that boring.




NASA does that regularly. They're seemingly the only people who can afford it.

If bridges had scope creep the way software does, every bridge would start out 1 lane each way and be 6 lanes double decker in the middle before all lanes fly in different directions towards different cities, much less the other side of the river.

I hear what you're saying about using tried-and-true methods. But nothing else has scope creep like software does, because basically nothing else is design-only. That's why software is so crazy. It's ALL design. There's no manufacturing at any point. You can manufacture infinite copies for almost free, i.e. compile and copy and install.

If you could go from bridge design in CAD to actual working bridge in meatspace in ~30 seconds and for $0.01 then yeah, there'd be millions of horrible, horrible bridges EVERYWHERE.

The fact that the design of the bridge takes a year or two and a few million, compared to the 2-5 (or 10) years and billions to actually manufacture it, means that you can design, redesign, and redesign again until you get a design that'll actually work, and it barely moves the needle on the total price tag.

But in software, if you redesign and it doubles the amount of time to complete the project, you just doubled the cost at least.
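The cost asymmetry in the two paragraphs above can be made concrete with rough numbers. This is a sketch with hypothetical round figures (the $3M design / $2B build split and the redesign counts are illustrative assumptions, not numbers from the comment):

```python
# Illustrates why redesign is nearly free for bridges but doubles the cost
# of software: a bridge is mostly manufacturing, software is all design.
# All dollar figures are hypothetical round numbers for illustration.

def total_cost(design: float, build: float, redesigns: int = 0) -> float:
    """Total cost if each redesign repeats the design phase once more."""
    return design * (1 + redesigns) + build

# Bridge: design is a tiny fraction of construction cost.
bridge = total_cost(design=3e6, build=2e9)
bridge_redone = total_cost(design=3e6, build=2e9, redesigns=2)
bridge_increase = bridge_redone / bridge - 1  # well under 1%

# Software: the project is essentially all design, so "build" is ~free.
software = total_cost(design=3e6, build=0)
software_redone = total_cost(design=3e6, build=0, redesigns=1)
software_increase = software_redone / software - 1  # doubled, i.e. +100%
```

Under these assumptions, two full redesigns add a fraction of a percent to the bridge's total cost, while a single redesign doubles the software project's cost, which is exactly the asymmetry being described.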

If it were possible to make software better, faster, cheaper by just imposing some discipline, why haven't dozens of companies done so and taken over the world?


> If you could go from bridge design in CAD to actual working bridge in meatspace in ~30 seconds and for $0.01 then yeah, there'd be millions of horrible, horrible bridges EVERYWHERE.

That's a fantastic and well-made point. It's akin to the cost of communications dropping over time. When moving words around the world was expensive, people tended to stick to the important stuff, but now that the cost has essentially dropped to zero we are drowning in irrelevant information.

> If it were possible to make software better, faster, cheaper by just imposing some discipline, why haven't dozens of companies done so and taken over the world?

Because it likely takes more than just 'some discipline', and because the market forces are working against you. After all, nobody even expects software to be reliable, so a competitor going to market with unreliable junk will eat your lunch if you slow down long enough to get it right, assuming you know what to build in the first place.


> assuming you know what to build in the first place

This is the heart of the problem. Nobody knows exactly what to build. Most software development is half initial coding, half bug fixing, and 90% requirements discovery.

Compiler writers actually have it pretty easy once the language is defined, which is a real, honest-to-god spec. Comparing how long it takes (and how much it costs) to write a compiler once the spec is done would be a pretty fair comparison to how long it takes to build a bridge once the design is finalized.

And even bridges can turn into total disasters. Look at the Bay Bridge replacement in San Francisco: https://en.wikipedia.org/wiki/San_Francisco%E2%80%93Oakland_...

Or the Big Dig in Boston: https://en.wikipedia.org/wiki/Big_Dig


> Compiler writers actually have it pretty easy once the language is defined, which is a real honest to god spec.

Compilers are typically developed in parallel with the spec, exactly because you don't know what you want to spec before you try it out.

Optimizers have it even worse - they do not have much of a spec beyond "make it fast, quickly", so all development is trying to find interesting places in existing code.


NASA is just the only organization doing the full accounting of all costs. The rest are like a cook who drops a food item on the floor, says "five second rule" and hopes for the best.

"NASA does that regularly. They're seemingly the only people who can afford it."

Not by far. Actually, the woman who co-invented software engineering on the Apollo program later made a tool for achieving similar reliability, if your specs are right, at around $10,000 a seat. That and Cleanroom were in production use in the 1980s making low-defect software. Many others showed up afterward with plenty of application to commercial products or significant OSS software. Here's a few.

An early one, Cleanroom, that was often as cheap as normal development due to reduced debugging:

http://infohost.nmt.edu/~al/cseet-paper.html

Margaret Hamilton of Apollo started a company later to make the one below to embody the principles they used for correctness on Apollo. The papers section is also interesting.

http://www.htius.com/Product/Product.htm

Lots of companies applied the B method for things like railway verification. Many successes that cost nowhere near what NASA spent.

http://www.methode-b.com/wp-content/uploads/sites/7/2012/08/...

Altran-Praxis does formal specs, refinement, and provably-correct (wrt specs) code with a 50% premium over normal development for their high-assurance stuff.

http://www.anthonyhall.org/c_by_c_secure_system.pdf

This mentions about three different methods being used in products or experimental projects by defense contractors:

https://www.nsa.gov/resources/everyone/digital-media-center/...

COGENT is doing seL4-style verification at a fraction of its cost, with a filesystem paper showing how practical it is:

https://ts.data61.csiro.au/projects/TS/cogent.pml

Some companies are straight-up using logic programming to execute precise specs of how software should work on startup budgets:

https://dtai.cs.kuleuven.be/CHR/files/Elston_SecuritEase.pdf


Thanks for such a thorough rebuttal! This is super useful.

I can't help but note that a lot of it was spun out of NASA though.


Welcome. Just one came from NASA that I'm aware of. Others are NSA, European, US firms, and Australian.

I've seen several of those kinds of projects. I have even seen it done with lightweight process. But (and there's always a but) you won't do it with fresh grads pulling all-nighters. You might do it with a handful of the remaining silverbacks (who are not just crispy-fried) left to do everything that needs doing.

It really does take your entire life to learn this craft. Now try selling that today.


"f this were software we'd be looking at a multi-year software project with a few hundred programmers delivered on-time, within the budget and working flawlessly on the day of delivery. I have yet to see such a project."

I agree with a lot of what you're saying in this thread except that you keep missing high-assurance engineering in statements like this. Altran-Praxis regularly does what you describe minus "several hundred developers", since they try to keep the systems simple enough not to need that. Galois can do this. Cleanroom Software Engineering teams did this often on the first try. There are companies doing business requirements in Prolog with standardized components for plumbing. The DO-178B/C and similar companies are delivering all kinds of software that's tested from specs to code to whether the object code really matches. Hamilton and Kestrel generate some systems straight from logical specs with correct-by-construction techniques, while others in CompSci and industry do that by hand. iMatix had their DSLs and generators to do a significant chunk of this without formal methods. One company even specialized in high-availability conversions and migrations like you describe in the bridge example, albeit I can't remember the name. Quite a few Ada projects also happened in the defense sector with a bunch of subcontracted components that integrated painlessly due to good specs and the language's features.

There are companies and groups straight-up engineering software that has few to no defects in production or maintenance activities. They actually have a significant number of customers, too. It's just that 99% of software isn't done that way. It has the problems you mention elsewhere in this thread. Let's be fair and give credit to those actually pulling it off, though. Also lends credibility to our claim more of that 99% could be as well.

EDIT Added specific links in another comment:

https://news.ycombinator.com/item?id=12801963


> I agree with a lot of what you're saying in this thread except that you keep missing high-assurance engineering in statements like this.

> It's just that 99% of software isn't done that way.

I'm aware of it. It's just that in my practice I do not run into companies that actually do this. The companies I look at typically exist between 6 months and 5 years, and have a team with an average age of 25 to 30, maybe one or two older people with some in-depth experience.

They will happily tell me that they write junk because they don't have time to do it right. Personally I think they don't have the time to do it wrong but what do I know.

Frustrating.

I've worked on reworking a fairly large project from a giant hairball to something a lot more solid over the course of two years, and spent the larger part of that time arguing about bad practices. The lessons learned were legion (for me at least) about why most software is crap.

If regular engineering were done this way everybody would be self-taught, would have about 30% of the picture, would not be willing to begin to take responsibility for their product and the majority of engineering projects would have fatal flaws in them.

We really can and really should do better than this if we are to take seriously the responsibility that has been given to us collectively.


> If this were software we'd be looking at a multi-year software project with a few hundred programmers delivered on-time, within the budget and working flawlessly on the day of delivery.

> I have yet to see such a project

Then you haven't worked on safety critical systems or old school embedded/firmware projects.

The alternative is 30 developers solving the same problem with software that has a bunch of critical bugs that get fixed in the first few months of release and a few hundred more minor flaws that get fixed gradually over time.

When it comes down to it the alternative is almost always preferred by the market and for good reason.

