Programmers are still humans at this point. Humans make mistakes. It is helpful when the software running on our computers prevents us from making mistakes rather than making it easier for us to make them.
Does anyone who has worked in software for any period of time not realize this? Computers are dumb, but they (generally) repeat the same tasks reliably _compared to humans_. Occasionally there are system failures, but most failures are caused by code or configuration changes, especially when they interact with inputs and systems in untested ways.
You’re missing the point. Of course there are (enormous, life-ruining) mistakes, but that’s not it.
With software, generally the only people with the means to demonstrate the software is flawed are the people in control of the software and associated data.
How about just engineering stuff to not have errors in the first place?
My toaster is a complex bit of engineering - it has thousands of parts which all work together to take power from the wall to make toast.
Yet it has no errors. It just does the job I ask it to do.
A computer on the other hand seems to have a lot of ways to fail, and does so nearly every day. I suspect everyone reading this comment has seen at least one error today. Can't we engineers make the software better so that these errors can't/don't happen?
People make mistakes. Every piece of software ever written has bugs: I don't think that means everyone's process is rotten. It just means they are human.
Hm, then is your point that many developers become careless while programming because they are in an environment with safeguards in place to avoid screw ups?
Not only developers, but I think most people using computers.
It feels like people have tunnel vision, focusing only on the parts of the screen they're accustomed to. The other parts are ignored.
They try something, fail, try the same thing again, fail again, and repeat until they either give up or decide to slow down and explore the corners of the screen (which often leads to them finding the error).
I wonder if this is somehow related to the "magical halo" of technology. First because they try the same thing expecting different results, second because they don't explore the unknown parts of the system (maybe for fear they'll make things worse?).
The article mentions another Coding Horror one at the bottom [0]. I found this tidbit from there interesting:
"In Code Complete, Steve McConnell cited two studies that proved it:
"A pair of studies performed [in 1973 and 1984] found that, of total errors reported, roughly 95% are caused by programmers, 2% by systems software (the compiler and the operating system), 2% by some other software, and 1% by the hardware. Systems software and development tools are used by many more people today than they were in the 1970s and 1980s, and so my best guess is that, today, an even higher percentage of errors are the programmers' fault."
I'd really like to learn how to track down hardware bugs, or mess with things like Christopher Domas does.
You are describing the exact same experience people who aren't programmers have with software: it usually works, and the problems are almost always with the user.
It also leads to the same horrible misjudgements as in the article mentioned above. If you don't know what to look for, you will come to erroneous convictions by ruling out the error cases you do know.
That means we need to take a very close look at whether "Real programmers" are actually anyone to emulate.
Programming culture has an almost football-field level of "no time for weakness" attitude.
If I see a possible failure mode of a system, and bring it up, someone's going to tell me to stop being a clicky click windows idiot and learn to be careful.
Trying to prevent human error in software isn't seen as a priority, so nobody does it. They are concerned with the most reliable code rather than the most reliable code-user-hardware-task-schedule-conditions system.
Programmers need to accept software fixes for human and hardware failures. It's a lot easier to add a confirmation dialog than it is to somehow become 100% reliable at not clicking the wrong thing.
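To illustrate, here's a minimal sketch in Python of the kind of guard I mean; the names delete_records and record.delete are made up for the example, not taken from any real codebase:

    def confirm(prompt):
        # Force the user to type 'yes' rather than just hitting Enter,
        # so a stray click or keystroke can't trigger the action.
        answer = input(prompt + " Type 'yes' to continue: ")
        return answer.strip().lower() == "yes"

    def delete_records(records):
        if not confirm("This will permanently delete %d records." % len(records)):
            print("Aborted.")
            return
        for record in records:
            record.delete()  # hypothetical destructive operation

The point isn't this particular dialog; it's that a few lines of friction in software are cheaper than expecting a human to never slip.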
The worst example, perhaps. I have the unfortunate honor of working on our Python projects from time to time, but rarely, and every time I do, something is broken. No other software is as unreliable. Only Ruby comes close, and probably for the same reason.
To be fair, not every programmer faces "Oh noes I will lose a cat picture" as their maximum possible downside when coding.
I worked in a town where programming errors could easily cause crashes. There was talk of that actually happening once, although on investigation it probably wasn't true. When I say "crashes", the hardware at issue would routinely weigh several tons and be moving at 60+ miles an hour.
I do not mean to cast any aspersions on the huge levels of responsibility and mental stress involved in reviewing purchasing contracts.