
I'd be a lot more sympathetic to this argument if I weren't constantly encountering bugs in systems where there wasn't (or shouldn't have been) any guesswork needed.

To (substantially) rephrase the point, you're saying "there's this additional source of bugs" without an argument that it's a significant enough source that it will stand out against the background of all the other sources of bugs.

There's also a strange flip side where abnormally competent people are doing the work, so I might even believe the bug-rate is likely lower than average.




Bug-finding is contributing.

True, this is where I rely on my experience. Such a high number of bugs, even if they're benign, shows a lack of care that, IMO, results in serious bugs elsewhere.

My argument does not advocate neglecting these issues in any way. It's simply that such bugs are the low-hanging fruit and are far less frequent than the much subtler issues that can have the same adverse effect.

If you're operating your software team as if bugs in production cost 50x-200x more than bugs caught during development, you're probably not operating it with a firm anchoring in reality, given how thin the actual evidence supporting that notion is.

Again, this depends very much on what type of software you're working on. I've worked on systems where you probably need at least one extra zero, because the cost of pulling some equipment out of service once deployed could be devastating, and it wasn't the kind of equipment where you just have a couple of hot spares available in case something fails. I expect the people working on systems where really serious harm could result from a bug, the kind that can't be reversed because someone got hurt or something got physically destroyed, probably have much better war stories than me.

That said, I agree that development processes today are often very different to the ones we mostly followed a few decades ago, and I agree that dubious supporting data and quasi-statistical charts should be challenged. I haven't come across that particular chart before that I recall, and assuming it's a faithful reflection of the original source material, I'm slightly surprised that a well-known researcher like Capers Jones was behind it. It's definitely not the only argument for the basic idea that fixing bugs later can be much more expensive.


I agree this is a key point. Software bug rates are not comparable to scurvy incidence rates, and developer productivity is not similar to the number of bricks a laborer can move in an hour.

I see it more as saying that more bugs can be found with OSS.

Well, the conclusion is not surprising. Bugs aren't found because many people "read" the source code; rather, many people with many different skills use it, so every problem that's hard for you will some day find a person with the specific domain knowledge to fix it much more easily.

Also, I'm a little surprised at the size of the dataset and the choices made, given that an open source project probably fixes an easier bug faster than a higher-priority one.


Yeah, and I'd also add that the total # of bugs in an application will always be greater than the total # of 'known' bugs. Tracking down and fixing the oddball bugs usually prevents a larger set of related issues from popping up later.

You're arguing with a straw man, then. Their argument was that only ever fixing bugs reactively is the issue, because you should be taking preventative measures so they occur less frequently.

It's not a value judgement, just an observation. Clearly there are successful bug trackers.

Especially since we see the government in need of debugging, but we don't see the cause of the bugs in the source code...

One possible source of tasks for this process is the bug backlog from open source projects.

We didn't have a bug-free codebase, but it was rare that someone found an actual bug. As I remember it, it happened maybe once or twice a month in an 8-person team.

At other places I've worked, I wouldn't raise an eyebrow if I found 3 bugs in a day, just trying to get other stuff done.


Generally, if there isn't a huge organization putting their reputation (and $$$) on the line, there are going to be bugs.

This argument applies to any hard problem, so it doesn't seem valid. Whether there's an important bug in a project depends on someone's skill and on how much time they've dedicated to it, and it's hard to know how skilled or dedicated someone is.


I think the concern is less about bugs and more about productivity.

I don't believe the emergence of a single bug tarnishes an entire codebase and labels it as poor quality. This situation seems like a lapse of judgement in the process, which they've fessed up to and provided a path to correcting.

Not everybody is convinced that it's "more productive" to produce more bugs.

Years back, I spent some time doing QA for an operating system, and I noticed that people had a strong tendency, when they noticed something wrong, to deny the possibility that the operating system had a bug and instead put all their energy into fixing test automation, finding workarounds, etc. And a lot of these were people whose sole purpose at work was finding and reporting bugs in the OS.

Yes, it's much better when processes limp on with bugs that are difficult to diagnose and often exploitable.
