> If you justify soundness for a property because that property is at the root of 90% of bugs then you should still prefer another technique that reduces 90% of bugs more cheaply. It doesn't matter what technique is used to reduce that amount of bugs.

This is a false equivalence, as not all bugs are the same and so cannot be trivially equated in this fashion. Memory safety violations permit classes of bugs of significantly higher severity than other types of bugs. Your off-hand disclaimer of "the same or different ones of equal severity" doesn't mean anything, because you cannot predict the severity or scope of even trivial memory unsoundness, so your heuristic argument for preferring lower-cost methods simply doesn't work. A memory safety bug can range from something as simple as "a process crashed" in the best case, to "we lost control of all our servers worldwide and all of our customer data was compromised" in the worst.

So I would summarize my dispute with your position as follows:

1. Memory safety is NOT an underwhelming property. Arguably, all other properties of your program derive from memory safety.

2. If you have to use an unsafe language that permits violating memory safety, say for performance reasons, and you have any kind of attack surface, then ensuring memory safety is important. Probably more important than most other properties you otherwise think are important due to #1.

3. Heuristic approaches to memory safety do not necessarily entail a reduction in the severity of bugs and/or security vulnerabilities. An argument that you eliminated 80% of trivial memory unsoundness is simply not compelling if it leaves all of the subtle, more dangerous ones in place.

4. The long history of security vulnerabilities is a main driver of the interest in soundness among researchers, contra your claim that this obsession has no association with real-world needs.

I think I'll just leave it at that.



> This is a false equivalence, as not all bugs are the same and so cannot be trivially equated in this fashion.

Correct, and I tried hard not to make that equivalence.

> Memory safety violations permit classes of bugs of significantly higher severity than other types of bugs.

This is not known to be true after eliminating 95% of memory safety violations.

> Your off-hand disclaimer of "the same or different ones of equal severity" doesn't mean anything, because you cannot predict the severity or scope of even trivial memory unsoundness, so your heuristic argument for preferring lower-cost methods simply doesn't work.

Ah! Now that is 100% true, but it cuts both ways. Of course it's a problem that we can't know in advance which bugs will be severe or how probable they are, but saying we should shift resources into sound methods in the areas where they happen to work implicitly makes exactly such a guess. It's alright to acknowledge that measuring the effectiveness is hard, but that also means we can't presuppose the effectiveness of something just because it has an easy-to-understand mathematical property. I'm not arguing for or against sound methods; I'm saying that since the question is empirical, it must be answered by empirical means.

A pragmatic argument may be that you'd rather risk overspending on correctness than underspending, which essentially means: spend whatever you can on soundness whenever you can have it. The problem is that the world just doesn't work this way, because budgets are limited, and any work that is done to increase confidence in one property must, necessarily, come at the expense of other work that may also have value. There is simply no escape from measuring the cost/benefit of an economic activity that has a non-zero cost.

If I fly on a plane, I want to know that the company doing the avionics spent its assurance budget in the way that eliminated the most bugs, not that it spent it on ensuring memory safety (when I was working on avionics, we didn't use deductive methods, even though they were available at the time, precisely because their ROI was low -- and that's bad for correctness, where you want every dollar to do the most good).

> Memory safety is NOT an underwhelming property. Arguably, all other properties of your program derive from memory safety.

Except this is the starting point for most software. If your goal is to get to where most software already is -- I would very much characterise it as underwhelming.

> If you have to use an unsafe language that permits violating memory safety, say for performance reasons, and you have any kind of attack surface, then ensuring memory safety is important. Probably more important than most other properties you otherwise think are important due to #1.

Yes, but how much should you invest in getting from 0 safety to 90% vs how much from 90% to 100%?
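
To frame that question concretely, here's a toy marginal-return sketch in Python (the effort figures are entirely hypothetical, invented only to illustrate the shape of the trade-off):

    # Hypothetical figures: heuristics reach 90% memory-safety coverage
    # for 1 unit of assurance effort; a sound method reaches 100% for
    # 5 units in total. Neither number comes from the discussion above.
    heuristic_coverage, heuristic_cost = 0.90, 1.0
    sound_coverage, sound_cost = 1.00, 5.0

    # Coverage gained per unit of effort: the first 90% vs the last 10%.
    first_90 = heuristic_coverage / heuristic_cost
    last_10 = (sound_coverage - heuristic_coverage) / (sound_cost - heuristic_cost)
    print(first_90, last_10)  # roughly 0.9 vs 0.025 -- a 36x gap in marginal return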

> An argument that you eliminated 80% of trivial memory unsoundness is simply not compelling if it leaves all of the subtle, more dangerous ones in place.

I agree, but again -- it cuts both ways. If you invest a significant amount of effort in eliminating the last 5% of memory safety violations, you cannot simply assume that it's more effective than spending that effort elsewhere. All I'm saying is that these are empirical questions, and the fact that whether something is sound or not has a clear yes/no answer doesn't really help tackle the empirical problem.

> The long history of security vulnerabilities is a main driver of the interest in soundness among researchers, contra your claim that this obsession has no association with real-world needs.

This is clearly not true, because the focus on soundness has only decreased over the past 50 years and continues to decrease. In the '70s, the prevailing view was that soundness was the only way. Now there's more research into unsound methods.


> I'm not arguing for or against sound methods; I'm saying that since the question is empirical, it must be answered by empirical means.

And I say that the history of CVEs empirically shows that any important software with an attack surface must ensure memory safety, because no heuristic approaches can be relied upon.

> It's alright to acknowledge that measuring the effectiveness is hard, but that also means we can't presuppose the effectiveness of something just because it has an easy-to-understand mathematical property.

Except you can measure the effectiveness of memory safety in preventing vulnerabilities in domains with an exposed attack surface. Your argument just reduces to an overall focus on bug count, despite acknowledging 1) that most security vulnerabilities are memory-safety related, and 2) that memory safety bugs and other types of bugs can't be equated; and your only "escape hatch" is a supposition that a heuristic approach that "eliminates 95% of memory safety violations" probably doesn't leave anything serious behind. Sorry, 40+ years of CVEs does not make this claim reassuring.

> Except this is the starting point for most software. If your goal is to get to where most software already is -- I would very much characterise it as underwhelming.

It's not the starting point for software that has high-performance or low-power requirements or that runs on expensive hardware, which is the context in which I took issue with your statement in my first reply. That's why memory safety is not underwhelming in this context.

> Now there's more research into unsound methods.

There's more research into those methods because security costs are still a negative externality, and so a cost-reduction focus doesn't factor them in.

Anyway, I feel we're just circling here.


> And I say that the history of CVEs empirically shows that any important software with an attack surface must ensure memory safety, because no heuristic approaches can be relied upon.

That no heuristic approaches can be relied upon to eliminate all memory safety violations -- something that is obviously true -- does not mean that eliminating all of them is always worth it. If memory unsafety is the cause of 80% of security attacks, reducing it by 90% will make it a rather small cause compared to others, which means that any further reduction yields diminishing returns.
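
To make that arithmetic explicit, a quick Python sketch (it assumes, simplistically, that attacks fall in proportion to eliminated violations -- an assumption not stated above):

    # Memory safety causes 80% of attacks; a heuristic removes 90% of
    # the violations. What share of the remaining attacks is it?
    memory_share, other_share = 0.80, 0.20
    residual = memory_share * (1 - 0.90)        # 0.08 of the original attack volume
    print(residual / (residual + other_share))  # ~0.29: now smaller than the rest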

> Except you can measure the effectiveness of memory safety in preventing vulnerabilities in domains with an exposed attack surface. ... and your only "escape hatch" is a supposition that a heuristic approach that "eliminates 95% of memory safety violations" probably doesn't leave anything serious behind. Sorry, 40+ years of CVEs does not make this claim reassuring.

Sorry, this doesn't make any logical sense to me. If X is important because it is a cause of a high number of bugs, then reducing the number of bugs caused by X necessarily reduces its remaining importance. You cannot at once claim that something is important because it's the cause of many bugs and then claim that what's important isn't reducing the number of bugs. The number of bugs can't be both relevant and irrelevant.

> Anyway, I feel we're just circling here.

Yep.


> Sorry, this doesn't make any logical sense to me. If X is important because it is a cause of a high number of bugs

It doesn't make sense to you because you keep focusing on the number of bugs, and I keep talking about bug severity. As I've already explained, memory safety bugs are worse than other bugs. The number of bugs is generally not as important as severity, and past some threshold the count doesn't typically affect how useful software is. Severity, though, always matters: e.g., across the set of programs with one bug, the subset where that bug leads to memory unsoundness will be considerably worse than the rest.


> It doesn't make sense to you because you keep focusing on the number of bugs, and I keep talking about bug severity

No, I'm talking about the number of bugs weighted by severity.

> As I've already explained, memory safety bugs are worse than other bugs.

Perhaps, but first, the post doesn't talk about memory safety but about deeper properties -- that's the more expensive kind of proof -- and second, since we've started talking about memory safety (which I only mentioned in passing, as it's completely tangential to this subject), it is not clear just how much worse memory safety bugs are. A $5 note is, indeed, worth much more than a $1 note, but you still wouldn't pay $50 for it.

Obviously, these things are hard to quantify precisely, but it's important to price things at least somewhat reasonably. At the end of the day, a memory safety violation results in some functional bugs/security vulnerabilities -- which form the actual loss in value -- and is worth their total but not more.

When MS said that memory safety violations cause 70% of security vulnerabilities, they meant that the total worth -- as far as security goes -- of memory safety is 70%, which is the same as any other 70% regardless of cause; i.e. that's the value after factoring in the "impact multiplier", not before.

For example, you can have 10 memory safety bugs that cause 70 severe vulnerabilities and 30 other bugs causing 30 more severe vulnerabilities. Each of the first ten is 7 times worse than each of the other 30, yet eliminating only 8 of the first ten and half of the other 30 (8×7 + 15 = 71) is just as valuable as eliminating all of the first ten and only one of the remaining 30 (10×7 + 1 = 71).
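
Tallying the two strategies by severity-weighted value (a small Python sketch, using only the hypothetical numbers above):

    # Severity weight per bug: 70 vulns / 10 bugs = 7, vs 30 / 30 = 1.
    ms_weight, other_weight = 7, 1
    mixed = 8 * ms_weight + 15 * other_weight   # 8 memory bugs + half the rest
    all_ms = 10 * ms_weight + 1 * other_weight  # all memory bugs + one other
    print(mixed, all_ms)  # 71 71 -- the two strategies are equally valuable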

