
I think the parent author meant they trust the explicit and narrow boundaries the application code is permitted to run in.



Okay, this makes me rethink my previous statement. There are two kinds of trust here: A low-level programming language should not impose restrictions on the code that prevent the programmer from doing what needs to be done, even if it looks wrong. This is how I read the "Spirit of C" that you quoted. And certain applications would be impossible to write without it. But you need a development process to make sure that your system does exactly the right thing. So the quote should read "trust the process" rather than "trust the programmer".
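
To make the "looks wrong" part concrete, here is a minimal C sketch of the kind of code the Spirit of C is protecting (the register name, address, and bit mask are invented for the example; real values come from a chip's datasheet):

    #include <stdint.h>

    /* Hypothetical memory-mapped status register of a peripheral. */
    #define UART_STATUS_REG ((volatile uint32_t *)0x4000C018u)

    /* To a static analyzer this looks like dereferencing a magic
       integer, yet on bare metal it is the only way to talk to the
       hardware. */
    static int uart_tx_ready(void)
    {
        return (*UART_STATUS_REG & 0x1u) != 0;
    }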

> safe code is impossible

> Humans cannot write safe software. Ever. No matter what.

Formally proven code does what it says on the box? Do we have different definitions of safe perhaps?
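
As a toy illustration of what "does what it says on the box" means, here is a minimal Lean 4 sketch (the function and theorem are invented for the example): the stated property is checked mechanically, and the file does not compile unless the proof goes through.

    -- A function together with a machine-checked statement about it.
    def double (n : Nat) : Nat := n + n

    -- If this equation failed for even one pair (m, n), the proof
    -- would be rejected and the file would not compile.
    theorem double_add (m n : Nat) : double (m + n) = double m + double n := by
      simp only [double]
      omega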


Not really. TLDR: You cannot trust code you did not botch up yourself.

> I guess the question is where we draw the line as to what counts as reasonable.

Yes, exactly. And I think a lot of people, including me, would say that anything that can be done entirely in software is reasonable.

Hmm. Does this mean that anyone doing safety-critical embedded software should be compelled to formally verify every line of their code? I'll have to think about that. That might be going a bit too far given the present state of verification technology. On the other hand, it would be a great thing.


The justification from the article:

"To some readers, these pitfalls may seem obvious, but safety holes of this sort are remarkably common in practice. ... Proper use of this technique demands caution and care:

* All invariants must be made clear to maintainers of the trusted module...

* Every change to the trusted module must be carefully audited to ensure it does not somehow weaken the desired invariants.

* Discipline is needed to resist the temptation to add unsafe trapdoors that allow compromising the invariants if used incorrectly.

* Periodic refactoring may be needed to ensure the trusted surface area remains small...

In contrast, datatypes that are correct by construction suffer none of these problems."
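
This is not the article's own code, but a small C sketch of the distinction it is drawing (the type and field names are invented): the first type keeps its invariant only through maintainer discipline, while the second makes the invalid state unrepresentable.

    #include <stddef.h>

    /* Trusted-module style: the invariant "len >= 1" lives only in
       the heads of maintainers, and every new function that touches
       the struct can silently break it. */
    struct List {
        int    *items;
        size_t  len;      /* maintainers must remember: never 0 here */
    };

    /* Correct by construction: a value of this type without a first
       element simply cannot be built, so later changes need no audit
       to preserve "at least one element". */
    struct NonEmptyList {
        int         head;   /* the guaranteed first element */
        struct List rest;   /* possibly empty tail */
    };

    static int first(const struct NonEmptyList *xs)
    {
        return xs->head;    /* no length check, no failure path */
    }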


> Applications that check and validate things automatically, even if it's hard and expensive.

Then you still need to be competent enough to assess that this is the case. You can't judge whether code is doing something correctly and automatically if you don't even know what that thing is and can't do it yourself.


But I mean it's also technically correct. How could an application developer make any kind of guarantee in the face of arbitrary theming and tinkering by the end user?

Or it means constraining your domain and only accepting those programs about which the properties you care about can be proved. Or it means accepting the risk that your system might break.

And a program can generally read its own source code and make fancy ad hoc interpretations of it, sure. That's nowhere close to specified facilities that you are guaranteed to be able to use with the regular toolbox, with a fair amount of trust in the resulting outcome.

I think the argument here is that code that originates in the open is expected to be correct, or to be corrected.

Meanwhile, code written behind closed doors may rely on security by obscurity more than it should - especially in the cases of violation detection/spam/similar cases.

I don't know that I agree with this argument, but it's plausible.


The argument seems a bit myopic. The author is talking about errors and vulnerabilities found in libraries while ignoring those same factors in bespoke code. At best that's a "security through obscurity" scheme.

I should add, the thorough guidelines quoted are for writing secure and robust programs, and imply throughout that overcommit is disabled (which it must be, to implement programs robustly).
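
For context, the practical consequence is that with overcommit disabled, allocation failure is reported at the call site and a robust program has to handle it there; a minimal C sketch, with the error-handling policy invented for the example:

    #include <stdio.h>
    #include <stdlib.h>
    #include <string.h>

    /* With overcommit disabled, this NULL check is meaningful: the
       allocator reports failure here instead of the kernel killing
       the process later, on first touch of the memory. */
    static char *duplicate_line(const char *src)
    {
        size_t n = strlen(src) + 1;
        char *copy = malloc(n);
        if (copy == NULL) {
            /* Policy is application-specific: degrade, retry, or abort. */
            fprintf(stderr, "out of memory (%zu bytes)\n", n);
            return NULL;
        }
        memcpy(copy, src, n);
        return copy;
    }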

So this actually looks really neat if it works. However, hopefully in the spirit of constructive criticism, I would be very nervous about sticking this in big letters at the top of the introduction:

> Software can literally be perfect

because that is a wonderful way to get people to invest in really robust, excellent, high-quality software - and then trust it blindly, ignoring that even if everything goes well and the software itself is perfect, the verification has no bugs, and the model it perfectly implements actually maps the problem space correctly, it will still run on fallible hardware, interface with other software that is imperfect, and take direction and data from humans who can make mistakes.

Now, to the author's credit: further down, under "Do you think this language will make all software perfectly secure?" and "Is logically verifying code even useful if that code relies on possibly faulty software/hardware?", this is discussed. I think the writer genuinely appreciates the limits of what this can do, and I very much appreciate them explaining that in clear terms.

Just... maybe don't headline with a claim like that, when it has caveats and people are liable to read the claim and ignore the caveats?


Maybe the wording used was a little unnatural.

None of us writes code with absolutely no clue whether it's correct or not; we should all have a certain degree of confidence.

In the example given, I think it would be more realistic to ask devs to write the function (without testing) and then note how confident they are in its correctness. I have no problem with someone who is pretty sure they have bugs and can discover and fix them. But if someone is confident they have it right and they're dead wrong... now we have a problem.


>Any uncertainty can be quickly resolved by simply running some code.

Until you have to interact with a black box of someone else's code. You can only be certain that the data you sent that particular time works, not that all possibilities of valid data work.


> It’s always some external thing: “the algorithm”, as if that doesn’t just mean “the way we programmed it”.

This is true.

> Computing systems do not act on their own.

That doesn't mean that computer systems behave the way that we intend them to behave, or even that we really fully understand our own intent!

How much of your own code have you formally verified? (https://en.wikipedia.org/wiki/Formal_verification)

How much of your own code even has a precise enough purpose that the spec of what it's supposed to do is shorter than the length of the code? Such that you could even in theory formally verify that the implementation is in some sense "correct"?

For that matter.... how much of your code has a precise enough purpose that the spec of what it's supposed to do can be written down in formal language at all?

And actually... how much of your code has a precise enough purpose that the spec of what it's supposed to do can be written down in ENGLISH at all?

I don't think something like an "extremism filter" can ever be implemented in a bug-free way, because I don't think there's a precise enough definition of what "bug-free" would even mean.

The problem of people blaming bad outcomes on "the algorithm" is real, and organizations should take responsibility for misclassifications generated by code that they own and operate.

It's unhelpful to pretend like engineers and the organizations they work for have zero agency.

However, it's equally unhelpful to pretend like buggy behavior aligns with the intent of the engineer/organization.


That's not exactly correct. You still have to trust that the third-party code is extremely performant.

"so they shouldn't be dangerous to the market, as long as they're designed correctly". Even if we accept the author's position, why would we also make the assumption that this software is designed properly and bug free?

> If you find yourself inspecting code to decide if it is safe, you are fighting a losing battle... I'd love to hear of any success stories here though!

NaCl, seL4, and the original proof-carrying-code packet filter examples do inspect code, but at a very rigorous level.
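
As a rough sketch of what "inspecting code at a very rigorous level" can look like - loosely in the spirit of the packet-filter idea, not based on any of those projects' actual sources, with all names invented - every instruction of a program is checked against a safety policy before any of it runs:

    #include <stdbool.h>
    #include <stddef.h>

    /* Toy instruction set, invented for this sketch: each instruction
       loads one byte at a fixed offset into the packet. */
    struct Insn {
        size_t load_offset;
    };

    /* The rigorous inspection: every instruction is checked against
       the safety policy (no out-of-bounds loads) before any of them
       is executed. Accept or reject, no heuristics. */
    static bool verify(const struct Insn *prog, size_t prog_len,
                       size_t packet_len)
    {
        for (size_t i = 0; i < prog_len; i++) {
            if (prog[i].load_offset >= packet_len) {
                return false;   /* reject the whole program */
            }
        }
        return true;
    }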

