Hacker News

>'Safe' programming languages will improve security, presumably at the cost of usability through decreased performance. A bit like how a Ferrari is faster than a Volvo, but not as safe.

This is fundamentally untrue. Safety of the kind described in the article does not come at the cost of performance -- all type checks, invariant conditions, and general formal proofs of correctness are discharged at compile time. The produced code can be indistinguishable from a correctly written, efficient C program. You are allowed to play as much as you like with direct memory accesses and use all the dirty tricks you like, as long as you can prove that the resulting program is formally correct.
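The zero-cost claim can be illustrated with a small sketch (in Rust, used here purely as one example of a language with compile-time safety checks): iterating a slice by iterator lets the compiler prove every access is in bounds, so optimized builds can emit the same loop a hand-written C version would, with no per-element bounds check.

```rust
// Sketch: safety checks resolved at compile time cost nothing at run time.
// The iterator form lets the compiler prove every access is in bounds,
// so optimized builds need no runtime bounds checks in the loop body.
fn sum(buf: &[u32]) -> u32 {
    buf.iter().sum() // bounds proven statically
}

fn main() {
    assert_eq!(sum(&[1, 2, 3, 4]), 10);
    assert_eq!(sum(&[]), 0);
}
```

The safety here is established by the type system before the program runs; the generated machine code carries none of it.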

What could be argued is that the price you pay for all this is the difficulty of writing such programs. But definitely not their performance.




> Nope, you can very much make them safer. It might not make them the "safest" it can be, but it can totally make them safer.

This one might come down to conflicting definitions of "safe". The author seems to define safety purely in relation to the number of language features with which you can contrive to shoot yourself in the foot. You seem to also take into account the number of language features you can use to make it harder to shoot yourself in the foot.

I think there's some value in both definitions. The author's perspective captures a lot of what worries me about languages like C++. But it also fails to give C++ any credit for all the language improvements that have materially improved the overall quality of most C++ code over the past couple decades.


That's not enough to refute derekp's point, though. Let's admit that C is unsafe, and that some (not all) other languages are safer. But, first, safety is not the only virtue. And second, you don't get safety just by using a safe language.

To the first point: Safety is good. Really. For security-critical software, safety is very, very, important. It's still not the only thing that's important. Familiarity to the developer still matters. Tool support still matters.

For example, I've got nearly 30 years of experience with C/C++. I can avoid (most of) the sharp edges. Could I write code that somebody could still pwn? Probably. Could a static analyzer, say, save me? Maybe. But let's say I wrote the same code in Haskell. Could I write at least something that could DOS the user's machine? Easily. Would I have any idea whether I was doing so? No. Could a static analyzer save me?

Second, I remember when Microsoft added the ability for Outlook to execute attachments. That was going to be a gaping security hole, no matter what language it was written in. The security hole was in the spec, not in the implementing language. And if I understand derekp's point, you could have a safety hole in the library. For example, Java is safer than C at string handling... unless there was a flaw in Java's String library. Are you sure that there isn't one? How sure are you? Why are you that sure?

For that matter, did Haskell have an SSL package? Was that package anything besides a wrapper around a C library?

The higher level stuff you use, the more layers there are below you, and the greater the attack surface that you can't even see.

Now, you could argue that I'm more likely to write a security bug doing string handling by hand, than there is to be one in Java's String library. And I'd agree that it's probably so. But I need everything I use to be bulletproof, and an attacker only needs to find one flaw. The more I use, the less the odds are in my favor. (The more I write myself, also the less the odds are in my favor.) There is a place where writing it yourself, even in C, is the right answer, even for security critical code. But once you do, audit it heavily, analyze it with as many tools as you can, have whitehats take a serious run at it, and so on.


Your analogy suggests that "safer languages" are like crappier versions of C. I don't think that assertion is well supported.

> Just like any car can crash. However some cars are more dangerous than other cars, just as some programming languages are more likely to produce insecure code than other programming languages.

> and the alternatives are designed from the ground up with security in mind

The RMS Titanic was billed as one of the safest ships on the sea -- yet poorly implemented protocols and practices, negligent leadership, and disregard for best practices led to one of the most catastrophic maritime disasters in history.

Using the most "secure" programming language in the world, one can still design very insecure code. Conversely, using the most "insecure" programming language in the world, one can still design very secure code. This would boil down to the skill of the engineers, competence of leadership and adherence to best practices.


Sure, you can do dangerous stuff in any language, but it's much harder to write a secure C program than a secure Python or Haskell program.
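As a minimal illustration of that difference (using Rust as a stand-in for the "safe" side): the out-of-bounds access that silently corrupts memory in C becomes a loud, deterministic error in a memory-safe language.

```rust
fn main() {
    let buf = [0u8; 4];
    // In C, reading buf[7] would walk past the array: undefined behavior,
    // possibly silent corruption. In a memory-safe language the same
    // access either returns an explicit "out of bounds" value or panics.
    assert!(buf.get(7).is_none()); // checked access: out of bounds -> None
    assert!(buf.get(3).is_some()); // in bounds -> Some(&0)
}
```

Nothing stops you from writing an insecure program this way, but one large class of silent memory errors is simply off the table.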

There will always be a compromise between cost, performance, features, usability and security.

'Safe' programming languages will improve security, presumably at the cost of usability through decreased performance. A bit like how a Ferrari is faster than a Volvo, but not as safe.

Performance has been a major driver in the choices made so far, I'm sure that the heartbleed affair will move the needle towards the 'security' end of the spectrum but I doubt it will move enough to drop C as the main work horse of systems software coding.

Reducing the complexity of the protocols used would seem to me to be a better place to reduce the exposed attack surface. No matter what the language used if you make a system extremely complex bugs are going to be more numerous and due to the interactions between the various parts much harder to detect.


> many other compiled language will do with easier coding

At the expense of safety.


> If runtime safety is turned off, you get undefined behavior

You already know someone will teach their students to always have it off because it's "slower" or something.

Make something idiot-proof and the world invents a better idiot.

Another language that does safety like this incredibly well is Ada (Ada/SPARK), and I'm unsure why people aren't more hyped about it. So many people hype Rust or Zig or whatever new language, and yet there's an incredibly safe and expressive language that's used in high-reliability applications like missile guidance systems, that nobody seems to talk about.


> Safety is not a binary value

Exactly. Encoding logic in types doesn't have to fix all problems all the time. It's sufficient if the advantages outweigh the costs.
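A small sketch of what "encoding logic in types" can mean (Rust here, purely as illustration): a list type that cannot represent emptiness makes one class of runtime errors unrepresentable, without claiming to fix everything else.

```rust
// Sketch: a non-empty list. The empty case is unrepresentable, so
// `first` needs no runtime check, no Option, and no error path.
struct NonEmpty<T> {
    head: T,
    tail: Vec<T>,
}

impl<T> NonEmpty<T> {
    fn new(head: T, tail: Vec<T>) -> Self {
        NonEmpty { head, tail }
    }

    // Total function: emptiness can't occur, so this can't fail.
    fn first(&self) -> &T {
        &self.head
    }

    fn len(&self) -> usize {
        1 + self.tail.len()
    }
}

fn main() {
    let xs = NonEmpty::new(10, vec![20, 30]);
    assert_eq!(*xs.first(), 10);
    assert_eq!(xs.len(), 3);
}
```

This removes exactly one failure mode ("first element of an empty list") at compile time; every other bug remains possible, which is the point: the advantage only has to outweigh the cost.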


I'm not sure why you are being downvoted, as it is still an open question whether or not safer languages make it easier to write safer programs. So your comment may be a valid one.

But languages like C / C++ don't claim to be safe.

> Except there are other safer languages to choose from, like Pascal and Basic compilers.

I'm curious, why do you say "safer"? These are languages for microcontroller programming. The things you do there are bound to be "unsafe", like peeking and poking memory for memory mapped i/o and disabling/enabling interrupts.

Unless, of course, all the possible things (i/o, timers, interrupts, etc) are wrapped in some kind of "safe" api so you essentially don't have access to low level facilities any more. The Arduino programming environment is kinda like this but you can still cause bad things to happen and if all else fails, hang the device with an infinite loop.
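A sketch of what such a wrapped-i/o API can look like (Rust, with a fake in-memory "register" standing in for real memory-mapped hardware, since a hardcoded peripheral address wouldn't be runnable here): the volatile, pointer-level access is confined to one small wrapper, and callers only ever see the safe methods.

```rust
use std::ptr;

// Assumption: this models a single memory-mapped 32-bit register.
// In real firmware the pointer would be a fixed peripheral address.
struct Register(*mut u32);

impl Register {
    fn write(&self, val: u32) {
        // Safety: the caller constructing Register guarantees the
        // pointer is valid and properly aligned for the wrapper's lifetime.
        unsafe { ptr::write_volatile(self.0, val) }
    }

    fn read(&self) -> u32 {
        unsafe { ptr::read_volatile(self.0) }
    }
}

fn main() {
    let mut fake_reg: u32 = 0; // stand-in for real hardware
    let reg = Register(&mut fake_reg as *mut u32);
    reg.write(0b1010);
    assert_eq!(reg.read(), 0b1010);
}
```

The low-level facilities are still there; the "safety" claim is only that the dangerous part lives in one audited place rather than being scattered through application code.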

Is there any backing for such claims of "safety"?


I might be wrong but it seems that the new safer languages don't run as fast as C and create larger binaries. But that's OK if safety is valued more than speed and size.

That's the exact opposite of "you cannot make language X safer" -- but only if you define "safer" as "given enough time and resources, you could produce a proof that your program is correct".

But that's not a safe language; that's safe work. ANY program -- regardless of language -- can be made safe by such expensive means, which puts this squarely into reductio ad absurdum territory. You could argue that Brainfuck or assembler can be made safe via these means, but everyone would rightly ridicule you for it.


More generally, I don't understand this argument. Assuming you can trust the C compiler (big if, but at least some validated (large subsets of) C compilers exist; see CompCert), I don't get why this would be worse than generating machine code in a safe language.

So why is `unsafe` even in the language, then? It would seem to me that it simultaneously undermines safety guarantees, and it can't even ensure performance improvements.
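One common answer, sketched here in Rust: `unsafe` exists so that patterns the compiler cannot verify on its own (such as handing out two disjoint mutable views into one buffer) can still be implemented once, behind a safe interface whose invariants are checked up front. This simplified version of the standard library's `split_at_mut` pattern is an illustration, not the actual library implementation.

```rust
// Sketch: `unsafe` confined behind a safe, checked interface.
// Safe Rust can't express "two disjoint &mut borrows into one slice",
// but the invariant is easy to uphold manually, once, here.
fn split_first_rest(v: &mut [i32]) -> (&mut i32, &mut [i32]) {
    assert!(!v.is_empty()); // invariant checked before any unsafe code
    let len = v.len();
    let ptr = v.as_mut_ptr();
    unsafe {
        // Safety: the two regions are disjoint and within bounds.
        (&mut *ptr, std::slice::from_raw_parts_mut(ptr.add(1), len - 1))
    }
}

fn main() {
    let mut data = [1, 2, 3];
    let (first, rest) = split_first_rest(&mut data);
    *first = 10;
    rest[0] = 20;
    assert_eq!(data, [10, 20, 3]);
}
```

On this view, `unsafe` doesn't undermine the guarantees so much as fence off the places where they rest on a human-checked argument instead of a compiler-checked one.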

The author raises an important issue here: many people are led to believe that using C is inherently unsafe. That's not true, and many of the most secure systems in the world were written in C. The other direction also doesn't work: software written in languages like Java can be effectively unsafe.

Safe languages have restrictions like immutability, garbage collection, or "one mutator at a time" (to ensure temporal safety), outside of deliberately awkward escape hatches (typically marked unsafe). These tradeoffs are often workable, but some applications require no GC pauses, or placement initialization and non-allocating intrusive linked lists, which can make languages with ergonomic manual memory management a better choice for those use cases. "Safe languages are better than having to write correct code" is a thought-terminating cliché that dismisses the complex tradeoffs and limitations of safe languages.

> C and C++ code CAN be secure

Anything can be secure (and conversely anything can be insecure). The theoretical potential doesn't matter because real life is never the theoretical best case. What matters is the overall risk (is likelihood * severity < benefit?).

