> If runtime safety is turned off, you get undefined behavior

You already know someone will teach their students to always have it off because it's "slower" or something.
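
To make the quoted trade-off concrete, here's a minimal Rust sketch of the same idea (the quoted language differs, but the checked/unchecked split is analogous): the safe path fails loudly and deterministically, the unchecked path is undefined behavior on a bad index.

```rust
fn checked(xs: &[i32], i: usize) -> i32 {
    xs[i] // bounds-checked: an out-of-range i panics deterministically
}

fn unchecked(xs: &[i32], i: usize) -> i32 {
    // No bounds check: if i is out of range this is undefined
    // behavior, so the caller must uphold the invariant.
    unsafe { *xs.get_unchecked(i) }
}

fn main() {
    let xs = [1, 2, 3];
    println!("{}", checked(&xs, 1));   // fine
    println!("{}", unchecked(&xs, 1)); // fine, but only because 1 < xs.len()
    // checked(&xs, 9)   -> panic: index out of bounds
    // unchecked(&xs, 9) -> undefined behavior, anything can happen
}
```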

Make something idiot-proof and the world invents a better idiot.

Another language that does safety like this incredibly well is Ada (Ada/SPARK), and I'm unsure why people aren't more hyped about it. So many people hype Rust or Zig or whatever new language, and yet there's an incredibly safe and expressive language that's used in high-reliability applications like missile guidance systems, that nobody seems to talk about.




> strikes the right balance for me between productivity and safety.

Would be curious to hear what "safety" you're referring to. Is it just that it has a basic static type system (which is constantly being undermined by interface{} and reflection), or is there more to it?

Genuinely curious.


> Except there are other safer languages to choose from, like Pascal and Basic compilers.

I'm curious, why do you say "safer"? These are languages for microcontroller programming. The things you do there are bound to be "unsafe", like peeking and poking memory for memory-mapped I/O and disabling/enabling interrupts.

Unless, of course, all the possible things (i/o, timers, interrupts, etc) are wrapped in some kind of "safe" api so you essentially don't have access to low level facilities any more. The Arduino programming environment is kinda like this but you can still cause bad things to happen and if all else fails, hang the device with an infinite loop.
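
For reference, the kind of peeking and poking you describe looks like this in Rust; the operation stays "unsafe", but it's fenced off explicitly rather than being the ambient default. (The register address and pin layout here are made up for illustration, not from any real chip.)

```rust
// Hypothetical memory-mapped GPIO output register; on real hardware
// this address would come from the chip's datasheet.
const GPIO_OUT: *mut u32 = 0x4000_0000 as *mut u32;

fn set_pin_high(pin: u32) {
    // Volatile so the compiler can't elide or reorder the store;
    // unsafe because nothing checks that the address is valid.
    unsafe {
        let current = core::ptr::read_volatile(GPIO_OUT);
        core::ptr::write_volatile(GPIO_OUT, current | (1 << pin));
    }
}
```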

Is there any backing for such claims of "safety"?


>It is a great language but there is very little you can do safely.

Even extremely performance-oriented and low level projects such as the standard library or the Redox kernel are less than 5% unsafe code. There's lots you can do safely.
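
And the typical shape of that small unsafe fraction is an unsafe core buried inside a safe API, so callers never touch it directly. A minimal sketch of the pattern:

```rust
pub struct Buffer {
    data: Vec<u8>,
}

impl Buffer {
    /// Safe wrapper: the invariant is checked once, here, so the
    /// unsafe read below can never be reached with a bad index.
    pub fn get(&self, i: usize) -> Option<u8> {
        if i < self.data.len() {
            // SAFETY: i < self.data.len() was just checked above.
            Some(unsafe { *self.data.get_unchecked(i) })
        } else {
            None
        }
    }
}
```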


>'Safe' programming languages will improve security, presumably at the cost of usability through decreased performance. A bit like how a Ferrari is faster than a Volvo, but not as safe.

This is fundamentally untrue. Safeness of the kind described in the article does not come at the cost of performance -- all type checks, invariant conditions and general formal proofs of correctness are determined at compile time. The produced code is meant to be indistinguishable from a correctly written, efficient C program. You are allowed to play as much as you like with direct memory accesses and use all the dirty tricks you like as long as you can prove that the resulting program is formally correct.
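
Rust isn't SPARK, but even there you can get a small taste of "the check happens at compile time, so the binary pays nothing" (the constant and invariant below are invented for illustration):

```rust
const BUF_LEN: usize = 64;

// Evaluated entirely at compile time: the build fails if the
// invariant is violated, and no check exists in the binary.
const _: () = assert!(BUF_LEN.is_power_of_two());
```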

What could be argued is that the price you pay for all this is the difficulty of writing such programs. But definitely not their performance.


> Agreed, the novelty is doing it in the core language (unsafety is opt-in, not the other way around) and without sacrificing performance.

Unsafe code blocks as opt-in in systems languages first appeared in 1961, and have been followed by multiple variations since; they are far from a novelty by now.


> I think there's a clear delineation of whether something can be expected to crash or not based on the opt-in assumptions of the language you choose.

I can say exactly the same thing about StringBuffer vs StringBuilder, with the obvious substitution for the word "language".

The point of kodablah's post is that, if your stance is that safety should always be chosen over other practical concerns, then a natural conclusion is that you should by default be opposed to dynamic languages, as they trade type safety for programmer comfort (for lack of a better term).

EDIT: I also want to point out that "crashing" vs throwing an error is not a distinction worth making for this conversation. If your program stops executing for any reason that could be prevented by a type-checking system, then it seems reasonable to call a type-checking system a safety feature.
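
As one concrete instance of that point, a null/undefined crash in a dynamic language becomes a compile error in a typed one. Rust is used here purely for illustration, with a made-up lookup function:

```rust
fn find_user(id: u32) -> Option<&'static str> {
    if id == 1 { Some("alice") } else { None }
}

fn main() {
    let user = find_user(42);
    // println!("{}", user.len()); // compile error: Option has no `len`
    // The type system forces the "not found" case to be handled:
    match user {
        Some(name) => println!("found {name}"),
        None => println!("no such user"),
    }
}
```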


There was nothing interesting in what he was trying to say.

Yes, no programming language perfectly eliminates all classes of unsafety. But that's no reason to let the perfect be the enemy of the good! "The issue with safety is..." is no issue at all. Being safe in a bunch of problem domains (Rust) is still strictly better than being safe in very few if any of them (C).


> Nope, you can very much make them safer. It might not make them the "safest" it can be, but it can totally make them safer.

This one might come down to conflicting definitions of "safe". The author seems to define safety purely in relation to the number of language features with which you can contrive to shoot yourself in the foot. You seem to also take into account the number of language features you can use to make it harder to shoot yourself in the foot.

I think there's some value in both definitions. The author's perspective captures a lot of what worries me about languages like C++. But it also fails to give C++ any credit for all the language improvements that have materially improved the overall quality of most C++ code over the past couple decades.


> I know the features of the language help ensure "safe" development,

Safe development? What does that mean?


> Safety is not a binary value

Exactly. Encoding logic in types doesn't have to fix all problems all the time. It's sufficient if the advantages outweigh the costs.
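
A small example of what "encoding logic in types" buys: a validated newtype whose constructor is the only way in, so downstream code can't even express the invalid state. (The names here are illustrative, not from any particular library.)

```rust
pub struct NonEmpty(String);

impl NonEmpty {
    pub fn new(s: String) -> Option<NonEmpty> {
        if s.is_empty() { None } else { Some(NonEmpty(s)) }
    }

    // Every NonEmpty was validated at construction, so this
    // method needs no emptiness check and cannot fail.
    pub fn first_char(&self) -> char {
        self.0.chars().next().unwrap()
    }
}
```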


>No, you can do that, and can do so safely (if you implemented the unsafe parts correctly, at least).

That is equally true of C++; you're completely safe as long as you don't make any mistakes.


This is beyond stupid.

There is no such thing as "safe language".

The author has clearly never seen what real safety-critical code looks like.

When safety/robustness really is critical, there are ways to achieve it with high confidence, and it can be done in any language as long as the team doing it is qualified and applies stringent rules.

Of course this is time consuming and costly but luckily we've known how to do it for decades.

The kind of rules I am talking about are more stringent than most people who have never worked in those fields realize.

In the most extreme case I've seen (aerospace code) the rules were: no dynamic memory allocation, no loops, no compiler. Yeah, pure hand-rolled assembly. Not for speed, for safety and predictability.


> many other compiled language will do with easier coding

At the expense of safety.


> Default safe, not fast, wherever possible.

That is not the mantra of C. C is designed to be fast and almost all decisions will err towards speed rather than safety.

If you want a safety first language, check out Rust.


"But I disagree that it's the responsibility of the language to keep potentially dangerous tools out of the hands of developers."

That is pretty much entirely the point of higher-level programming languages.

Like preventing you from allocating and freeing memory on your own, because you might screw it up.

Or removing pointer arithmetic.

Or reducing the scope of mutability.

Or preventing access to "private" object variables.

Many programming language features are basically guards to make it less likely you cut your hand off.
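
Several of those guards show up together in, say, Rust; a small sketch (the type is invented for illustration):

```rust
pub struct Account {
    balance: i64, // private: no direct outside access
}

impl Account {
    pub fn balance(&self) -> i64 {
        self.balance
    }

    pub fn deposit(&mut self, amount: i64) {
        self.balance += amount;
    }
}

fn main() {
    let mut acct = Account { balance: 0 }; // allowed here: same module
    acct.deposit(100);

    let snapshot = acct.balance();
    // snapshot += 1;             // error: bindings are immutable by default
    // acct.balance = 1_000_000;  // error outside this module: field is private
    println!("{snapshot}");
}
```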


> not depriving users of control when they know better than the type checker/lifetime analysis.

You're never deprived of control though, that's what unsafe is for, and it provides roughly the same level of safety as C or Zig would.
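
The canonical illustration (essentially the standard library's split_at_mut, as sketched in the Rust book): the borrow checker rejects handing out two mutable views into one slice, but when you know the ranges don't overlap, unsafe lets you assert that, and the wrapper stays safe to call.

```rust
/// Two non-overlapping mutable halves of one slice. Safe Rust's
/// borrow checker can't see that the halves are disjoint, so the
/// disjointness is asserted manually in the unsafe block.
fn split_halves(xs: &mut [i32]) -> (&mut [i32], &mut [i32]) {
    let mid = xs.len() / 2;
    let len = xs.len();
    let ptr = xs.as_mut_ptr();
    unsafe {
        (
            std::slice::from_raw_parts_mut(ptr, mid),
            std::slice::from_raw_parts_mut(ptr.add(mid), len - mid),
        )
    }
}
```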


> That's what theorem provers and model driven development are for.

That's building a safe programming language on top of an unsafe one. At that point you might as well just use a safe programming language.

That won't fix logic errors, but the vast majority of security issues are safety problems, especially in this kind of codebase.


> An unreliable programming language generating unreliable programs constitutes a far greater risk to our environment and to our society than unsafe cars, toxic pesticides, or accidents at nuclear power stations. Be vigilant to reduce that risk, not to increase it. – C.A.R Hoare

No. There is no programming language that prevents the programmer from writing bogus code. Blaming software instability on the programming language would be like blaming unsafe building designs on the architect's drafting tools. Yes, shitty drafting tools can make certain kinds of mistakes easier, but the task of having to design something with careful thought and good engineering principles does not ever go away, no matter what kind of compass and ruler you're using.

Also, unsafe software IS dangerous, but putting it next to those immediately life-threatening things actually undermines the message by the contrast.


"Your kid (your data) attends the same school as a bully (your program). One day, the bully threatens to punch your kid in the face (raises an exception). “But it's less wrong because he never punched your kid! He only threatened it!”"

You're going out on a limb here. I'm not even going to counter that example, as it's rigged to prevent a counter. Instead, I'll point out that these unsafe things don't happen in a vacuum: there's something in the language that sets the risk in motion, and then there's something else that determines what happens. Compile-time safety stops it from being set in motion. Run-time safety makes things set in motion meaningless, as they'll just be exceptions or alerts for admins.

Back to your analogy, it would be more like a prison learning environment where people learned in cells with bulletproof glass while one shouted he was "gonna cut someone" for not sharing answers. He can try every unsafe thing he wants but the glass makes it meaningless. The prisoner that's being targeted can literally just pretend all that unsafety doesn't even exist with no effect.

Ideally, we have prevention and detection. The reason for prevention is obvious. The reason for detection, even in a perfect language scheme, is that compiler or hardware errors (esp. SEUs, single-event upsets) will make something act badly eventually, with runtime measures being the last-ditch effort. If they're going to be there, though, then might as well lean on them some more if there's no performance hit, eh? ;)
