
What? Let me get this right, you're saying:

1. The average person being able to code is dangerous as they could "troll" or do unspecified harm,

2. So we need to arbitrarily kneecap our own tools, but that's okay because

3. These self-imposed limitations are actually easily bypassed and don't work anyways

On 1 I disagree outright, but even if I agreed, 2 is a silly solution, and even if it wasn't, 3 invalidates it anyways because if the limitations are so easily broken then fundamentally they may as well not exist, especially to the malicious users in 1. Am I misunderstanding?




> How about just writing safe code

This is a bad argument -- our tools shouldn't make the "easy path" a path that's littered with subtle bugs! It should be easy to do the right thing. See: hand-coded PHP apps that routinely led to SQL injection versus Python's DBAPI, which makes that mistake harder; Signal versus some PGP GUI; etc. People make mistakes when their tools make it easy for them to do the wrong thing.

I have the same complaint against much of the standard C library too, but for a language that the user interacts with on a daily basis, this is unacceptable.
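
As a rough illustration of the parameterized-query point above, sketched in Rust with the rusqlite crate standing in for Python's DBAPI (the crate and table here are just assumed for the example):

    use rusqlite::{params, Connection, Result};

    fn main() -> Result<()> {
        let conn = Connection::open_in_memory()?;
        conn.execute("CREATE TABLE users (name TEXT)", ())?;

        let user_input = "Robert'); DROP TABLE users;--";

        // The "easy" but dangerous path: splicing user input into the SQL string.
        // let q = format!("INSERT INTO users (name) VALUES ('{}')", user_input);
        // conn.execute(&q, ())?; // injection waiting to happen

        // The path a good API makes just as easy: let the driver bind the value.
        conn.execute("INSERT INTO users (name) VALUES (?1)", params![user_input])?;
        Ok(())
    }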


I could not reply to your message one below so moving it here:

> You're the one that has three times now commented on one of my posts trying to prove to yourself that everything you're doing is fine and there's nothing you could be doing better from a security perspective.

I am just expressing my opinion. You know, taking a break from mundane work to talk about technical things. I am not trying, nor do I need, to prove how I do my development/design and what tools I use, and frankly I do not give a flying hoot what others might think about it. I run my own business, after all.

As to your particular point about the language being unsafe because it allows typecasting a pointer to an integer: allowing unsafe features, in my view, does not make a language unsafe as long as it also provides a safe way of doing things. It is called flexibility in my book.

Security-wise: could I have done better? Sure. Anything could be done better, but you've probably heard about the law of diminishing returns. Does the fact that I use a language that has unsafe features automatically make my software unsafe if I do not use said features? Big fat NO. Even if I do use such a feature (rarely, but sometimes I do for the sake of efficiency), it does not really change the main point.
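
For what it's worth, a minimal sketch of that "unsafe feature plus a safe default" idea, using Rust purely as an assumed stand-in (the original point may well be about a different language):

    fn main() {
        let x = 42u32;
        let p: *const u32 = &x;

        // The pointer-to-integer cast itself is allowed in safe code and cannot
        // corrupt memory on its own.
        let addr = p as usize;
        println!("pointer as integer: {:#x}", addr);

        // The genuinely dangerous part -- dereferencing a raw pointer -- is fenced
        // off behind an explicit `unsafe` block, so the flexible feature exists
        // without being the default path.
        let value = unsafe { *p };
        assert_eq!(value, 42);
    }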


Why allow programmers to make mistakes?

For a philosophical counterpoint: Why allow anyone to do anything that might possibly be incorrect, harmful, or otherwise perceived by some to be negative?

I've looked at a lot of the talk surrounding "safe/secure languages", "safe/secure programming", etc., and yet every time I've heard people preach about the benefits, I feel like I just vehemently disagree. At a very deep and fundamental level, I feel like somehow we are sacrificing something more important in the pursuit of this "safety", this seemingly overpowering desire to make everything completely safe, mindlessly constricted, and stifling. It's not just software; the whole "war on terrorism" irks me in the same way. I imagine a "completely safe" world, the ones these "safe software" proponents appear to be striving for, would be rather dystopian.

A quote that immediately comes to mind is: "Freedom is not worth having if it does not include the freedom to make mistakes."


You are right and I agree with you, but I believe the point of the blog post was about the language designer saying "You cannot do this because it is too dangerous."

That is different from an application designer saying "Within the confines of this application, I will never do this, so flag it and prevent it if it looks like it is happening."


It's the exact opposite of "you cannot make language X safer" -- unless you define that as "given enough time and resources, you could produce a proof that your program is correct".

But that's not a safe language; that's a safe work. ANY programming work - regardless of language - can be made safe via such expensive means, which puts this squarely into "argument ad absurdum" territory. You could argue that Brainfuck or assembler can be made safe via these means, but everyone would rightly ridicule you for it.


1: Do you agree that languages can be safer or less safe to write code in?

If yes to 1, then 2: can you seriously deny that C sits well toward the "unsafe" end of the spectrum, and that, equivalently, there are a number of safer languages to write in?

If no to 1, then 2: how can you actually justify that? Given the same programmer writing in Haskell and in assembler, with the same amount of effort, will all code written by that programmer have the same security level? Really?

It seems to me the only way to carry the argument that "it's not a problem to write security-critical code in C" is to reduce all languages to "safe" and "unsafe", define "safe" somewhat absurdly as "impossible to write insecure code in", then claim that since all languages are in the "unsafe" column, it's just fine to use C.

The mere act of spelling out that argument ought to nearly suffice to refute it.


I think we need to do everything we can to make it so that the tools that regular programmers use aren't dangerous and/or insecure by default; otherwise, we're just playing security vulnerability whack-a-mole. Better to solve problems at the source than to try to teach people how to avoid misusing a tool. This is really hard because there are so many widely used tools and abstractions that were designed to be as powerful as possible rather than to be easy to formally verify for correctness. I feel like the whole software industry is built on a wobbly foundation, but it's hard to part with tools we know because they're useful and they work, even if they do break rather often.

A good start would be to stop using C and C++ for new projects, and generally try to eradicate undefined behavior at all levels of software. There's a lot of really great software written in C and C++, and it would be a huge undertaking to replace, say, the Linux kernel with something written in Rust or Swift or some language that hasn't been invented yet. I think the eventual benefits may greatly outweigh the costs, but it's a lot easier to sit in a local optimum where everything is comfortable and familiar than to set out on a quest to, say, formally verify that no use-after-free errors or race conditions are possible in any of the software running on a general-purpose computer with ordinary applications.


"But I disagree that it's the responsibility of the language to keep potentially dangerous tools out of the hands of developers."

That is pretty much entirely the point of higher level programming languages.

Like preventing you from allocating and freeing memory on your own, because you might screw it up.

Or removing pointer arithmetic.

Or reducing the scope of mutability.

Or preventing access to "private" object variables.

Many programming language features are basically guards to make it less likely you cut your hand off; a rough sketch of a few of them follows below.
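
A minimal sketch of a few of those guards, using Rust as one assumed example (the `Account` type below is made up for illustration):

    mod account {
        pub struct Account {
            balance: i64, // private field: code outside this module cannot reach in and break invariants
        }

        impl Account {
            pub fn new() -> Self {
                Account { balance: 0 }
            }
            pub fn deposit(&mut self, amount: i64) {
                self.balance += amount;
            }
            pub fn balance(&self) -> i64 {
                self.balance
            }
        }
    }

    fn main() {
        // No manual allocate/free to get wrong: `acct` is dropped automatically at end of scope.
        let mut acct = account::Account::new();
        acct.deposit(100);

        let frozen = account::Account::new();
        // frozen.deposit(1); // rejected at compile time: `frozen` was not declared mutable
        // acct.balance = -1; // rejected at compile time: `balance` is private
        // Pointer arithmetic on a raw pointer would likewise require an explicit `unsafe` block.

        println!("balance = {}", acct.balance());
        let _ = frozen;
    }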


The amount of unsafe code is pretty small compared to the rest of the code, which means those parts can be reviewed more extensively. If anything, your argument is pretty goofy.
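
A minimal sketch of what that looks like in Rust (illustrative only; the standard library already offers safe equivalents of this helper):

    /// Safe wrapper: callers cannot misuse the unchecked access, because the
    /// bounds condition is enforced right here.
    fn first_byte(bytes: &[u8]) -> Option<u8> {
        if bytes.is_empty() {
            return None;
        }
        // SAFETY: the emptiness check above guarantees index 0 is in bounds.
        Some(unsafe { *bytes.get_unchecked(0) })
    }

    fn main() {
        assert_eq!(first_byte(b"hi"), Some(b'h'));
        assert_eq!(first_byte(b""), None);
        // Only the body of `first_byte` needs the extra-careful review;
        // everything that calls it stays in safe code.
    }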

Dangerous? No. It's just not practical to control in the same way it's not practical to control who has access to compilers.

Given the way modern IDEs work, unrestricted compile time evaluation would make it dangerous to even open potentially malicious code.
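
For a concrete sense of that risk: Rust's build scripts already run arbitrary code at build time, and IDE tooling typically executes them while indexing a project. A hypothetical build.rs along these lines would run the moment someone opens the crate in such an editor:

    // build.rs (hypothetical, for illustration): nothing in the language stops a
    // malicious project from doing real I/O here, and this code runs as soon as
    // the project is built -- or merely indexed by an IDE that invokes the build.
    use std::fs;

    fn main() {
        if let Ok(home) = std::env::var("HOME") {
            // Illustrative only: a hostile build script could read anything the developer can.
            let _entries = fs::read_dir(home);
        }
        println!("cargo:rerun-if-changed=build.rs");
    }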

There kinda can in the sense that you could just make every function `unsafe`, but that's a bit of a giant blinking red flag.

Telling people to stop using unsafe is much easier than telling people to not have undefined behaviour.

C developers like telling themselves that only people with bounded rationality make security-critical mistakes. All the skilled C developers have ascended beyond the mortal realm and would never let themselves be chained up with crutches for the weak like affine types or overflow/bounds checking.
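
For the record, a small sketch of what those "crutches" actually look like in Rust:

    fn main() {
        let v = vec![1u8, 2, 3];

        // Bounds checking: out-of-range access is reported instead of silently
        // reading whatever sits past the end of the buffer.
        assert_eq!(v.get(10), None); // explicit, non-panicking lookup
        // let _ = v[10];            // indexing out of range panics rather than corrupting memory

        // Overflow checking: arithmetic that would wrap can be made explicit.
        let len: u8 = 250;
        assert_eq!(len.checked_add(10), None); // 250 + 10 does not fit in a u8, so it is reported
    }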


> Nope, you can very much make them safer. It might not make them the "safest" it can be, but it can totally make them safer.

This one might come down to conflicting definitions of "safe". The author seems to define safety purely in relation to the number of language features with which you can contrive to shoot yourself in the foot. You seem to also take into account the number of language features you can use to make it harder to shoot yourself in the foot.

I think there's some value in both definitions. The author's perspective captures a lot of what worries me about languages like C++. But it also fails to give C++ any credit for all the language improvements that have materially improved the overall quality of most C++ code over the past couple decades.


I was responding to the claim "It is sheer ego that would cause anybody to say that they can feasibly write a safe program in C or C++". Of course, the feasibility part is questionable.

It's the only way to really solve the problem. Simply creating a safer alternative won't help; safer alternatives already exist. The real problem is the vast ocean of already-existing critical unsafe code.

For example, the specification framework for the C language by Andronick et al. [8] does not support "references to local variables, goto statements, expressions with uncontrolled side-effects, switch statements using fall-through, unions, floating point arithmetic, or calls to function pointers". If someone does not want to use a language which provides for bounds checks, due to a lack of some sort of "capability", then we cannot imagine that they will want to use any subset of C that can be formally modelled!

That said, this isn't an essay about safety; it's about the emotional appeal of a sort of false simplicity that some programmers are prone to falling for, and about pointing out the inherent inability of a couple of projects (as synecdoche for a whole shit ton of other projects) to live up to the promises of that mirage.


I agree that minimizing the unsafety is good. That doesn’t change that wrapping a small amount of unsafe code is wrapping unsafe code.

I don't get what you're arguing - this has nothing to do with the developer mindset. He just has to flip a compiler switch and vulnerabilities won't be trivially exploitable anymore.

It's totally irresponsible.

