
Agreed. When we were taught in security classes to use e.g. execve() instead of system(), it wasn't because shells were thought of as particularly vulnerable. You just want to use a tool with the minimum possible feature set, so you can be sure that no one malicious will be able to trick you into using even correctly-functioning features (e.g. through shell injection).

Sort of a special case of the principle of least privilege, applied to the feature set of your tools.
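
To make that concrete, here is a minimal sketch (mine, not from any class material) of the execve() route: the directory name travels as a single argv entry, so nothing is ever parsed by a shell. The helper name run_ls and the choice of /bin/ls are just for illustration.

    #include <sys/types.h>
    #include <sys/wait.h>
    #include <unistd.h>

    /* Hypothetical helper: list a directory without ever invoking a shell.
       The possibly attacker-influenced "dir" is one argv element, never
       shell syntax. */
    int run_ls(const char *dir) {
        pid_t pid = fork();
        if (pid < 0) return -1;
        if (pid == 0) {
            char *argv[] = { "ls", "-l", (char *)dir, NULL };
            char *envp[] = { NULL };      /* deliberately empty environment */
            execve("/bin/ls", argv, envp);
            _exit(127);                   /* only reached if execve fails */
        }
        int status;
        waitpid(pid, &status, 0);
        return status;
    }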




I disagree; shelling out to another program is a perfectly reasonable design decision, one that has paid off in development speed. The people who run from their OS like it's something to be scared of are really missing out on its benefits. I have built whole pipelines of essentially shell commands and I have never had a command injection vulnerability, because I follow one simple rule:

I filter and sanitize input as soon as I receive it from the user. There is a vanishingly small number of cases where you need to process anything outside the [a-zA-Z0-9]+ character set in a shell pipeline.

Once you have a pattern down for safely executing commands, there is no reason to be scared. It's really a myth that you can't become "secure", and this author's blog post is a perfect example. Tell me how you will conduct command injection when you have a whitelist of a-z0-9. I need an example, instead of this fear mongering that your OS is something to run and hide from.
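
For what it's worth, the kind of whitelist being described is only a few lines of C. This is a sketch (is_safe_token is an illustrative name, not anyone's actual code): reject anything outside [a-zA-Z0-9] before it goes anywhere near a command line.

    #include <stdbool.h>

    /* Accept only non-empty strings made of [a-zA-Z0-9]; anything else is
       rejected outright rather than escaped. */
    static bool is_safe_token(const char *s) {
        if (*s == '\0') return false;
        for (; *s != '\0'; s++) {
            char c = *s;
            if (!((c >= 'a' && c <= 'z') ||
                  (c >= 'A' && c <= 'Z') ||
                  (c >= '0' && c <= '9')))
                return false;
        }
        return true;
    }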


You don't have to spawn a shell as a separate process. Injecting and executing code inside a vulnerable process has been done for a long time.

Because sometimes people write privileged shell scripts, and they should not be vulnerable.

The point is to reach this functionality through a limited set of shell commands, not through an arbitrary executable.

> The author is making a security theater out of nothing for posing.

Again, the author is not making accusations of security flaws. I don't know how they could have described it better, but the author was going for something very narrow and specific.


Being flippant for a minute, if you want your users to have access to a box but not have a shell, a tool called "secure shell" may not be the wisest choice.

Setting the default to frankly crippling levels for the primary function of a tool to accommodate an edge case seems slightly backwards to me. Host firewalls and/or disabling the option seem to be an acceptable set of hardening tasks if that use case is relevant to you.


Would you execute random Perl or Python code from the internet? No, you wouldn't, even if you tried to sanitize it a thousand ways from Sunday. So why would you do so for the shell?

Defense must use different strategies than offense. An engineer's goal should be to reduce the problem space to something as small as possible, so he has a better chance at implementing a correct solution. Allowing the shell to be invoked is a good way to explode your exploitable surface area.

Anybody who coded in the 1990s knows to stay away from the shell like the plague, even though the syntax parsing and evaluation components in modern shells have improved considerably. Sadly this particular bug harks back to the bad days, when it was more difficult to tell how certain constructs would be evaluated.

It's not that you can't theoretically do it securely; it's that you're gratuitously playing with fire, fire that has burned countless engineers in the past. The cost/benefit is so indisputably skewed (a small convenience against a huge risk) that it's unprofessional to suggest otherwise.

The shell shouldn't come anywhere near network-facing software, period.


One big takeaway from this: don't use the system function, especially if you are processing user data that could include a shell injection.

Two other important points:

1. Those command-line programs weren't necessarily designed with an adversarial user in mind! Sanitizing input to prevent shell escape sequences is not enough (a short sketch of why follows below). You really need to sandbox well.

2. For some tasks, it might be easy for a malicious user to guess which command-line tool you are using. You might be getting way less depth out of obscurity than you think you are.
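
On the first point, here is a sketch of a failure mode that has nothing to do with shell metacharacters: even with execv() and no shell at all, a user-supplied value that starts with '-' can be parsed as an option by the tool itself. The grep call and the variable names are hypothetical; passing "--" ends option processing, which narrows the problem but is still not a sandbox.

    #include <unistd.h>

    /* Hypothetical wrapper around grep; "pattern" and "path" may be
       attacker-influenced. Without the "--", a path like "-r" would be
       treated as an option by grep, shell or no shell. */
    void run_grep(const char *pattern, const char *path) {
        char *argv[] = { "grep", "-F", "--",
                         (char *)pattern, (char *)path, NULL };
        execv("/usr/bin/grep", argv);  /* call this in an already-forked child */
        _exit(127);
    }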


Security. system() is one of the most common targets for attack (getting a shell by manipulating the string passed to system() in various ways). Asking the kernel to execute the program directly, via the exec family, is much better behaved: you're limited to running only that one program.
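
If fork()/execve() feels like too much ceremony, posix_spawn() does the same "run exactly one program, no shell" job in one call. A rough sketch; run_one is just an illustrative name and error handling is minimal.

    #include <spawn.h>
    #include <sys/types.h>
    #include <sys/wait.h>

    extern char **environ;

    /* Run exactly one program with an explicit argv; nothing here can grow
       into "also run whatever else the string happens to contain". */
    static int run_one(const char *path, char *const argv[]) {
        pid_t pid;
        if (posix_spawn(&pid, path, NULL, NULL, argv, environ) != 0)
            return -1;
        int status;
        if (waitpid(pid, &status, 0) < 0) return -1;
        return status;
    }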

Tools that are safer than shell scripts exist, but that doesn't mean end users have them installed, and it doesn't mean people will choose to use them either.

It probably helps security a ton if you are the only binary: there is no shell, no other executables, and you can even disable fork() and exec() system calls.
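
If you're on Linux, one way to get the "disable fork() and exec()" part is a seccomp filter. The sketch below assumes libseccomp (link with -lseccomp), which the parent doesn't mention, so treat it as one possible mechanism rather than the only one; note that blocking clone() also blocks thread creation.

    #include <seccomp.h>

    /* After this filter loads, any attempt to spawn a new program or
       process kills the calling process. */
    static int forbid_spawning(void) {
        scmp_filter_ctx ctx = seccomp_init(SCMP_ACT_ALLOW);  /* allow all else */
        if (ctx == NULL) return -1;
        seccomp_rule_add(ctx, SCMP_ACT_KILL, SCMP_SYS(execve), 0);
        seccomp_rule_add(ctx, SCMP_ACT_KILL, SCMP_SYS(execveat), 0);
        seccomp_rule_add(ctx, SCMP_ACT_KILL, SCMP_SYS(fork), 0);
        seccomp_rule_add(ctx, SCMP_ACT_KILL, SCMP_SYS(vfork), 0);
        seccomp_rule_add(ctx, SCMP_ACT_KILL, SCMP_SYS(clone), 0);  /* also blocks threads */
        int rc = seccomp_load(ctx);
        seccomp_release(ctx);
        return rc;
    }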

Because the shell is a generalized engine for executing things; because shebang magic turns text files into candidate executables by nominating the shell (or some other interpreter binary) to run them; and because system() glues all of this together so that you can invoke a shell, with all its awesome, to run scripts which in turn invoke a binary to parse them.

If the shell were only a restricted shell (rsh), if the set of binaries you could invoke were constrained, and if network and system calls went through strace-style tracing barriers that limited what you could do... we might have fewer problems. Except that in the end, people don't code or script for secure execution, so the context of "what harm can I do from here" turns out to be a lot wider than many people think.


I would rather people not pipe to shells at all; it doesn't sound very secure. But if you have to do it, there are ways to avoid half-executed scripts, e.g. wrapping the entire script in a function that is only called on the very last line:

foo() { ... }
foo   # nothing executes until this final call, so a truncated download is inert


Unfortunately the specs for system() and popen() require the use of shells. It was known that you had to escape arguments and control the names of environment variables, but it's entirely bizarre to expect people to know before shellshock that the values of otherwise completely safe and sanitized variables might still be interpreted by bash as code.

We shouldn't have to give up popen() just because bash was designed insecurely; we should fix or replace bash.
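
Agreed, and "replace" doesn't have to mean giving much up: a popen()-style read pipe can be built on pipe() + fork() + execvp() with an explicit argv, so no shell ever sees the data. popen_noshell below is a hypothetical name and a sketch only; a complete version would also reap the child (the pclose() half).

    #include <stdio.h>
    #include <unistd.h>

    /* Read a program's stdout without involving /bin/sh. The caller
       fclose()s the returned stream; reaping the child is left out. */
    static FILE *popen_noshell(char *const argv[]) {
        int fds[2];
        if (pipe(fds) == -1) return NULL;
        pid_t pid = fork();
        if (pid == -1) { close(fds[0]); close(fds[1]); return NULL; }
        if (pid == 0) {
            dup2(fds[1], STDOUT_FILENO);  /* child's stdout goes into the pipe */
            close(fds[0]);
            close(fds[1]);
            execvp(argv[0], argv);        /* argv is data, never shell syntax */
            _exit(127);
        }
        close(fds[1]);
        return fdopen(fds[0], "r");
    }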


Because it is treating all of these intended side-effects of using a shell as though they are security vulnerabilities.

The problem is that there is a way for untrusted user input to ever touch a shell in the first place.

Seriously, I challenge you to find a language reference that doesn't decry the use of its version of system(3), because all that does is run the given command under the user's shell.


not to mention reducing attack surface.

Unless the programs are setuid, is that a real problem? I mean, anyone who can call one of these utilities with some arguments can also call "sh -c ...", no?


So I took a great unix/linux systems programming class, http://sites.fas.harvard.edu/~lib215/ where you learn about all of the system software that you take for granted. Among other things, we had to write our own shell. There is an awful lot to consider, and most of it you are just trying to get to work properly. With regard to security, you feel like you are protected for the most part because the shell resides in userland and it's basically understood that you shouldn't trust foreign shell scripts.

Is the worry here that the code gets executed by the kernel or superuser, enabling privilege escalation? Otherwise it wouldn't be a big deal that extra code is executed by a function declaration.


If your threat model involves me running custom code somewhere on your machine, but then not being fast enough to grab argv, you may want to adjust your threat model. Because if I can't do it under those conditions (even just using shell shit, never mind some really tight C code or something that hooks into the kernel), someone else can. What a seasoned security person will lean on here is threat models and levels of access, but importantly: if I'm already running custom code on your machine as an attacker, DO NOT assume I can't get at your argv.
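
To put something concrete behind "don't assume I can't get at your argv": on Linux, any process that is allowed to see yours can simply read /proc/<pid>/cmdline, which is what ps does. Rough sketch, error handling trimmed.

    #include <stdio.h>

    /* Print another process's argv (arguments are NUL-separated in the file). */
    static void dump_cmdline(long pid) {
        char path[64], buf[4096];
        snprintf(path, sizeof path, "/proc/%ld/cmdline", pid);
        FILE *f = fopen(path, "rb");
        if (f == NULL) return;
        size_t n = fread(buf, 1, sizeof buf, f);
        fclose(f);
        for (size_t i = 0; i < n; i++)
            putchar(buf[i] ? buf[i] : ' ');
        putchar('\n');
    }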

Arguably an insistence on stitching together systems out of 'do one thing and one thing only' minimalist components is precisely why Shellshock is so bad, though.

That so many things delegate setting environment variables to the system shell is what allows a vulnerability like this to be so pervasive. Why does a DHCP client pass server-originated data to a full shell? In some ways, it's a form of minimalism.
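
For anyone who didn't follow the bug, the sketch below shows its shape, assuming a pre-patch bash is the /bin/sh that system() invokes (on a patched system it does nothing interesting). The variable name is made up; the point is that the value never appears in the command at all, it just rides along in the environment the DHCP-client-style caller hands to the shell.

    #include <stdlib.h>

    int main(void) {
        /* Server-controlled data stored, quite reasonably, in an env var... */
        setenv("DHCP_DOMAIN_NAME", "() { :; }; echo payload ran here", 1);
        /* ...and a completely innocent command. A pre-patch bash parsed the
           value as a function definition on startup and ran the trailer. */
        return system("echo renewing lease");
    }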
