A 'user' can do all of the things you mentioned, e.g. "insert random HTTP headers", because they have access to everything your code has access to. Any code of yours that runs outside of _your_ systems _is_ in "enemy territory": none of the code _inside_ your systems can trust anything from 'outside', even if it appears to have come from your own code.
Yeah, but that is always a threat with any code ever written by anybody other than oneself. The only assurance against that is if one writes their own code, compiles it with their own compiler, and runs it on their own fabricated hardware. Oh, and implements their own security algorithms. Which means any data exchange would be impossible.
> It runs code fetched from a trustworthy origin (your server), so it is not enemy territory.
We should define terms before arguing. Enemy territory is anything you do not directly control. So, as a developer, you do not know if the user's agent is running your code from your server or something compromised. Assume the worst. Anything exiting the user's agent must be cleaned.
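To make "anything exiting the user's agent must be cleaned" concrete, here is a minimal Python sketch (the field names and rules are hypothetical, not from the article): the server re-validates every parameter against its own rules instead of trusting whatever validation the client-side code was supposed to run.

```python
# Minimal sketch: trust nothing from the client, even if the client was
# supposedly running our own front-end validation. Field names and rules
# here are hypothetical.
import re

ALLOWED_ACTIONS = {"create", "update", "delete"}

def validate_request(params: dict) -> dict:
    """Re-check every field server-side; reject anything unexpected."""
    action = params.get("action", "")
    if action not in ALLOWED_ACTIONS:
        raise ValueError(f"unknown action: {action!r}")

    item_id = params.get("id", "")
    if not re.fullmatch(r"[0-9]{1,10}", item_id):  # ids are short digit strings
        raise ValueError("malformed id")

    return {"action": action, "id": item_id}
```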
> Executing an action against the user's will is a security issue
Non-sequitur. Unless you're saying the `text` parameter could somehow execute code? It can't.
Considering the worst reasonable scenario, that this `text` parameter is sent directly from user input: so what? It may not be great practice, but it's not a security issue. Clean it server-side (see the sketch below), which is what should be happening anyway and which the article fails to mention.
Considering the worst unreasonable scenario: the `text` parameter is compromised by a hacker somehow. Well, you're dealing with a far worse situation than could be handled by cleaning input client-side. Better to ensure input is secure... on the server side.
But, maybe I and others here are wrong. Assume many of us do have a worrying misunderstanding of the fundamentals. For the sake of the health of the internet, step us all through this scenario where a secure server side does not save the day, but these methods do.
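For concreteness, "clean it server-side" for a `text` parameter could look like this minimal Python sketch (the function is hypothetical); the point is that the escaping happens on the server no matter what the client sent:

```python
import html

def render_comment(text: str) -> str:
    """Escape user-supplied text on the server before embedding it in HTML,
    so it doesn't matter whether the client 'cleaned' it first."""
    return f"<p>{html.escape(text)}</p>"

# The output is the same whether `text` came from our own JS or from curl:
print(render_comment("<script>alert(1)</script>"))
# -> <p>&lt;script&gt;alert(1)&lt;/script&gt;</p>
```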
If it sends the code back to the server to get executed, I could see a slew of interesting security opportunities here. I wouldn't exactly allow someone with Firebug and a working knowledge of C to run executable code on my server.
Cool idea! Hopefully I'm misinformed about the security thing.
You essentially have a gateway into a very large chunk of code that was most likely not built with security in mind on the parsing side; on top of that, you are guaranteed write access to some file system.
I'm guessing we have different 'threat models' in mind.
From my perspective, I know _I_ am a moral and ethical person and therefore won't "execute an action against the user's will".
But, also from my perspective, even if "that action is allowed according to the user's credentials", I can't tell, and thus my server-side code can't tell, that a 'user' is a real person or even a legitimate user of my site or app.
The comment I was replying to claimed that "The user agent ... is not enemy territory."
But what came to my mind on reading that was that user agents also (commonly) perform 'card testing' and 'credential stuffing', and, even if I trust that I can securely give them access to my front-end/client-side code, I have no way to know whether they're actually running that code. And even if they are running my code, there's _still_ room for malicious or nefarious action on their part.
I was NOT disagreeing with this (in the comment to which I was replying):
> Yes, the server must assume that enemy agents also exist. But it should better not deliver one to all users.
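As a hedged illustration of what "assume enemy agents exist" can mean server-side, here is a minimal in-memory rate limiter against credential stuffing (the thresholds are placeholders, and a real deployment would use shared storage such as Redis rather than a process-local dict):

```python
import time
from collections import defaultdict

WINDOW_SECONDS = 300   # placeholder
MAX_ATTEMPTS = 10      # per key per window; placeholder

_attempts: dict[str, list[float]] = defaultdict(list)

def allow_login_attempt(key: str) -> bool:
    """Throttle by key (e.g. IP or username). The client never sees this
    logic, so it can't be bypassed by not running our front-end code."""
    now = time.time()
    _attempts[key] = [t for t in _attempts[key] if now - t < WINDOW_SECONDS]
    if len(_attempts[key]) >= MAX_ATTEMPTS:
        return False
    _attempts[key].append(now)
    return True
```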
Interesting notion, but I'd be leery of the security implications of running code on my server that the client had its hands on. Which is not to say it couldn't be secured properly.
The possibility of a Remote Code Execution vulnerability reachable by an unauthenticated user. The parsing should be offloaded to a memory-safe language, ideally to a parser that has been battle-tested.
That's totally different, at least out of the box. The use case seems to be running user-generated scripts that aren't known in advance and can be added/edited/run in a self-service manner.
The usual way to do this is to get a Python interpreter, sandbox the hell out of it on your server, and then run the untrusted code. But this obviates the need for much of that security paranoia, since it's running in the user's environment.
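A minimal sketch of the "sandbox the hell out of it" approach, assuming a POSIX host; a real sandbox layers on much more (namespaces, seccomp, containers), but a separate process, resource limits, and a timeout are the bare minimum:

```python
import resource
import subprocess

def run_untrusted(code: str, timeout: int = 5) -> str:
    """Run untrusted Python in a separate, resource-limited process.
    A bare-minimum sketch, not a complete sandbox."""
    def limit():
        resource.setrlimit(resource.RLIMIT_CPU, (timeout, timeout))  # CPU seconds
        resource.setrlimit(resource.RLIMIT_AS, (256 * 2**20,) * 2)   # 256 MB memory
        resource.setrlimit(resource.RLIMIT_NOFILE, (32, 32))         # few open files

    proc = subprocess.run(
        ["python3", "-I", "-c", code],  # -I: isolated mode, no user site-packages
        capture_output=True, text=True,
        timeout=timeout, preexec_fn=limit,
    )
    return proc.stdout
```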
The dangerous thing is that lots of things that look like a library call to the unwary developer actually call out to other processes (visible if you run strace). This vulnerability will not be limited to web servers, either, and there will likely be systems that escape or unescape parameters in ways that evade app firewall rules.
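A hedged illustration of the escape/unescape problem: a naive firewall rule that scans the raw query string misses a payload that the application decodes one more time than the rule does. Everything below is made up for illustration:

```python
from urllib.parse import unquote

payload = "%253B%2520rm%2520-rf%2520%2F"  # "; rm -rf /", double-URL-encoded

def naive_waf_allows(raw_query: str) -> bool:
    """Pretend firewall rule: block raw strings with shell metacharacters."""
    return ";" not in raw_query and "|" not in raw_query

print(naive_waf_allows(payload))    # True: the raw string looks harmless
print(unquote(unquote(payload)))    # '; rm -rf /' once the app decodes twice
```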
Would I be able to use this to run untrusted code from my users? I have had an idea for a game I'd like to eventually build where users write code to control their character. If I send the code to my server running Tork, what are the security risks?
Nice answer - but I don't think that's what he was asking. I think the suggestion was that he's got his code secure on external services - so what is there to lose?
Indeed. In fact, I'm an avid programmer, but I'm not a programmer of that stuff. And I'm somewhat fearful of doing it wrong and creating some kind of massive security nightmare for myself or for my customers.
Yeah, I thought that too. But assuming the loaded code has no access to any of those things either, I'm not sure what the concern is. Though if you gave network access to your code and it loaded other code from the internet which inherited those permissions, that's pretty terrifying (especially since it's not some edge case). I assume this has been thought through, since it's so obvious.
As long as a user can only run code that they (or the page author) have entered themselves, I don't see an issue. It gets more problematic if you allow users to store modified cells and link to them. Then a malicious user could write a cell and trick another user into executing it.
No, the most insecure things invented by people take URL parameters from unauthenticated requests and run them on the command line.
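For the avoidance of doubt, that anti-pattern looks like this (deliberately simplified, never do this):

```python
import os
from urllib.parse import parse_qs

# DO NOT DO THIS: an unauthenticated query parameter goes straight to a shell.
def handle_request(query_string: str) -> None:
    host = parse_qs(query_string).get("host", [""])[0]
    os.system(f"ping -c 1 {host}")  # "8.8.8.8; rm -rf /" rides along for free
```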
I didn't audit the code or anything, but all the request processing is gated by a function that requires HTTP basic-auth, which is at least hard to screw up. To accidentally add a function that bypasses auth, they'd have to write an entire new request handler chain.
That said, I noticed the same thing (popen), and if I was going to integrate this with our product, I'd hardcode the command line.
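Hardcoding the command line in that spirit might look like this sketch (the tool path and flags are placeholders): the program and its options are fixed in the source, and anything user-influenced is passed as a single argv element, never through a shell:

```python
import subprocess

# The command and its flags are fixed in the source; only the data varies.
FIXED_CMD = ["/usr/bin/some-tool", "--mode", "safe"]  # placeholder path/flags

def run_fixed(user_supplied_path: str) -> None:
    """argv-list form, no shell: the user's value is one argument, not
    something a shell gets to reinterpret."""
    subprocess.run(FIXED_CMD + [user_supplied_path], shell=False, check=True)
```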