There is very little reason to believe that privilege escalation is impossible within a given virtualized environment, at least with current technologies.
Containers are great for development and production on your own infrastructure, or on shared infrastructure like GCE or AWS. Security can be improved through inspected builds, self-signing, etc.
For consumers, however, it's a completely different ballgame.
All those Docker commands usually run as root or something equivalent to root. So a container breakout could lead to root on the host system.
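One concrete way to see why Docker access is effectively root access: the daemon's control socket is owned by root, and anyone who can write to it can start privileged containers. A minimal sketch (Linux-only; the socket path is the conventional default and is an assumption, not something from this thread):

```python
import os
import stat

SOCK = "/var/run/docker.sock"  # conventional default daemon socket path

def docker_socket_owner(path=SOCK):
    """Return the owning uid of the Docker control socket, or None if
    no socket exists at `path`. A uid of 0 means the daemon endpoint
    is root-owned, which is the usual setup."""
    try:
        st = os.stat(path)
    except (FileNotFoundError, NotADirectoryError):
        return None
    return st.st_uid if stat.S_ISSOCK(st.st_mode) else None

uid = docker_socket_owner()
print("daemon socket owner uid:", uid)  # typically 0 (root) where Docker runs
```

This is also why a container breakout matters so much: the process you escape into is usually running as uid 0 on the host, unless user-namespace remapping or a rootless daemon is in use.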
I think kordless is claiming that using Docker here could increase the severity of an attack; otherwise it doesn't seem like putting up another barrier could hurt security, even if it is later broken.
My claim applies to all virtualized environments, including containers and VMs - not just Docker. Microkernels have a decent shot at keeping security issues at bay, but even they can't keep attackers out forever.
Everything falls to hacking eventually. That's the nature of it, at least till now.
I would note that Docker is primarily a tool for developers and operations folk who are also the authors of the software being run. Docker itself is not the risk here, but using it for some use cases may very well be.
By the time serious hacking is an issue through Steam, I'd expect containers will be that much better anyhow. For all they can be criticized, and regardless of whether you think some other approach would have been better, they're getting the "trial by fire" treatment. By hook or by crook, in another year or two I expect they'll be as secure as you could ask for.
I'm primarily concerned with escalated privileges in what will become standardized gear for VR and AR viewing... if we start down the path of containerization, as I expect we will.
Worn most of the time a typical user is awake, such goggles will represent a very large target for hacking, with widely varying rewards.
Isolated or separated (from the host system): either word works when the context is clear. Without that context it can confuse, which is what happened in my case.
What I actually meant is that I'd rather run a process isolated (separated) from other processes and from the file-system space (along with the other isolation features cgroups give us), not isolated from the host system.
With cgroups/namespaces it's process isolation (or separation, whichever wording you prefer). The Linux kernel documentation also uses the word "isolation". ;-)
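For the curious, the namespaces a process belongs to are directly visible under procfs on Linux; a containerized process simply points at different namespace objects than the host's processes do. A small sketch (Linux-only, standard procfs layout; nothing here is specific to Docker):

```python
import os

# Each entry under /proc/self/ns is a symlink naming a namespace
# (pid, mnt, net, uts, ipc, ...) this process belongs to. Two
# processes are "isolated" in a given dimension exactly when these
# links point at different namespace objects.
for name in sorted(os.listdir("/proc/self/ns")):
    print(name, "->", os.readlink(f"/proc/self/ns/{name}"))
```

Comparing this output for a process inside a container and one outside it shows different namespace ids in the dimensions the container isolates.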
But security-wise, running it in a container is better than running it without isolation. IMHO.
And of course, no one is asking you to use an image built by a third party; since the Dockerfile is open, just build it yourself ;-)