If something has a connection to the Internet--even if only through a proxy--we must treat it and all devices on the same network as if they are connected to the Internet. Because they are!
If Internet connections worked like radio broadcasts (one-way) then we'd be fine, but they are inherently two-way connections. That means that for every outbound request, a hole is opened up for return traffic. Even if you think "that doesn't count," one must still account for the fact that PCs get compromised via side channels all the time and are subsequently used as pivot points for further attacks on any attached networks.
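To make that "hole for return traffic" point concrete, here's a rough Python sketch (the host name and port are placeholders, not a real service): nothing inbound is ever explicitly allowed, yet once the client initiates the connection, the remote end can push back as much data as it likes over it.

    import socket

    # Placeholder host: the point is that the *client* initiates the connection,
    # so the stateful NAT/firewall creates a return mapping for it. Whatever the
    # remote side sends back on that connection is delivered - the "hole" is the
    # established connection itself, not any inbound rule someone opened.
    with socket.create_connection(("updates.example.com", 80), timeout=10) as s:
        s.sendall(b"GET / HTTP/1.0\r\nHost: updates.example.com\r\n\r\n")
        response = b""
        while chunk := s.recv(4096):   # return traffic flows freely over the same connection
            response += chunk
    print(len(response), "bytes came back through the 'closed' perimeter")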
We need to stop treating intranets like they're somehow "safe" or special places apart from the Internet. They're not.
Always treat every device on any network as if it is being placed directly on the Internet because most of the time it really is. NATs and proxies aren't security tools even if they pretend to be.
The safest thing for most end devices is not to be servers in the first place.
Now, if the device is cracked through one of its client programs (NATs don't prevent that), then it could start up a rogue server, whereas behind a NAT it couldn't. That's no worse for the machine itself (it's hosed anyway), but you could argue it's worse for the rest of the network.
I think it's not. Botnets are annoying and dangerous when they act as clients. Spam, DDoS, and automated attacks are all client behaviour. Even if you want server behaviour, connections don't have to be initiated from the outside. The compromised device just has to know the relevant IP and initiate the connection itself.
Finally, if you want to block incoming connections anyway, a plain firewall is cleaner. At least FTP will work.
I go by the rule that if something is not secure enough to plug directly into the Internet, it is not secure. That doesn't mean I'll necessarily do that, but that should be the bar.
The only exception is special-purpose backplane networks that are designed explicitly to be isolated. These are basically data buses for clusters, not user-facing networks.
If we're branching into "what other devices can be compromised", then that's a concern for any network, 'private' IP or not. For example, even on a NATted v4 network, if you get the right device (say one that's port forwarded, or one you get malware onto some other way, such as social engineering), you can pivot from it to another point in the network.
You can apply all the ACLs and firewalling your heart desires on either private or public addresses; it's just that public addresses involve a heck of a lot less shitfuckery when you actually want to do useful things across the internet.
Of course. I'm just saying that firewalling and end-to-end security are better ways of doing that than routing and ambiguous (rfc1918) addressing. Never trust the network, lest you end up making yours soft and chewy on the inside.
You are correct that if the bad guy co-locates at the same IP then it is a problem. However that then becomes an issue with the service that chose to host on a shared IP.
For other services that use dedicated IPs but spin machines up and down based on load etc., it is still much more useful and secure than running a proxy with a CA that generates fake certificates, especially when you can't update the trust root of the client device (often the case with embedded devices and those managed by a third-party service provider).
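As a rough illustration of why the fake-certificate proxy is the worse option: when the service sits on a dedicated IP, the client can simply pin the fingerprint of the certificate the service actually serves, and an interception proxy's forged certificate will never match. A minimal Python sketch (the pinned value is a placeholder, not a real fingerprint):

    import hashlib
    import socket
    import ssl

    # Placeholder - pin the SHA-256 of the certificate the service actually presents.
    PINNED_SHA256 = "replace-with-the-service-cert-fingerprint"

    def connect_pinned(host: str, port: int = 443) -> ssl.SSLSocket:
        ctx = ssl.create_default_context()            # normal trust-root validation still applies
        tls = ctx.wrap_socket(socket.create_connection((host, port)), server_hostname=host)
        der = tls.getpeercert(binary_form=True)       # the exact certificate presented to us
        if hashlib.sha256(der).hexdigest() != PINNED_SHA256:
            tls.close()                               # a MITM proxy's forged cert lands here
            raise ssl.SSLCertVerificationError("pinned certificate mismatch - possible interception proxy")
        return tls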
It doesn't need to be on the internet to be catastrophically exploited. Most buildings have zero defense against tailgating, let alone sophisticated covert entry. Most organizations contain people who can be tricked, bribed, or accidentally hire an adversary.
Disconnection can stop drive-by malware, people trawling for additions to their botnet collections. Someone who wants to launch a coordinated attack will have no problem getting behind the firewall or across the air gap at enough interesting networks to cause serious harm.
The point isn't to actually expose your internal services. It's to not assume that attackers can't breach your network perimeter. Internal traffic should be treated with the same level of trust as external traffic, that is, none at all.
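One way to act on "no trust at all" for internal traffic is to authenticate every connection exactly as you would one arriving from the public internet, e.g. with mutual TLS. A rough Python sketch of the server side (the file names and port are placeholders for whatever your PKI issues):

    import socket
    import ssl

    # Placeholders: server.crt / server.key / internal-ca.crt come from your own CA.
    ctx = ssl.SSLContext(ssl.PROTOCOL_TLS_SERVER)
    ctx.load_cert_chain("server.crt", "server.key")
    ctx.load_verify_locations("internal-ca.crt")
    ctx.verify_mode = ssl.CERT_REQUIRED      # no client certificate, no connection - even "internally"

    with socket.create_server(("0.0.0.0", 8443)) as srv:
        with ctx.wrap_socket(srv, server_side=True) as tls_srv:
            conn, addr = tls_srv.accept()    # handshake fails unless the client proves who it is
            print("authenticated peer:", conn.getpeercert()["subject"], "from", addr)
            conn.close()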
There will always be a way into a network. The question is: can we eliminate the network?
Now that computers in a LAN/WAN barely 'talk' to each other, and the apps are rarely on the same network as the users' computers, is there enough value in networking the computers relative to the risk of creating a valuable target for attackers?
Consider for example what the business case for attackers would look like if their target was an island of one.
Of course networks have benefits such as simpler ways for IT or MSPs to manage devices and updates, shared auth structures, directories and policies, etc. If we look at all those benefits, and consider the alternatives, are they worth the risk of a network in today's topologies with today's threat vectors?
They don't have to be connected to the internet directly; the attackers can move laterally across segmented network boundaries, or deliver a USB drop or a backdoored device implant for their initial access.
Agreed. What worries me is this statement: “While it is generally better for users to avoid the use of a forwarded agent altogether (e.g. using the ProxyJump directive), the agent protocol itself has offered little defence against this sort of attack.”
Which I cynically read as “Hey, there is a better way, but we’re going to try to make the worse way slightly more secure, here’s how!”
Some of the described behaviours strike me as opening up entirely new and hard-to-detect attack vectors.
This is a trusted, internal, private network. The only one who could do this is the application itself, or something rogue on our network. If something were running rogue on our network, there'd be worse things it could get access to.
No, this is more about removing the idea that there is any safe network.
Don't assume internal networks (or leased circuits or whatever) are secure.
Make the smallest possible security perimeters - for servers you design/deploy, that would often be the server itself or even specific applications. For devices you can't secure any better (like printers etc.), make small islands that can talk only with an appropriate gateway device, etc.
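To sketch that "small island" idea in code (addresses and port are placeholders, and this sits on top of, not instead of, the segment's network-level ACLs): the device's service binds only to its segment-facing interface and refuses anything that isn't the designated gateway.

    import socket

    GATEWAY_ADDR = "10.20.30.1"    # the one peer this island may talk to (placeholder)
    LISTEN_ADDR = "10.20.30.15"    # the segment-facing interface, deliberately not 0.0.0.0 (placeholder)

    with socket.create_server((LISTEN_ADDR, 9100)) as srv:
        while True:
            conn, (peer_ip, _peer_port) = srv.accept()
            if peer_ip != GATEWAY_ADDR:    # application-level check mirroring the segment ACL
                conn.close()
                continue
            conn.sendall(b"hello from the printer island\n")
            conn.close()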
> Deploying proxies has costs - it obscures the endpoints of communication from sensor software and makes anomaly detection more difficult.
It also establishes a single machine that sees all the plaintext of everything that should have been TLS encrypted from every machine on your network. It's like attacker catnip.