Yep, I checked out overlay networks, and it's definitely a very cool project. However, it also seems pretty complex to host. I think they serve different use cases.
Mind providing more details on your setup here? I tried throwing together an overlay network using OSPF, but never really got it off the ground. I'd love to hear what you've got here!
Awesome article and really well written. I am curious why people choose to go with overlay networking. To me, it seems like an extremely complex solution to a non-problem (source: I run a 700+ container cluster without any overlay networking).
I recently did a deep dive into network overlays (like ZeroTier), mostly because I have a few self-hosted services, and overlays seem to be the hot topic in the self-hosting community. I've come away with the feeling that most people using overlay networks at home are doing it completely wrong and opening themselves up to a world of hurt.
First, network overlays are not easier to set up than VPNs. Installing and configuring an overlay client on every device is much more work than setting up a single VPN tunnel for every network you want to access. Overlay networks are just easier to plan, because there is no planning. But they're not easier to implement.
Second, and far more important, meshing all your devices into a single flat network is dangerous. There is a reason why networks are designed with isolation strategies. Introducing an overlay into your networks breaks down these barriers for you, but also for an attacker.
The only overlay network with built-in firewall capabilities is Nebula. When I started configuring its firewall rules, I found myself just recreating my existing segmented networks, but in a much more obtuse way: instead of configuring a central firewall, I was configuring firewall rules on each device.
After all my research, I'm still running the same segmented network I was running before my overlay experiments. But I would like to give some praise to both Nebula and Yggdrasil. IMHO, these are the two most exciting projects coming out of this space right now.
Have you taken a look at Nebula [0]? Might fit your needs. It also uses the Noise Protocol Framework, but adds the bells and whistles on top needed to synthesize an overlay network like you want. MIT licensed too, FWIW, with full self-hosting. Worth a look at any rate; WireGuard is much lower level, though I'm sure it could be built upon for the same purpose.
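To make the "built upon" point concrete, here's a minimal sketch of programmatically adding an overlay peer with the wgctrl Go library. The device name, addresses and the peer key handling are made up for illustration; a real overlay's control plane would distribute keys and allowed IPs for you.

```go
package main

import (
	"log"
	"net"

	"golang.zx2c4.com/wireguard/wgctrl"
	"golang.zx2c4.com/wireguard/wgctrl/wgtypes"
)

func main() {
	client, err := wgctrl.New()
	if err != nil {
		log.Fatal(err)
	}
	defer client.Close()

	// Stand-in for a peer key; a real control plane would hand you
	// the peer's actual public key instead of generating one here.
	priv, err := wgtypes.GeneratePrivateKey()
	if err != nil {
		log.Fatal(err)
	}
	_, overlayIP, _ := net.ParseCIDR("10.100.0.2/32") // made-up overlay address

	cfg := wgtypes.Config{
		Peers: []wgtypes.PeerConfig{{
			PublicKey:  priv.PublicKey(),
			AllowedIPs: []net.IPNet{*overlayIP},
			Endpoint:   &net.UDPAddr{IP: net.ParseIP("203.0.113.7"), Port: 51820},
		}},
	}

	// Assumes a "wg0" device already exists (e.g. created via `ip link add`).
	if err := client.ConfigureDevice("wg0", cfg); err != nil {
		log.Fatal(err)
	}
}
```

An overlay is basically a loop like this plus key distribution, NAT traversal and membership management on top.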
> If so, that's probably the simplest overlay solution of all the aforementioned
(I work on Weave)
Weave Net also lets you create a Docker overlay network using VXLAN, without insisting that you configure a distributed KV store (etcd, consul, etc.). So I would argue Weave Net is the simplest :-)
This is a great blog; it goes really deep on the technologies you looked at. Out of interest, did you consider OpenZiti as part of your test? I work on the project.
OpenZiti is an overlay network with similarities to Tailscale but fully open source, not built on wireguard, and focused on connecting 'services' rather than 'devices' with many zero trust networking principles built in. It also includes a suite of SDKs so that we can build private networking into the application stack and have zero trust of the internet, LAN and host OS network.
Hey, thanks for the feedback. Yeah, it's more of an exercise for fun, but it can also be done with high-bandwidth dedicated servers. It's also good for heavily dynamic stuff that needs really low latency, where you need to execute something custom on the edge. Otherwise MaxCDN.com or a similar service scales better. I'm looking forward to building more custom stuff into our edges at MaxCDN.com, so feel free to throw out any ideas. This was used to create a presentation for a Docker meetup.
An overlay network is not a requirement. The only requirement is that pods (collection of containers) should be able to communicate with each other directly without NAT. Each pod gets an IP address in the container network.
Overlay network technologies (Flannel, Weave, Calico, etc.) are popular, but they aren't mandatory. You can implement the pod network using hardware switches and VLANs if you wish.
This is very cool. I'm wondering if decentralized overlay 'mesh' networks will become more prevalent in the future. Overlays have obvious benefits for multi-cloud setups if latency isn't an issue for the services you run through the network. I can imagine this becoming a more popular technique in the future as cloud instances become cheaper and more people are willing to make the performance trade-off for convenience. Additionally it can be a great pattern to fight vendor lock-in.
I run a similar concoction for my 'home network', which consists of various mobile devices like my Android phone plus some cloud servers, but instead of ZeroTier I use CJDNS in combination with Consul. The code for it is on GitHub in the vdloo/raptiformica repo. I ran into various issues with the difference in latency between nodes. I think most people run Consul in a very homogeneous environment (like a single datacenter), but perhaps the differences when running it cross-cloud are not enough to cause problems. I'm wondering if there were some Consul settings the author had to tweak (and how) for stability, and if there were any unexpected issues.
One thing that caused me problems with Consul on overlay networks was an ARP cache overflow. DigitalOcean also ran into that running Consul at scale in their DC, if I recall correctly: http://youtu.be/LUgE-sM5L4A I noticed that if I put enough Docker containers on one host (like 50-100) in an overlay network and tried to run Consul on top of that, things would start to overflow, presumably because of all the network abstraction and redundancy. I'm wondering how many machines the author had in one Consul cluster, and to what number of nodes they tested this setup could scale.
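For anyone hitting the same wall: the standard remedy is to raise the kernel's neighbor-table GC thresholds (net.ipv4.neigh.default.gc_thresh1/2/3, i.e. what you'd set with `sysctl -w`). A rough sketch of doing that from Go, assuming Linux and root; the values here are illustrative and should be sized to comfortably exceed the number of peers each host can see:

```go
package main

import (
	"log"
	"os"
)

// Raise the IPv4 neighbor (ARP) table garbage-collection thresholds so
// a large flat overlay doesn't blow past the defaults. Values are
// illustrative; tune them for your node count.
func main() {
	thresholds := map[string]string{
		"/proc/sys/net/ipv4/neigh/default/gc_thresh1": "4096",
		"/proc/sys/net/ipv4/neigh/default/gc_thresh2": "8192",
		"/proc/sys/net/ipv4/neigh/default/gc_thresh3": "16384",
	}
	for path, value := range thresholds {
		if err := os.WriteFile(path, []byte(value), 0644); err != nil {
			log.Fatalf("writing %s: %v", path, err)
		}
	}
}
```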
I hope all of these Docker overlay networks start using the in-kernel overlay network technologies soon. User-space promiscuous capture is obscenely slow.
Take a look at GRE and/or VXLAN and the kernel's multiple-routing-table support. (This is precisely why network namespaces are so badass, btw.) Feel free to ping me if you are working on one of these and want some pointers on how to go about integrating more deeply with the kernel.
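As a taste of what that looks like, here's a minimal sketch that creates an in-kernel VXLAN device from Go using the vishvananda/netlink library, so the kernel does the UDP encapsulation instead of a user-space daemon copying packets. The interface names, VNI and addresses are all made up:

```go
package main

import (
	"log"
	"net"

	"github.com/vishvananda/netlink"
)

func main() {
	// Find the underlay interface the VXLAN tunnel should ride on.
	eth0, err := netlink.LinkByName("eth0")
	if err != nil {
		log.Fatal(err)
	}

	// In-kernel VXLAN device: encapsulation happens in the kernel,
	// no user-space promiscuous capture involved.
	vxlan := &netlink.Vxlan{
		LinkAttrs:    netlink.LinkAttrs{Name: "vxlan42"},
		VxlanId:      42,                       // VNI, made up
		VtepDevIndex: eth0.Attrs().Index,       // underlay device
		Group:        net.ParseIP("239.1.1.1"), // multicast peer discovery
		Port:         4789,                     // IANA VXLAN port
	}
	if err := netlink.LinkAdd(vxlan); err != nil {
		log.Fatal(err)
	}

	// Give the overlay interface an address and bring it up.
	addr, _ := netlink.ParseAddr("10.42.0.1/24")
	if err := netlink.AddrAdd(vxlan, addr); err != nil {
		log.Fatal(err)
	}
	if err := netlink.LinkSetUp(vxlan); err != nil {
		log.Fatal(err)
	}
}
```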
It's worth mentioning these protocols also have reasonable hardware offload support, unlike custom protocols implemented on UDP/TCP.
I built Wormhole Network https://wormhole.network with the idea of making remote access very easy and as secure as possible.
Disclosure: This is SaaS and I've built it.
Wormhole builds an overlay network where you can run any L3 protocol, really. By default we provide DHCP for IPv4 within 100.64.0.0/24 (yes, just a /24 by default, as it suits most users; it can be customised or even disabled on request). We chose this address space, carved from the RFC 6598 carrier-grade NAT range, to reduce the chance of overlapping with your own networks.
The advantages of running an overlay network like Wormhole are:
- No need to open ports anywhere or do any inbound NAT or PAT. All traffic is outgoing: UDP by default, but the protocol falls back to 443/TCP if needed.
- The above means it works pretty much anywhere with an Internet connection that lets you browse the web.
- Your devices' IP addresses inside Wormhole can stay the same, regardless of where they are. Thinking of migrating your servers to a new hosting provider? Keep the same IP. Do your teammates move frequently, work from home at times or even from their favourite coffee place? No problem, they'll keep the same IP address.
- Full access between devices inside the network. It works like a real LAN. No need to open ports to reach your development server or leave any other services reachable from the internet. You could lock down all inbound access from the internet to your servers and still reach them through Wormhole.
- All traffic is encrypted. Note: We don't roll our own crypto. We rely on SoftEther's (see below).
- No need to configure a VPN with your cloud/hosting provider, provision VPN hardware or anything like that.
- Multiplatform: Linux, Windows and macOS.
- It all runs on free, open source software: SoftEther https://www.softether.org so you can audit the software (and it's not ours; people all over the world use SoftEther for VPNs).
The architecture is based on central servers that route the traffic among the peers in your network, which is why full connectivity can always be achieved with only outbound connections. It is important to choose which server you create your connection on, so that latency stays as low as possible.
We currently have a few hundred users and are looking into making the product better by listening to your feedback. We have a free tier without time or traffic limits, available in three regions (US East, Netherlands and Singapore); it just has user limits. No credit card needed to use it.
I'll be extremely happy to receive criticism, suggestions and any other feedback in general here or directed to pedro /at/ wormhole.network
An over-overlay is almost never the right solution. If you want platform-independent networking you should use an API shim layer that configures the underlying VPCs the way you want.
While OpenZiti can do P2P connections, having an overlay in between gives you many advantages, including outbound-only connections at source/destination to circumvent NAT, port forwarding, inbound ports, etc.
What's cool about it is that we built the overlay to do smart routing: it calculates the lowest-latency paths and rebuilds them, so as you add more nodes you gain the ability to actually route around BGP and deliver lower latency.
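To make "smart routing" concrete: conceptually it's just shortest-path over measured link latencies instead of BGP's policy-driven metrics. A toy Go sketch (my own illustration with made-up nodes and latencies, not OpenZiti's actual routing code):

```go
package main

import (
	"fmt"
	"math"
)

// Toy mesh: measured latencies in ms between overlay routers (made up).
var latency = map[string]map[string]float64{
	"nyc": {"lon": 75, "fra": 90},
	"lon": {"nyc": 75, "fra": 15, "sgp": 170},
	"fra": {"nyc": 90, "lon": 15, "sgp": 160},
	"sgp": {"lon": 170, "fra": 160},
}

// dijkstra returns the lowest total latency from src to every node.
// A real mesh would re-run this as probes report fresh measurements.
func dijkstra(src string) map[string]float64 {
	dist := map[string]float64{}
	for n := range latency {
		dist[n] = math.Inf(1)
	}
	dist[src] = 0
	visited := map[string]bool{}
	for len(visited) < len(latency) {
		// Pick the unvisited node with the smallest tentative distance.
		u, best := "", math.Inf(1)
		for n, d := range dist {
			if !visited[n] && d < best {
				u, best = n, d
			}
		}
		if u == "" {
			break
		}
		visited[u] = true
		// Relax edges out of u.
		for v, w := range latency[u] {
			if d := dist[u] + w; d < dist[v] {
				dist[v] = d
			}
		}
	}
	return dist
}

func main() {
	fmt.Println(dijkstra("nyc")) // e.g. nyc->sgp goes via lon (245ms), not fra (250ms)
}
```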
We welcome the shameless plug; everything open source is our friend! zrok solves the peer-address issue by providing the reverse proxy, so you can share a URL for a public share, or an endpoint and URL for a fully private share.
I thought about doing something similar, but with Slack's Nebula or with ZeroTier (v2, which is not released yet). They're specifically designed for this kind of overlay network if I'm not mistaken, taking care of node additions and removals automatically. Nebula with fixed "lighthouses", ZeroTier with a decentralized KV store.
If you like tsnet, you will probably like the open source project I work on, called OpenZiti. It's an open source overlay network that allows you to embed zero trust networking and SDN into almost anything - https://github.com/openziti. This includes tunnelers for all popular OSs as well as SDKs for many languages incl. Go, Java, Python, C, C#, Node, etc.
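For a flavour of the embedding, this is roughly what the listen side looks like with the Go SDK; the identity file and service name are placeholders, and the exact entry points may differ between SDK versions, so treat it as a sketch and check the sdk-golang README:

```go
package main

import (
	"io"
	"log"
	"net/http"

	"github.com/openziti/sdk-golang/ziti"
)

func main() {
	// Identity enrolled out of band with the controller; the file name
	// and service name here are made-up placeholders.
	cfg, err := ziti.NewConfigFromFile("server-identity.json")
	if err != nil {
		log.Fatal(err)
	}
	ctx, err := ziti.NewContext(cfg)
	if err != nil {
		log.Fatal(err)
	}

	// Bind the overlay service directly inside the app: no open ports,
	// no listener on the host network at all.
	ln, err := ctx.Listen("my-private-service")
	if err != nil {
		log.Fatal(err)
	}
	log.Fatal(http.Serve(ln, http.HandlerFunc(
		func(w http.ResponseWriter, r *http.Request) {
			io.WriteString(w, "hello from the overlay\n")
		})))
}
```

Same shape as a tsnet.Server plus Listen, except the listener only exists on the overlay.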
Here is another (sort of), OpenZiti - https://openziti.github.io/. OpenZiti provides a mesh overlay network built on zero trust principles, with outbound-only connections so that we do not need inbound ports or link listeners. Similar to TS, you can host anything anywhere, and it has options to deploy on any popular host OS or as a virtual appliance.
What makes it really unique, though, is that it can actually be embedded inside the application via a suite of SDKs. Yes, private, zero trust connectivity inside an application! That provides the highest security and convenience, as it can be completely transparent to the user!
Disclaimer, I work for the company who built and maintains OpenZiti so I am opinionated.
I've got a strong interest in overlay networking solutions like Weave, but I'm not an expert on Docker and other container solutions.
What's the new thing here? If I understood correctly, it seems that you can connect your Docker host to an overlay network, so your containers can access other containers and resources through it. Am I correct to think this facilitates orchestration of the containers' network?
Disclosure: I am behind https://wormhole.network which could be seen as some sort of Weave competitor, but it's not; it covers other use cases, even though there's some overlap, e.g. overlay multi-host networking for containers https://github.com/pjperez/docker-wormhole - it doesn't require changes on the host itself, but can't be orchestrated.