Pico is an open-source alternative to Ngrok. Unlike most other open-source tunnelling solutions, Pico is designed to serve production traffic and be simple to host (particularly on Kubernetes).
Upstream services connect to Pico and register endpoints. Pico will then route requests for an endpoint to a registered upstream service via its outbound-only connection. This means you can expose your services without opening a public port.
Pico runs as a cluster of nodes in order to be fault tolerant, scale horizontally and support zero-downtime deployments. It is also easy to host, for example as a Kubernetes Deployment or StatefulSet behind an HTTP load balancer.
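To make the flow concrete, here is a minimal sketch of the client side: a proxy client addresses a registered endpoint by name. The host, port, and the "x-piko-endpoint" header name are illustrative assumptions drawn from the project's README; check the Pico docs for the actual routing interface.

```python
# Sketch of a proxy client addressing a registered endpoint.
# Host, port, and header name are assumptions, not guaranteed API.
import urllib.request

req = urllib.request.Request(
    "http://piko.example.com:8000/status",
    headers={"x-piko-endpoint": "my-endpoint"},
)
# urllib.request.urlopen(req)  # not sent here: the host is a placeholder
```

The upstream service never opens an inbound port; it only holds an outbound connection to Pico, which matches the request's endpoint name against registered upstreams.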
The instructions there say it will create a cluster with three nodes, so even though it uses Docker Compose, I'm guessing it's still using Kubernetes?
The three nodes are just three containers on the host where you're running Docker Compose. Compose only works with a single host, except when deploying to Docker Swarm clusters. I'm not familiar with Swarm, though, so I couldn't tell you which versions of Compose support it or how good that support actually is.
Say I have a Pico cluster with a few service nodes, a few upstream clients register themselves, and then I deploy a new version of the service nodes where all existing service nodes are taken down and replaced.
Can the client still talk to the service nodes? Is this over the same tunnel, or does the agent need to create a new tunnel? What happens to requests that are sent from a proxy-client to the service nodes during this transition?
Or at a much higher level: Can I deploy new service nodes without downtime?
When Pico server nodes are replaced, the upstreams will automatically reconnect to a new node, and that node will then propagate the new routing information to the other nodes in the cluster.
So if you have a single upstream for an endpoint, there may be a second where it isn't connected while it reconnects, but it will recover quickly (I'm planning to add retries in the future to handle this more gracefully).
Similarly, if a server node fails, the upstream can reconnect.
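The retry behaviour described above can be sketched generically. This is not Pico's actual client code, just a hypothetical backoff loop showing how an upstream agent could ride out a rolling deploy:

```python
import time

def connect_with_backoff(connect, max_attempts=5, base_delay=0.1):
    """Call `connect` until it succeeds, sleeping exponentially longer
    between failures. `connect` is any callable that raises
    ConnectionError while the server nodes are being replaced."""
    for attempt in range(max_attempts):
        try:
            return connect()
        except ConnectionError:
            if attempt == max_attempts - 1:
                raise
            time.sleep(base_delay * 2 ** attempt)  # exponential backoff
```

With something like this in the agent, the "second where it isn't connected" shrinks to however long the load balancer takes to route the reconnect to a healthy node.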
Related -- we also built a simple (but not production-grade) tunneling solution just for devving on our open-source project (multiplayer game server management).
We recently ran into an issue where we need devs to be able to have a public IP with vanilla TCP+TLS sockets to hack on some parts of our software. I tried Ngrok TCP endpoints, but didn't feel comfortable requiring our maintainers to pay for SaaS just to be able to hack around with our software. Cloudflare Tunnels is awesome if you know what you're doing, but it's too complicated to set up.
It works by automating a Terraform plan to (a) set up a remote VM, (b) set up SSH keys, and (c) create a container that uses reverse SSH tunneling to expose a port on the host. We get the benefit of a dedicated IP + any port + no 3rd-party vendors for $2.50/mo in your own cloud. All you need is a Linode access token; it's arguably faster and cheaper than any other reverse-tunneling setup.
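The tunneling step itself boils down to a single ssh invocation. Here it is assembled for subprocess, with placeholder user, host, and ports (not the project's actual code):

```python
import subprocess

# The reverse-tunnel step in isolation: ask the remote VM to listen on
# port 8080 and forward those connections back to local port 3000.
# User, host, and ports are illustrative placeholders.
cmd = [
    "ssh", "-N",                          # no remote command, tunnel only
    "-R", "0.0.0.0:8080:localhost:3000",  # remote 8080 -> local 3000
    "tunnel@vm.example.com",
]
# subprocess.run(cmd)  # not run here: the VM is a placeholder
```

Note that the remote side only binds non-loopback addresses if the VM's sshd has `GatewayPorts` enabled, which is presumably part of what the Terraform plan sets up.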
This is a good candidate for the list. Most solutions don't really differentiate themselves much, but being designed for production environments is certainly unique amongst the open source options.
Pico is a reverse proxy, so the upstream services open outbound-only connections to Pico, and proxy clients send HTTP requests to Pico which are then routed to the upstream services.
So as long as your browser can access Pico, it should work like any other proxy.
Love this! Have been doing something similar with HAProxy + Cloudflare Tunnels, but would love to move off it at some point. Super curious to give it a run soon. Thanks for sharing!
I have been considering Cloudflare a bit, but it's basically a MITM, no? They decrypt your entire traffic then. It's a lot of trust to put in Cloudflare…
You could try IPv6.rs (shameless plug). We provide a routed IPv6 IP and reverse proxy for IPv4. We made it easy to run servers with Cloud Seeder [2], our open source server manager.
What I want is a transparent reverse proxy for both IPv4 and IPv6. Ideally it should work with encrypted SNI and ECH, using a static IP, because this is where the internet is going and anything else is probably a dead end I would like to avoid investing time in today.
Ideally, it has some simple firewall IDS/IPS capabilities (limit destination ports, limit source IPs…).
My threat scenario is: once someone has my home IP, they can cut off my internet very easily; just brute-forcing traffic to my IP will clog my internet access.
The same attack would work via the reverse proxy described above, but there I can diagnose it and turn the proxy off. My self-hosted services will be down, but at least I have internet. If my home IP is known, there isn't much I can do… My ISP doesn't rotate a user's IP very often (think months).
Currently I feel that Cloudflare tunnelling is less bad than the risk described above, but it's far from ideal, hence I'm looking for alternatives.
IPv6.rs doesn't work with ESNI because you'll have to decrypt the encrypted packet to read it. Cloudflare decrypts your traffic so it can read it.
> If my home IP is known,
IPv6.rs hides your home IP. The only exposed IP will be the IPv6 address you receive from IPv6rs. The reverse proxy proxies to your IPv6 address, so your home IP will never be exposed (and technically you could change the IPv6rs IP at ANY time if you wanted to).
If you're interested in giving it a shot I can give you a coupon that discounts significantly!
I'm sorry if it's a trivial question, but why does a "dumb" forwarder have to decrypt the packet? I only need to tunnel/forward it to a static destination IP; no decisions are taken on the basis of the SNI as far as I can tell.
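For what it's worth, a byte-level forwarder to a fixed destination really doesn't need to decrypt anything. A minimal sketch (plain Python sockets, single fixed destination, no SNI inspection, so ECH is irrelevant to it):

```python
import socket
import threading

def pipe(src, dst):
    # Copy bytes one way until EOF. The payload is never parsed or
    # decrypted, so TLS (with or without ECH/encrypted SNI) passes through.
    try:
        while True:
            data = src.recv(4096)
            if not data:
                break
            dst.sendall(data)
    finally:
        try:
            dst.shutdown(socket.SHUT_WR)
        except OSError:
            pass

def serve_forwarder(target_host, target_port):
    # A "dumb" forwarder: every inbound connection is piped to one fixed
    # destination. Routing is by IP/port only; the SNI is never read.
    srv = socket.socket()
    srv.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
    srv.bind(("127.0.0.1", 0))  # 0 = pick a free port
    srv.listen()

    def accept_loop():
        while True:
            try:
                client, _ = srv.accept()
            except OSError:
                return  # listener closed
            upstream = socket.create_connection((target_host, target_port))
            for a, b in ((client, upstream), (upstream, client)):
                threading.Thread(target=pipe, args=(a, b), daemon=True).start()

    threading.Thread(target=accept_loop, daemon=True).start()
    return srv  # srv.getsockname()[1] is the forwarder's port
```

The flip side is that a forwarder like this can't route by hostname; needing to pick a backend per hostname is the only reason SNI-based proxies peek at the ClientHello, and exactly what ECH takes away.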
This is something I've worried about, but I'm not very knowledgeable. Say I have a service that receives traffic only from a trusted network segment and is behind a firewall, but I need to access the service for debugging purposes. Is there a canonical way to do this other than pushing logs out to some accessible location?
Agreed, that's why for production workloads it should be done with hardening and auth. Ngrok does that, as does Cloudflare. The version my company created does that too - https://blog.openziti.io/zrok-frontdoor
Yep, I checked out overlay networks; it's definitely a very cool project. However, it also seems pretty complex to host. I think they're different use cases.
zrok is a similar capability (though it can potentially do a lot more). OpenZiti is definitely a more complex project. In fact, zrok was built on top of OpenZiti.
We did this because Ziti provides a platform for developing secure-by-default, distributed applications more quickly, which is why zrok was built by only one developer across about 18 months and is at almost feature parity with Ngrok (which has been developed by many people for almost 10 years).
- If you're trying to access a customer network (such as for BYOC), exposing a public port in the customer network is likely a no-go (or would require complex networking to set up VPC peering etc.)
- The Pico 'proxy' port doesn't need to be public (and in most cases won't be); e.g. you can expose it only to clients in the same network (which is one of the benefits of self-hosting)
- The Pico 'upstream' port (that upstream services connect to) will usually need to be public, but it can use TLS and has JWT authentication
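As an aside, the JWT part is plain HS256-style token auth. A stdlib-only sketch of minting such a token (the claim names and secret handling here are illustrative, not Pico's actual scheme, which its docs describe):

```python
import base64
import hashlib
import hmac
import json

def b64url(data: bytes) -> bytes:
    return base64.urlsafe_b64encode(data).rstrip(b"=")

def make_jwt(claims: dict, secret: bytes) -> str:
    # Minimal HS256 JWT: header.payload.signature, each base64url-encoded.
    # Enough to show what an upstream could present on the public
    # upstream port; which claims get verified is server-specific.
    header = b64url(json.dumps({"alg": "HS256", "typ": "JWT"}).encode())
    payload = b64url(json.dumps(claims).encode())
    signing_input = header + b"." + payload
    sig = b64url(hmac.new(secret, signing_input, hashlib.sha256).digest())
    return (signing_input + b"." + sig).decode()
```

The upshot: even though the upstream port is public, an attacker without the signing secret can't register endpoints, and TLS keeps the token off the wire in the clear.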
Why on earth would you reuse such a long-established name? Pico has been around for 35 years and many distros include it by default (or symlink `pico` to nano, anyway).
Post-GPLv3, Apple replaced GNU nano (itself a Pico clone) with UW Pico. A step backwards perhaps, but nano is a symlink to pico. I'd steer clear of anything that looks/sounds like 'pico', including 'piko', which doesn't seem to clear anything up.
From what I could see of FRP, it only runs a single server node, so it isn't suitable for production traffic (which needs to be fault tolerant, scale horizontally, support zero-downtime deployments...).
Piko is also designed to be easier to host, so it can be hosted behind an HTTP load balancer. That does mean Piko is currently limited to HTTP only, but that seemed a worthwhile tradeoff to make it easier to host.