I miss having a job that required SSH'ing into a server to get work done. Nowadays everything is abstracted, push your code to the magic cloud and pull in 1000x other stupid little APIs. It's so boring...
And on an international keyboard it’s ~~, because ~ defaults to being a character modifier. If you nest SSH sessions, then you add more ~s. So in your fifth nested SSH session on an international keyboard the escape sequence would be \n~~~~~~~~~~.
Hmm. Is that right? I thought you could type ~~ to send a ~ through to the destination. So, ignoring the international aspect, I was thinking you'd type ~ to escape your first target, ~~ for the second, ~~~~ for the third, and ~~~~~~~~ for the fourth. (Too lazy to test it.)
Perhaps better is to set a different escape char for layers you care about.
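For reference, the knob for that is EscapeChar (or -e on the command line); a quick sketch with a made-up host name, using % as the inner escape character:

ssh -e '%' user@inner-host

Host inner-host
EscapeChar %

Then %. closes the inner session while ~. still closes the outer one.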
More precisely you have to be at the beginning of the line. So right after connecting or immediately after another ~ sequence (which was at the beginning of the line) works too.
Not many people know this: you don't need to launch an SSH session within an SSH session - SSH has built-in support for using one SSH server as a proxy to another SSH server. Useful for <del>hacking servers</del> accessing servers behind a firewall, or using your own server as a proxy to bypass a bottleneck in the network.
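For anyone who hasn't used it, that's the -J flag (ProxyJump, OpenSSH 7.3+); a quick sketch with made-up host names:

ssh -J user@jumphost.example.com user@internal-host

# jumps can even be chained:
ssh -J user@jump1,user@jump2 user@internal-host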
This is a good one. I've used this in the past in order to get onto IPv6-only networks as well. In my case I don't have IPv6 enabled on my home internet (thanks Verizon!) and I had a tiny virtual machine with Vultr, which at their lowest price point aren't offering IPv4 address space any more. Using a jump through another machine with both 4 and 6 address space saved me from having to cough up more money solely for an IPv4 address.
You can also use Tor as an IPv6 proxy in a pure IPv4 network (or as an IPv4 proxy in a pure IPv6 network), since recent versions of Tor can work under pure IPv6, gaining privacy and connectivity simultaneously. The speed is not actually too bad for web browsing, although not ideal for SSH. But it still comes in handy sometimes; I've used it before to clone packages from GitHub on IPv6-only servers.
How does github not have an AAAA record in 2020?? The faster people move to gitlab the better. IPv6-only servers are not at all rare, and they're useful for individuals, since you can save $1/month by dropping a feature (an IPv4 address) that's useless for personal use.
Last time I launched a new website it took less than a month for someone to let us know that we'd forgotten to configure a AAAA and our site was inaccessible for them. And that was at new website traffic volume.
So yea, GitHub definitely knows about AAAA records and has intentionally decided not to have one. The question is: why? They must have a reason. Maybe even a good one. I'm curious.
A broken AAAA record, perhaps. Sometimes the AAAA is invalid or broken without getting noticed by the sysadmin. I've personally reported broken AAAA records before.
There are a bunch of variations on this technique, but these are the most common configs. Super easy transparent bastioning.
You can get really fancy with this stuff, particularly with `ProxyCommand`. We use it to trigger auto-login for our "Single sign-on for SSH" product at smallstep. When you have a `ProxyCommand` configured, instead of opening its own socket, OpenSSH just execs your proxy command and expects stdin & stdout to end up connected to a socket to the remote server. It doesn't care how that happens or what else happens before you get there. So we (ab)use this as a hook to check if you have a valid SSH certificate in your `ssh-agent` and, if you don't, trigger a single sign-on flow. It's nifty.
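To make that concrete, a minimal sketch (host names are placeholders; -W is the built-in "just connect me to host:port" proxy command):

Host target
ProxyCommand ssh -W %h:%p jumphost

Whatever the command wires up to its stdin/stdout - netcat, socat, or something that triggers a single sign-on flow first - OpenSSH just talks SSH over it.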
If you've never read the man pages for `ssh_config` and `sshd_config`, I highly recommend it. It's not that long and there's a lot of good stuff in there.
And in your ~/.ssh/config that's the ProxyJump directive. Adding a proper configuration for the bastion/jump host and for the target host means you can just do "ssh target".
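Something like this, with placeholder names:

Host bastion
HostName bastion.example.com
User myuser

Host target
HostName target.internal.example.com
User myuser
ProxyJump bastion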
In my case, I usually do "ssh target -t tmux -2 att" to attach to my tmux session, then when I detach it will close the SSH connection (and all of my tunnels).
Same -- and I comment them heavily, so when (like today per this thread) I learn more about said advanced options, I'm "forced" to update my "docs". Always be capturing your knowledge! :)
A combination of `RemoteCommand` and `RequestTTY` should do the trick:
Host <host>
...
RequestTTY yes
RemoteCommand tty
The problem is that this will mess up your one-off command invocations of ssh, git ssh connections, etc:
$ ssh <host> ls
Cannot execute command-line and remote command.
However, we can use `Match` blocks to get around that! There are a thousand ways to skin this cat, but one way is to use an environment variable, here `t` for "tmux". Put this at the end of your config:
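A sketch of what that can look like (untested; the host name, the env-var check, and the tmux session name are all assumptions):

# only kicks in when the environment variable t is set, e.g. `t=1 ssh myhost`
Match host myhost exec "env | grep -q ^t="
RequestTTY yes
RemoteCommand tmux -2 new -A -s main

Plain `ssh myhost`, git-over-ssh, scp, etc. are left alone because the Match doesn't fire.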
Nice. `Match exec` is one of my favorite things. It's too bad the command being passed to `ssh` (if there is one) isn't available as a `TOKEN` (as far as I can tell). That would put a bow on everything.
Incidentally, I just filed a ticket because ProxyJump is not a known keyword in VScode’s SSH config autocomplete. It’s such a handy tool. Only landed in OpenSSH 7.3 in 2016 so it’s not that surprising it isn’t well known.
Just in general, look up .ssh/config and configure it.
A lot of software uses ssh to do various things. With the right .ssh/config you can pull a lot of neat tricks: come in through a bastion on a non-standard port to a host machine, into a shell executed in a docker container, alias all of that to "machine", and then anything that can run over SSH can use "machine" with all that fancy configuration. Check your favorite editor; good odds it can edit files through that mess, or give you a remote directory explorer, or whatever.
Word of warning: using jump hosts shifts your mindset towards building an internal network with lax security, the crunchy outside/soft tasty inside security antipattern.
(Yes you don't intend it at the start, but the realities of your later evaluations of where to invest effort security-wise will leave the internal network to rot since you won't think of the scenarios of how it would be compromised)
It's a chicken-and-egg situation, yep. But the world seems to be moving towards zero trust networks, distaste towards complexity and opacity brought on by ambiguous rfc1918 addresses, and wide availability of ipv6.
A good trick to combine with this is ControlSockets: you can have multiple SSH/SCP connections over the same actual SSH connection. And starting a new SSH over the existing connection is much faster than the initial connection, particularly if you are on higher latency (e.g. from Australia at 250-400ms).
Then if you SSH to the same host multiple times it will re-use the connection and it persists for 10 minutes after you disconnect.
If you are using a ProxyJump like the above post, this can speed up the initial ProxyJump connection... or you can just use it with normal SSH (which is what I do), and when I want to open multiple tabs to the same machine it's significantly faster.
I've always wondered what this is useful for. So it's purely for performance?
Why is it faster to establish the connection? Does it re-use the authentication from the existing ControlMaster too, so you skip the handshake? Seems like it could be dangerous if you're not careful. I guess that's why you can configure it to use `ssh-askpass` for confirmation (which, btw, is missing on macOS these days).
The other neat-looking directive that I've never tried is `ProxyUseFdpass`, which tells OpenSSH to expect a file descriptor back from your `ProxyCommand` instead of using stdin/stdout. I'm not sure why it exists, but it feels like it could be a performance optimization. Particularly for `scp`. But I've never actually run into a performance problem using `ProxyCommand` so shrug?
I think it's purely for performance, but it does make a big difference. There's a lot of round trips to set up an SSH connection, and depending on the RTT to the server, that can feel like a real delay if there's another host in the connection. If you use a Proxy host for jumping, and it's got a better RTT to most servers you connect to, reusing the existing connection to it might actually make it feel like it connects faster than a direct connection from your workstation would.
e.g.
$ ping -c 2 foo
PING foo (a.b.c.d) 56(84) bytes of data.
64 bytes from foo (a.b.c.d): icmp_seq=1 ttl=55 time=3.56 ms
64 bytes from foo (a.b.c.d): icmp_seq=2 ttl=55 time=3.62 ms
--- foo ping statistics ---
2 packets transmitted, 2 received, 0% packet loss, time 3ms
rtt min/avg/max/mdev = 3.564/3.594/3.624/0.030 ms
$ time ssh root@foo "echo foo"
foo
real 0m0.650s
user 0m0.062s
sys 0m0.006s
Now if I add the following to .ssh/config:
Host foo
controlmaster auto
controlpath ~/.ssh/ssh-%r@%h:%p
You can see how much faster it gets if I already have a connection to that server open:
$ time ssh root@foo "echo foo"
foo
real 0m0.032s
user 0m0.003s
sys 0m0.004s
This is especially important in some cases where you have local shell wrappers around commands that execute on a remote server with SSH.
I do this for my email, for example, where the actual programs for dealing with my email sits on a server, but on each host I have shell scripts with the same name as the original program, except all they do is SSH to the server and run the actual program there. When navigating a GUI that runs these shell wrappers, you get a lot of opening and closing of SSH sessions very quickly.
It's not even establishing the connection: it just goes through the existing ControlMaster. This gets me about 0.5 sec for a fresh connection, and just-about-instant response for a multiplexed one.
It's not just for performance - it "reuses authentication" by not needing it. The authentication is an encapsulation, within which multiple channels get used. Usually, it's just the main "shell" channel; but occasionally it's also forwarding (-L / -R / -D), sftp; what ControlMaster does is let you open more "shell" channels when you run the "ssh" command again, instead of establishing a whole new connection.
> The other neat-looking directive that I've never tried is `ProxyUseFdpass`,
It's so you can use whatever weird communication hardware/protocol you like (serial? non-tcp satellite modem?), and all you have to do to get all of ssh's wonderful features over it is provide a file descriptor that works well enough (serial port; pipe you create; etc.)
Also for tab completion of directories to rsync on the remote. Otherwise you’d have to type your password on every <tab>. You can use ssh keys and agents to achieve the same, but it’s much slower, and harder to set up IMO.
Another lesser known tunneling trick is that SSH will happily act as a SOCKS5 Proxy. I've been using this trick for going on 20 years now.
Just do:
ssh -D9090 user@remote
Then, in Firefox, set it to use a SOCKS5 proxy of localhost:9090 and "Proxy DNS when using SOCKS v5".
Now, when you use Firefox it is as if you are using Firefox on the machine you are SSH'd into (including DNS resolution!). This is really handy for things like accessing otherwise unreachable resources or other internal resources externally. I use it for accessing iDRAC/IPMI/ESXi (you can also tell Java to use the proxy so VMRC works as well). It is also handy to have all your web traffic originate from a remote VPS with no advanced setup required.
My closest coffee shop would allow people to access Wi-Fi only if you gave them full access to your Facebook account. DNS was the only port open to the outside world.
I've seen this on Ubiquiti hardware as an option too. Apparently it requires you to "check in" via facebook to use it, whatever that means exactly. There is also an option to log in via facebook without this though.
How about no. Or hell no. If I see a request like that it is an immediate disconnect. Might as well have a requirement that they do a full anal cavity sweep before they can sell you a cup of coffee.
My guess is that it stems from lax firewall defaults. "Allow port 53 - [tcp/udp/BOTH]?" (Yeah, I know that DNS can also work over 53/tcp, but it's rare compared to the 53/udp volume)
It's not as rare as it used to be a couple decades ago. If you block tcp/53 you will find a surprising number of things breaking as record sizes have increased over the years.
RFC7766 "Recursive server (or forwarder) implementations MUST support TCP so that they do not prevent large responses from a TCP-capable server from reaching its TCP-capable clients."
I think that is fine, actually. However, if I were to implement such a thing, I would probably redirect DNS traffic to my own DNS server as long as you are not authenticated :)
You have to create a local "proxy.pac" file with the following contents:
function FindProxyForURL(URL,host) {
host = host.toLowerCase();
if (shExpMatch(host,"*.my-company.com")) {
return "SOCKS5 127.0.0.1:9090";
}
return "DIRECT";
}
Then set "file:///path/to/proxy.pac" as auto-config URL in the Firefox-proxy-settings.
Don't forget to enable DNS-Requests over SOCKS5.
For services, which you can't proxy with SOCKS5, you can use LocalForward in your ssh_config:
Match host your-workstation !exec "nc -vz xmpp.my-company.com 5222 &>/dev/null || { echo 'xmpp.my-company.com not reachable, using LocalForward' 1>&2 && exit 1 ; }"
LocalForward 127.0xcafe1:5222 xmpp.my-company.com:5222
Then add the following line to your /etc/hosts:
127.12.175.225 xmpp.my-company.com
I have been doing the same thing for a while. It allows me to access all the services we have set up on the intranet in the office. With Tmux I move around from machine to machine and keep all the work stuff in the office.
The office is also a large university. Therefore, I have access to many additional library and other subscription services.
It’s great as a lot of the office struggled to get a work setup in place. I’m happily the only one who didn’t need Gotomypc. It also helped me save time during this transition by not having to answer my less technical coworkers' questions about getting Gotomypc set up.
Whoa! Now that's mighty useful, thanks! (Note that this works even without the ProxyJump option, but requires a sane netcat on the jumphost; older busybox builds only shipped a limited netcat)
Host my-jump-host
Hostname my-jump-host.mydomain.example
User myuser
Host my-jump-host-*
User myuser
ProxyCommand ssh my-jump-host nc $(echo %h | sed 's/^my-jump-host-//') %p
This way, autocomplete in bash would work. Typing "ssh m<tab>", assuming m* doesn't match anything else, would suggest "ssh my-jump-host" (which I often ssh directly to anyway). Then just add "-myotherhost" so the whole sequence becomes "ssh m<tab><bs>-myotherhost". Not a huge saving but I'm lazy and like autocomplete in the shell.
Let's say all machines are named in the form "vm12345", but the jump server itself is also named in the same form. How would you go about including all the VMs in a wildcard, but excluding the jump server itself?
Thank you; this alone made clicking on this link worth it. I could have used this forever ago when I was trying to figure out how to do this to bypass a firewall (now I don't need to but this will be useful in the future).
I was tired enough of losing connections to work systems I was working on when network topology changes, or my laptop was moved, or it went to sleep, or I moved to a new computer (e.g. I'm at home) that I wrote a simple script to jump all my ssh connections through a VM at work, but with the extra step that the connection from the jump VM happens in a tmux that's named based on the desired host, and with options to reconnect to an existing session if it exists.
With the script named "go", Here's what that allows in practice:
go foo.bar - Connects to host foo.bar
go foo.bar - Second connection to host foo.bar that uses same session, so keystrokes show in both, even if they originate from separate locations, like home and work.
go foo.bar 2 - Additional param is added to session name, so you get a new connection to foo.bar.
go -list - Lists all connection sessions, and only connection sessions, because there's a special prefix to distinguish them from normal tmux sessions that might exist on the VM.
go -restore - Spawn a new terminal for all open connections. Useful for getting all terminals back after the network drops, or you reboot, or you're on your home computer instead of work, etc.
Currently this is implemented in a batch script on windows with some ugly hacks to make it work well with what PuTTY's command line options support (commands for the remote host need to be in a file you specify...), and it's pretty ugly, but I'll share it if anyone is interested. It would be much easier in bash with openssh (it's even possible OpenSSH supports enough features to do this in the ssh config).
I do have my scrollback buffer. The tmux session is on the VM, and within that session is another ssh connection to the target system. If the target system and the VM are disconnected (very unlikely without either the target or VM restarting) then sure, I might lose my scrollback (since tmux is execed with the SSH command, when it exits the session will end), but in the much more common scenario that my side loses connectivity to the VM (or I change locations), the VM still has an active connection going on in a tmux session that I'm joining.
ET looks great for a lot of things, but not necessarily this environment, which is a few hundred systems administered by multiple people, with extremely high stability and security requirements. Honestly, all the extra stuff ET and MOSH do is to give you that extra 1-2% of features to make it seamless, but at the expense of separate protocols and new software, so you don't have to expend new hardware (or in this case, virtualized hardware).
Connectivity problems almost always come from the last mile, whether that's you moving to make the last mile somewhere else or your wifi or home connection having a problem. A VM at Digital Ocean, or in my case the highly redundant and available VMware cluster at work, is much less likely to have any sort of problems, as are the servers that are generally being connected to (and if those ARE having problems, you can't rely on sessions to them being kept anyway).
For 99% of the cases, you can get by easily by just SSHing to a highly available VM, starting a tmux session for the desired connection, and within that session SSHing to the desired system. Jumping through other systems with SSH is so common that OpenSSH has features built in to support it, even transparently (where your config can just make it automatic for a class of systems). In fact, I bet there's a way to get the OpenSSH Proxying SSH server to keep the session open to reconnect to from the client if it's only the client side that had a problem, so it doesn't even require the little script I have. It's actually on my todo list to figure out the Windows-included OpenSSH agent stuff and see how well the new Windows Terminal works as an SSH terminal, but I haven't gotten around to it (or just use the WSL stuff, but I haven't seen much need for it yet, I'm happy to do most of my dev work in vim on a dev server).
Because convincing my org to install mosh-server on a few hundred systems is a non-starter. What I have is a small shim that sits on top of SSH that provides the interesting bits I need with no additional requirements per server. It works for any server running SSH, because it is SSH, so it already works with our key distribution system and expected way of managing servers. Not only that, SSH is well known and extremely well vetted security wise. Mosh has some nice features, but it doesn't really fit the criteria in this situation.
One that has come in handy a few times: When a machine is so starved for resources that it can't even allocate a pts for you, but you want to run some forensics, use `-T`:
$ ssh -T user@host <command>
Even if you're plumb out of file descriptors for example, you can run...
$ ssh -T user@host lsof
...or whatever, and get your command output dumped to the screen, even if you don't get the niceties of a terminal.
It would be nice if you could include at least one sentence about these product names to give an idea of what they do. For other readers:
sshuttle:
> Transparent proxy server that works as a poor man's VPN. Forwards over ssh. Doesn't require admin. Works with Linux and MacOS. Supports DNS tunneling.
> Forward all traffic:
> sshuttle -r username@sshserver 0.0.0.0/0
byobu:
> Byobu is an enhancement for the GNU Screen terminal multiplexer or tmux [...] that can be used to provide on-screen notification or status, and tabbed multi-window management. It is intended to improve terminal sessions when users connect to remote servers.
We had a sys. admin who did exactly this to access his home computer to play World of Tanks. :)
He still works there in a Government Agency riddled with staff who are perfectly adept at doing enough to stay hired and doing little enough to describe their job as a paid hobby.
I worked for an organisation that decided to block outbound SSH for reasons that weren't adequately explained; exceptions were painful to get re-applied, so most people just cranked up corkscrew and did precisely this.
Only challenge is that getting corkscrew compiled on Windows is a massive pain.
The core APIs of Windows are so stable that if someone got it working once on Windows NT, the executable should work for everyone on Windows 2000, Windows XP, Windows Vista, Windows 7, 8 and 10, ...
Yup - one of my smarter colleagues had a go at getting it compiled on Windows, and basically gave it up as the library dependencies were such a mess on that platform.
No one seems to have packaged up binaries for it either.
Well, if IT is running a tight ship then only corporate owned/managed machines can access the VPN and not a home domain. Those machines might be just as hackable, but they should at least have some logging in place that allow detection or postmortem analysis.
That's likely to be a firing offence, no? If I were running things I wouldn't want employees deliberately subverting my network's security measures in the name of their own convenience.
If you have to spend time wrestling the VPN while you're on the clock, that's their own time being wasted.
That's tricky, though, since there are many uses for SSH which are not circumventing security policy — blocking outbound SSH would also mean you couldn't use Git, manage servers in the cloud or other locations, transfer files, etc.
Using this to circumvent policy is exactly the kind of move which would lead to those other uses being banned and making life worse for all of your coworkers.
> blocking outbound SSH would also mean you couldn't use Git, manage servers in the cloud or other locations, transfer files, etc.
That's right, and that's how it is with my current employer.
If you need outbound SSH to work with Git, that probably means you're working on a side project, not work. Fetching public code needed for work from a hosting site can be done over https.
Managing servers in the cloud, ditto. If managing servers in the cloud isn't part of your job description, why would your workplace enable that?
If you think that's poking holes in the firewall, you should see this javascript stuff that worms itself back over most of your HTTPS connections from countless third-party sites.
Maybe not a firing offense, but if you ran 'ssh -R' at $JOB-1, you'd pretty much immediately have someone from the security ops team messaging you to ask what you're up to.
> When debugging a complex issue on your servers, you might like to share an SSH session with someone who is not in the same room. tmux is perfect for this for terminal sharing! Here are the steps:
> [..]
> Both of you will need to SSH to the machine using the same account.
If you want, it is very easy to do view-only tmux sessions with no third-party tools required. That is, you start your tmux specifying a socket (tmux -S ...), and then have a dedicated ssh user which references it (tmux -S ... attach -r, where -r is for read-only) as sshd's ForceCommand (a 10-liner https://gist.github.com/madars/e6b957ea508be1dcd9044fd2c7096...)
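If the gist link dies, the manual version is roughly this (paths and session name made up; very new tmux versions may also want a `tmux server-access` grant for the second account):

# you, on the shared socket
tmux -S /tmp/debug.sock new -s debug
chmod 660 /tmp/debug.sock   # or chgrp it so the other account can open it

# the other account (or sshd's ForceCommand for it), attaching read-only
tmux -S /tmp/debug.sock attach -r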
Smallstep has a product[1] that's a lot like gravitational teleport. That's how we got deep enough into SSH to write this post. Teleport isn't bad. The two biggest differentiators are probably:
- Smallstep SSH is hosted (with HSM-backed private keys)
- Smallstep SSH does user & group syncing from your identity provider (i.e., you don't need to adduser / deluser folks anymore) so you don't need to do any user or key management on servers at all
We're also doing everything using standard OpenSSH, whereas teleport replaces your SSH client & server (or at least it used to, skimming their docs it looks like that might be changing). Authentication is via OAuth OIDC (single sign-on), user & group sync is via SCIM, plus PAM & NSS on the host side. So it's all pretty standard stuff.
Finally, Smallstep SSH is built on our open source toolchain, step[2] and step-ca[3]. Actually, if you want something completely free that does all of this you can just use those tools and do something like gravitational yourself. We have a blog post[4] explaining how.
This product is only a couple weeks old, so feedback is very welcome!
All the terminal software with mouse support I've found so far just messes with things; I wasn't aware that people actually use this (e.g. in vim, enabling mouse support messes with the yank buffer and doesn't let me select things for clipboard copying anymore, so I immediately turn it off in setups where some overzealous maintainer default-enabled it). The amount of software with support is very low anyway, so if one really wants to use a mouse, remote desktop might serve one better.
The only reason I don't use mosh by default is that ssh preserves CMD+up/CMD+down to jump to the previous/next prompt, but mosh breaks the behavior. If I could keep this, I'd be sold.
If you work with a large tree where a few files at a time change, you may want to look at lsyncd - backed by inotify and rsync, syncs the local changes to remote. Not really suited for interactive edits, but if you find yourself running rsync in a loop, this is a better replacement.
* "-R" for reverse forwarding on a specific address.
That is, you can connect from the remote host to your local network. I used to do this to make my system at home accessible on my workstation in the office :) If you don't specify the address to forward to, you will get a SOCKS proxy on the port specified that tunnels connections to your local network.
It is also possible to forward not just local ports, but ports on other machines accessible to the side that is doing the forwarding.
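A few sketches of that (hosts and ports made up):

# make your home box's sshd reachable as port 2222 on the office workstation
ssh -R 2222:localhost:22 user@workstation

# forward a port that actually lives on another machine on your home LAN
ssh -R 8080:printer.lan:80 user@workstation

# reverse dynamic forwarding: port only, gives a SOCKS proxy on the remote
# side that exits through your local network (needs a reasonably recent OpenSSH)
ssh -R 1080 user@workstation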
There's no supposedly for me using compression. I just checked - it consistently takes 3x longer to launch an xterm without compression than with it (45s vs 15s!).
And while we're at it, if you need remote X, consider Xpra instead of ssh -X - it's way faster, can survive connection drops, and is far more flexible.
The best metaphor is "tmux for X programs": the X server runs on the same host as the X programs, xpra then "forwards" the windows over a configurable transport (e.g. via SSH) to the client. X was IIRC intended for local networks, so forwarding it over the Internet is sloooow (both by verbose protocol and naive latency handling) - this takes care of both issues.
IIRC it's written in Python, so it runs in most places: https://xpra.org/
Not a bad tip, but using gzip compression over the wire seems pretty stone-age. The proper solution is surely to use a modern lossy video-compression algorithm. Is that possible with X?
It's not something I know a lot about. Is this where VNC steps in?
I don't know much about it, but I'm fairly sure that compressing the Xorg data stream with lossy compression is going to mess it up completely and would require a complete overhaul of the protocol to make that work. VNC is indeed the more standard unix thing (insofar as remote GUIs can be considered standard on unix-likes) that applies lossy compression to the pictures being sent over.
Host example
HostName example.url.com
User my_name
ForwardX11Trusted yes
I keep global settings behind the glob, and more specific settings for all the hosts I use.
No need to use aliases in your .bashrc or wherever. With this setup, typing 'ssh example' is equivalent to 'ssh -XCY -c aes128-ctr my_name@example.url.com' which definitely saves some keystrokes.
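The glob block referred to above would look something like this (a sketch; these are just the long-hand spellings of the -X, -C and -c flags from the example, with -Y coming from the ForwardX11Trusted line in the host entry):

Host *
ForwardX11 yes
Compression yes
Ciphers aes128-ctr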
Host x
Hostname full.host.name.com (or 1.2.3.4)
User <myuser>
IdentitiesOnly yes
IdentityFile ~/.ssh/id_x_ed25519
I give hosts short names so you can `ssh x`
to do automatic login, I generate identities for some machines
ssh-keygen -t ed25519 -f ~/.ssh/id_x_ed25519
use ssh-copy-id to copy the identity to the target machine so it lets you in:
ssh-copy-id -i ~/.ssh/id_x_ed25519.pub x
or if your machine doesn't have ssh-copy-id (older macs);
cat ~/.ssh/id_x_ed25519.pub | ssh x "cat >> .ssh/authorized_keys"
IdentitiesOnly means it will only send that one identity for that one machine (otherwise it will try all of them, like a janitor trying to open a locker with a big keychain of identical keys)
If you always want to use a password to log into a machine, but want to be able to log in in other windows to the same machine without a password:
Host x
...
ControlMaster auto
ControlPath ~/.ssh/master-%r@%h:%p
this will multiplex all activity to that host through one tcp connection
you can also use Host * at the beginning of your config to do this for all hosts
to tunnel VNC over ssh to a remote Mac (I do this with a Mac):
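(The command itself seems to have gotten lost; the usual shape of it, assuming Screen Sharing on the default VNC port 5900:)

ssh -L 5901:localhost:5900 x
# then point a VNC client (or Finder's "Connect to Server", vnc://localhost:5901) at localhost:5901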
> Specifies the number of bits in the key to create. For RSA keys, the minimum size is 1024 bits and the default is 2048 bits. Generally, 2048 bits is considered sufficient. DSA keys must be exactly 1024 bits as specified by FIPS 186-2. For ECDSA keys, the -b flag determines the key length by selecting from one of three elliptic curve sizes: 256, 384 or 521 bits. Attempting to use bit lengths other than these three values for ECDSA keys will fail. Ed25519 keys have a fixed length and the -b flag will be ignored.
I use a config file for all my regular SSH connections as well. But I stopped using more than one key when I could not think of a threat model that it addressed.
If the bad guys get a copy of my public key off one remote server, they can't use it to access any other remote server. They would need the private key, which is on my laptop.
And if the bad guys get my laptop, then they get all my keys; having separate private keys for separate servers doesn't help if they're all sitting next to each other on one device.
If a bad guy gets root on a server you're currently connected to via ssh, and to which you're forwarding your agent, they can also access (unless otherwise firewalled) all systems that ssh key grants them access to.
Using a private key per system or set of systems means that such an attacker would only gain access to what that key grants them access to.
My "github" key only grants access to my github repos, not to my bitbucket repos or to my DigitalOcean servers, or my Scaleway servers.
I have an SSH key for each "thing", and each key is kept in a different keyring via https://github.com/ccontavalli/ssh-ident, _precisely_ to avoid that problem.
As an example, an attacker gaining root on my Oracle Cloud free server will not be able to clone all my Github repos with it: I only use that key to connect to that server, and while I may have the github key unlocked at the same time - that's on a different keyring which the server has no visibility of.
I think the proper solution would be 1) don't use forwarding unless necessary 2) setup touch-to-use on a Yubikey so even with your scenario the attacker couldn't just use the key (it requires touch to be used).
I have a couple of (admittedly, conceptual-only) problems with this approach that I'd love to hear folks' thoughts about, with regard to "touch-to-use".
1. I still have a need to "for s in $list_of_servers; do ssh -A $s '..'; done"
How many times do I have to push the button? Is there a setting that allows that "push to allow" for a short while? Great, if so:
2) when I ssh into a system and touch the button to allow it, and the push allowed the use of the key for a short while, am I not in the same problem as before - as the system may be compromised and may be performing other operations I'm unaware of?
A solution for this would be to have multiple hardware keys for things I manage: one for $work, one for $personal, another for $github, etc. etc. but then managing them and - especially!! - their SSH agent which may be in memory but relies on the hardware being present, with all issues _that_ entails.... becomes a frigging mess.
So I'm torn... between the simplicity of a hardware key with push-to-allow... and actually being able to _use_ it _securely_.
> Is there a setting that allows that "push to allow" for a short while?
Yes there is. In a newer firmware.
> 2) when I ssh into a system and touch the button to allow it, and the push allowed the use of the key for a short while, am I not in the same problem as before - as the system may be compromised and may be performing other operations I'm unaware of?
Yes but the window of opportunity for the attacker is smaller. Also: it's your setup that requires multiple key operations within short time interval, for some people touching every time is sufficient and most secure.
Seems the ideal solution would be to trigger push-to-allow for signing requests that come in via agent forwarding, but not for local requests. I’ve been thinking about a reliable & secure way to do this. A modified OpenSSH client could enable this pretty easily by simply indicating to the agent where the request originated. But that’d require changes to both OpenSSH and to the agent protocol.
The SSH-agent forwarding model is the problem, I think; Every couple of years someone finds another semi-common scenario where it becomes vulnerable.
I have, since 2010, practiced "jumps" only - that is, if I need to connect to a non-routeable host through a routeable one ("bastion host" or "jump host"), then the intermediate host is only used as a transport, by way of -L/-R forwarding of the final destination's ssh port, or by netcat, or (since a couple of years ago) SSH's internal ProxyJump or ProxyCommand; compared to agent forwarding, this is less efficient CPU-wise and traffic-wise, but not by much; and it is as safe as connecting to the final destination directly, unlike agent forwarding.
That works for bastions, but not other use cases. One common one is pushing & pulling from git on a remote dev box. Or if you want to SCP something between remote machines without having to pull it down locally. Agent forwarding is the best way to do this (outside of possibly generating an ephemeral key pair and copying public keys around for a single use then destroying them... but ain’t nobody gonna do that).
In general I'd say SSHing from a remote box to another remote box is a non-issue since you can always use some sort of tunneling/bastioning to make that work.
It's when you want to use some other tool that tunnels over SSH -- scp, git, sftp, rsync, etc. -- that you run into trouble.
Consider a remote development instance. So I'm an engineer, and instead of developing on my local I do development on an EC2 instance in AWS (there are a variety of reasons this may be preferential or even required). If I'm pushing & pulling code from GitHub to/from this development instance I'm gonna need to either have a private key on that development instance or have agent forwarding back to my local.
Now suppose I want to copy a large database snapshot between two instances on EC2. There are obvious benefits to being able to transfer directly between these two instances, on the same private network, versus bringing the data to your local (where you might not even have sufficient disk space) and pushing it back up.
Not sure what other use cases exist, but these two alone I think justify the existence of agent forwarding.
SSH agent forwarding does have subtle security issues. But so does putting a private key on a remote server. If you're the only one who has access to the remote server, neither option is problematic. If someone else has root, and you don't trust that person, then both are problematic (and which is worse is debatable).
Thanks. When faced with similar issues, I have always set up an ad-hoc channel (e.g. created a local key, ssh-copy-id it, use it, and remove both the keyfile and the authorized_keys line afterwards). It's more work, but gives exactly the minimum required trust and no more. I've never even considered using agent forwarding for this case.
That said, I don't have to do that often; If I did, I'd probably look for a simpler way (or ignore my paranoia and use ssh agent forwarding...)
I love the multiplexing feature. We have a client who requires a password, ssh key and MFA. All services are behind a bastion host, which only accepts traffic from select IPs. SSH multiplexing and proxy configuration allow me to enter the password and TOTP just once instead of every time I need to access a service behind the bastion host.
Of course if the multiplex stalls or goes down so do all of the connections you have running through it. It's like screwing up Gnome Terminal or similar that uses a main server and multiple clients. Screw one pooch and your whole session of things is similarly screwed.
Another annoying "feature" is that opening a new shell with ssh -X will not enable the forwarding if the original master connection was not enabled with it.
Indeed; however, if you have a lingering control file (because an old ssh process was killed, or there was a power failure and it was somehow not removed), it will either refuse to use it, or (occasionally, and I haven't been able to pinpoint when), would just wait there forever.
I have these shell aliases configured, to check and delete the master connections. (Often I end up with stale connections when I have a VPN up, and then just sleep my laptop.)
alias ssh-MasterConnection-check="ssh -O check "
alias ssh-MasterConnection-exit="ssh -O exit "
+ then you use them as 'ssh-MasterConnection-check host' etc.
Yes, I agree. What is the usage for tmux? Multiple tabs on a quake-style dropdown terminal on a desktop (like guake) do seem much easier than tmux, especially when gnu-screen is always available when you need to run an uninterruptible command.
From what I gather, tmux is useful if you are basically running a persistent remote workspace via terminal on a server and treat your desktop like a thin-client, is that correct?
That multiplex feature is really cool. Is there a way to make it work for pem files? It'll be super useful to specify "-i pem" once and use that for all subsequent ssh/scp. Same goes for the username.
The most gain I've had in recent years was having vscode remote. Instead of fiddling with terminal editors forever, just have a decent ssh config with all your hosts, and connect instantly.
Still looking for a way to make sftp/scp work fast.
I've tried that for a while but our servers / VMs are so slow (I/O is a bottleneck I suspect) that that was unworkable.
At the moment I use intellij which has a "sync with remote" function; I can run the things local and push them to the remote testing environment whenever I feel like it.
> Still looking for a way to make sftp/scp work fast.
Making it fast while still using it, not sure. But I can share an alternative, since a friend had the same issue and this was literally ten times faster for a lot of small files (in the order of 30 minutes instead of 5 hours):
cd path/to/target/location
ssh user@target 'tar c /tmp/example' | tar x
Quick guide to tar, since it's super simple:
c for create
x for extract
f for file (since we send it to stdout / read from stdin, I don't use the f option)
v for verbosity (not sure if that works on the remote side)
z for zlib compression (ssh can do compression already, so also unused here)
t for testing (reading) an archive without extracting it ("tar tv <your.tar" will show you the contents, it's almost like real TV!)
That's all I've ever needed.
So what this will do is run "tar create <directory>" on the remote system and, tar being a classic tool, it'll just output that binary archive data to stdout since you didn't specify a file ("tar cf your.tar <directory>"). On the receiving side, you pipe it to another tar command that reads from stdin (since, again, no file was specified) and extracts the files from the incoming data stream.
IMO among the most useful ssh config settings is the Include directive, which supports wildcards. Hence, the following is the entirety of my ~/.ssh/config:
Include config.d/*
Include hosts.d/*
config.d is basically one 'Host *' file, but hosts.d lets me keep the random host/device settings for work and personal use separated.
2FA for SSH sounds great but I can only imagine how cumbersome it must be to use on a daily basis. I've recently started permanently locking my SSH ports and only briefly whitelisting them for my IP with a bash script whenever access is needed https://pawelurbanek.com/ec2-ssh-dynamic-access
Check out our single sign-on for SSH stuff at smallstep (where I work). Either in open source[1] or our product[2]. The hassle of 2FAing all the time is one of the big reasons I love single sign-on for SSH. Basically, you do 2FA when you're pushed through single sign-on. But then you're issued a short-lived certificate that gets put in your ssh-agent. You only need to 2FA to get a certificate. So you can tune 2FA challenge frequency based on certificate lifetime.
It's sort of like a browser session where you "login" and then you can browse until your cookie expires. Here you "login" and you can SSH until your certificate expires. So you have strong authentication, but you're only asked to do it periodically.
Are the numbers for the ServerAliveCountMax and ServerAliveInterval accidentally swapped? Wouldn't it make much more sense to check every second, and fail if five consecutive checks failed, rather than check only every five seconds, and then fail immediately if one check is dropped due to very transient network issues?
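i.e. the arrangement being suggested (not the article's values, just what this comment proposes) would be something like:

Host *
ServerAliveInterval 1
ServerAliveCountMax 5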
At home, install nss-mdns on Linux. It uses avahi for mDNS name resolution on your LAN. You can then forgo /etc/hosts and DHCP reservations between your machines at home. It is compatible with Macs. I don't know what you can use on Windows for the purpose.
Then to extend this a notch or two install Zerotier on all your hosts. Now you have virtual LAN between all your machines even outside of your home. It is P2P and does hole punching and whatever you need to work. You can connect to your computer via tethering from a puny little laptop while sitting on a bench outside your house. If you are a home admin for your family you can add those computers to your virtual LAN. Or your friends can also join. Then you can easily share photos or whatever straight with Samba or even an intranet of sorts.
Strongly seconded! (Tossing some further dots to connect into the mix: Mosh and tmux and iTerm2's tmux integration fit very well in too. I think Visual Studio's Live Share should also work faster over ZeroTier - it should serve as a direct connection. I'm also really keen to try Emacs multiuser editing on a remote terminal over mosh and tmux.)
And: I don't know if it's the placebo effect, but it seems to me that connections over ZeroTier are noticeably more responsive. Like slightly but noticeably.
Connections on Zerotier itself often survive roaming. I was amazed when I went from tethering outside to home Wi-Fi and ssh session was still responsive. Although it can freeze and definitely can timeout. I'm yet to try mosh.
I did not hear about Live Share, but was planning on using Visual Studio Code Remote Development. For my current work ssh -CX is often enough for me. I also intend to use Xpra, as I found X2Go a bit more rough around the edges.
It could be interesting to measure the effect. Probably they did some testing already, but probably as you said - placebo ;)
ssh not having support for a --password argument is a big drawback. It usually requires some weird workarounds, especially when copying ssh keys to the machine is not an option.
It would put your password into your `~/.bash_history` and show up on the screen though; using a commandline argument to pass passwords is inherently unsafe.
Sure you could have a secure system that only you use, but the people behind ssh cannot make that assumption. Removing the risk entirely is better than trusting the users.
I'm sure there's a shell trick you could use to pass a password on the prompt anyway. For all other use cases, copy SSH keys securely.
> It would put your password into your `~/.bash_history`
I'm fine typing (or pasting) my password in an interactive prompt when I'm interactively using it; that's not the problem. What I would like a --password option for is when I'm not interactively using it, like from a script. It'll still show up in the process list (ssh could overwrite it but there are some µs where it's there) but my laptop is single-user so that's no big deal.
> I'm sure there's a shell trick you could use to pass a password on the prompt anyway.
There is software that does it, but it's a real pain to find a short command that does it. Simply echo password | ssh user@host does not work, the openssh authors disabled that on purpose.
Ssh keys are, of course, the solution whenever possible, but that's not always possible. I'll be the first to admit that the legitimate uses for --password are rare, but they're definitely there and having to install extra software to make that crap work is just a real pain. I'd rather be able to shoot myself in the foot with unix tools.
It's quite likely that what you want in this case is to use SSH forced commands with sshkey auth and a remote account exclusively dedicated to serving this one request.
But that would be server-side, if I'm reading it right? I'm trying to remember the last time I needed password auth and wanted to do it in an automated fashion (like I said, it is rare), I think it was a router where the filesystem was read-only (only /var and /tmp writable, or something like that, so can't set authorized keys).
That's annoying. Many such cases have an overlay filesystem or other mechanism for preserving specific settings (ssh configs almost always included) or reflashing the image with desired config changes.
Forced commands are implemented on both sides of the session, as the previously linked reference ... doesn't entirely make clear. Locally you need to create, and generally configure, a specific key with the remote user@host. Remotely, you associate that key with a specific command, in an authorized_keys file.
It is baffling to me as well. I had to write some automation for network devices that only accepted password authentication over ssh, and ended up just doing it in Go instead of using the OpenSSH binary. It worked extremely well. (I think you can make OpenSSH read the password from another fd, so you don't have to write a client just to supply a password from a script. But ultimately I found parsing the output of SSH to handle errors to be too painful; errors being out-of-band makes everything better.)
putting --password as an argument would make your password viewable with a simple "ps" command, would add your password to your history by default and a host of other issues.
I think sshpass might be a workaround, but sometimes you can just dig a little deeper into your situation and find if you can add authorized keys somehow.
I recall using a router with ssh access, but it didn't run true ssh with a .ssh/authorized_keys, but there turned out to be a way to add keys to the config for automated access.
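For completeness, the sshpass workaround mentioned above looks like this (assuming sshpass is installed; -f is a little less bad than -p since the password stays out of ps and shell history):

# password on the command line (visible in ps / history - avoid if you can)
sshpass -p 'secret' ssh admin@192.0.2.1

# password read from a file instead
sshpass -f ~/.router-password ssh admin@192.0.2.1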
Is your SSH session hung, and you can't even ctrl-C out? Typing Enter ~ . (hit Enter to get to a fresh line, then tilde, then period) will immediately kill the session and drop you back to your local shell.
Certificates are missing from this list; with them you never have to distribute individual public keys. And you can sign certificates with an API. It is also nice for giving someone temporary access, because certificates are only valid within a time interval.
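For anyone who hasn't tried them: signing is a one-liner with plain ssh-keygen, and servers only need to trust the CA (the names and validity here are just an example):

# sign alice's key with the user CA, principal "alice", valid 8 hours
ssh-keygen -s user_ca -I alice@laptop -n alice -V +8h id_ed25519.pub

# on the servers, in sshd_config:
TrustedUserCAKeys /etc/ssh/user_ca.pub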
Yea we like certificates at smallstep. We’ve got a couple[1] other[2] posts[3] that cover them pretty well. Should have probably made a more prominent mention though :).
I often ssh and then open vim on the server. But it would make more sense to me if vim had support for opening remote files via its own ssh connection instead. Does this exist (for vim or some other editors)? Then I could always use my local config and it would be easier to type on a bad connection.
I don't know about vim, but in emacs tramp mode allows this. However, you could just mount the remote filesystem via sshfs, which makes it work everywhere (it's slow though).
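The sshfs route, for reference (the mountpoint and host are made up):

mkdir -p ~/mnt/devbox
sshfs user@devbox:/home/user ~/mnt/devbox
# edit with your local vim and config, then
fusermount -u ~/mnt/devbox    # (umount ~/mnt/devbox on macOS)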
emacs is nice too in that if you are using tramp mode (/ssh:blah) to open a remote file you can create a remote eshell too. You can do diffs and merges between remote machines also. In the pre container/k8s days I worked at a place that did horizontal load balancing between 20ish remote VMs. The ability to have many eshells tiled on screen was so helpful. I also had some elisp that took advantage of tramp mode to very crudely do VM orchestration across the 20. I will shut up now because I don't want to be that emacs guy.
This reminds me of the ugliest hack I have ever written.
For context: passwords rotate every 90 days, there are different passwords for client facing and "internal" servers, (and different passwords for linux machines vs Windows machines).
All connections to client facing servers (which is my job) must go via:
1) a VPN
2) a "local" jumphost (both ssh/rdp), only accessible via VPN
3) a "remote" jumphost (also, both ssh/rdp), only accessible via the "local" jumphost".
Additionally; The majority of my servers are Windows based.
So, what do you do when everything goes wrong? Well, you VPN with your "normal" password, and your 2FA RSA token. Then you rdp (or SSH) to the local jumphost with the same password as the vpn, then you rdp (or, ssh) to the remote jumphost with a different password, then you finally RDP into the machine that is interesting.
So, being the lazy git that I am, I wrote a program that scrapes my passwords from 1password, and ssh's into those jumphosts creating a tunnel all the way through. Then I call freerdp on localhost.
For this to work I had to do a bunch of ugly things like:
1) figure out the dimensions of my display and scale everything, because freerdp doesn't do this automatically.
2) call python from bash because getting a unique random socket requires binding to "port 0" which is not something I think is possible inside of bash.
3) do the same on each hop.
4) determine which password is needed based on the "domain" of the machine
5) detect if the machine is actually accessible or not (IE; are you on the VPN? is the machine actually "local"?)
Anyway, I should share the code, we can all revel in its ugliness.
If you like ssh-import-id to pull keys from GitHub, you’ll love AuthorizedKeysCommand to pull keys from GitHub.
Depending on use case, though, this can be a bit sketch. At smallstep we like SSH certificates, which make life similarly easy on everyone with a bunch of other benefits. You can find a couple relevant posts on our blog if you’re interested.
Incidentally, GitHub now supports SSH certificates (for enterprise edition, at least).
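A rough sketch of the AuthorizedKeysCommand approach mentioned above (the script path and the live curl fetch are assumptions, not a recommendation):

# /etc/ssh/sshd_config
AuthorizedKeysCommand /usr/local/bin/github-keys %u
AuthorizedKeysCommandUser nobody

# /usr/local/bin/github-keys - must be owned by root and not group/world-writable
#!/bin/sh
exec curl -sf "https://github.com/$1.keys"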
If you are out there and you DO NOT want to lose your session due to a network error, screen is for you! Even better, you can have your coworkers join your screen or reattach to it and watch you use ssh.
Can I ask why? I've tried it out (albeit, very briefly), and it just seems like the same thing but with different command shortcuts. Is it just because it's newer?
I've also long used SSH in various simple tunnels for my personal laptop's Web browser and/or mail client, such as through EC2 instances (and at one point also through a filtering HTTP proxy). Here's one version of it.
while true ; do
# TODO: make this do desktop notifications instead of osd_cat
echo "TUNNEL CONNECTING" | osd_cat --pos=middle --align=center --lines=1 \
--font="-unregistered-latin modern sans-bold-r-*-*-140-*-*-*-*-*-*-*" \
--color=green1 --outline=2 --outlinecolour=white --delay=1
ssh -2 -4 -a -k -n -x -y -N -T -D 127.0.0.1:1234 user@example.com
sleep 3
done
Separate from these little personal tunnels, there's some additional SSH timeout options (sorry I don't have handy at the moment) that I've found frequently helpful in my uses of SSH at work, plus an external timeout wrapper that can kill the ssh process, for long-running scripts dealing with a non-OpenSSH server, but they've almost never been necessary in practice for these personal tunnels.
I would like to mention sshuttle if your access only is via a jumphost and you don't want to have to create a port forward for every single host/port you want to connect to on the internal network. It basically acts like a cheap VPN:
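e.g. something along the lines of (the subnet is whatever your internal network uses):

sshuttle -r user@jumphost 10.0.0.0/8 --dns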
It's a poor man's one way VPN: It inherits encryption/integrity/authentication (and some authorization) from ssh; It works incredibly well; For most practical network purposes it puts you on the computer you are sshuttling to; And all it needs on that computer is the ability to ssh into it and some version of python - no special privileges or prior installations.
The bad: It only does TCP (and does some UDP magic to make DNS work, but not UDP in general). It's only one way (no one on the destination network can "call you back", as you don't have an IP on that network). The only config is which network addresses get routed across the sshuttle (no policy / rules / firewall / anything else). You appear to come from the computer you shuttled to (so, unlike a real VPN, for better or worse - no policy along the way can tell you are coming from outside)
The coolest thing I ever saw in the wild, a guy I once worked with wanted to transfer a directory of files from my machine to his (and maybe show off a little.)
This was long enough ago that I didn't know quite how to proceed (time before rsync, scp...? Nah...) so he asked if he could do it and I let him have the keyboard.
He tar'd (with z) the dir, piped the output of tar to ssh, with a remote command to cat it out there through tar again, all in one CLI line. Blew my mind at the time. UNIX philo FTW.
Note that upstream suggests rsync rather than scp in the general case. Of course pipes work too. I sometimes forget, and then remember halfway through a big recursive scp operation. Just use rsync.
This is really no longer relevant because, of course, we all just use rsync ... but in the modern world, my favorite example of the unix philosophy is:
mysqldump -u mysql db | ssh user@rsync.net "dd of=db_dump"
tar -czvf - <sourcedir> | ssh <user>@<remotehost> tar -xzf - -C <remotedir>
This is _much_ faster if you're sending over a directory with a lot of small files, especially if the link has even a modest amount of latency. The 'z' parameter can be omitted if the source files are not compressible (media files or already compressed).
If the files are highly compressible but very large you might consider this instead:
tar -cvf - <sourcedir> | pbzip2 -c | ssh <user>@<remotehost> tar -xf - -C <remotedir>
However if someone said "hacking ssh" without the context of hacker news, I can see why the general computer person would probably think of the newer definition, which implies cracking or seeking security vulnerabilities.
That ensures that you never have the same host listed twice under the bare hostname and the fully-qualified version, avoiding the need to change keys twice when you rotate them.
This setting allows you to automatically accept keys for new hosts but still report conflicts for existing hosts:
StrictHostKeyChecking accept-new
I highly recommend using the control-master feature to keep a persistent connection open to servers you access a lot. This makes new connections and tools like Git, scp, sftp, rsync, etc. much faster:
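Roughly, in ~/.ssh/config (the persist time is just an example):

Host *
ControlMaster auto
ControlPath ~/.ssh/cm-%r@%h:%p
ControlPersist 10m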
On MacOS, you can use an x509 certificate on a device like a Yubikey as the SSH key so you can authenticate everywhere with the private key never leaving the token and, should you set it up that way, requiring a tap to use.
At my previous customer, we had to SSH through a bounce gateway (SSH key auth), then a bastion host (LDAP password auth), then the target host (LDAP password auth). Since it was quite annoying, I used multiple ssh_config tricks to make it work without having a 1000 lines SSH config, and I wrote a doc to share best practices. I anonymized it and posted it below.
----------------------
ssh_config_best_practices.md
CanonicalizeHostname yes
##############
### GitHub ###
##############
Host github.com
User jdoe
IdentityFile ~/.ssh/id_rsa_github
##################
### My Company ###
##################
Host myproject-dev-*
ProxyJump bastion-dev
Host myproject-prod-*
ProxyJump bastion-prod
Host bastion-dev
HostName bastion.myproject-dev.mycompany.com
ProxyJump bounce.myproject-dev.mycompany.com
Host bastion-prod
HostName bastion.myproject-prod.mycompany.com
ProxyJump bounce.myproject-prod.mycompany.com
Host *.mycompany.com myproject-dev-* myproject-prod-*
User john_doe
IdentityFile ~/.ssh/id_rsa_mycompany
##############
### Common ###
##############
Host *
ControlMaster auto
ControlPath ~/.ssh/sockets/%r@%h
ControlPersist 2h
# On OS X, UseKeychain specifies that we should store passphrases in the Keychain.
IgnoreUnknown UseKeychain
UseKeychain yes
AddKeysToAgent yes
- "CanonicalizeHostname" ensures the config is re-parsed after hostname canonicalization. This means that when you SSH into "bastion-dev", SSH re-parses the config using the full hostname "bastion.myproject-dev.mycompany.com", which then correctly matches the entry "Host * .mycompany.com".
- "ProxyJump" was added in OpenSSH 7.2 (2016) and is simpler and more powerful than "ProxyCommand".
- "bastion-xxx" hosts are the only ones whose hostname can be resolved from the bounce gateways. To connect to other hosts, the trick we use in this config is to do two ProxyJumps: your machine --> bounce --> bastion --> target host.
- "ControlMaster" lets you do SSH multiplexing, which in our case is particularly useful when channeling multiple connections through a bastion host. It also persists SSH connections for a while after we disconnect, which speeds up future connections, and avoids typing the password all the time.
- When you ssh into a host, you must enter your LDAP password twice: first for the bastion, then for the target host. If you then ssh into a second host, you must enter your LDAP password only once, since ControlMaster reuses the SSH connection previously established to the bastion. Also, if you close those SSH shells, the connections will persist for two hours (see ControlPersist), so you won't need to type your password for those two hosts if you try to SSH into them again in the next two hours.
- Using this ssh_config, there is no need to add a Host entry for each host. You don't even need to specify IP addresses, since they will be resolved using the DNS on the bastion host.
- With this configuration, you can easily copy a file using scp between your local machine and the target host, without needing to first copy it to the bastion, then ssh to the bastion, then copy it to the target host, then remove it from the bastion...
PS: an ssh_config is parsed from top to bottom, so specific comes first, generic comes last. That's why "Host *" must be at the bottom.
- Customise ~/.ssh/config to suit your needs (be careful with storm - manage ssh like a boss, it helps when scripting or searching hosts but has an outstanding bug converting keywords to lowercase [1])
- Use ed25519 key over RSA
- OpenSSH 8.1 added support for FIDO/U2F (use your YubiKey or equivalent)
- Put `IPQoS lowdelay throughput` in your ~/.ssh/config if you run a rolling release (e.g. Arch, Gentoo) or your openssh rolls via homebrew on macOS. The latest openssh client with an older version of sshd may produce weird disconnection issues (server resets the connection, client side is able to connect but the terminal hangs in 5~10s). Spent quite some time digging only to find that it was caused by the default change for IPQoS (to IPQoS af21 cs1) introduced in OpenSSH 7.8p1 [3]
- leverage ssh-copy-id
- ssh -vvv | ssh -G (troubleshooting from client side)
- use AllowUsers / DenyUsers vs DenyGroups vs AllowGroups, mind the order
- know how to use ssh-add / ssh-keygen / ssh-agent / ssh-keyscan
- audit SSH config (ssh-audit / lynis), version control ssh_config / sshd_config properly if possible
- openssh + tmux ;-)
Personal favourite tips/tricks:
- ssh -D (used to use this dynamic port forwarding, open a local Socks5 proxy to punch hole in firewall, encrypt traffic, it worked for a while against the infamous GFW, only a little while though)
- ssh -L | -R TCP forwarding
- ssh -X | -Y X11 forwarding (run X11 apps remotely and display it on X Server locally)
- More personal SSH tricks put together over the years; surprised to find that my personal OpenSSH notes are 150+ pages in Google Docs, sorry can't put it all in a comment... [2]
It's packaged for most Linux distributions and offers a seamless "share your terminal" experience over the web. Has an optional read-only mode. Users can access the terminal via https and ssh. Great for low-bandwidth videoconferencing.