In addition to the above, it's worth noting that parallel also supports running jobs on multiple remote systems via ssh, giving you an easy way to take advantage of a whole cluster.
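
For reference, a minimal sketch of what that looks like (host1/host2 are placeholder ssh destinations; a bare ":" in the host list means "also run on the local machine"):

    # Compress logs across two remote hosts plus this machine
    parallel -S host1,host2,: gzip ::: *.log

    # --trc transfers each input file, returns the named result, and cleans up
    parallel -S host1,host2 --trc {}.gz gzip ::: *.log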



Doesn't fit with my experience. Moving from ssh -X to pointing remote X applications directly at my local X server, without tunnelling over SSH, consistently improves performance for me. Not just in perceived responsiveness: no process maxes out a CPU, so the machine has spare compute capacity for other tasks. It's really not an issue of ssh not being multithreaded; that load shouldn't happen even on a single core, and if sshd divided the same workload over my n cores it would still be very bad.
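
For anyone wanting to try the untunnelled setup, a rough sketch (hostnames are placeholders; note the X traffic is unencrypted, so only do this on a trusted network, and many distros start X with -nolisten tcp, which you'd have to change first):

    # On the local machine: let the remote host connect to your X server
    xhost +remotehost

    # On the remote machine: point clients at the local display
    export DISPLAY=localdesktop:0
    xclock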

A bit more specifically: IIRC it defaults to using MPI when you launch distributed jobs. It can also fall back to SSH for "extremely limited cases", and quite frankly SSH is not inappropriate if you are certain that communication is not going to be the bottleneck.

> but its fundamental value add to any programmer who has to SSH into servers more than once a week is it allows you to split your screen up into multiple independent shells without needing a graphical environment at all.

I actually think the real value add for people who SSH is the ability to let long-running jobs survive a disconnection. It can be frustrating to run something that takes multiple hours (like a large wget download) only to have your connection break ten minutes before it's done. Even if you never use any of the other features, you immediately benefit from that.
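
A typical tmux workflow for this, for anyone who hasn't used it (the session name and URL are placeholders):

    tmux new -s download        # start a named session
    wget https://example.com/big.iso
    # detach with Ctrl-b d; the job keeps running if ssh drops

    # later, from a fresh ssh connection:
    tmux attach -t download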


The overhead of spawning hundreds of SSH processes can be pretty extreme as well.
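
One common mitigation (not a cure-all) is ssh connection multiplexing, so repeated invocations to the same host reuse a single authenticated master connection instead of renegotiating every time; roughly, in ~/.ssh/config:

    Host *
        # Reuse one authenticated connection per host
        ControlMaster auto
        ControlPath ~/.ssh/cm-%r@%h:%p
        ControlPersist 10m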

SSH is a waste of time?

head in hands

For what it's worth, my bosses all the way up to the CEO of a 50k-person company were all pretty pleased with the working practices I used (medieval things like strace(1), perf(1) and sort(1)) to save thousands of servers' worth of CPU. I'm not advocating editing code live over SSH, but there are still plenty of legitimate reasons to SSH into a server and use UNIX tools.
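
For the curious, the kind of invocations meant here (PID is whatever process you're investigating):

    # Summarise syscall counts and time spent in each
    strace -c -f -p PID

    # Sample on-CPU stacks for 30 seconds, then browse hotspots
    perf record -g -p PID -- sleep 30
    perf report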

Since you're clearly trolling I'll leave it here.


Not OP, but:

- Encourages you to log into instances and poke around rather than improving monitoring / health checks.

- Lets you do quick fixes on a few boxes rather than re-running the deploy. Great! But terrible when the person who knows how to do that is away.

- Tailing log files rather than centralised log management for all the things.

- Trying things out / quickly checking something in production rather than being rigorous about keeping test / staging in sync with prod.

The “problem” is that ssh is such a great affordance (until you have tons and tons of instances and can't do anything by hand anymore) that you never feel forced to fix internal processes and tools around deployment, configuration and monitoring.

If there’s no workaround you feel the pain and will be forced to set things up right, usually with benefits to security and repeatability.

As is often the case, the best thing about ssh (in terms of managing infra instances) is also the worst thing.

With that said, at very small scales it might be overkill to automate everything, so sure, fill in the gaps with ssh and a wiki page.


Burning CPU time and memory bandwidth is what ssh does on purpose to make things secure. So from this perspective, ssh seems like a great match for them making more money.

The article doesn't mention the fun you can have with ssh + say.

My co-workers and I used to ssh into the iMacs of non-technical users in our office and have a good laugh from a nearby room.
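
Something along these lines, presumably (hostname and account are made up; say(1) is macOS's built-in text-to-speech command):

    ssh user@imac.local 'say "hello from the server room"'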


Great tidbit. Perhaps it's just not well known.

Edit: Multiple machines if your build is in parallel. Nice.

> Parallelism and SSH Builds

> If your build has parallel steps, we launch more than one VM to perform them. Thus, you’ll see more than one ‘Enable SSH’ and ‘Wait for SSH’ section in the build output.

[1]: https://circleci.com/docs/ssh-build/#parallelism-and-ssh-bui...


SSH's major problem here is performance: when you need to orchestrate dozens of servers with many separate tasks, the slowdown is very noticeable.

Because most Linux users use ssh for remote work.

And after all that work you can't even use your GUI over SSH.

Why is ssh even needed on distributed clusters? Shouldn't provisioning the cluster be automated and the nodes immutable by design? I can only imagine what a nightmare a huge fleet of special-snowflake machines would be to manage. Cattle, not pets.

Ugh, I'm a new full stack guy. Could you enlighten me about the evils of SSH?

If you have random people spinning up daemons on your servers then you don't have an SSH problem.

I hope you are joking. SSH clients are pretty much ubiquitous. If anything, this says something against the whole "put everything on the cloud" idea.

Why? Especially if you ssh into a workstation. That seems to support my point...

'Is there a reason for not using sshd? It seems to be a great tool for the problem you're trying to solve.'