
Okay so, there are lots of ways to do this kind of stuff. A thread pool is a pretty classic one, with Apache being the poster child in an HTTP server context.

> Thought there was a limit for work that can be done concurrently by the CPU,

Right. But in an IO-bound scenario, the CPU isn't doing work; it's waiting on IO. So, because threads are generally heavy, you don't want a ton of them sitting around, taking up memory, doing nothing.

But, when you have lightweight threads, you can spin up one per connection. This ends up being simpler, and you don't have the large memory usage. This is what nginx does, in a sense. It still has a worker per core, but each of those workers can handle thousands of requests simultaneously, because it's all non-blocking.

That limit to concurrent work is exactly why non-blocking architectures are so important, and task systems fit into them really nicely.




Excellently said, Steve. This is a great thing to know in this context, “Latency numbers every programmer should know”: https://gist.github.com/jboner/2841832
