
Most sites could run on a single mid-range machine. The top 500 sites could run on a small cluster of fewer than 10 machines. You have to balance the work against external egress and intra-cluster bandwidth while accounting for your total random IOPS budget.
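
A rough sketch of that budgeting exercise (every number below is an illustrative assumption, not a measurement):

    package main

    import "fmt"

    func main() {
        // Illustrative assumptions for a busy but not enormous site.
        const (
            peakRequestsPerSec = 2000.0
            bytesPerResponse   = 50_000.0 // ~50 KB average response
            randomIOPerRequest = 4.0      // random reads on a cache miss
        )

        egressMbps := peakRequestsPerSec * bytesPerResponse * 8 / 1e6
        randomIOPS := peakRequestsPerSec * randomIOPerRequest

        // A 10 Gbit NIC and a single NVMe drive (hundreds of thousands of
        // random IOPS) leave plenty of headroom for these figures.
        fmt.Printf("egress: %.0f Mbit/s, random IOPS: %.0f\n", egressMbps, randomIOPS)
    }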

A well tuned monolith is a beautiful thing.




I worked at one place that did exactly that. Our traffic was sharded across a couple dozen sites, but combined we were at US top-100 scale.

Every site was on a very large bare-metal box (sites were grouped together when possible; IIRC only one required its own dedicated machine). Each box was a special snowflake.

The DBs were on separate hardware.

When I left they were starting to embrace containerized microservices.


I bet that you can handle a tremendous amount of traffic with a single machine (or two behind a load balancer for high availability).

Especially with today's hardware or simple cloud VMs, you'll get a long way.
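
A minimal sketch of that two-box setup in Go, with a naive round-robin proxy standing in for the load balancer (the backend hostnames are made up, and a real LB would also do health checks):

    package main

    import (
        "log"
        "net/http"
        "net/http/httputil"
        "net/url"
        "sync/atomic"
    )

    func main() {
        // Hypothetical addresses for the two app machines.
        backends := []*url.URL{
            mustParse("http://app1.internal:8080"),
            mustParse("http://app2.internal:8080"),
        }
        var next uint64

        proxy := &httputil.ReverseProxy{
            Director: func(req *http.Request) {
                // Naive round-robin across the pair.
                target := backends[atomic.AddUint64(&next, 1)%uint64(len(backends))]
                req.URL.Scheme = target.Scheme
                req.URL.Host = target.Host
            },
        }
        log.Fatal(http.ListenAndServe(":80", proxy))
    }

    func mustParse(raw string) *url.URL {
        u, err := url.Parse(raw)
        if err != nil {
            panic(err)
        }
        return u
    }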


I've heard lots of anecdotes from big sites doing fine with a small number of machines.

E.g. Stack Overflow only has one active DB server and one backup.

https://stackexchange.com/performance


Two of their servers have 1.5 TB of RAM each. Just one of those nodes is probably as powerful and expensive as 100 nodes in a thousand-node setup.

They aren't magically more efficient than other sites. They just chose to scale vertically instead of horizontally.


We take a lot of pride in how much we get out of each piece of hardware. You don't need to have 1000 servers to run a large site, just the mindset of performance first.

Much, much larger, albeit distributed across multiple datacenters and completely isolated environments. The site I was giving as an example was possibly serving around 10-20M requests/day, and it isn't an isolated case: I know plenty of companies that allocate such resources for something like that.

Like so many people, you underestimate the capacity of modern computers. Stack Overflow is famously hosted on something like 6 machines. I personally worked on a system that ran on two machines, one a hot backup, that served highly dynamic content to 400k monthly actives.

You're still talking about a minimum of 12-19 servers though. I'd imagine that the average website doesn't use even that. I think that SO in particular is at 3 servers. Maybe if SO explodes this will become an issue, but an optimization would likely have to make a difference of a few racks before it's even worth considering one based solely on power concerns.

We have clients running multisite with 150MM+ page views per month, but on dedicated hardware with Varnish running in front of it. The latter is a must in a setup like this. You save on monthly recurring costs for server rentals, which is a huge value proposition.
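
Not Varnish itself, but a toy illustration of why a cache in front helps: repeated requests for the same page are served from memory, and the expensive render only happens on a miss (the handler, TTL, and renderPage function below are invented for the example):

    package main

    import (
        "fmt"
        "net/http"
        "sync"
        "time"
    )

    type entry struct {
        body   []byte
        stored time.Time
    }

    var (
        mu    sync.Mutex
        cache = map[string]entry{}
    )

    // renderPage stands in for the expensive part: DB queries, templating, etc.
    func renderPage(path string) []byte {
        time.Sleep(50 * time.Millisecond) // pretend this is costly
        return []byte(fmt.Sprintf("rendered %s at %s\n", path, time.Now().Format(time.RFC3339)))
    }

    func handler(w http.ResponseWriter, r *http.Request) {
        const ttl = 10 * time.Second
        key := r.URL.Path

        mu.Lock()
        e, ok := cache[key]
        mu.Unlock()

        if !ok || time.Since(e.stored) > ttl {
            // Miss: render once, then every hit within the TTL is a map lookup.
            e = entry{body: renderPage(key), stored: time.Now()}
            mu.Lock()
            cache[key] = e
            mu.Unlock()
        }
        w.Write(e.body)
    }

    func main() {
        http.HandleFunc("/", handler)
        http.ListenAndServe(":8080", nil)
    }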

Hm. I love Stack Overflow but architecture only starts to become interesting at 4-5 times their traffic.

95M/mo translates to a mere ~36/sec on average. A single web and DB server will handle that without breaking a sweat, although you of course want some more machines for peaks, redundancy, and comfort.
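
The arithmetic, for reference (the 10x peak factor is just a common rule of thumb, not a measured figure):

    package main

    import "fmt"

    func main() {
        const pageViewsPerMonth = 95_000_000.0
        secondsPerMonth := 30.0 * 24 * 3600 // ~2.6 million

        avg := pageViewsPerMonth / secondsPerMonth
        fmt.Printf("average: %.0f req/s\n", avg)             // ~37 req/s
        fmt.Printf("10x peak factor: %.0f req/s\n", 10*avg)  // still only a few hundred
    }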


So you're running 20 sites? Nice!

If you're already running a dozen sites on one server, collectively they're probably not getting very much traffic. You may very well be able to use a large number of processes just fine.

I feel like there could be incredible value in the one-monolith-server approach for a lot of businesses. I found I was getting 6-8k requests a second when returning a random number from a Go HTTP endpoint, and about the same when rendering a simple template, on a small VPS. I wonder how far you could push it by storing sessions in memory and using SQLite for data and caching where possible. The largest VPS on Vultr costs $640 a month with 24 CPUs and 96 GB of RAM. The cache-locality advantages and goroutine efficiency seem really fun to test.
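
A minimal sketch of that approach: net/http for the endpoint, a plain in-process map for sessions, and SQLite (here via the mattn/go-sqlite3 driver) for durable data. The schema, route, and cookie name are invented for the example:

    package main

    import (
        "database/sql"
        "fmt"
        "log"
        "net/http"
        "sync"

        _ "github.com/mattn/go-sqlite3"
    )

    var (
        db *sql.DB

        sessMu   sync.RWMutex
        sessions = map[string]string{} // session ID -> user name, kept in RAM
        // a login handler (omitted here) would populate sessions
    )

    func main() {
        var err error
        db, err = sql.Open("sqlite3", "app.db")
        if err != nil {
            log.Fatal(err)
        }
        if _, err := db.Exec(`CREATE TABLE IF NOT EXISTS pages (slug TEXT PRIMARY KEY, body TEXT)`); err != nil {
            log.Fatal(err)
        }

        http.HandleFunc("/page", pageHandler)
        log.Fatal(http.ListenAndServe(":8080", nil))
    }

    func pageHandler(w http.ResponseWriter, r *http.Request) {
        // Session lookup is a RAM read: no network hop to Redis or memcached.
        sessMu.RLock()
        user := sessions[getCookie(r, "sid")]
        sessMu.RUnlock()

        var body string
        err := db.QueryRow(`SELECT body FROM pages WHERE slug = ?`, r.URL.Query().Get("slug")).Scan(&body)
        if err == sql.ErrNoRows {
            http.NotFound(w, r)
            return
        } else if err != nil {
            http.Error(w, err.Error(), http.StatusInternalServerError)
            return
        }
        fmt.Fprintf(w, "hello %s: %s\n", user, body)
    }

    func getCookie(r *http.Request, name string) string {
        if c, err := r.Cookie(name); err == nil {
            return c.Value
        }
        return ""
    }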

The one time I was involved in something at the top, I remember about 400 concurrent users according to Google Analytics.

Our cheap server had no problem serving the static site.


The sweet spot for a server is pretty large: 16-20 cores and 256-384 GB of RAM. That could run one MySQL instance or a lot of Nginxen. I/O is a little harder, since you might not want to pay for a million IOPS in every server.

Does a single server support this level of traffic? That is pretty impressive - it would be great to learn about the server's hardware and software configuration.

For most companies, good performance means you barely need to scale: for example, Slashdot uses 4 machines. Serving a lot of media changes things, but a single web server can handle a lot of traffic if your app is reasonable.

PS: This is a presentation by Rackspace; of course they are going to say "write bad software and buy lots of hardware."


One interesting aspect is that the number of servers is much higher than what would actually be needed to run the site; most servers run at something like 10% CPU or lower. Most of the duplication is for redundancy. As far as I remember, they could run SO and the entire network on two web servers and one DB server (and I assume one each of the other roles as well).

If someone says SO runs on a couple of servers, this might be about the number actually necessary to run it at full traffic, not the number of servers they use in production. That's a more useful comparison if the question is only about performance, but not so useful if you're comparing what it takes to operate the entire thing.


That's pretty cool... I couldn't even imagine running a site that needs that many servers.