
I think one reason this might be the case is that modern cloud computing makes scaling a single centralized service super easy and straightforward.

In the early 2000s if you wanted to build something "at scale", the only real way forward (besides raising tons of money upfront) was to build a decentralized protocol. Offload most of the compute, bandwidth and storage to the end nodes.

Nowadays, you can just put an AWS auto-scaling group behind a load balancer and a CDN, and you can pretty much handle infinite traffic.
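To make the "auto-scaling group" part concrete, here's a toy sketch of the scale-out rule a target-tracking policy applies: keep average CPU near a target by adjusting the instance count. The function name, thresholds, and bounds are illustrative only, not AWS's actual policy engine.

```python
import math

def desired_capacity(current_instances, avg_cpu_percent, target_cpu=50.0,
                     min_size=1, max_size=10):
    """Return the instance count a target-tracking policy would pick,
    scaling proportionally so average CPU lands near the target."""
    if avg_cpu_percent <= 0:
        return min_size
    desired = math.ceil(current_instances * avg_cpu_percent / target_cpu)
    # Clamp to the group's configured min/max size.
    return max(min_size, min(max_size, desired))

print(desired_capacity(4, 90))  # heavy load -> scale out to 8
print(desired_capacity(4, 20))  # light load -> scale in to 2
```

The point of the comment stands: expressing capacity as a policy like this, instead of racking hardware, is what makes the centralized approach cheap to operate.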




Scaling out isn't just about performance. It's also about high availability.

Which is something everyone needs to consider when using cloud infrastructure.

AWS provides a 99.5% EC2 uptime guarantee, which works out to roughly 2 days a year of outages.

That is simply not acceptable for most use cases, and is why a single server just won't cut it.
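The arithmetic behind that "~2 days" figure is just the SLA shortfall times the hours in a year; a quick back-of-envelope check:

```python
# Downtime implied by an uptime SLA percentage.
HOURS_PER_YEAR = 365 * 24  # 8760

def downtime_hours(uptime_percent):
    return HOURS_PER_YEAR * (1 - uptime_percent / 100)

print(downtime_hours(99.5))   # ~43.8 hours, i.e. just under 2 days
print(downtime_hours(99.99))  # ~0.9 hours
```

So 99.5% allows about 1.8 days of downtime per year, which is why multi-instance, multi-AZ deployments matter even when raw performance doesn't.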


Indeed, AWS is still king. I doubt GCP or Azure would have anywhere near the capacity to handle all of the AWS customers. You're talking orders of magnitude more compute power required, maybe 1000x? It's a different scale.

I don’t think any managed AWS service scales well for a bigger enterprise that has to handle a decent amount of traffic. Once you reach a certain threshold, you soon hit throttling and limits BS, and AWS's solution is just to throw more money at them.

I think AWS does pretty well with that philosophy at hyper scale.

AWS has millions of servers in a single AZ.


I don't know a lot about the huge scale side, but I tried running a teeny little server of my own on AWS, and it's way overpriced and over-complicated. You can get a single server for like 20% of the price on any number of hosting services. Digital Ocean works pretty well for me right now.

Yeah, because AWS = instant scaling.

This sort of operation needs physical hardware in their own secure location. This isn't some mobile social webapp.


Exactly. Plus, at scale, AWS doesn't do everything either: a sysadmin, or at least consultants, are still needed to navigate the gotchas and help provision the stack properly. AWS instances don't just scale themselves.

There is one specific case when I recommend AWS instead of dedicated servers and it's for customers who have widely varying traffic with predictable peaks. In that case having the flexibility afforded by cloud providers to increase the number of instances temporarily to deal with the peak makes sense.


That kind of service is easy to scale by putting different devices (eg. the echo dot that is next to me now) in different shards.
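Pinning each device to a shard usually just means hashing a stable device ID. A minimal sketch, purely illustrative (this is not how Amazon actually routes Echo devices):

```python
import hashlib

def shard_for(device_id: str, num_shards: int) -> int:
    """Map a device ID to a shard deterministically via a stable hash."""
    digest = hashlib.sha256(device_id.encode()).digest()
    return int.from_bytes(digest[:8], "big") % num_shards

# The same device always lands on the same shard, so per-device
# state can live entirely within that shard's backend.
print(shard_for("echo-dot-1234", 16))
```

Because shards share nothing, capacity grows roughly linearly with shard count, which is what makes this kind of service easy to scale.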

Also, both Google and Amazon have elastic capabilities: they can move machines from one task to another. Today is the slowest day of the year for internet traffic, so AWS customers are probably using very little; search and e-commerce traffic is also light, so capacity is not a problem.


They have a completely different use case that benefits from being able to scale up and down quickly. I don't see them moving away from AWS anytime soon.

That doesn’t make sense. A lot of what AWS provides is services, such as CloudFront, at scale. You’d still need to test at scale, so infrastructure and experience at scale are part of what makes AWS successful and profitable.

I am not a proponent of scaling out horizontally as the best solution to all things, but being able to do so is a result of design that minimizes coupling between services and single points of failure, which are benefits in their own right.

But I also tend to think "you can always get a bigger box" can take you pretty far, and much further than most people that only think in terms of AWS realize.


Cloud platforms like AWS offer other things besides just scaling. Like Hadoop processing, hosted databases, email infrastructure, DNS services and the list goes on and on.

I always thought that "cloud" was overhyped and never understood why you would want to use stuff like EC2 over a dedicated server somewhere.

Honestly, the idea that the new services have a "higher level of abstraction" and that old services are an insufficient extra level of abstraction is something I could never articulate properly.

Thanks.


I completely agree with you. I really suspect some people have never seen themselves how insanely, mind-bogglingly powerful AWS is.

Try doing this [0] with your dedicated servers in under 10 minutes.

And then try attaching highly-available scripts/cloud functions and dozens of different integrated functionality in seconds.

And then try setting up good Network ACL, firewalls, route tables, NAT gateways, load balancers with a few clicks.

... and people here are seriously suggesting that using AWS instead would amount to a massive forced scaling operation. Lol.

[0]: https://i.ibb.co/TKmB9HX/image.png


Huh? Shared server infrastructure? That's really what this sounds like. Welcome to web hosting in 1999 guys. Most of the point of AWS was that you have your own dedicated resources. Sure, this is a scaling solution, but revolutionary?

In my experience, it's actually at real scale (tens of millions of concurrents) that you want to move off of AWS, and at smaller scales where it makes the most sense.

Huh? In general, AWS is chosen for 'scalability' concerns given its auto-scaling functionality, not Linode/Rackspace/etc.

If anything, I see a lot of people choose AWS for 'scalability' concerns when they never end up needing to scale.


No, for the same reason it still doesn't make sense to go with AWS. Basically you need a large number of cores that are intimately integrated amongst a large quantity of storage. It was the fact that all the machines could have a bunch of drives on them for "free" (low marginal cost). By interspersing the data and the compute you get maximum bandwidth to local assets.

Sure, and AWS is great but it's also not magic. It's unlikely you can literally just take the code that was running on a micro instance and scale it up to a big site in multiple zones without a lot of work.
