
For people reading this, 'Availability Zone' in this context is AWS-speak for 'datacentre'. :)



Doesn't the same condition apply to AWS, only they're called "availability zones?"

>An Availability Zone (AZ) is one or more discrete data centers with redundant power, networking, and connectivity in an AWS Region... AZs are physically separated by a meaningful distance, many kilometers, from any other AZ, although all are within 100 km (60 miles) of each other.

https://aws.amazon.com/about-aws/global-infrastructure/regio...


I purposefully didn't use Amazon's wording because it would be confusing to someone who doesn't know about AWS.

An "availability zone" is an isolated data center. A "region" is a group of availability zones that are geographically isolated but somewhat close to each other.

For instance, three availability zones (data centers) within, say, 100 miles of each other (making up a distance) would make up a region.


An availability zone isn't equivalent to a data center, as it might consist of multiple data centers. A better explanation for availability zone would be "a bunch of data centers in close physical proximity, exposed to users as a single logical entity".

Or as AWS explains it [1]:

> An Availability Zone (AZ) is one or more discrete data centers with redundant power, networking, and connectivity in an AWS Region. AZs give customers the ability to operate production applications and databases that are more highly available, fault tolerant, and scalable than would be possible from a single data center.

[1]: https://aws.amazon.com/about-aws/global-infrastructure/regio...


> AZs are buildings often times right next to each other on the same street.

Not at AWS: https://aws.amazon.com/about-aws/global-infrastructure/regio...

> An Availability Zone (AZ) is one or more discrete data centers with redundant power, networking, and connectivity in an AWS Region. AZs give customers the ability to operate production applications and databases that are more highly available, fault tolerant, and scalable than would be possible from a single data center. All AZs in an AWS Region are interconnected with high-bandwidth, low-latency networking, over fully redundant, dedicated metro fiber providing high-throughput, low-latency networking between AZs. All traffic between AZs is encrypted. The network performance is sufficient to accomplish synchronous replication between AZs. AZs make partitioning applications for high availability easy. If an application is partitioned across AZs, companies are better isolated and protected from issues such as power outages, lightning strikes, tornadoes, earthquakes, and more. AZs are physically separated by a meaningful distance, many kilometers, from any other AZ, although all are within 100 km (60 miles) of each other.

This is unique compared to Microsoft and Google (a single flood taking out multiple AZs? Uh oh: https://www.theregister.com/2023/04/26/google_cloud_outage/)

Sure, a massive earthquake or a nuclear strike could probably take out several.


https://docs.aws.amazon.com/sap/latest/general/arch-guide-ar...

Each Availability Zone can be multiple data centers. At full scale, it can contain hundreds of thousands of servers. Availability Zones are fully isolated partitions of the AWS global infrastructure. With its own power infrastructure, an Availability Zone is physically separated from any other zone by a distance of several kilometers, although all are within 100 km (60 miles) of each other.


The global infrastructure map mentions 2 availability zones: https://aws.amazon.com/about-aws/global-infrastructure/

At the announcement he mentioned that there would be multiple availability zones, which we could treat as effectively different data centers. I thought they were _literally_ different data centers. What exactly is an availability zone?

"In order to prevent an overloading of a single availability zone when everybody tries to run their instances in us-east-1a, Amazon has added a layer of indirection so that each account’s availability zones can map to different physical data center equivalents."

http://alestic.com/2009/07/ec2-availability-zones


"No one data center serves two availability zones" :

http://www.theregister.co.uk/2015/04/16/aws_data_centre_arch...


> all AZs are within the same physical facility

The TL;DR from AWS documentation [1]:

An Availability Zone is represented by a region code followed by a letter identifier; for example, us-east-1a. To ensure that resources are distributed across the Availability Zones for a region, we independently map Availability Zones to identifiers for each account. For example, your Availability Zone us-east-1a might not be the same location as us-east-1a for another account. There's no way for you to coordinate Availability Zones between accounts.

The long and confusing explanation:

At least not in us-east-1 or us-west-1/2, and I'm pretty sure many of the large regions also run in multiple physical facilities.

The so-called availability zone is an abstract and virtual concept. Let us use us-east-1 as an example.

Assume the following:

* Physical DC buildings: Queens, Brooklyn, Manhattan, Staten Island

* AWS accounts: Joe, Alice, Bob

* AZ: us-east-1a, us-east-1b, us-east-1c, and us-east-1d

Every AWS account in the us-east-1 region is assigned three AZs, but for the sake of this explanation, assume only two:

* Joe: 1a, 1b

* Alice: 1a, 1b

* Bob: 1a, 1c

You now ask, "WTF?" but you let it go, thinking this is done for capacity reasons. So do we actually have four different physical facilities, one per AZ? Nope.

So are 1a and 1b in the same facility? Not necessarily, but very possibly.

So 1a and 1b in Queens, 1c in Brooklyn, and 1d in Manhattan? Nope.

So what the fuck is an AZ? What is the relationship between an AZ and a physical facility?

Think about virtual memory address space.

Joe's 1a and Bob's 1a are in Queens, but Alice's 1a is in Manhattan. And while Joe's 1a and Bob's 1a share a building, they are on different floors and different racks, whereas Joe's 1b and Bob's 1c are in Brooklyn on the same floor. This is why certain customers run out of m3.xlarge capacity in 1a while others don't in their 1a.

In essence, an AZ is a label that is unique per account. An AZ works much like a virtual memory address in an OS: the same name can map to different physical locations.

We learned this because our EMR failed due to low capacity in one account.

[1]: http://docs.aws.amazon.com/AWSEC2/latest/UserGuide/using-reg...
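The virtual-memory analogy above can be sketched in code. This is purely illustrative, with made-up facility names and a made-up hashing scheme; AWS does not publish how it actually assigns zone names to facilities:

```python
# Hypothetical sketch of a per-account AZ-name -> physical-facility mapping,
# in the spirit of the virtual-memory analogy. Facility names and the
# hashing scheme are invented for illustration; AWS's real mapping is internal.
import hashlib

PHYSICAL_FACILITIES = ["queens", "brooklyn", "manhattan", "staten-island"]
AZ_NAMES = ["us-east-1a", "us-east-1b", "us-east-1c", "us-east-1d"]

def az_mapping(account_id: str) -> dict:
    """Derive a stable, account-specific permutation of physical facilities."""
    digest = hashlib.sha256(account_id.encode()).digest()
    # Sort facility indices by the account's digest bytes to get a
    # deterministic shuffle that typically differs between accounts.
    order = sorted(range(len(PHYSICAL_FACILITIES)), key=lambda i: digest[i])
    return {name: PHYSICAL_FACILITIES[order[i]]
            for i, name in enumerate(AZ_NAMES)}

joe = az_mapping("111111111111")
alice = az_mapping("222222222222")
# The mapping is stable for a given account...
assert az_mapping("111111111111") == joe
# ...and every account sees all four facilities, just under shuffled labels.
assert set(joe.values()) == set(PHYSICAL_FACILITIES)
```

(AWS later addressed this confusion by exposing zone IDs, which are consistent across accounts, alongside the per-account zone names.)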


> various different data centers within a region (IE. Availability zones).

A single availability zone is often (always?) composed of multiple datacenters which are spread relatively far apart from one another.


AWS actually operates 5 independent data centers (called availability zones) within the same complex that make up the us-east-1 region. This makes them significantly more likely to experience an outage in 1 availability zone, but also makes it much easier to architect around the problem.

AWS should call them Unavailability Zones.

This is what I was referring to (the same AWS description quoted earlier in the thread): An Availability Zone (AZ) is one or more discrete data centers with redundant power, networking, and connectivity in an AWS Region. [...] AZs are physically separated by a meaningful distance, many kilometers, from any other AZ, although all are within 100 km (60 miles) of each other.

I think you're confusing availability zones with regions in this comment.

AWS AZs don't even have consistent naming across AWS accounts.


An AWS region consists of several availability zones (AZs), which in turn consist of several data centers running AWS hardware. Each region is designed so that its services can tolerate the loss of an availability zone. A local zone is something like an additional availability zone, with the important difference that it only runs a subset of the services of a regular availability zone and doesn't feature its own control plane (the services AWS needs to run all this infrastructure, including API endpoints, etc.). Instead, a local zone relies on its parent region's control plane and runs only the so-called data plane, which contains the services used by customers.

I don't think I explained that very clearly.

What you see as zone A is not necessarily what I see as zone A. When you sign up for an account, AWS assigns your zone A to one of the 3 available zones:

    zone X (real) : your A (virtual) : my C (virtual)
    zone Y (real) : your B (virtual) : my B (virtual)
    zone Z (real) : your C (virtual) : my A (virtual)
That's why people see different outage characteristics between the zones.
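The table above can be turned into a tiny sketch, with made-up zone and account labels (AWS's actual name-to-facility mapping is internal), showing why two accounts can't coordinate on zone letters:

```python
# Illustrative only: mirrors the table above, where the same physical zone
# gets a different letter in each account. Zone names are invented.
REAL_TO_VIRTUAL = {
    "zone-X": {"you": "a", "me": "c"},
    "zone-Y": {"you": "b", "me": "b"},
    "zone-Z": {"you": "c", "me": "a"},
}

def same_physical_zone(your_letter: str, my_letter: str) -> bool:
    """True if the two per-account letters refer to the same facility."""
    return any(v["you"] == your_letter and v["me"] == my_letter
               for v in REAL_TO_VIRTUAL.values())

# An outage in zone-X looks like "zone a" to you but "zone c" to me:
assert same_physical_zone("a", "c")
# Agreeing to both use "zone a" does NOT put us in the same facility:
assert not same_physical_zone("a", "a")
```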

Edit: one of the few articles (other than HN comments) that explains it https://alestic.com/2009/07/ec2-availability-zones/


In the article I improperly used "availability zones" to mean spanning both the AWS AZ construct and the AWS region construct. My point was specifically that by building your app to either function in US East, CA, Ireland, Singapore, and Tokyo or to fail over to one of those locations, you can avoid a situation where you've put all your eggs in one basket.
