“Worst cloud vulnerability you can imagine” discovered in Microsoft Azure (arstechnica.com)
9 points by fortran77 | 2021-08-29 | 92 comments




I always wonder if these types of vulnerabilities affect the most secret DoD contracts, or if those accounts are somehow sharded onto sufficiently separate systems/networks?

The DoD only has idiotic requirements, like requiring a fence around their private racks, not useful ones, like requiring that the global database isn't open to the world.

Azure runs their Gov and China clouds as separate infrastructures with separate capabilities.

I'd guess that the Azure US Gov cloud defaults to stuff not being open to the web, so if you spin up an Azure Function it's only reachable internally, but this is a guess.

Nope. Look here (https://docs.microsoft.com/en-us/azure/azure-government/comp...) and search for Cosmos and you'll see this service is only approved for DoD Impact Level II systems. That is the lowest impact level the DoD offers and includes only systems exposed to the public, like the websites for recruiting and career descriptions. Any system handling controlled unclassified information or PII would not have been allowed to use this service.

And when you're talking about "most secret" contracts, those are all classified systems, which are on totally separate networks in totally separate private data centers located on military installations. Unless you've figured out how to break strong symmetric encryption using hardware-generated, hardware-loaded, pre-shared keys controlled in military arms rooms, that means you need physical access. It doesn't necessarily mean you need to break into a military installation. You can always try to break into a contractor SCIF instead, but that still isn't all that easy. My wife was once working for the Navy at a contractor site when she saw some AT&T contractors digging too close to the wrong fiber line at her facility; unmarked black SUVs were there within two minutes to take those guys away to God knows where.

That said, I don't doubt people try. When I was at Raytheon working at a secure facility, a Chinese company bought the property across the street, built a hotel at exactly the same height with windows facing us, and it was conspicuously almost always empty. I don't think demand for hotel rooms was financing that place.


> My wife was once working for the Navy at a contractor site when she saw some AT&T contractors digging too close to the wrong fiber line at her facility; unmarked black SUVs were there within two minutes to take those guys away to God knows where.

I used to do design/permitting for fiber networks, and working around DoD areas/fiber was always fun. You wind up having to submit your routes and get vague feedback about what you need to move, but never how far away, etc. Usually your best bet is to just go to the other side of the road if possible.


I think the worst vulnerability I can imagine is the USG having unfettered access to the entire db contents without a warrant, which is already the case with everything in Azure.

This bug only seems to widen that vulnerability slightly, to those groups plus those with knowledge of this bug.

Nothing in major US cloud providers can reasonably be expected to remain private, so I think this breathless headline is a little overblown.


So this vulnerability is worse than the worst you could imagine.

>the worst vulnerability I can imagine is the USG having unfettered access

You cannot imagine any other party it would be worse to be vulnerable to?


No.

> Nothing in major US cloud providers can reasonably be expected to remain private

I'd expand that to "Nothing in third-party cloud providers can reasonably be expected to remain private". If you don't have ultimate oversight of how the host of your data is managed, anything could be going on there, regardless what nationality of company is managing it or what promises their salespeople make.


Further amended: nothing connected to the internet can reasonably be expected to remain private.

Unless you have some secret kung-fu that makes your on-prem infrastructure hack-proof.

The issue with the cloud is scale.


> The issue with the cloud is scale

I’d argue the issue with cloud is less about scale, more about how easy it is to get started. My mother could quite easily click through the account creation on a cloud provider and set up something insecure, but not a chance is she going to get a DC built with equipment racked, or even a basic setup at a colo facility.


It's the secret about "the cloud" that nobody wants to say.

It's bothersome that they keep saying 'primary key' to refer to what seems to be an 'access key' for a database.

Disappointing to see that it's arstechnica.


It's the term that MS uses; agreed it's an unfortunate name, but you can't blame Ars.

https://docs.microsoft.com/en-us/azure/cosmos-db/secure-acce...
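To make the naming concrete: in Cosmos DB the "primary key" is an account-wide credential you hand to the SDK, not a relational primary key. A minimal sketch, assuming the azure-cosmos Python package; the account URL and key below are placeholders:

```python
from azure.cosmos import CosmosClient

# The Cosmos DB "primary key" is a bearer credential for the whole account,
# which is why leaking it amounts to leaking full database access.
ACCOUNT_URL = "https://myaccount.documents.azure.com:443/"  # placeholder
PRIMARY_KEY = "<base64-encoded account key>"                # placeholder

client = CosmosClient(url=ACCOUNT_URL, credential=PRIMARY_KEY)

# With nothing more than that key, a caller can enumerate and read data.
for db in client.list_databases():
    print(db["id"])
```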


You guys open DBs to the internet?

If I had to guess, I'd think the 30% of customers that Microsoft did notify were the ones that didn't have access secured in some way via private network or IAM controls.
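As a rough illustration of the "secured via private network" distinction: a minimal, hypothetical probe for whether an account endpoint even answers from the public internet. The hostname is a placeholder, and note that Cosmos DB's IP firewall can still reject requests at the HTTP layer even when a TCP connect succeeds:

```python
import socket

ACCOUNT_HOST = "myaccount.documents.azure.com"  # placeholder account name

try:
    # Port 443 is where the Cosmos DB gateway listens. A successful TCP
    # connect from an outside network only shows the endpoint is publicly
    # reachable; IP-firewall rules may still return 403 at the HTTP layer.
    with socket.create_connection((ACCOUNT_HOST, 443), timeout=5):
        print("endpoint reachable from the public internet")
except OSError:
    print("not reachable (private endpoint, DNS block, or network filter)")
```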

Sure, some of them, but usually with a pgbouncer in front.


Worst I can imagine would be this, but for Active Directory or IAM.

Whenever stuff like this happens I see people saying there should be legal consequences for leaking data.

By that logic should there be legal consequences for a company if someone breaks into their office and steals paper records?



> By that logic should there be legal consequences for a company if someone breaks into their office and steals paper records?

Comparing cloud storage and paper records is worse than comparing apples to oranges; it's comparing apples to celery. Sure they're both edible but they are vastly different organisms.

And yes, there can be legal consequences for a company if someone breaks into their office and steals paper records.


An analogy of an analogy. Could somebody go one level deeper?

Comparing the original statement to the subsequent analogy is a bit like comparing an apple and an NFT of an apple...

It depends; in a regulated industry, if you fail to do stuff like install a commercial firewall, keep facilities secured with proper door locks, etc., you may very well face fines or loss of a license to operate.

I think when people talk about consequences, they’re not regarding these companies as victims of crimes, but rather as negligent actors.


If there is negligence involved, yes: an unsecured building being broken into can absolutely have legal consequences.

That computers are involved should not remove negligence as a possibility to recover damages.


You mean like in bailment? That's a thing.

If we're into bad physical analogies, I feel a better comparison would be the company itself sending out an indexed copy of every client's records and relying on ethics alone to keep anyone from reading others' data.

There already are consequences. Auditors will check the physical security of your office buildings if you're dealing with anything sensitive. If a breach happens later on and it turns out you cut corners on that physical security (or even somehow unwittingly compromised it) then you're not going to have a good time.

This is more akin to leaving the door unlocked than to having a break in.

$40k? Lol. I’m poor and I’d have to think twice about disclosing it for that. How many government lists does having the ability to discover that type of exploit get you on?

I bet Microsoft would claim damages of $1+ billion if someone used that type of exploit maliciously by damaging data and undermining customer confidence in Azure.

What a joke. This should pay $1+ million.


The context you're missing here is that the company/research team that found this was started by ex-MS employees (Wiz.io) to help other companies secure their cloud hosting/environments. This is some of the most pure-gold viral content marketing they can dream of; they don't care about the $40k at all, it's just there to acknowledge this is non-trivial.

That sounds like paying artists with "exposure".

There is kind of a 2-sided argument here:

1. Small-time cheapskate business owners sometimes try to cheat professional artists by "paying with exposure", when they have no meaningful audience or influence and therefore no meaningful exposure to give.

2. Sources that genuinely have very large audiences and influence can in fact give an artist so much exposure that it's worth far more than any reasonable direct payment.

This situation seems a lot more like 2 than 1. The company is in the business of helping companies secure their cloud environments, and these articles going around the tech press are being read by hundreds of thousands of people who are generally more interested in cloud security than a random person. They could spend many times the amounts discussed here on advertising and still not get their name in front of that many of the right people in a good context.


Big or small company, I doubt they can pay the employees' salaries with "exposure". They might have practically infinite VC money for now, but that's an orthogonal discussion, just as "exposure" and being fairly compensated are.

edit: and what if it had been a smaller company or individual researcher who wouldn't be able to gain as much from this publicity? Are you saying that Microsoft would have awarded them $500k? Because that's not the message sent with this reward. And frankly, even this discussion is kind of beside the point, because I doubt that's what they took into account when setting the amount.


Uh, yeah, maybe; I'm not sure how much this changes things, to be honest. I fully agree with grandparent that this is $1MM reward territory. (Edit: sibling put it really elegantly; this is the pay-with-exposure justification.)

PR campaign or not, you don't spit on those kinds of rewards. And it's a bad look for MS to award $40k for one of the worst vulnerabilities to ever hit a cloud provider...


The context you're missing is it doesn't matter. Next person to discover a similar vulnerability in Azure will have a choice:

1. Disclose to Microsoft for $40k

2. Disclose to an intelligence agency for several times that

3. Disclose to criminals for several times that, in turn

The incentives are now publicly known to be misaligned, and as a potential Azure customer, I have to contend with the simple reality that a significant number of vulnerabilities will be exploited rather than reported.

$40k doesn't even come close to covering engineer time here. This should be a $1M payout.


If companies have to outcompete criminals and intelligence agencies in the open market there will be no bug bounties, we'll just go back to the old way of doing things.

The reality is that if an organization is using a managed database and doesn't have service-provider vulnerabilities as part of their threat model, they are naive and arguably negligent.


Mid-range engineer at big tech makes $350k / year. I think having a vulnerability like this go to Microsoft instead of criminals is easily worth a few engineer-years. A $1M payout is just not unreasonable or even difficult for a $2.5T corporation.

My point isn't that $40K is enough or $1M is too much, it's that pinning public payouts to the grey/black market is unsustainable. This bug could easily be worth $10M in the right hands, so why stop at 7 figures?

You think Microsoft can't pay $10 million? Paying $40k means they don't give a shit about their customers.

The reason you pay so little is to not encourage people to even start looking.

The knowledge required to find stuff like that is very scarce. And you have to know where to look.

The chances of finding something really big are so small that it's not financially worth it to look.

That's why people don't do it often. They do it if they have a long cooperation history with a given company, because then they are treated like an employee.

TL;DR is that you don’t want to encourage people to start looking. All software has bugs, so it's only a matter of time until someone finds something.


Would you pay 100x as much for a service that is protected by that level of award?

1. Work at Microsoft. Make a bug.

2. Tell your "security researcher" buddy.

3. Split $10M in Zcash.

Since you are already being a criminal, you could go all the way and sell it to a security agency, Zerodium or another criminal.

The company you work for, once they track the code back to you, will investigate you to make sure you did not profit from it.


That's why many people doing that kind of stuff probably use a third party that holds the money for a set amount of time.

Well, my experience is that grey/black markets and crime pop up virtually everywhere where people can't afford to make an honest living which covers food and housing.

Once you reach a certain bar, people do the right thing.

$40k isn't enough to cover the cost of research + reporting at livable salaries, let alone overhead, false starts, etc.

1) Microsoft could afford $10M

2) I think people would be honest well short of $10M

3) The cost to customers, if this went public, would be far more than $10M.

You want people to be able to make an honest living on security research; otherwise, the only people looking for vulnerabilities, aside from the independently wealthy hobbyists, will be crooks.


Options 2 and 3 pose a moral and physical risk. Money isn't the only factor.

If you're only in it for the money, I don't think white hat is your calling.

It does matter that they are ex-employees though. At some point you start to risk employees planting bugs themselves stealthily to claim the bounty later.

Because it is unethical and in most countries illegal :)

But you are right. They should pay them more.


I wouldn’t use it maliciously, but I would honestly think twice about disclosing it. I think that’s especially true for anyone that doesn’t have a way to gain from the publicity.

This seems like a strange calculation... you think the scrutiny from being identified as talented enough to find this is bad enough to not be worth $40k + the reputation bump for your CV?

That seems overly paranoid.


What would you use it for instead? 40k + recognition sounds way better than 0 and sitting on it.

What if you sell to the NSA itself?

Still unethical with regard to the users at risk. And most likely still illegal, since you were hacking beforehand (I do not know the exact legal situation in the US).

Maybe they already sold the exploit months ago to everybody who would buy it?

The question is: if this hadn't been reported, what are the chances it would have been exploited? Or what are the chances that knowing about this vulnerability saved Microsoft from a hack in the wild? If we assume that the codebase has one vulnerability per 10k lines, we would still get tens of thousands of vulnerabilities. Any one of those could cost Microsoft a billion dollars, but patching one of them doesn't make the chance of getting hacked much different.
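A back-of-the-envelope version of that argument, with an entirely made-up number for the codebase size:

```python
# Hypothetical: a large cloud platform's codebase and the assumed defect rate.
lines_of_code = 300_000_000  # made-up figure, purely for illustration
vuln_rate = 1 / 10_000       # one vulnerability per 10k lines, as assumed above

expected_vulns = lines_of_code * vuln_rate
print(f"expected vulnerabilities: {expected_vulns:,.0f}")  # 30,000

# Fixing a single bug shrinks that pool by one, so the marginal effect on
# the overall chance of getting hacked is tiny -- the parent's point.
print(f"marginal reduction per patch: {1 / expected_vulns:.4%}")  # 0.0033%
```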

Well, the black market is always going to pay more. That's kinda why criminals tend to go there...

Yes, the next person will contact someone like https://zerodium.com/program.html instead

I'm curious if Microsoft is suffering from a massive loss of generational expertise. At least right after XP we had to go through a security stand-down where all code was reviewed and audited throughout the company. Subsequent features and services had to go through a pretty thorough security review at design time as well.

Over the past few years the number of security fiascos has been increasing. Is the internal security team (I forget what the org was called) dead now?

I'm sure lack of dedicated testing is also a major factor, but do launches no longer have to satisfy a security review? Maybe in the name of agile development?


I'm wondering about this, too. There are so many things being redone from scratch that I'm scratching my head about why. Maybe Microsoft lost so many engineers from the 90s that they no longer have the people who understand the old code.

The problem here is due to a lack of basic security practices. There is nothing related to old code, it is brand new code and infrastructure that was deployed without audit.

Understanding old code rarely gets one promotions.

Somehow, it's gotten me promotions and raises. I feel like I'm lucky and should stick around.

At the time of XP, Microsoft had a fierce monopoly in the world and was absolutely dominating.

I wouldn’t be surprised if they started embracing the “ship fast” mentality with the cloud a bit more over the past years, in order to corner the market more quickly (which they did).

Additionally, I can also imagine that the release processes for cloud are fundamentally different than something like an OS. With the cloud, there’s a much larger mentality of releasing often, and it may be difficult to translate rigorous security audits to this workflow.


I think this is more of an industry problem than a Microsoft problem. This was a feature added onto an existing service. The old waterfall method of security approvals might have caught this, but for most orgs that has gone the way of the dodo (and probably for the better).

Cosmos DB probably went through security review during the design phase and then again regularly as the code was written and improved. The Jupyter notebook functionality was also likely reviewed by security teams during the design, testing, and implementation phases. But once you're through those approvals most security review is going to be done via automated tooling with only occasional re-reviews and penetration testing at scheduled intervals. Automated testing is great at detecting vulnerabilities that have been discovered in the past, but really not good at detecting new classes of vulnerabilities, hard-to-detect authorization vulnerabilities, or how code integrates with other services.

Once the initial approval and code reviews had been done developers would still be committing code to the service and each line of code is probably not receiving a manual code review. Vulnerabilities like this are hard to detect even with a manual review as the testing team may not have great knowledge of all interconnected services, especially if it's an outside vendor.


This sounds reasonable, but in this case stealing a key and using it for a man-in-the-middle attack is what happened.

Ah, really? Where did you find that info?

One thing: Given the rise of MS stock in the recent past and the pandemic, I have seen many long-time MS folks decide to retire, especially in the last 10–12 months.

One thing is for sure: setting aside its astronomical revenue growth, which presumably involves some fuzzy math... Azure is definitely the worst of the big three cloud providers.

Azure seems to be trying to play catch-up, at least in feature parity, with AWS and to some extent GCP. IMO this sort of outcome is inevitable when moving at speed. While they do seem to have some sort of feature parity (of the what-it-says-on-the-tin type) and some good ideas, many of their services are pretty half-baked once you scratch the surface, plus they're expensive.

Is it possible that there's also just more software in different domains being written, hence making it more difficult to keep track of everything that is going on?

Shouldn't be. If I recall correctly, the process was fairly logical. One of the things you wrote down in your design doc was all the external surfaces (endpoints) your application/service had and how you were securing them.

You could literally not ship anything without this review, so they must have abandoned the process by now. Or just never extended it to "the cloud".


After reading the details, this sounds like a fundamental design flaw that was made easily exploitable via a new feature.

"The 8 most important execs who left Microsoft in 2021, and 3 new power players who joined"

https://www.flexi-news.com/the-8-most-important-execs-who-le...


Am I the only one who thinks these bounties are ridiculously low? They should at least add a couple of zeroes to that one. Just wow.

If you add two zeroes you run the risk of rogue employees planting vulns themselves.

What kind of private key is it, and can it be called a _private_ key if it exists remotely?

It's private to the tenant. An SSL-certificate-encrypted web server also has its private key on the remote machine, potentially hosting millions of other pages.

I don't see the phrase "private key" in this article. The only mention of the word "private" is in a mention of how the problem was reported to Microsoft.

I don't believe private keys were in use here, instead I expect these are secrets and the thing about secrets is, as Bono sang, "A secret is something you tell one other person" which means now there's another party that can lose it.

This is a reason to avoid shared-secret schemes and instead prefer private keys: if access to Cosmos DB required a private key, you, as the only one with the key, would know whether you gave it to Jupyter; if you didn't, then it can't very well have given copies to anybody else. Unfortunately this is also why systems often don't use such an approach; enabling Jupyter across Azure likely earned somebody a promotion.
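To make the shared-secret point concrete: Cosmos DB's master-key auth signs each REST request with an HMAC over the request details, keyed by the account key. A sketch from memory of the documented scheme, standard library only; the exact canonicalization may differ from Microsoft's current docs:

```python
import base64
import hashlib
import hmac
import urllib.parse

def cosmos_auth_token(verb: str, resource_type: str, resource_link: str,
                      date_rfc1123: str, master_key_b64: str) -> str:
    """Build a master-key auth token of the shape the Cosmos DB REST API expects."""
    # The string-to-sign is a newline-joined canonical form of the request.
    text = (f"{verb.lower()}\n{resource_type.lower()}\n{resource_link}\n"
            f"{date_rfc1123.lower()}\n\n")
    key = base64.b64decode(master_key_b64)
    sig = base64.b64encode(
        hmac.new(key, text.encode("utf-8"), hashlib.sha256).digest()
    ).decode()
    return urllib.parse.quote(f"type=master&ver=1.0&sig={sig}")

# Anyone holding master_key_b64 can mint valid signatures for any request.
# That is what makes it a shared secret rather than a private key: you,
# the service, and anything you (or a bug) hand it to are all equals.
```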


From the article:

> A privilege escalation vulnerability allowed anyone with a Cosmos DB account to filch the private key for any other Cosmos DB account, by way of the Jupyter notebook functionality.

Could we not be served the same article?


Aha, it's the blurb for an image in the carousel for the article and not in the body text.

So the good news is you aren't seeing things, that is what the Ars article said. The bad news is that Ars are wrong and these are clearly not private keys but shared secrets.

Other people have linked Microsoft's documentation: the control panel for Cosmos DB lets you ask for new keys and warns you they may take minutes to be ready. That means it's a shared secret, as a private key would be something you pick and never reveal to Microsoft.
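For what it's worth, that rotation flow can also be driven programmatically. A hedged sketch assuming the azure-mgmt-cosmosdb and azure-identity packages; the operation names are from memory of the management SDK and may differ across versions:

```python
from azure.identity import DefaultAzureCredential
from azure.mgmt.cosmosdb import CosmosDBManagementClient

SUBSCRIPTION_ID = "<subscription-id>"   # placeholder
RESOURCE_GROUP = "<resource-group>"     # placeholder
ACCOUNT_NAME = "<cosmos-account>"       # placeholder

client = CosmosDBManagementClient(DefaultAzureCredential(), SUBSCRIPTION_ID)

# Regenerating a key is how you revoke a leaked shared secret: every holder
# of the old value loses access at once. With a true private key, rotation
# would be something only the key's owner could initiate.
poller = client.database_accounts.begin_regenerate_key(
    RESOURCE_GROUP, ACCOUNT_NAME, {"key_kind": "primary"}
)
poller.wait()  # the portal's "may take minutes" warning applies here too
```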


> it's the blurb for an image in the carousel for the article and not in the body text.

Indeed. Under FF reader mode the carousel is flattened into normal inline illustrations, so for a moment I thought they might be A/B testing the article content.


It's even hard to imagine how this might have happened. Is the service a shared Jupyter server (or group thereof) that somehow has access to everything, with access/authorization implemented within this service? I wouldn't expect IAM to work this way in a cloud service.

I don't know how these security boundaries are usually implemented, but I would have expected this bug to be far less plausible. Isn't this an architectural smell?


Microsoft isn't helped by the fact that there are also FB and Google in town, and they pay at least 20% more and have better engineers to work with as well, which makes work far less aggravating. So MS gets FB/Google rejects at this point, and there's a constant brain drain on top of that. But their internal culture has always been, "if we pay managers well enough things will work out". This breaks down when you actually have to do something hard (rather than rearrange buttons on Office app toolbars), as managers aren't the ones who do the actual work.

I remember one of the local Azure advocates blogging that there are two specialised security teams in Azure: one constantly improving security by patching products, and one acting as if hackers are already in the system and hunting for them.

I believe a third group should be founded: "our programmers did something stupid; look for that."

