On the one side, this was a really stupid mistake that should have been caught earlier, not by some external party who was kind enough to report it to them. I feel like the stakes should be raised a bit for companies that keep my data.
On the other side, fear of lawsuits leads to less disclosure and meaningless PR announcements.
No good deed goes unpunished. I don't know if I would ever report a security problem like this, for fear of having to deal with this kind of headache (at least with a non-Google-type company).
Anybody have any idea whether my feelings are being unduly influenced by familiarity with these kinds of stories? I doubt there is any real data to make a decision with, but I like to try to stay at least a little rational.
I once found a state website exposing info in some JSON fields that it probably (?) shouldn't have. The sensitive data was not shown on the rendered page, but it was sitting in the browser's memory, all nicely formatted (a sketch of the pattern is at the end of this comment). I thought of reporting it, but figured it was more likely they would try to save face by accusing me of being a hacker because I pressed F12 (and that I might have to fight charges in a faraway state) than accept their mistake.
I chose to stay quiet.
I guess I'm not the only one to have faced this dilemma.
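For anyone who hasn't seen this pattern before, here's a minimal sketch of what I mean. The record and field names are made up (I'm obviously not going to reproduce the actual site), but the shape of the bug is the same: the server serializes the whole record, the template only renders a subset, and everything else is one F12 away in the network tab.

    import json

    # Hypothetical record; the real site's fields are unknown to me.
    record = {
        "name": "Jane Doe",             # shown on the rendered page
        "case_number": "2021-0042",     # shown on the rendered page
        "ssn": "000-00-0000",           # never rendered, but still sent
        "home_address": "123 Main St",  # never rendered, but still sent
    }

    # What the page template actually displays:
    rendered_fields = {"name", "case_number"}

    # What anyone who opens dev tools sees: the full JSON response.
    print(json.dumps(record, indent=2))

    # The server-side fix is to whitelist fields before serializing,
    # instead of trusting the template to hide them:
    safe = {k: v for k, v in record.items() if k in rendered_fields}
    print(json.dumps(safe, indent=2))

The point being: "not shown on the page" is not the same as "not sent to the client."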
I agree there are tons of threads above us that have cobbled together assumptions to make it feel totally cool and not a problem at all for the stores and backend.
I don't agree that they know the impact, and it's just plain dumb to announce under your real name and employer that you're abusing a private API for fun. It's equivalent to yelling "I abused my unauthorized access to a billionaire's computers!" in front of the billionaire and DoJ lawyers.
Everything will probably be fine, but this was _not_ very smart.
On the other hand, many companies act badly without realizing it and genuinely try to better themselves after understanding why what they did was wrong. (This is particularly common with security reports: a threatening response first, then a more reasonable one once someone who knows how to handle disclosures properly talks to management.)
There needs to be a balance. If there are no consequences for getting it wrong initially, many companies will try the bad way first and only backtrack when they get caught; but if we apply a standard under which there is no possibility of redemption, a company has no incentive to improve once it has screwed up the first time.
+1. Bad reporting here. This seems to be mostly about consumer disclosure, not about what's happening under the hood being different from what your average security-conscious developer might expect after reading that Plaid doesn't sell your data.
That said, I think the suit makes a compelling argument that the disclosures should be better.
I'm mostly with you when there's potential damage to their customers, especially when those customers are individuals (like a database of social profiles being compromised), which was not the case here. However, I think saying it's "rarely" about the company is a bit naive.
There have been some really high-profile breaches where the extent of the impact on customers took an unacceptably long time to be made public... which results in increased fallout damage.
A company's natural inclination is going to be to minimize losses (generalizing, of course), and that often means dragging its feet, or dealing with the breach privately and never notifying its customers.
We should recognize these incentives and be a bit more aggressive about holding their feet to the fire, instead of criticizing the researcher who discovers the security flaw, unless of course their behaviour is blatantly malicious.
It anecdotally feels a bit lopsided in favour of the corporations at the moment, especially when someone like the author of this post, who clearly didn't endanger any customers, gets criticized...
In my opinion, they went too far and exposed themselves by telling the company.
In all honesty, nothing good usually comes of that. If they wanted the truth to come out, they would have been better off disclosing it anonymously to the company and/or the public if needed.
It's one thing to happen upon a vulnerability in normal use and report it. It's a different beast to gain access to servers you don't own and start touching things.
It's not about the companies. I do not care much about them.
It's about the people who may be hacked between someone's 0-day disclosure and the manufacturer's response. And if the manufacturer doesn't care to fix the bug, roast them for that. It's their fault.
It's not moral because people (not companies) may suffer. Your actions have consequences.
But whether a security disclosure stays private shouldn't be based only on the company responsible for the flaw. It should also take into account the fact that users of that company's product could be harmed by a public disclosure.
In this case I think the public disclosure would have two effects:
1. Put current users of the product at risk.
2. Prevent people from signing up for the product.
So the question really becomes, does 1 outweigh 2, or vice-versa? (and the answer to that also depends on how cooperative & quick the company will be with a fix)
I don't think quantifiable significant damage should be the bar we use, though that should act to moderate the consequences.
OP admitted to continuing to change URLs in order to check out what plans other companies were getting and what they cost. That means OP downloaded lists of employee names, ages, SSNs, and other data. If I were an employee at one of these other companies, I'd be pissed at OP for that. I'd be even more pissed at the people who built the marketplace website for making the rookie security mistake that allowed it, but it's absolutely not OK to download other people's information when you shouldn't have access to it, and to use it to your own advantage.
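For context, the rookie mistake being described is a classic insecure direct object reference (IDOR). Here's a minimal sketch with made-up names, plans, and prices (I obviously don't know how the actual marketplace is built): the id comes straight from the URL and is never checked against the logged-in user's own company.

    from dataclasses import dataclass

    @dataclass
    class Session:
        user_id: int
        company_id: int

    # Made-up data standing in for the marketplace's plan records.
    PLANS = {
        1: "Acme Co: Gold plan, $480/employee/yr",
        2: "Globex: Silver plan, $510/employee/yr",
    }

    def get_plan_vulnerable(session: Session, company_id_from_url: int) -> str:
        # BUG: trusts the id taken straight from the URL; editing
        # /plans/1 into /plans/2 returns another company's data.
        return PLANS[company_id_from_url]

    def get_plan_fixed(session: Session, company_id_from_url: int) -> str:
        # FIX: authorize the object reference against the session.
        if company_id_from_url != session.company_id:
            raise PermissionError("not your company")
        return PLANS[company_id_from_url]

    s = Session(user_id=7, company_id=1)
    print(get_plan_vulnerable(s, 2))  # leaks Globex's plan
    print(get_plan_fixed(s, 1))       # allowed
    # get_plan_fixed(s, 2) would raise PermissionError

Trivial to fix server-side, which is exactly why it's inexcusable that it shipped, and also why repeatedly fetching other companies' records goes beyond confirming the bug.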
Sure, I don't think this is something that should be prosecuted as a CFAA violation with big fines and jail time. That's not a proportionate response. But I also don't think we should signal that it's ok to look at (and use!) other people's data just because someone else forgot to lock it up properly. I think, for example, something on the level of a parking ticket would be appropriate here.
If OP had changed the URL once, found the vulnerability, and then immediately closed the page and reported the problem, I would see nothing bad in what they did. But they didn't merely do that, and IMO crossed the line in their subsequent actions.
I'd question whether the potential blowback of doing the right thing is actually worth it.
Some organisations will be grateful for your help, and you get that warm feeling that comes with knowing you've helped to protect people's data. But when it goes wrong and you become the target of the organisation's ire, the personal consequences can be severe.
I'm strongly in the full-disclosure camp in that situation. At the very least, if you fully disclose the vulnerability and the company ignores it, it serves as a warning to their customers about what they are at risk of.
How was the disclosure irresponsible? AIUI, multiple attempts were made to report the bug. It went viral a couple of days later on social media. I'm not aware of a link between those two events.
What do you think it is that I'm calling responsible? I'm in favor of the public disclosure for this particular bug, and that seems to be your position too.
I’m not sure those examples should be referred to as “accidents”. If you deliberately give the information to an external company, you leaked it deliberately, even if you thought they wouldn’t do anything bad with it.
This is why we have "Responsible Disclosure". Basically, if you make a good-faith attempt to tell the company in private and they do nothing, it is then not wrong for you to publicly release details of the exploit. This tends to get their attention.
Not just reporting it, but having actually exploited it to confirm before reporting it, even if just to test. That was the wrong move.
What should have been done: the second he had the thought that such a vulnerability could exist, he should have notified them that he believed it might be possible to alter the site code locally to gain unfair pricing, and asked whether he could check for them or whether they could check using his proposed method.
The second you actually test without permission, you've committed a crime. A jury or court might look at intent later on, but for now you've committed a crime and are thus subject to arrest.
Bad stuff happens; I think that's a pretty safe assumption. What is worse is that there was some PM out there who didn't want to immediately disable the whole service until the issue was fixed. So I think this disclosure is a fair thing to do: when they choose between PR image and security, they must understand that both can get dragged through the mud.