
> When we learned about this, we immediately investigated and fixed it. At that time, we had no evidence to suggest someone had taken advantage of the vulnerability.

> In July 2022, we learned through a press report that someone had potentially leveraged this and was offering to sell the information they had compiled. After reviewing a sample of the available data for sale, we confirmed that a bad actor had taken advantage of the issue before it was addressed.

Yikes. Sounds like they either didn't dig deep enough to see if it was exploited or they don't keep records long enough to be sure.




>didn’t bother to do an investigation into whether it leaked data (which clearly is possible, because you’ve done it now)

It sounds like they confirmed the exploit by looking at the hacked data, not by a renewed search of previously available logs.


> After I shopped a few other companies to see how our plans compared

Yeah, once you start using a vulnerability maliciously to obtain confidential data for your own personal gain, even if it's a stupid vulnerability, you're not really a good-guy security researcher anymore.

If all you did was the bare minimum to demonstrate the vuln exists, that's cool. If after you do that you continue to use it to obtain confidential info for your own gain or curiosity, that's not so cool.

> Perhaps it's more difficult to hold yourself accountable than it is to assume that others who've found your shoddy work are malicious actors.

You literally just admitted to being a malicious actor in the paragraph above.


> Worse… access to ALL of this information was given to certain foreign contractors, some of whom were in China.

Pretty sure this is unproven and, regardless, had nothing to do with the hack.


> At that time, we had no evidence to suggest someone had taken advantage of the vulnerability.

They didn't notice that someone managed to scrape 5,485,636 accounts, after they were made aware of the issue?


> I'm hesitant to take their word at face value

Can you document a lie they've told?

> as they've tried to mitigate the entire situation via PR already

Of course, any company would do the same. But did they lie?

> claim many users [who were affected] were not affected.

Can you back up this claim? They claim 147 million potentially affected...and you think that was underrepresenting it?

> I also wanted to add that the software in question that was compromised was a free and open source solution[0], not some top-dollar security program.

The code in question is heavily used in the industry and comes from the highly respected Apache Foundation. I.e., it wasn't some college project they found on GitHub. Many other companies were affected by this vulnerability...they just either did a better job of patching it or weren't sufficiently interesting targets to make it worth going after. Or didn't report it...

As far as the sophistication, you should read this: https://www.bloomberg.com/news/features/2017-09-29/the-equif...

Assuming you trust Bloomberg, they report that there was an initial intrusion that didn't get anywhere, but appeared to get handed off to a much more sophisticated team that did the real breach.

Re: Frank Abagnale...he's exactly right on one point. The breach was tracked back to an employee who failed to follow the procedures and left a system vulnerable. In my view, the real problem is that their procedures were brittle enough that one person failing to do their part was enough to leave them vulnerable. Not enough redundancy. Even that isn't enough for perfect security, but having multiple people required to certify something like this would greatly reduce the odds of a breach.

However, his claim of negligence is complete bunk and supposition on his part. Equifax followed their procedures...they received the vulnerability report, processed it, applied patches, and ran scans to verify. However, one employee failed to do as required, the scans were faulty, and they didn't know. How is that negligence? It's a broken process, for sure, and they didn't have the ability to detect that it had failed. But negligence would be if they knew about the vulnerability and ignored it, and that's simply not what happened.


> You also have the problem where politicians/media have no idea what they're talking about, e.g. calling the Google+ issue a "data breach" when it was actually a vulnerability discovered internally with no evidence of anyone having ever used it.

When (1) there's no evidence of anyone ever having exploited the issue; and (2) the logs where that evidence would appear, if it existed, only go back two weeks...

...it seems fine to assume that people exploited the issue and that the evidence was there once but isn't anymore.
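
To put rough numbers on that (a back-of-the-envelope sketch in Python; the roughly-three-year exposure figure comes from the reporting discussed further down the thread):

    from datetime import timedelta

    # Approximate figures for the Google+ bug: the vulnerability was
    # live for roughly three years, while the relevant logs were
    # retained for only about two weeks.
    exposure_window = timedelta(days=3 * 365)
    log_retention = timedelta(days=14)

    # Fraction of the exposure window the logs could possibly cover.
    coverage = log_retention / exposure_window
    print(f"Logs cover {coverage:.1%} of the time the bug was live")
    # -> Logs cover 1.3% of the time the bug was live

Two weeks of logs covers about 1.3% of the time the bug was live, so "no evidence in the logs" rules out almost nothing.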


> It's bewildering that they knew this huge amount of sensitive data got into the hands of unauthorized third parties and were willing to treat assurances as a sufficient remedy.

What exactly should they have done? It’s data, once someone has it, game over. It’s like trying to unring a bell.


> After multiple attempts to contact the company we finally reached them by phone and they acknowledged the report. After multiple days and multiple reminders by us, they claimed to have fixed all issues. However multiple vulnerabilities we reported still exist...

It's a bit unfair to imply they engaged in some kind of irresponsible disclosure; they haven't disclosed any of the exploits.


> but that doesn't mean it wasn't, or even that evidence doesn't exist - right?

That is how it works. You can't prove a negative, so the data is not compromised until the investigation shows that it is.

If the developer's credentials were compromised, but those credentials don't give access to the production databases where customer data is stored, then there is good reason to assume no customer data was breached. Of course you still do due diligence and investigate anyway, just to be 110% sure.


> About the worst that could come of this is an accidental capture in a crash report.

So... this data is exposed and available even though they said it would/could never leak. Seems pretty cut and dry to me. It is a black and white issue. Accidental disclosure is still a disclosure.


>- We haven't used that machine since that exploit was made public.

So what? You were exploited before Kayako patched this bug; it was glaringly obvious to anyone who ever looked at the cookies set by your site.

>- We were never exploited.

This simply isn't true; either you're misinformed or lying.

>- The specific machine was a backup helpdesk test server without any real user data.

The specific machine (which you took down really fast after I pointed it out! :P) I linked probably did not even exist in 2015; I was talking about your prod env.

I don't have a horse in this race, there's no incentive for me to lie about this. I know what you are saying isn't true.


> I don't really like the tone of this article. The linked document doesn't show anything evil; they're not saying "haha look at this terrible secret we're keeping" or "word came from above, let's not fix this problem" or anything like that.

It may not show evil, but it does show negligence. (I'd even argue that it shows gross negligence.)

> It's just a non-specific bug report that Cambridge Analytica might be doing something sketchy, where nobody really did anything for a few months because they couldn't figure out how to get more details.

When you suspect a client of doing something very sketchy with data entrusted to you, allowing the client to continue accessing the data is extremely questionable. And "not being able to figure out how to get more details" makes it even worse; far worse.


> They knowingly ignored basic security practices to maximize gain.

Completely false. There's been no reputable reporting that substantiates that claim.


> > This process is called the Vulnerabilities Equities Process. Unless there is a clear national security or law enforcement need, this process is biased toward responsibly disclosing such vulnerabilities.

> Nothing about this statement makes me believe that they were unaware of Heartbleed, specifically because it seems to imply that they don't stockpile vulns that they find, which we know that they do.

Are you just trying to be obtuse here? The very paragraph you quoted says they are biased towards disclosing, not 100% committed to disclosing. They admit right there that it's possible they would discover a vulnerability and not disclose it.

But the part of the statement you left out is that Heartbleed in particular would have met their criteria for disclosure only because of the great danger to USG systems and systems used by private U.S. persons and entities.

> I suspect that this isn't true, especially if the US government isn't using OpenSSL for their internal security.

The USG uses OpenSSL everywhere. Even the USG can't run MS everywhere, and there's not exactly a ton of options for their many Linux, BSD, and UNIX-based systems.

Even worse, they likely use OpenSSL in places that no one in particular knows about. It wouldn't surprise me one bit to find out that some of those 300,000 systems still vulnerable belong to government agencies.

> If we're going with anecdotes, I've met a couple of military contractors who claimed to have known of Heartbleed ahead of the public disclosure by non-trivial periods of time.

Non-trivial as in? If they heard about it while Google was developing a fix (and logo), as you seem to be implying, that's preferential disclosure, not the NSA holding onto a vuln from the day it came out.


> There is no “direct evidence” that the unidentified hackers are using the data they extracted to target customers

Oh, okay, you haven't found any evidence that customer data you leaked is being used yet. Gotcha. Toooootally makes everything better.


> Company finds a security vulnerability caused by a bug. Logs show that it has never been used by anyone.

That's not true. The logs show that it was never used by anyone in the two weeks they had logs for. It looks like the vulnerability existed for about three years. Given this is Google+ we're talking about, it's entirely believable that someone widely exploited the bug in the past, but stopped because Google+ is dead and no one updates it anymore.

> [Honest question] Should the company announce it publicly?

Yes, and they did. They just waited for six months to do it.


> The consensus seems to be that no one discovered this before now, and no bad guys have been scraping this leak for valuable data (passwords, OAuth tokens, PII, other secrets).

This is literally as bad as it gets; anyone trying to palliate the situation has something to sell you. You'd have to be an idiot to think that $organization (public, private, or shadow) doesn't have automated systems to check for something as stupid simple as this by querying resources at random intervals and searching for artifacts.

Someone found it. Probably more than one someone. Denial won't help.
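
For the skeptics, here's a minimal sketch of the kind of automated check being described. Everything in it is a hypothetical placeholder (the URL, the regexes, the polling interval); the point is how little effort this takes, not anyone's actual scanner.

    import random
    import re
    import time
    import urllib.request

    # Hypothetical target and artifact patterns -- placeholders only.
    TARGET_URL = "https://example.com/some-public-resource"
    ARTIFACT_PATTERNS = [
        re.compile(rb"AKIA[0-9A-Z]{16}"),                      # AWS-style access key IDs
        re.compile(rb"-----BEGIN (?:RSA )?PRIVATE KEY-----"),  # private key material
        re.compile(rb"(?i)password\s*[:=]"),                   # credential-looking strings
    ]

    def scan_once(url):
        """Fetch the resource and return any secret-looking artifacts."""
        with urllib.request.urlopen(url, timeout=10) as resp:
            body = resp.read()
        return [m.group(0) for p in ARTIFACT_PATTERNS for m in p.finditer(body)]

    while True:
        hits = scan_once(TARGET_URL)
        if hits:
            print("possible leak:", hits)
        # Poll at random intervals so the traffic doesn't stand out.
        time.sleep(random.uniform(300, 3600))

Twenty-odd lines of Python, pointed at enough targets, finds this class of leak automatically. Assume it was found.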


> 1) This data privacy glitch is just like Facebook’s Cambridge Analytica scandal, except it isn’t.

> well, if it's not, then why even bring it up? that part smells like sensationalism to me..

It's the same type of glitch, except there's no evidence that it was exploited (which is a different statement than saying it wasn't exploited; it may very well have been).


>- There are larger forces at work, and this was a demo for a larger client and is part of a longer play

Seems unlikely, considering that a hack of this nature (a compromised insider account) would most likely be cut off after detection. That's exactly what happened in this case.

