> Yes, this almost succeeded... but can you imagine how many scenarios where someone such as Andres Freund would have found irregularities, but then.. what? Just had to report it to some webpage's contact page? Without being able to even dig further?
You should check the thread posted yesterday: the LastPass guy who raised a PR for a Go binding for xz, but was otherwise unrelated to this fiasco, already faced a bit of questioning about his motivations from his employer after a user reported him through a contact form.
Moreover, many companies already have information from background checks, and in certain countries they also have the employee's tax identification number, which can pretty much identify who put in the backdoor.
> lets an NSA/software company "double-employee" add it without the company knowing.
I always wondered how that works. I am a full-time employee at a software company. I cannot imagine having the extra time to report to another employer (the NSA) and deal with their red tape and crap as well.
Or does the NSA show up at their doorstep with a bag full of cash: "Here you go, have this, and install a backdoor in your company's software. And we never met <wink>, <wink>."
That sounds good on paper, so to speak, but I have a hard time imagining a realistic scenario.
Now finding 0-days and hoarding them, I can see that.
> Sadly, this isn’t anywhere near true because most employers who contribute data to The Work Number — including Fortune 100 firms, government agencies and universities — rely on horribly weak authentication for access to the information.
> The original PINs were sent in the mail and in order to retrieve it you had to fill in the AGI from the previous year's return, not those credit-bureau challenge/response questions. It seems they created some vulnerability there, but it wasn't always like that.
That's the same information many people hand out as part of loan applications when they're asked for their tax returns, and that they give to the IRS and other places. Hell, I had to do it in March, and my AGI for 2014 is in God knows how many people's hands.
Once again, that is a username and not a password.
1) Passwords are secret credentials possessed by exactly one trusted party: yourself.
2) Only a hashed representation of them is stored by the opposing party, for verification.
You keep listing publicly available authentication factors that are effectively usernames, just like an SSN.
If you need me to clarify further, I can. I'm just genuinely horrified people treat these things as confidential because they are not.
People, even IT professionals, seem genuinely ignorant of the fact that this data is essentially public knowledge and that it takes only a small amount of [potentially illegal] effort to acquire. They then go and build authentication schemes on this information, on the assumption that literally no one on the planet is a criminal.
That isn't "security" for anything involving real money.
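To make the username-versus-password distinction concrete, here's a minimal Python sketch (illustrative only; the function names and scrypt parameters are my own choices, not anyone's actual system): the verifier stores a salted hash of the secret and compares in constant time, whereas an SSN or a prior-year AGI is a plain identifier that plenty of third parties already hold, so there is nothing secret to protect.

    import hashlib, hmac, os

    def store_password(password: str) -> tuple[bytes, bytes]:
        # The verifier keeps only (salt, hash), never the secret itself.
        salt = os.urandom(16)
        digest = hashlib.scrypt(password.encode(), salt=salt, n=2**14, r=8, p=1)
        return salt, digest

    def verify_password(password: str, salt: bytes, digest: bytes) -> bool:
        # Recompute the hash and compare in constant time.
        candidate = hashlib.scrypt(password.encode(), salt=salt, n=2**14, r=8, p=1)
        return hmac.compare_digest(candidate, digest)

    # An SSN, AGI, or date of birth is more like a username: an identifier that
    # many parties already hold. Hashing it buys nothing, because it isn't secret.
    salt, digest = store_password("correct horse battery staple")
    assert verify_password("correct horse battery staple", salt, digest)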
> How does a Big Tech company like Uber mishandle private data like email addresses? Presumably this action was not coordinated by the C-suite. Did some mid-level/low-level employee with access to the data actually steal and sell the addresses?
This is bizarre. You're giving them way too much credit. They're villains. Of course they're going to sell your data. They'd spit in your mother's face for a nickel.
It is valid, and I'm sure quite effective, but it isn't perfect. You need to provide escape hatches for real people in all anti-fraud systems you create, even if those escape hatches sometimes let fraudsters through as well.
I'm not Google and I don't know the kinds of fraud they attract (note: probably all kinds of fraud imaginable), nor do I know the level of effort those fraudsters are willing to put in (note: probably unimaginable amounts of effort)… but I do know that all anti-fraud work needs to allow ways for real people to escape. Your job in this space is to protect real users… and sometimes those real users inadvertently behave like fraudsters and get flagged. There have to be ways out.
> They better be tracking logins or how do they investigate fraud? If they don't have a database of every login tied to an IP, I would assert they're negligent with regard to blocking fraud.
Look, there are limits to what I'm able to talk about, but consider that a fraud subsystem running at transaction time runs much more slowly than something running at request time.
> A lot of the measures taken to prevent fraud also block proxies and the like.
When was the last time you couldn't use your banking website over a proxy? Only a few mobile apps actually make the decision to block them (mine did), and it's not popular.
> I don't know how many of them had mandatory 2FA then, but I think many more have now.
Which national bank has mandatory 2FA? Not that this would matter. We dealt with 2FA at level via Intuit, which wasn't great at it. Plaid got incredibly good at it; it became pedestrian when we experimented with Plaid.
> I have no idea how something like this can even happen. In a company of that size it should be actually impossible for a transaction like this to occur without clearly documented processes to ingest, review, authorise and pay transactions.
After having worked IT for various startups, I cannot overstate just how much executives and other higher-ups detest policies that make them verify who they are. It short-circuits something in their ego.
> Are we assuming 1Password is lying about anonymisation?
I wouldn't put it that way. Rather, I'd say that you shouldn't assume something is true just because a company claims it is. Especially when that thing can have a material effect on their profit margin.
> Heck with the entire world being able to audit and review the source code
That's irrelevant when we are talking about a company being paid specifically to audit something. The entire world is able to send me food as well, but I don't get mad when it doesn't arrive, except when I've paid someone to do it.
> I simply asked a commenter to show the work they've done
And it was a dumb question. An auditing company that failed to detect massive fraud either willfully ignored it as a sellout or was too incompetent to recognize it.
> Did anyone honestly think it was anything other than humans manually entering data from the submitted images?
They can have humans entering data manually and still have safeguards in place to protect the data, usually by either hiring people in-house or contracting with a firm that employs a regular set of people to do this work.
Sending the data to Mechanical Turk implies basically none of that safeguarding. Yeah, it's possible to do it, but from the sounds of it - especially given Expensify's appalling response - it's pretty apparent that they didn't.
> The piece of the puzzle that you're missing is that people use accounts for fraud, and if they find out why an account got banned, they'll know which behavior triggered it and will learn to avoid that behavior in the rest of their accounts and future accounts.
Compliance is the goal, no? How is it self-defeating if you ban non-compliant behavior and the only way around it is to basically get more compliant with the rules?
How will you get compliance if you don't share details with clueless violators so they can avoid those mistakes in the future? That seems self-defeating, because it gets rid of clueless but potentially valuable developers.
> I had the same with Verizon after someone opened a wireless account in my name. After supplying all the documentation they asked for, they came back to me, "our investigation believes the account was not opened fraudulently"
Last summer I got a Teams message from my manager: "Call me ASAP." Uhhh, crap, what did I do wrong?
HR had received a request to verify my unemployment claim. Uh, er, what? Apparently, like millions of other Americans this past summer, I was one of the people in whose name someone had tried to fraudulently collect unemployment benefits.
I contacted the unemployment office here and reported it; a day or so later I got a form email back stating that this was happening like crazy and that I needed to take no further action.
> "our investigation believes the account was not opened fraudulently" (i.e., they were saying that the account, and the credit tradeline, were in fact mine).
This is an ongoing fear of mine: that come tax time they're going to be like, "Whoa, you owe all these taxes on the thousands in unemployment income you were paid," and I'll be like, "Uhhhhhhhh, no?"
> Good luck getting this adopted. A couple of months ago I was trying to responsibly disclose the complete exposure of every customer's name, email address, phone number and the last four digits of their credit card to a public QSR company that allows online orders. It was straightforward enough that I found it passively while trying to login.
If I may give a hint: sometimes a good way to handle such issues is to go through the media.
(I've handled such things in the past; you can mail me if you want, but I don't want this to come across as self-advertisement. I guess there are plenty of other journalists covering IT security who would be willing to handle such issues as well.)
> the amount of work it took to get them to pay attention when only having the front door available was staggering.
I've seen this at most companies I've tried reporting stuff to. Two examples:
Sniffies (NSFW, a gay hookup site) was at one point blasting its internal models out over a WebSocket; this included IP addresses, private photos, salt + password hash [not plaintext], reports (who reported you, their message, etc.), and internal data such as your ISP and the push notification certs used for sending browser notifications. First-line support dismissed it. Emails to higher-ups got it taken care of in < 24 hours.
Funimation, back in ~2019(?), was using Demandware for their shop and left its API basically wide open, letting you query orders (with no info required) and get the last 4 of the CC, address, email, etc. for every order. Again, frontline support dismissed it. This one took messaging the CTO over LinkedIn to get it resolved in under a week (Thanksgiving week, at that).
> But a detailed explanation of why we thought something was fraudulent could (and sometimes would!) just lead to another fun reddit post where someone describes how to hide the fraud a little better.
If I were in charge, that would be too bad for your company: they'd have to give a detailed explanation every time, even if that were the result, because the alternative is way worse.