Reddit Security Incident (www.reddit.com)
308 points by pyreal | 2018-08-01 16:52:02+00:00 | 206 comments




SMS interception is what got them, and it's on the rise. Moving to offline 2FA needs to happen.

And yet, bizarrely, a noticeable number of organisations with serious security requirements have just recently adopted SMS messages, or other methods dependent on phone numbers, as 2FA.

Stripe use it for logging into your business's account.

HMRC (the UK government tax office) also uses it for logging in.

Various banks and financial services I use in a personal capacity rely on secondary phone authentication to set up things like new recipients for paying bills online.


Interesting that the data accessed was very specifically only limited to:

* A complete copy of an old database backup containing very early Reddit user data -- from the site’s launch in 2005 through May 2007

* Logs containing the email digests we sent between June 3 and June 17, 2018

Also of note:

"Already having our primary access points for code and infrastructure behind strong authentication requiring two factor authentication (2FA), we learned that SMS-based authentication is not nearly as secure as we would hope, and the main attack was via SMS intercept."

If this doesn't put the nail in the coffin of SMS-based 2FA, I'm not sure what will.


I thought I had heard that Reddit was storing non-salted or even non-hashed passwords in the early days?

They probably deleted those backups when they realized how bad that was.

>"Can you txt me the old mysql root pw to the reddit DB? Need to check something, thanks - Alexis."

That was about 6 months before the stolen backup (December 2006): https://www.reddit.com/r/reddit.com/comments/usqe/reddits_st...

If that's what Reddit's security team was thinking by mid-2018, then Reddit security is in bad shape. NIST recommended the deprecation of SMS 2FA two years ago.

When NIST recommends the deprecation of a protocol, you know you should've already gotten rid of it five years earlier, not kept it around for another five years.

The sooner more companies start supporting U2F and WebAuthn, the sooner more people will start buying and using hardware security keys.


I don't think it was up to Reddit. Some providers offer no way to do MFA without mandatory SMS involvement in some way -- either primary delivery is by SMS, or you can "reset" / get backup codes via SMS.

Looking at you, LinkedIn.

Microsoft has been lax on this.


Many major banks do that too.

I've been interested in enabling MFA on my google account, but with no obvious way to bypass/disable SMS, I don't even bother.


Because of cases like this, it's also the admin's responsibility to have their wireless provider put a security PIN and a Do Not Port order on their account. Sometimes reps will ignore this and port people's accounts anyway, but it's still negligent not to take this precaution.

Phone company reps are notorious for ignoring any and all such notes on accounts. We can't know for sure whether Reddit had this in place, but it probably wouldn't have been much of a hurdle.

It does sound like they used TOTP when possible but in some instances only SMS was available. My issue is that they seem to act surprised that SMS is so broken. I have trouble believing that any admins of a site that large have missed the various security alerts and news articles about hacked accounts, lost cryptocurrency, etc.

You don't even need those. TOTP is perfectly acceptable. It's simple, it's offline, it's free. It's a weekend project. Two functions and an extra column in a database. No hacking on clients, no implementing protocols, no buying hardware.

Implementing TOTP: https://www.serverstack.com/blog/2013/02/21/implementing-tot...
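
For reference, here's roughly what those "two functions" look like: a minimal RFC 6238 sketch in Python using only the standard library (illustrative only, not production code; the base32 secret would be the extra database column):

    import base64, hashlib, hmac, struct, time

    def totp(secret_b32, interval=30, digits=6):
        """Generate an RFC 6238 TOTP code from a base32-encoded shared secret."""
        key = base64.b32decode(secret_b32, casefold=True)
        counter = int(time.time()) // interval           # 30-second time step
        msg = struct.pack(">Q", counter)                 # 8-byte big-endian counter
        digest = hmac.new(key, msg, hashlib.sha1).digest()
        offset = digest[-1] & 0x0F                       # dynamic truncation (RFC 4226)
        code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
        return str(code % 10 ** digits).zfill(digits)

    def verify(secret_b32, submitted, window=1):
        """Accept codes from +/- `window` time steps to tolerate clock drift."""
        key = base64.b32decode(secret_b32, casefold=True)
        now = int(time.time()) // 30
        for step in range(now - window, now + window + 1):
            msg = struct.pack(">Q", step)
            digest = hmac.new(key, msg, hashlib.sha1).digest()
            offset = digest[-1] & 0x0F
            code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
            if hmac.compare_digest(str(code % 10 ** 6).zfill(6), submitted):
                return True
        return False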

My opinion is hardware tokens are a joke for average user security. If I can't even keep my phone around to use TOTP, I'm sure as hell not going to keep track of my U2F token.


U2F is superior to TOTP because it's much harder to phish, and phishing is one of the two most serious attack vectors most teams face (the other being email attachments). Everyone that does U2F seriously also does TOTP, and doing both is the best option.

But yes, TOTP is a good start; the minimum that every serious team should have for its own personnel now.
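
A toy sketch of the origin binding that makes U2F hard to phish (using an HMAC as a stand-in for the token's per-origin keypair; real U2F uses asymmetric signatures, counters, and attestation):

    import hashlib, hmac

    def u2f_like_sign(device_key, challenge, origin):
        # The browser, not the user, supplies the origin, so a token
        # registered for the real site produces signatures that a
        # look-alike phishing domain can never replay.
        payload = hashlib.sha256(origin.encode()).digest() + challenge
        return hmac.new(device_key, payload, hashlib.sha256).digest()

    key = b"secret-locked-in-hardware"
    genuine = u2f_like_sign(key, b"nonce123", "https://reddit.com")
    phished = u2f_like_sign(key, b"nonce123", "https://redd1t-login.example")
    assert genuine != phished  # the relying party verifies against its own origin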


U2F is pretty cool, but it's also a bit over-complicated. Specifically, I don't think we need USB or NFC.

A client (or an extension) could generate a QR code which embeds a signed request from the server (SRS) along with a signed request from the client using a key it generates just for this server (CRS). The user scans this QR, the user's phone app recognizes the SRS+CRS, and spits out a token or QR.

The phisher might be able to mitm the SRS, but they can't know what the CRS is, so they can't generate a fake QR that will validate on the client app. So with no special hardware, we generate a one-time token that [as long as they can't mess with browser window pop-ups or some other nonsense] can't be phished. (I'm excluding malware because we all know it's game over by that point)


Or we can just implement U2F, which was designed specifically based on experiences people had with TOTP phishing. There are software U2F tokens if you're nerd-fussy about this stuff.

> If I can't even keep my phone around to use TOTP, I'm sure as hell not going to keep track of my U2F token.

You effectively can't implement U2F without TOTP, because otherwise when people lose their U2F dongle they'll be locked out of the site. For any given site you should turn on TOTP before activating U2F, but then store the TOTP secret somewhere secure and don't actually use it.


That is not what data it was limited to at all. That is the data they are highlighting because it affects their users the most.

Here is the list of what they are saying the attacker got access to: https://news.ycombinator.com/item?id=17665254


> we learned that SMS-based authentication is not nearly as secure as we would hope

Yeesh, what is going on with their security team over there? Years ago it was "oh, now we realize why everyone was saying that storing passwords in plaintext is a bad idea." Now it's "oh, now we realize what a bad idea SMS auth is."

So what is Reddit doing today that you and I would think "of course, no one does that anymore, 'cuz duh," but they think "nah, it's still okay"?


It sounds like Reddit just recently hired security. It's not at all unusual for companies of Reddit's size not to have a security team; the only thing that makes them noteworthy in that regard is how old the company is. If your expectation is that most of the companies you've worked with or availed yourself of have dedicated security teams, revise your expectations sharply downwards.

> It's not at all unusual for companies of Reddit's size not to have a security team

Well, by "security team" I meant "people that should know better". I mean, raise your hand if you knew even ten years ago what a bad idea it was to store passwords in plaintext. Okay, now keep your hand up if you're job role includes the words "security team". Hmm, not a lot of hands left. You don't need a dedicated security team to figure out that some folks, including the CEO, shouldn't be let anywhere near the database nor making design decisions.


Reddit had a security team; they just hadn't had a "Head of Security" position until now [0]. They did have a policy of using TOTP but couldn't enforce it with certain providers (applications?) [1].

[0]: https://old.reddit.com/r/announcements/comments/93qnm5/we_ha...

[1]: https://old.reddit.com/r/announcements/comments/93qnm5/we_ha...


Question. I am curious in general: how can they say that "this" data was compromised, but "that" data was not?

How does that work? Do they really have some low-level access log that shows who accessed what file and at what time?

And then, do they keep that log for some months at least?

And, can they query that log and declare that, six months ago, "this" data was compromised, but "that" data was not?

How does all that work?


They probably knew what systems were accessed via centralized logging. By extension, you know what that system had access to.

Has anyone leaked this? I'm more interested in unlocking my eponymous account, which I created 13 years ago and then promptly lost access to because it had no email attached :(

How would that help you? Brute force a bunch of your old passwords?

There are a couple of comments on the linked thread suggesting that someone may be using it (scam emails containing people's old reddit passwords).

Can you or someone else explain/link to me the basics of why SMS-based 2FA is so terrible? I've never really heard the sentiment before, but it appears to be common knowledge.

TOTP clients cannot be intercepted, whereas SMS tokens can be compromised in a variety of ways.

TOTP tokens can absolutely be intercepted. A MITM attack can work like this:

1) User inputs username and pw into spurious site.

2) Spurious site prompts for the user's TOTP token.

3) Spurious site proceeds to immediately log in to the real site w/ username, pw, and valid TOTP token.

4) Bad guys get an HTTP session cookie which for many sites lasts practically indefinitely.
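
To make the window concrete, reusing the hypothetical totp()/verify() sketch from earlier in the thread: a phished code stays valid for the rest of its 30-second step, plus whatever drift window the server allows, which is ample time for an automated relay.

    phished_code = totp("JBSWY3DPEHPK3PXP")  # victim types this into the spurious site
    # The attacker's bot submits it to the real site within seconds; verify()
    # accepts it because both calls land in the same (or an adjacent) time step.
    assert verify("JBSWY3DPEHPK3PXP", phished_code)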


Telephone companies are insecure and susceptible to social engineering, and their backup authentication schemes often rely on publicly accessible information. This enables attackers to hijack the account and reassign the phone number to a device they control.


It's too easy to hijack someone's phone number by socially engineering a phone company (ignoring routing/signaling vulnerabilities). A quick search brought up this article:

https://www.theregister.co.uk/2016/12/06/2fa_missed_warning/


Because the telephone companies are terrible about security and often highly disorganized internally. They are beyond stupidly susceptible to social engineering, and any "passcodes" guarding against giving away access do not stand up in the face of stupid customers and customer service's need to satisfy them.

Your number can easily be stolen or redirected to receive, and sometimes send, SMS to/from your number. Your cell phone account is the linchpin of a very extensive identity theft attack.


In their defense, being able to successfully verify that a customer is who they say they are is a difficult problem, only compounded when you might speak to a customer as infrequently as every few years. 2FA devices and codes can be lost. Passwords and PINs can be forgotten. Answers to security questions can change. Have you ever tried to access your own account with a company like this without this data? There are few things more frustrating than being locked out of your account because you can't recall what you said your favorite movie was in 2012. Throw in the low odds of actually being targeted in a social engineering attack, and companies optimize for customer satisfaction and convenience over security.

Blaming companies for responding to those incentives isn't going to accomplish anything. The way to fix things is to change the incentives, by either increasing the punishment for falling for social engineering or creating a system that makes it easier to remotely identify people.


This seems to indicate a level of sophistication beyond traditional hacking skills. How did they get the phone number, to know which carrier to contact to socially engineer?

Also, I am not sure I understand:

> we suspect weaknesses inherent to SMS-based 2FA to be the root cause of this incident

It seems that obtaining employee login credentials was the root cause, and bypassing 2FA was the second hurdle, but not the root cause.


You don't need to know the carrier, just the number. Talk to a carrier in the country and ask to port "your" number to a new plan. Most salespeople would have no problem ignoring security for a sale.

1. It's too easy to get a duplicate sim card

2. MITM for SMS is not hard if you can get close and requires <$500 in hardware


Cellphone accounts can be readily compromised via social engineering (aka tricking the CSR into changing things).

Here's a pretty hilarious and effective example where a crying baby background was used: https://www.youtube.com/watch?v=lc7scxvKQOo


Here is a good example of an attack against a system secured by SMS based 2FA: https://medium.com/@CodyBrown/how-to-lose-8k-worth-of-bitcoi...

SS7 attacks.

In May 2017, O2 Telefónica, a German mobile service provider, confirmed that cybercriminals had exploited SS7 vulnerabilities to bypass two-factor authentication (2FA) to make unauthorized withdrawals from users' bank accounts. The criminals first installed malware on people's computers, allowing them to steal online banking users' account credentials and phone numbers. Then the attackers purchased access to a fake telecom provider and set up redirects from the victims' phone numbers to lines controlled by them.

https://en.wikipedia.org/wiki/Signalling_System_No._7


SMS is not about securing an account. Its only use is as a proof of work (money) to make it harder/more expensive to create a bot account.

Using it as a security measure is a mistake.


Edit: nevermind

They didn't say write-only access.

They said they only got R access instead of RW access.


The hacker(s) took a database backup from 2007. I have never worked anywhere that has kept a backup that long. It is possible it is some sort of final archive before a large migration, redesign, or something like that. However if the intent is to keep it forever it should at least be encrypted. As far as I'm aware, the only strong reason to not enable encryption on backups is to allow a secondary backup or mirroring system to compare the changes between backup files rather than reprocessing the entire thing as a single new file. That reason disappears for an archived backup.

A lot of things now considered security best practices were not in wide use back in 2007, to put it mildly.

Given the weird collection of stuff they got (including the ancient database backup) I wouldn’t be surprised if this was the contents of an admin’s home directory.

+1 insightful.

While looking at GDPR compliance, I came across a guide that said "backups are kept for as long as it will take you to notice the missing data and restore it. Exported data kept for longer than this is an archive".

That helped me realise I really shouldn't be keeping 5-year-old database backups for some systems; a few months is plenty of time for us to notice any corruption. As part of that clear-out, I searched for and deleted many an old mysql-backup-2012-just-in-case.tar.gz from /root and similar places.
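
A minimal sketch of that rule of thumb (the backup path is hypothetical): anything older than the retention window is no longer a backup but an archive, and should be reviewed, taken offline, or deleted.

    import pathlib, time

    RETENTION_DAYS = 90  # long enough to notice missing data and restore it
    cutoff = time.time() - RETENTION_DAYS * 86400
    for backup in pathlib.Path("/var/backups/db").glob("*.tar.gz"):
        if backup.stat().st_mtime < cutoff:
            print("archive candidate:", backup)  # review, then offline or delete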


Speaking of GDPR compliance, and as some in the Reddit thread have pointed out, the GDPR requires disclosure of serious data breaches to the affected users without "undue delay", and to the relevant supervisory authority within 72 hours:

https://ico.org.uk/for-organisations/guide-to-the-general-da...

It will be interesting to see what consequences, if any, Reddit end up facing over this.


Do you have the link to that guide by any chance? I haven't found a guide that's practical enough for my needs yet.

I think it was this, and the source it links to.

https://community.jisc.ac.uk/blogs/regulatory-developments/a...


How would an attacker go about intercepting an SMS?

Essentially, convincing a mobile operator to transfer someone’s phone account to a SIM card an attacker controls.

The SMS interception via social engineering of telecom support staff, as others have pointed out, seems most likely, but consider another approach: an app on the user's phone with message-read permissions. Most people are not diligent enough to audit the permissions requested by every app they install, and I could also believe a determined attacker might install an app on an unattended and unlocked phone given the opportunity.

Of course, this only works on Android, and the user has to have given explicit permission for this.

Most of this so-called social engineering of your SIM happens in the US. In most other places, you are required to show some form of proof before you can get a new SIM card or alter any of your personal information.

Take a look at this post for an example attack

https://theantisocialengineer.com/2018/07/23/sim-swap-fraud-...


People are saying social engineering of the legitimate operator, but my off-net SMS provider doesn't require any validation from the original operator. I've successfully "stolen" the SMSs from my own cellphone with no validation that I was authorized to do that, and never heard about it from my carrier (T-Mobile).

Depends on the attacker and target. Many of the cell towers are insecure. Even today the SS7 attack works on many of them, and phones continue to blindly trust insecure cell towers. For a tech-central place like SV, you get a pretty good return on some risky cell tower setups. Unfortunately most developers don't utilize multiple phone numbers, so a mapping between email and phone number is frequently in some semi-public database.

Of course, if you have a 0day RCE, it's possible to get the SMS as well. Even local malware on the computer you're entering the code into could work if you're an identified target. Many protocol downgrade attacks are possible too, though I'd wager most developers would notice the lack of HTTPS in the browser bar.

And of course social engineering the cell phone company. Though if you call you can put a flag on your account to make it harder to transfer numbers.


I once walked into a T-Mobile store, showed them my phone, claimed the SIM card was stuck, and asked them to transfer it to a new SIM card I'd brought with me. They asked for my phone number, scanned the barcode on the new SIM card, done. I didn't have to provide any identification. I could have been anybody, and the only trace would be the security camera in the store.

This is definitely more an edge case (and not really an intercept), but if the user has an iPhone with SMS forwarding set up (via iMessage), the "intercept" could occur by accessing the user's iMessage account and waiting for the forwarded SMS to arrive.

By taking control of your phone number or the radio network your phone connects to, or attacking the signaling network itself, to intercept information going to your phone number.

Basically, imagine every conceivable way any human or computer might at any point interact with a plaintext signaling packet designed to be passed around the world by different companies and eventually read by people. Now attack all of them. Something somewhere will give it up.


And for all that, you get an 11 year old partial dump?

The ROI doesn't seem that high.


Never underestimate the determination of bored Reddit trolls.

(Actually, the attackers were likely cybercriminals looking for the whole database of current users. Even with salted hashed passwords, it's trivial to find commonly used passwords and reuse the e-mail address and password to attack other accounts, such as bank accounts, paypal, amazon, facebook, gmail, etc. Each pilfered account adds up to a payday when you sell them on the black market, for things such as money laundering, account draining, and spam)


Modified “stingray” like systems can do this transparently too. Scary but exciting tech.

It's fairly easy to claim the general case, and indeed you're right. But the challenge is that not all attackers have infinite resources, and the ones that effectively do, us small fry really can't protect against anyway, because they're already where they need to be.

So specific information on known attack paths is an interesting conversation, because part of SMS 2FA's security rests on the belief that while one-off SMS 2FA attacks are possible, they generally don't scale, and that puts a high cost on attacking SMS 2FA, or informs a limit on the value that can be protected by it.

So, good for reddit? Maybe yes. Good for your bank? Maybe not, but maybe yes, depending on the diligence of the customer, the robustness of anti-fraud measures, and the cost of fraud insurance.


> So, good for reddit? Maybe yes. Good for your bank? Maybe not, but maybe yes, depending on the diligence of the customer

Good for Instagram? Maybe no, without much dependence on the diligence of the customer.

https://motherboard.vice.com/en_us/article/vbqax3/hackers-si...


Alrighty then. Thanks for the enlightening read.

SMS hijacking? Really?

How is it that Reddit's security team is continually learning security lessons that have been common knowledge among non-technical people for 5+ years? They seem to treat their production systems more carelessly than the average person treats their Nintendo Switch account.


For example, GitHub Enterprise accounts do not differentiate between the security levels of token-based and SMS-based 2FA, so the issue seems more widespread.

This is not common knowledge among non-technical people. In fact, many non-technical people don't even know what 2FA is.

Additionally, it can take some time to change security standards in a large company. Most companies that are not in high compliance environments focus their engineering efforts on features.


> been common knowledge among non-technical people for 5+ years

Who are these non-technical people that you know that are not only using MFA but also know that SMS is insecure for MFA?

Rather than putting them down, I'm happy they're willing to share and bring knowledge that some communities already have to even more people.


> Who are these non-technical people that you know that are not only using MFA but also know that SMS is insecure for MFA?

Anyone who reads pretty much any mainstream newspaper? At this point it would be easier to name mainstream media publications that haven’t covered this issue extensively. E.g. just google:

site:nytimes.com sms hijacking

site:wsj.com sms hijacking

site:latimes.com sms hijacking

Not to mention the fact that it’s been discussed on Reddit itself hundreds of times. And on the front page of HN dozens of times as well. E.g.:

https://news.ycombinator.com/item?id=14480191


> common knowledge among non-technical people

You should talk to non-technical people when you get some time.


Non-technical people should not have admin credentials on a top-20 website, ffs.

How does SMS interception actually work in practice? Wouldn't this require physical access to the phone/SIM, or are there any known remote exploits?

How about "teenager calls phone company, gets number reassigned"? That's the level of assurance we're dealing with in SMS.

I was able to go into a mobile phone store and ask for a new SIM for phone number x and get it without any identification. All I was asked was to sign my name and pay some small amount of money.

Would someone kindly explain how an SMS can be intercepted during 2FA, and how/why tokens, on the other hand, are safer?

A friend and I were brainstorming the design of a fraud prevention app/startup just this week and we naively thought SMS would be the way to go. Yikes!


Either some sort of ss7 exploit [0] or the attackers socially engineered the cell service provider [1]. This happened to some big YouTubers in the past [2].

[0] https://www.washingtonpost.com/news/the-switch/wp/2014/12/18...

[1] https://www.theregister.co.uk/2017/07/10/att_falls_for_hacke...

[2] https://www.youtube.com/watch?v=caVEiitI2vg


You don't need SS7 0days.

On one hand, it's hilariously insecure to begin with, especially now that lots of it gets trunked over the internet. On the other hand, there are a number of companies selling access and associated services for trivial amounts of money.


SMS ties security to phone numbers. Phone companies can trivially move numbers to different people, accounts, and SIMs. When you rely on SMS for security, you are relying on the customer support staff of giant mobile phone companies for your security.

There are other more technical weaknesses with SMS, because the phone networks themselves are also insecure. But the big issue is phone companies themselves.

Don't use SMS 2FA.


Google pretty much foists it on all our employees.

It does, and that is annoying, but you can and should turn it off.

In what way? For its employees, it uses physical keys: https://krebsonsecurity.com/2018/07/google-security-keys-neu...

Consumer/enterprise Google accounts have supported U2F and the Google Authenticator app for quite some time. I have SMS completely eliminated as a factor on my GMail account (Yubikey primary with codes from the app as a fallback; no way to authenticate via SMS at all).


For Google, it's not a text message though, is it? IIRC, it's tied to your Google account and not to your phone number.

They enforce it via SMS.

It should be noted that using SMS as a second authentication factor is always more secure than just using a password. As long as you understand its limitations, it makes things more secure.

Where so many companies go wrong with SMS “2FA” is by treating it as an alternate authentication method, rather than as a second factor. If you can reset the account password over SMS then you’re boned.


It's not more secure if you use SMS as a 2nd factor and then reuse passwords everywhere, which is a thing a lot of people do.

Yeah, because not using a second factor is less secure than using even a middling one?

I'd suggest that it is, but not as secure as alternatives others have discussed.

.. which isn't to suggest either password reuse or SMS TFA is a good idea.


I'm not sure I fully agree. In a situation where you reuse passwords everywhere, having SMS 2FA on top is still more secure than no 2FA at all; it makes it harder/more costly to break through.

My understanding is: if you can use anything other than SMS 2FA, use that and remove SMS; but if your choice is between no 2FA and SMS 2FA, go with SMS. Also, use it strictly as 2FA, not as identity.

This is not my area of expertise so if I'm wrong I would genuinely like to understand why.


I don't think I'm being clear. I'm saying that in the presence of SMS 2FA, people use weaker passwords than they would otherwise. The SMS 2FA itself weakens their security: they remain vulnerable to targeted attacks and gain a vector for targeted attacks.

Do you have hard data?

My feeling, after supporting end users on and off since '95, is that they won't behave differently whether they have 2FA or not.


You find examples like this all over the place:

https://security.stackexchange.com/questions/49521/does-two-...

It gets worse if you search around for people talking about how they use 2FA codes to protect their accounts when logging into services on public computers.

The perception of the security added by 2FA emboldens people to make all kinds of poor security choices.


Not necessarily. Some services allow you to reset your forgotten password with a 2FA token. If your 2FA token comes by SMS, your account security relies solely on SMS. Yes, this is bad practice. Yes, you should not do that. Yes, you actually only have a single factor then. But some do it that way, so don't use SMS.

That’s the whole point of my second paragraph. SMS 2FA is fine, but SMS recovery masquerading as 2FA is not.

> Some services allow you to reset your forgotten password with a 2FA token

Don't do this either. It makes 2FA into 1FA.


As an example, I recently got a letter from T-Mobile (my carrier) saying they noticed an increase in number transfer fraud recently and were urging everyone to set a PIN on their account to prevent unauthorized transfers. It does happen.

> When you rely on SMS for security, you are relying on the customer support staff of giant mobile phone companies for your security.

Which only happens regularly in the US? I have yet to hear of as many cases in the EU or Asia, where you are required to prove your identity before any of these customer support staff can alter that information.

>There are other more technical weaknesses with SMS, because the phone networks themselves are also insecure. But the big issue is phone companies themselves.

I have been wondering: are these patchable? Or do they require hardware replacement?


There is poor security around the mapping of phone numbers to physical devices.

Take a look at this post for more information on one way it happens

https://theantisocialengineer.com/2018/07/23/sim-swap-fraud-...


Google for "port out scam" for one way SMS is vulnerable.

If you are using SMS-based 2FA, understand the risk: "Already having our primary access points for code and infrastructure behind strong authentication requiring two factor authentication (2FA), we learned that SMS-based authentication is not nearly as secure as we would hope, and the main attack was via SMS intercept. We point this out to encourage everyone here to move to token-based 2FA."

Alright, 2FA tokens came up the other day on HN and now we have this. Time to make the switch.

Yubikey 4 / Feitian looks interesting, but it seems it only works in Chrome with Gmail etc. etc.

Anyone have any thoughts on solutions that include Safari on Mac and/or iOS? The NEO claims NFC support but I doubt that works on iOS.


2FA does not usually mean U2F hardware such as Yubikey. Reddit does not support hardware keys.

Most of the time, 2FA means using a token generator, such as Google Authenticator, Authy, or similar. They are just apps. This is much safer than SMS because one would need physical access to your unlocked phone to generate a token.


TOTP has a downside when compared to SMS as well.

In cases where server security was breached and databases (or database backups or dumps) were accessed, if the TOTP seeds were part of the database (not sure how likely that is, but I'm guessing it's likely), then TOTP is doing nothing for security.

TOTP protects against things like credential stuffing and weak passwords, and is safer than SMS (no hijacking/intercepting), but for database security breaches things aren't so cut and dry.

I wonder if there should be a TOTP-like app which you still register with a site when you first log in or create your account, and to which codes are sent when new logins are needed, but which uses a more secure communication channel than SMS. This gives you the best of both worlds, no? One-time codes not generated from a single plaintext seed, communicated to a known client over a secure channel, to prove the initial user is still in possession of the known client?
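
To restate the first concern as code, again leaning on the hypothetical totp() sketch from earlier in the thread: anyone holding a seed from an old dump can mint valid codes today, since TOTP seeds are long-lived and rarely rotated.

    stolen_seed = "JBSWY3DPEHPK3PXP"  # hypothetical seed lifted from a DB backup
    print(totp(stolen_seed))          # matches the victim's authenticator app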


The distinction you're drawing here isn't meaningful. Outside of securing an application's database, a TOTP key has no value anyways. If you have a persistent attacker who quietly owns your database, no 2FA is doing anything for you; the attacker can log in whenever they want anyways. Once the database is compromised, you invalidate all the TOTP secrets, and they become worthless.

If you know of the database compromise, sure.

If your database contents are exposed (or a dump or backup is exposed) unbeknownst to you, those TOTP secrets won't be invalidated, and attackers which get their hands on that have more than they'd have if some other second factor method was in use which didn't rely solely on a long-term shared secret.

And since database exposures can go back a long way (like this one), and TOTP secrets aren't normally rotated over time, then this difference in second factor methods seems interesting.


If you don't know of the database compromise, the technical term for the steady state you are operating in is "owned". 2FA isn't doing anything for you.

If you don't know of the database compromise (which could have been a backup or old dump, and not necessarily a live system breach), you may be "owned", but unless the TOTP seeds (or equivalent 2FA master keys) are in that db dump, 2FA most certainly is still doing something for you...

This is a bit tautological: yes, if part of your stack is specifically compromised at a point in time, other parts of your stack were not necessarily compromised.

But since your user auth flow needs access to the unencrypted TOTP seeds, and user info generally lives in a database, for the majority share of infrastructures a compromise of the DB will compromise all TOTP seeds.


Since TOTP seeds are long-term shared-secrets which are all that is needed to complete 2FA, and rarely/never rotated unless someone knows they were compromised, even years-old database compromises (old backups, etc) means current TOTP 2FA is also probably compromised.

Which is not true of some other 2FA mechanisms (which is my point - I never thought about it before, but TOTP has a specific failure mechanism that not all 2FA mechanisms have). I still prefer TOTP to auth codes over SMS, but I wish there was something better (TOTP with seed rotation, auth codes over more secure channels, etc).


This is pretty silly. If a TOTP secret had value outside of a single application/database, this would be worth thinking about. But they do not; they are a binding between a device and a particular application. If either the app or the device are compromised, so is the relationship, as is (most likely) all the data you were trying to protect anyways.

Let's say a service you still use loses a database backup from 2 years ago, and they don't realize it was lost.

Scenario 1: The service stored passwords in some way and TOTP seeds in the database and uses TOTP to authenticate users.

Scenario 2: The service stored passwords in some way and uses some other 2FA mechanism (like unique codes over some secure channel)

You seem to be claiming there is no difference between the two, whereas I think I'd prefer scenario 2, because in scenario 1, if my password is obtained somehow (the password was not stored securely, I used a weak password, credential stuffing matched my password from other site leaks against the email in this database leak, or any other way) then the attacker can log in to my account on the service right now and bypass TOTP 2FA. In scenario 2, 2FA is still protecting my account.


I'm saying there is no practical difference between the two, because there isn't. You can keep layering feature after feature on any part of your security model, but time and energy is finite, and complexity adds risk, and burning time, energy, and complexity on this is bad engineering.

I disagree with "no practical difference" (since I demonstrated one), and layered approaches to security are usually encouraged, not dismissed as "bad engineering".

Regardless, I will keep this 2FA difference in mind going forward, since it is interesting to me; of course, you (and others) are free to ignore my thoughts if you so choose.


A standard called SQRL is similar to many of the concepts of U2F and can be done with an app scanning a QR code.

https://www.grc.com/sqrl/sqrl.htm


Looks like SQRL is aiming higher than just 2FA, they try to be the entire login process. Thanks for the link, but not sure it's quite what I'm looking for.

YubiKey supports TOTP/HOTP (the protocol behind "Google Authenticator" and similar). Clients for the YubiKey are available for Linux, Windows, macOS, Android and iOS. You can connect the YubiKey via USB (regular or Type C) and (depending on the model) via NFC (interesting on Android and iPhones). All clients are open-source afaik. There's even a CLI tool which is useful for workflows with e.g. Alfred or similar.

You can protect your TOTP/HOTP seeds on the YubiKey using a PIN (which I would recommend anyone to do). This is supported in all the Yubico Authenticator clients.

But: When your YubiKey is stolen/lost, your TOTP/HOTP seeds are gone for good. Make sure you have recovery codes stored in a safe place, e.g. your password manager.
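
On the recovery-code point, a minimal server-side sketch (an assumption about how a generic service might issue them, not Yubico's actual implementation): generate a handful of high-entropy codes, show them to the user exactly once, and persist only their hashes.

    import hashlib, secrets

    def make_recovery_codes(n=10):
        codes = [secrets.token_hex(5) for _ in range(n)]  # e.g. 'a3f9c21b0d'
        hashes = [hashlib.sha256(c.encode()).hexdigest() for c in codes]
        return codes, hashes  # display `codes` once; store only `hashes`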


Yes, but you can use the Yubico Authenticator app to keep the "app authenticator" codes on a YubiKey instead of directly on the phone, and use the phone to see the codes. Not sure if that works on iOS, but on Android it's pretty good.

Whoah, your choice isn't between Y4's and the Feitian keys, but rather between Yubico's much cheaper U2F key and the Feitians. Most people shouldn't buy Y4s.

I got the alert to change my PW. I had had the same PW for 12 years!

Edit: 12 years, not 13.

-------------------------------

Account credentials from 2007 compromised

from reddit

[A] sent 35 minutes ago

Hi,

TL;DR: As part of the security incident described here, we've determined that your account credentials may have been compromised. You'll need to reset your password to continue using Reddit. Details below.

On June 19, Reddit was alerted about a security incident during which an attacker gained access to account credentials from 2007 (usernames + salted password hashes).

We're messaging you because your Reddit account credentials were among the data that was accessed.

If there's a chance the credentials relate to the password you're currently using on Reddit, we'll make you reset your Reddit account password. Also, think about whether you still use the password you used on Reddit 11 years ago on any other sites today. You can find more information about the incident in the announcement post linked above. If you have other questions not answered there, feel free to contact us at contact@reddit.com.


Has reddit been live for 13 years? Jeez. I thought my 11 year account was old.

I got on reddit in 2005, before you could comment.

Got on thefacebook as well in 2005 because the college kids in my classes (taking for fun as an adult) told me I needed to be on there to get invited to parties and so they could write on my wall. Good times.


With those comments and your username, you can tell people to CD .. off your lawn.

> In other news, we hired our very first Head of Security

wow...


> we hired our very first Head of Security, and he started 2.5 months ago

He started before the hack happened


I'm pretty sure he didn't fix every security problem on his first day of work. Once a company gets big enough every change ends up being an ordeal of organizing all stakeholders and getting them to agree and giving them time to update their own systems so they won't be broken when the change happens, etc...

The scary part of this is probably for people that had accounts on reddit in 2007 but later deleted them, or just completely forgot they existed. Reddit's not going to be able to contact the owners of those accounts.

Did you have an account 11 years ago? Did you vote on anything embarrassing, or send any compromising messages? How sure are you?

I don't even know the answer to those questions for myself.


Reddit can't contact them because Reddit didn't require an email address to create an account back then.

If they did require an email address, they could restore their database backups to retrieve that information.


Even then, that would require those people to still have access to the same email address they used to sign up over 11 years ago. Even Gmail probably wasn't in very wide use yet at that point, it was invite-only until February 2007.

I use email addresses set up 20 years ago; they are just redirected to my current mailbox, or pulled by my current mailbox.

Reddit still doesn't require an email, the signup form is just a well executed dark pattern - you can hit next and skip providing an email.

I wouldn't call a form requesting your email (the only entry field on a dedicated page) a "dark pattern". They outright imply an email is necessary, and knowing it's not requires prior knowledge or accidentally clicking "next".

I've made a couple of reddit accounts in the last 1-2 years and it was always explicit that an email was optional. Is this all since the recent redesign?

The only indication that an email is not required is that the email field doesn't contain a blue dot, as opposed to the username/password fields. However those are on the next page, so you have no indication that it's not required. That's actually on the new design, on old.reddit.com there is not even that indication.

I see. That's a shame; last time I used the register form it was all fields on one modal popup, and it said email wasn't required.

Why does everything have to be a "dark pattern"?

So now we're at a point where asking for an email is a privacy violation and a dark pattern, but not asking for an email is a security issue and a dark pattern?

Yup.

Welcome to trade-off oriented programming.


I have trouble calling this scary when the data we are talking about is upvotes and private messages associated with an alias (not necessarily associated with a human) that are over a decade old.

From the standpoint of the person who took the data it is likely boring enough that it's not even worth the effort to restore the database.

Given the existence of more pressing problems I really can't do more than shrug.


On the contrary, Reddit back then had much less moderation. What if it came out that a prominent CEO was a user of /r/jailbait or a similarly toxic subreddit? That person's life would be destroyed.

I got the email about an hour ago. My first reaction was embarrassingly hipster.

This incident report glosses over the depth of what access was given to focus on the user data that was compromised... but it sure seems like they got pretty deep:

* A complete copy of an old database backup containing user data from launch in 2005 through May 2007, including:

  - usernames
  - salted/hashed passwords
  - e-mails
  - all content, including private messages

* Reddit source code

* Internal logs

* Configuration files

* Other employee workspace files [?]


This is a serious breach and I'd suggest "gloss over" does not characterize Reddit's statement appropriately.

Given how the report is structured, it seems like the amount of leaked data is purposefully being hidden behind red herring info about SMS 2FA that is not important to users who want to know where they stand.

When this DB is leaked, there should be more than enough weak passwords to both pwn and dox many, many reddit users. Do we know the encryption scheme reddit used to encrypt their password database involved in the leak?

Also, how is it that Reddit gained a head of security 2.5 months ago? Who was in charge of this prior to that date?


> Do we know the encryption scheme reddit used to encrypt their password database involved in the leak?

At the time of this backup, it would have been SHA1. Here's the relevant hashing code:

https://github.com/reddit-archive/reddit/blob/4778b17e939e11...

Edit: reddit's confirmed this here: https://www.reddit.com/r/announcements/comments/93qnm5/we_ha...


While not unexpected, it is very bad for these users. This salt will do nothing to stop a cracking effort. A system with pairs of GTX 960 and GTX 1060 cards can easily check 12 billion hashes a second. This database is hosed.

randstr has this gold nugget of a comment:

    """If reallyrandom = False, generates a random alphanumeric string
    (base-36 compatible) of length len.  If reallyrandom, add
    uppercase and punctuation (which we'll call 'base-93' for the sake
    of argument) and suitable for use as salt."""
https://github.com/reddit-archive/reddit/blob/4778b17e939e11...

The function specifically has a flag for use as salt, but the hashing code does not actually use it. Whoops. Of course the loss here is not really that significant (~4 bits of entropy), but I still find it a bit funny.
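
For a sense of scale, here's the general shape of a 2007-era scheme as described above (a rough reconstruction, not reddit's verbatim code), plus the salt-space arithmetic:

    import hashlib

    def legacy_hash(salt, password):
        # SHA-1 over a 3-character base-36 salt plus the password
        return hashlib.sha1((salt + password).encode()).hexdigest()

    # 36**3 = 46,656 possible salts (~15.5 bits), versus 93**3 = 804,357
    # (~19.6 bits) had the "base-93" reallyrandom alphabet actually been used.
    print(36 ** 3, 93 ** 3)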


Picking on the code a bit more while we're at it: random.choice, which randstr uses, is of course not backed by a CSPRNG, so it's not ideal for salts. Although I would be shocked if attackers were able to exploit that in any way.

Kinda interesting is how they decided on specifically 3 characters for the salt, which seems really low. It's not like the characters cost anything; why not 30 characters instead?


I used "gloss over" because I wanted to give them the benefit of the doubt. I can see an argument that 99.9% of the users are going to just care about what user information was stolen.

However, the fact that private messages were stolen is... I mean, it is just mind boggling to me. There has to be so much shit in there.


> other employee workspace files

If "workspace files" meant "home directory", that's the big holy shit moment. People keep all kinds of stupid shit in home directories. SSH keys. Browser profiles. E-mail cache. Private keys for TLS certificates. Logs. Logs with secrets. Literally anything that's supposedly secret and used to run things.

Put user home directories on an NFS mount and I can basically own your whole company.
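
As a concrete illustration of that exposure, a small audit sketch (hypothetical and stdlib-only; the /home layout is an assumption) that flags passphrase-less SSH keys the way an attacker rummaging through NFS-mounted home directories would:

    import base64, pathlib

    def is_unencrypted_key(path):
        """Heuristic: does this private key file lack a passphrase?"""
        text = path.read_text(errors="ignore")
        if "BEGIN OPENSSH PRIVATE KEY" in text:
            body = [l for l in text.splitlines() if l and not l.startswith("-----")]
            blob = base64.b64decode("".join(body))
            return b"none" in blob[:64]  # ciphername/KDF fields sit near the start
        if "PRIVATE KEY" in text:        # legacy PEM formats
            return "ENCRYPTED" not in text
        return False

    for key in pathlib.Path("/home").glob("*/.ssh/id_*"):
        if key.suffix != ".pub" and is_unencrypted_key(key):
            print("unencrypted key:", key)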


> People keep all kinds of stupid shit in home directories. SSH keys.

Are you implying that it's stupid to keep SSH keys in one's home dir?


If unencrypted, yes. Ideally your home dir is encrypted [so that only the user can unlock and use it]. (The truly paranoid will keep their private keys on an encrypted thumbdrive and remove it from the machine after authenticating the key into their keychain) But most of the time when I see network home directories in a company, it's just flat files on an NFS or CIFS share, which is bonkers.

> The truly paranoid will keep their private keys on an encrypted thumbdrive and remove it from the machine after authenticating the key into their keychain

Does not necessarily help when the machine is compromised. Using one key per machine and storing it in the corresponding home directory seems safer to me.


If you compromise the machine, you can attack anything on that machine. If the keys are in the home dir on the network, you can attack the network. If the keys are in the home dir on unencrypted local disk, you can attack the disk. If the keys are in a user namespace, you can attack the user namespace (which includes processes and mounts).

If the keys are in a thumbdrive and in a keyring, you can only attack two things: 1) the user namespace, 2) the thumbdrive's mounted contents while it is inserted _and unlocked_. This limits the scope of attacks. When the keys in the keyring expire ("-t life" option to ssh-agent), you can't even attack that.


> The truly paranoid will keep their private keys on an encrypted thumbdrive and remove it from the machine after authenticating the key into their keychain

While the more secure organisations disable USB in the OS / BIOS and glue the ports shut, to prevent anyone using them and to protect against data loss by employees.

Source: worked in two different organisations that literally glued the USB ports shut.


The truly paranoid will implement an SSH CA that will sign the key's cert for a limited length of time, using u2f or similar to authenticate the user at the time of signing.

Not OP, but it is inevitable, if unfortunate, that SSH keys go in most people's home folders.

If it's anywhere other than on your personal machine, yes. OP was referring to NFS home directories (or otherwise home directories on servers), which I think is likely the only feasible vector to access people's "workspace files" in bulk without hacking every employee's machine individually.

Your SSH key should only exist on your desktop/laptop, never on a server. Use ssh-agent (and agent forwarding, when needed), which was designed for this.

The best approach is to use yubikeys and pkcs11 along with your ssh-agent so the private key never exists on disk at all, but at the very least, using vanilla ssh-agent is an imperative.


And if corporate policy demands all personal files live on the network and corporate computer policy sets the home directory there? This is common in enterprise.

So like, /Users/ken on my mac is an NFS mount? That's horrifying.

Yes, such enterprises are stuck in the 1980s and deserve what they get.

(Also, "personal files" shouldn't mean SSH private keys in any stretch of the definition.)


Unfortunately, ssh-agent does not have a way to discriminate based on which remote is requesting authentication. So if you have dozens of keys and the correct key isn't among the first to be presented, the remote server is likely to kick you for too many authentication attempts.

You can use ssh_config to limit which agent keys are offered to which remotes. https://superuser.com/questions/272465/using-multiple-ssh-pu...
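
For example, a sketch of the relevant ssh_config directives (host names are placeholders); IdentitiesOnly stops the agent from spraying every loaded key at each remote:

    Host github.com
        IdentityFile ~/.ssh/id_ed25519_github
        IdentitiesOnly yes

    Host *.internal.example.com
        IdentityFile ~/.ssh/id_ed25519_work
        IdentitiesOnly yes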

Yes, but configuring that for hundreds of remotes is infeasible.

Why was a decade-old backup kept online? That is insane. If they have hygiene that poor, I'm really worried about what other problems they have.

Good grief. You're talking about a decade old data set.

Try this: Go to your own site backup for whatever you've got, be it a personal disk backup or something you made for a customer or friend or whatnot. Now, tell me which files might contain sensitive information to third parties. I'll wait.

This isn't "hygiene". This is "we have an 11-year-old backup mounted somewhere that we all forgot about and we honestly don't know what's in it". Yeah, it sounds dumb, but it's not reasonably avoidable by internet pontification regarding "best practices" unless your "best practices" involve eidetic memories or time machines.


It contains personal data, so it's subject to the GDPR, so the law in the EU requires those "best practices".

No doubt true. But the GDPR didn't provide time machines. I was arguing with the "hur hur dumb noobz" tone of the grandparent post, not with the need to store data responsibly. Doing things right is hard. Especially so when you need to correct mistakes made in your startup youth.

It's simple. If it's a couple of years old and I haven't accessed it then it gets archived offline. Metadata about it is kept hot so I can track what I have. I might be a bit sloppy personally and have a limited amount of 4 year old stuff still internet connected but certainly not anything approaching 10 years and it's sure as hell not large archives of other people's data I have a duty to protect.

There is just no excuse for that, it serves no business purpose, ancient backups that have no recovery value should not be online if they are kept at all. This incident shows an appalling lack of care by reddit technical leadership. Obviously they are not systematically tracking and reviewing the data they keep. Given this incident I would not be the least bit surprised if they have copies of this and that all over the place with no awareness or oversight.


> It's simple. If it's a couple of years old and I haven't accessed it then it gets archived offline

How is that remotely simple? This is a medium size company. They have thousands or tens of thousands of storage devices mounted "online" in some way. How do you purport to audit every single one of them to determine when the "last access" time was for all the relevant data?

I suspect, as mentioned earlier, that your answer is going to involve a time machine to go back to 2007 and make sure reddit was doing things "right".


It's one thing to get owned by 0 day and lose the stuff you were working on last month. If you lose stuff from 10 years ago you absolutely had it coming.

The way you protect old data is by routinely auditing what you have. You make sure each department is on top of organizing its data. If you're not sure what it is, you offline it. It can always be brought back online if necessary. Even lowest-common-denominator schemes like ISO 27001, a system designed to allow management that doesn't even know how to turn on a computer to manage information security, cover this basic idea. It would be one thing if a non-technical department had leaked some ancient folder full of reports containing some sensitive data, but this is a database dump of one of the most highly trafficked sites on the net. Reasonable people should expect the custodians of those sorts of things to know better. To anyone with technical knowledge, minimizing your data exposure should be as natural as breathing.

And yet again this time we get the usual "the attack was so sophisticated" refrain. Oh, the defenders were so careful, and tried to take every precaution for sure! The attackers hacked the 2FA! If that's true why didn't the attackers get the 2018 data? Frankly I don't believe the Reddit management. They probably left that old database dump on some old system they forgot about that tons of people had access to.

How many breaches do we need to remind us to be aware of what data we are managing and to take precautions? How many more will it take before we collectively stop being so careless?


> If you're not sure what it is, you offline it

I've literally never worked at, nor heard of, an employer that tried this. Do you have a case study example? You seriously think IT departments are in the business of finding and archiving the contents of every random PC on the network?


No, of course I don't believe IT departments do that. It's management's responsibility to make sure someone is responsible for the data on every random PC on the network. Management is responsible for putting systems in place to manage risk effectively. Random PC users need to be compelled to comply by policy and enforcement because otherwise they usually don't have the knowledge or incentive to do what must be done.

People working in development or operations on the other hand, should instinctively know and do what must be done with their own data. And reddit didn't do that. Management failed and even worse the technical leadership inside the company was directly responsible.

Remember this next time you're working at a place with a poor management of data and a culture of indifference. Do something about it, sound the alarm instead of sitting on your hands waiting for the inevitable leak.


So... who exactly does this again? You seem to be "solving" this problem by turning it into a simplified academic exercise. Real security happens in real companies with real people.

In a comment from the admin: "In other news, we hired our very first Head of Security, and he started 2.5 months ago." No comment.

"Old salted and hashed passwords": this sentence means all the hashes were readable. It also means, if they are still needed on their servers, that they are probably still in use. It would have been easy to salt these hashes.

First fix holes, then redesign...


I keep telling my bank SMS 2FA is bad, but they say it is not. Many banks have unfortunately replaced tokens with SMS.

Your average mom & pop can't handle TOTP 2FA, they will inevitably need to reset it when changing phones.

Fob-based token generators that you can attach to your keys are pretty braindead. Some banks used to issue them but have switched to SMS/email.

Around here it is still mandatory: that, or something on the SIM card of the phone that 1) displays a popup you can match with the bank login screen and then 2) has you type a separate PIN code on your phone to unlock.

I don't know any bank that supports token-based 2FA and most support both SMS and email based 2FA. Email is a terrible 2FA method since most users re-use passwords.

One way to help protect yourself is to visit your carrier's retail store and have them turn off online access to your account and require all changes to be made in person with a valid government ID. This should make number-porting attacks more difficult, but attackers can still sniff the SMS message when it goes over the cell network. As far as I know, mobile network control messages aren't protected.


Several British banks use Chip+PIN cards to provide a token — not necessarily for login, but for authorizing a transaction.

Like this: https://c7.alamy.com/comp/CYGATP/online-banking-security-chi...


> I don't know any bank that supports token-based 2FA

Wells Fargo supports RSA SecurID tokens.

Source : I have one.

However, they also support SMS as an alternative. I'm not sure if SMS can be disabled..


I have several hardware tokens for banks; the one from Rabobank is as big and heavy as some phones :/ It has a camera and color screen built in. Secure, but yikes, I don't want to drag it around. The rest all do SMS only; I have never heard of 2FA via email; that sounds like the worst idea ever.

If you're using PayPal: they support a proprietary one-time key and SMS. A guy came up with a method to use any 2FA app with PayPal: https://medium.com/@dubistkomisch/set-up-2fa-two-factor-auth...

Just tried it myself, it works.


You mean they tell us 1.5 months after the event that our emails and passwords might be compromised?

If the logs contained IP addresses, they could be used to correlate multiple accounts, leading to throwaway accounts being doxxed.

It doesn't sound like IP address data was compromised, but I wouldn't be surprised.


Could probably also correlate by password. I'm sure lots of users re-use the same password at least for their throwaways.

They certainly did 10 years ago.

So what's to stop a hijacker from persuading the website to take 2FA off, or to switch you from TOTP to SMS?

Seems just as possible as hijacking your phone.


While everyone is piling on about how SMS 2FA is oh so bad, it is worth noting that it is supposed to be the second factor here. So what happened to the first factor is the obvious question. Someone using a weak/compromised password, or getting socially engineered, would be my guesses, neither of which is a very good option.

This was also my first thought when reading this. It almost makes me wonder if it was really an SMS exploit at all: when someone has the user, pass, and 2FA code, that sounds to me like the target clicked on a convincing URL and readily supplied all the things their attacker would need.

Good point. I'm willing to bet it was spear phishing. Really, really effective, and I don't believe there is a solution besides education and vigilance.

Was the notification to Reddit users about the incident, sent from noreply@redditnewsletters.com ?

> In other news, we hired our very first Head of Security, and he started 2.5 months ago.

Uh huh.

