I worked on something similar in the 90s. One of the problems that we found was that people dislike having to answer their phone late at night. (Or making phone calls, for that matter.)
The biggest security risk we found wasn't the second form of authentication itself; it was making sure it was just as hard to add other phone numbers. Then there were disconnected phones, dead cell phones, etc., to deal with.
If these guys can make it all work successfully and be profitable, more power to them; it's an uphill battle for sure.
I'm not working on it any more, so I'll offer some free suggestions to them:
* Offer other means of authentication - biometric seems pretty popular, but you lose mobility. Rolling keycode generators are also nice (and not terrible to implement).
* Double-check your security at the data center. I was working on secure data storage, not email, so the problem set was slightly different. However, there was some technology we were looking at licensing that used a physical air-gap on a router to remove data from the network when it wasn't authorized to be online. Probably not economical for individual email accounts; but possibly useful for bigger clients. As an example, years ago I was talking to someone at a Procter & Gamble research facility. All of their computers used detachable hard drives (well, Iomega Jaz drives, but still) and at the end of the day, every drive got removed from the computer and locked into a safe. That way if there was an intrusion, the data was totally inaccessible.
* Add an SMS interface that would just text someone a keycode (similar to a rolling code generator, but requiring only a cell phone). A rough sketch of the idea follows this list.
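A minimal sketch of that SMS-keycode flow, assuming a hypothetical send_sms gateway function (everything here is illustrative, not anyone's actual implementation):

    import secrets
    import time

    CODE_TTL_SECONDS = 120  # codes expire quickly to limit brute-force attempts
    pending_codes = {}      # phone_number -> (code, issued_at); a real system would persist this

    def send_login_code(phone_number, send_sms):
        """Generate a one-time numeric code and text it to the user.

        `send_sms` is a hypothetical callable (phone_number, message);
        any SMS gateway would slot in here.
        """
        code = f"{secrets.randbelow(10**6):06d}"  # 6 digits, cryptographically random
        pending_codes[phone_number] = (code, time.time())
        send_sms(phone_number, f"Your login code is {code}")

    def verify_login_code(phone_number, submitted):
        entry = pending_codes.pop(phone_number, None)  # single-use: pop regardless
        if entry is None:
            return False
        code, issued_at = entry
        if time.time() - issued_at > CODE_TTL_SECONDS:
            return False
        return secrets.compare_digest(code, submitted)  # constant-time comparison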
I think late night phone calls are a lot more palatable now than they were in the 90s, when phone calls for many people meant loud noises in rooms all over one's home instead of a subtle vibration in one's pocket.
This is out of my field, but how do you all think this will be compromised?
My guess would be by spoofing the CEO's home IP & cookie to bypass the verification, based on this paragraph from the site:
"Plus, users only need to receive a verification call when they are logging in from an unrecognized computer. When logging in from a home or work computer, a cookie can be stored so that no verification call is required."
Definitely. I read recently that a certain model of Nokia from a certain German factory is in high demand because it makes it so easy to clone numbers. So I'd imagine cloning the number is one way through, but I wonder if there isn't an even simpler way in.
Assuming that they use a strong enough cookie (e.g., containing a randomized unique key that's verified by the server) and all connections are made over SSL, it should basically be as resistant to remote attack as the person's cellphone (e.g., SIM chip spoofing to get two handsets on the network under the same call number).
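For the curious, "strong enough cookie" amounts to something like this sketch (the store and names are illustrative):

    import secrets

    # Server-side session store; in production this would be a database table.
    sessions = {}

    def issue_device_cookie(account_id):
        """Mint an unguessable token and remember it server-side.

        128 bits of randomness makes online guessing hopeless, and the server
        lookup (rather than trusting anything encoded in the cookie itself)
        means a forged cookie is just an unknown key.
        """
        token = secrets.token_urlsafe(16)  # ~128 bits
        sessions[token] = account_id
        return token

    def device_is_recognized(token):
        return token in sessions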
I'd say the weak points here are the user and the cellphone; I also think the barriers to usage that the site sets up will most likely deter all but the most paranoid users.
Unless you're incompetent, cookies aren't compromised by guessing them or by scooping them off the wire. They're compromised by cross-site scripting and XSRF attacks.
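For context on the XSS vector: any script running in the page can read cookies unless they're flagged HttpOnly. A minimal sketch of the flags that blunt this, using only Python's standard library:

    from http.cookies import SimpleCookie

    cookie = SimpleCookie()
    cookie["recognized_device"] = "random-session-token"
    cookie["recognized_device"]["httponly"] = True  # invisible to document.cookie, so injected script can't read it
    cookie["recognized_device"]["secure"] = True    # never sent over plain HTTP, so it can't be sniffed

    # The header value a response would carry:
    print(cookie["recognized_device"].OutputString())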
I know XSS and CSRF are security issues for the attacked site (had to deal with that stuff with MantisBT somewhat recently), but can you enlighten on how XSS or CSRF can compromise your cookies for a third party site? I didn't realize that was possible.
This is basically the same thing as MobilePass, which hasn't been broken to my knowledge, so I wouldn't expect a direct attack. Your suggestion is a possibility.
There are also a bunch of unknowns. Are there any direct attacks (SQL injection, privilege escalation, etc.) on the StrongWebmail site? What sort of datacenter is it in (alchemy.net, which is HIPAA compliant and should be pretty safe)? Are the challenges generated in a cryptographically secure manner? How secure is the CEO's home machine? Does the CEO purge cookies at the end of the session? Does it count if I can manage to redirect his e-mail elsewhere instead (which can be done with well-known DNS exploits)?
Part of the defense here is that the prize is small enough that it's not worth trying many of the more elaborate tricks (like attempting to break the PRNG for the keys, if any).
Interesting idea: instead of attacking StrongWebmail itself, you target the CEO's ISP. If this is legit, I could see some network engineer at the CEO's ISP making a few routing changes, logging in (since, from the text, it appears the IP would be allowed, and a re-login would be required because the proper cookies aren't being presented) to claim the $10,000 prize, then undoing the routing changes.
"Here’s the thing, in order to get into a StrongWebmail account, the account owner must receive a verification call on their phone. This means that even if your password is stolen, the thief can’t access your email because they don’t have access to your telephone."
Great. Users will love receiving calls at all hours as script kiddies in Russia try to log in to their accounts.
"Break into my email: get $10,000. Here is my username and password. Username: CEO@StrongWebmail.com Password: Mustang85"
Great! Let's give it a go:
Error Logging In
We could not log you into your account because of the following error(s):
The username or password you entered is incorrect, or your account has been suspended/closed.
Indeed! Since the username and/or password are invalid anyway, why not make it a cool million dollar contest? Or for that matter, a billion? Perhaps it would be a trifle TOO obvious.
The CEO username and password work for me. I've checked the obvious stuff and they have that covered, so it won't be an easy $10,000. But I'm sure no system is 100% infallible :)
I'm guessing that, with as many people as are probably trying to log into his account, he doesn't answer his phone any more, or even use that email account. How would he know if the access he just authorized was him, or someone else?
Maybe if they texted him a code he had to type in, it would be more secure, but then it would be simple enough to brute-force all of the codes, or find the seed and generation mechanism.
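For reference, rolling codes are usually HOTP-style (RFC 4226): the entire sequence is derived from a shared seed, which is exactly why recovering the seed breaks the scheme. A minimal sketch:

    import hashlib
    import hmac
    import struct

    def hotp(secret: bytes, counter: int, digits: int = 6) -> str:
        """RFC 4226 HOTP: the whole code sequence is determined by `secret`."""
        mac = hmac.new(secret, struct.pack(">Q", counter), hashlib.sha1).digest()
        offset = mac[-1] & 0x0F                      # dynamic truncation
        binary = struct.unpack(">I", mac[offset:offset + 4])[0] & 0x7FFFFFFF
        return str(binary % 10**digits).zfill(digits)

    # Anyone who learns the seed can print the same codes the token does:
    seed = b"shared-secret-from-provisioning"
    print([hotp(seed, c) for c in range(3)])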
"In addition to protection from fraudsters, StrongWebmail.com protects against friendly-fraud where a boss or spouse snoops on your email. If one of these people tries to log into your account, you’ll receive a phone call alerting you that someone is trying to access your email (like a silent alarm)."
Seems to indicate that failed attempts would also text you?
It seems to indicate that, but I don't think that's the intention. When someone tries to log in (and successfully passes the username/password stage), then you receive a phone call, because only your telephone numbers are registered to the account. It's implicit in the process.
"users only need to receive a verification call when they are logging in from an unrecognized computer. When logging in from a home or work computer, a cookie can be stored so that no verification call is required."
1) Compromise target's computer using some known exploit
That's literally, like, an entire FRACTION of what an application penetration test costs!
They must really be serious!
[quick edit: I really hate talking about numbers here, because if you have some bootstrapped YC-style company and you're worried about security, I'd love to think you could reach out to us and not have us try to get into you for tens of thousands of dollars --- but for an actual security assessment with a public statement at the end of it this is way, way south of what the market pays]
Sell it to as many customers as you can find first, and then go to them for the extra $10,000. You get more money than you would have otherwise, and you also save innocent people from being unduly compromised. It's a win-win situation ;)
Kinda makes me hope that the person who does find this vulnerability just publishes:
"I have found a critical vulnerability in this application, which I will demonstrate under NDA to any reporter who requires verification. I will under no circumstances reveal this vulnerability to the vendor or to any other party."
So, based on the edit, how should small startups (or Open Source projects) reach out to security pros?
Some Open Source projects have millions of users, and so security is obviously a concern...I've noticed in our own project that we occasionally get penetration testing reports from security companies out of the blue (I guess because Webmin is high profile enough, and is potentially dangerous enough, to be on everyone's radar), but how would one get a new project or product onto the radar? Obviously, Open Source projects generally don't have 10k*N dollars to spend.
I ask because I've recently thought of doing something along these lines for our own stuff. Not because I want publicity, but because we really want to know about any issues.
I've never felt comfortable talking about this here before because I have zero interest in trying to make money off companies that need to conserve every dollar they've got. Been there. Recently.
It's really kind of tricky. I've been working with my team for the past couple of months on an "Indie SDLC" (SDLC being the industry's jargon for the secure development lifecycle); we gave a talk on it at C4 last August. Rentzsch may post the video someday, and I'll be sure to post it here.
I have a shortlist of things I think every company should be doing now on security:
1. Stop talking about security. (Don't be a target).
2. Train every developer on SQL Injection and Cross-Site Scripting, and --- if they're writing C code --- Integer Overflows. (There's a short SQL injection sketch after this list.)
3. Avoid a set of "features that always doom dev teams", including encryption, password storage, browser plugins that inject into the DOM, templating, installers, network listeners, and file upload/download.
4. Deploy the "rubber chickens" that make users feel safe --- SSL, big long random URLs, little lock icons, and if you really need to, something like Hackersafe (which is snake oil, but whatever).
5. Screw with amateur web pests --- use your own magic version of base64 with some of the characters swapped, use 3DES for something with swapped-around s-boxes, etc.
6. Make time in QA for every release to fuzz. Fuzzing is all you really need to do for security QA. Buy a copy of Burp Suite and run the "Intruder" on every page. Write your own fuzzer for any custom formats you handle.
7. For god's sake have a security contact and a /security URL on your site. Post a GPG key. Publish advisories when people find things. Act like you've handled this before.
(Obviously we fleshed a lot of this out, and the fact that we haven't posted it yet tells you that I'm not totally in love with where it is now).
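On point 2: the whole SQL injection lesson fits in a dozen lines. A minimal sketch using Python's standard-library sqlite3 (table and credentials invented for illustration):

    import sqlite3

    conn = sqlite3.connect(":memory:")
    conn.execute("CREATE TABLE users (name TEXT, password TEXT)")
    conn.execute("INSERT INTO users VALUES ('ceo', 'Mustang85')")

    name = "x' OR '1'='1"  # classic injection payload

    # Vulnerable: the payload rewrites the query and matches every row.
    rows = conn.execute(
        "SELECT * FROM users WHERE name = '%s'" % name
    ).fetchall()
    print("string-built query:", rows)  # leaks the CEO's row

    # Safe: the driver binds the value; the payload is just a weird username.
    rows = conn.execute(
        "SELECT * FROM users WHERE name = ?", (name,)
    ).fetchall()
    print("parameterized query:", rows)  # empty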
Getting someone who bills N x $100 an hour to look at your app for free, even if it's open source, will probably be tricky. With the good firms (I like to think we're one of them), advice is free, so by all means reach out with lots of questions. If the question is "how can I get my app looked at without spending $50,000", well, that's a good question! There's probably something creative you can do.
Excellent checklist. Several things I never even thought of.
3. Avoid a set of "features that always doom dev teams", including encryption, password storage, browser plugins that inject into the DOM, templating, installers, network listeners, and file upload/download.
We're doomed. All but two of these are present in every one of our projects. (I kid about being doomed. Mostly. We do have 11+ years of being a prime target for attacks to give us some confidence that we're doing OK. But we do unavoidably have to handle most of those features that always doom dev teams.)
5. Screw with amateur web pests --- use your own magic version of base64 with some of the characters swapped, use 3DES for something with swapped-around s-boxes, etc.
Isn't fiddling with the encryption, without understanding, what got Debian into trouble a while back with OpenSSL? Perhaps I don't even know enough to know what you're suggesting when you say "3DES with swapped-around s-boxes".
Buy a copy of Burp Suite and run the "Intruder" on every page.
I'd never heard of Burp Suite before. I had no idea there was an automated tool for this stuff (and at a reasonable price). Awesome.
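And on the "write your own fuzzer for any custom formats" advice from point 6: even a dumb mutation fuzzer catches a lot. A minimal sketch, where parse_record is a made-up stand-in for whatever custom-format parser you actually ship:

    import random

    def parse_record(data: bytes):
        """Stand-in for your custom-format parser; raises on malformed input."""
        if not data.startswith(b"REC:"):
            raise ValueError("bad magic")
        return data[4:].split(b",")

    def mutate(seed: bytes) -> bytes:
        """Flip, insert, or delete a few random bytes."""
        buf = bytearray(seed)
        for _ in range(random.randint(1, 4)):
            pos = random.randrange(max(len(buf), 1))
            op = random.choice(("flip", "insert", "delete"))
            if op == "flip" and buf:
                buf[pos] ^= 1 << random.randrange(8)
            elif op == "insert":
                buf.insert(pos, random.randrange(256))
            elif op == "delete" and buf:
                del buf[pos]
        return bytes(buf)

    seed = b"REC:alice,bob,carol"
    for _ in range(10_000):
        case = mutate(seed)
        try:
            parse_record(case)
        except ValueError:
            pass  # expected rejection of malformed input
        except Exception as exc:  # anything else is a bug worth filing
            print(f"crash on {case!r}: {exc}")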
You're always going to be doomed. A talk we did recently for a friend of ours in Chicago was entirely this: instead of the OWASP "Top 10", which is rapidly getting outdated, a list of top scary features. It sounds at first like we're saying "you're not allowed to build these!", which is the effect we're going for, but really the value of knowing what these features are is:
* You can intercept them in the requirements phase and either refactor your design so you aren't as exposed to them (maybe you don't really need file upload, maybe you can use S3, etc).
* You can make sure junior devs don't get assigned the scary features (a big chunk of the stupid flaws we find are accompanied by the "oh, that was the new developer who did that" excuse).
* You can triage those features in code review and QA.
There is a big but apparently subtle difference between "encryption you rely on" and "rubber chicken encryption" (you can see now why I'm not in love with our Indie SDLC yet). Yes, you never want to dick around with encryption for your single-signon tokens --- use GPGME or Keyczar or something. But once you've secured your app, you can add obfuscation to reduce the likelihood that people will jump on your mistakes.
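The "magic base64" from point 5 really is a one-liner class of trick. A sketch (the rotated alphabet is arbitrary, and to be clear this is obfuscation to annoy pests, not security):

    import base64

    STD = b"ABCDEFGHIJKLMNOPQRSTUVWXYZabcdefghijklmnopqrstuvwxyz0123456789+/"
    # Arbitrary tweak: rotate the alphabet so off-the-shelf decoders get garbage.
    CUSTOM = STD[13:] + STD[:13]

    ENC = bytes.maketrans(STD, CUSTOM)
    DEC = bytes.maketrans(CUSTOM, STD)

    def magic_b64encode(data: bytes) -> bytes:
        return base64.b64encode(data).translate(ENC)

    def magic_b64decode(data: bytes) -> bytes:
        return base64.b64decode(data.translate(DEC))

    token = magic_b64encode(b"session|12345")
    assert magic_b64decode(token) == b"session|12345"
    print(token)  # looks like base64, decodes to junk with standard tools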
And yes. If you sell a web app commercially, you should own a copy of Burp. For what it does, it's amazingly cheap (like $150). It is the industry standard web pest tool.
Wait, you have to receive a phone call every time you want to check your email from a different computer than usual? This sounds incredibly annoying - especially if someone's actually trying to break into your account.
>> "Plus, users only need to receive a verification call when they are logging in from an unrecognized computer. When logging in from a home or work computer, a cookie can be stored so that no verification call is required."
So all an attacker needs to do is get access to your home or work computer...
This is simply a re-application of device fingerprinting and voice verification that many banks are using to secure online banking.
There are many things that can go wrong here, based on my previous experience dealing with these systems, but most require client-side subversion, whether it is malware, cross-site scripting, or cross-site request forgery. (For example, finding a CSRF vuln that allows you to update the phone number to your own and compelling your victim to visit the link through a variety of techniques.)
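To make the CSRF defense concrete: the standard fix is requiring state-changing requests (like a phone-number update) to carry a secret tied to the session, which a forged cross-site form can't supply. A minimal HMAC-based sketch (the endpoint and names are invented):

    import hashlib
    import hmac
    import secrets

    SERVER_SECRET = secrets.token_bytes(32)  # per-deployment key

    def csrf_token(session_id: str) -> str:
        """Derive a token the attacker can't compute without SERVER_SECRET."""
        return hmac.new(SERVER_SECRET, session_id.encode(), hashlib.sha256).hexdigest()

    def update_phone_number(session_id: str, submitted_token: str, new_number: str):
        # A forged cross-site request rides on the victim's cookie (session_id)
        # but can't include the matching token, so it fails here.
        if not hmac.compare_digest(csrf_token(session_id), submitted_token):
            raise PermissionError("missing or invalid CSRF token")
        print(f"phone number updated to {new_number}")

    sid = "victim-session"
    update_phone_number(sid, csrf_token(sid), "+1-555-0100")  # legitimate form post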
Chase Bank does something similar. If your computer isn't cookied, then you are required to perform some sort of identity authentication (voice, text, email, etc). I'm surprised more important utilities like email don't follow this concept... it would at least weed out the brute force attacks.
A vulnerability in Google Mail that would let you break anyone's account? A six-figure finding. High-profile stupidity (like people with trivially guessable password reset questions) aside, nobody has more incentive to protect your email than Google, Yahoo, and Microsoft do. They're probably better at it than some startup is, too; they've sure spent enough on it.
I'm a strong believer that any system that contains anything meaningful to a user should incorporate reasonable two-factor authentication.
The problem is that folks in security tend to forget that they have to pay attention to the business side of things. It's e-mail.
A certificate plus pass-phrase, and a three-attempt fall-back to mandatory phone authentication, would be way more than enough, assuming that everything under the hood was sound (DNS, SSL, etc. ... which have all been shown to have weaknesses recently). Why hammer your way through a steel door when you can break open the single-pane window in the basement?