
They didn’t say “completely useless,” they said “fairly useless.”

If you look at this case even briefly, you should come to the conclusion that the “security paperwork” is fairly useless.

An FTP server compromised because of a terrible password policy? No suspicious activity alerts of any kind? Executives who (based on their comments) are clearly ignorant of what makes software actually secure?

What is the paperwork able to prevent, if it can’t prevent such fundamental problems?




It's not pointless from a usability perspective, just pointless from a security perspective.

The article seems purposefully thin on details, but there are a couple of points here:

1. The owners/maintainers thought that it was possible to lock out specific people/clients, but this is obviously not the case.

2. The owners/maintainers think that it's impossible to have unauthorized clients using the system, but this is obviously not the case.

Either the owners/maintainers are incompetent, or the system is not functioning as it was designed to. This makes it all the more likely that 'nefarious' forces can infiltrate (or already have infiltrated) the system and snoop on users.

A system that is relying heavily on secure design should not be considered to be 'working' when it is not functioning as the designers believe that it should be.


Interesting case. Kind of rooting against the EFF on this one because it's victim-blaming, even if they were useless at security. Dropbox and Slack have also been useless at security at different moments in their existence. Transmission was useless at security until last week, and Linux Mint was recently called out for insecure practices too.

    We know that there’s no such thing as “perfect” 
    security, but when you are caught with bad practices 
    in a banner year for data breaches, you should be 
    dedicated to securing your users’ information

You say that as if to imply that documents managed by conventional office suites were in any way secure to begin with. You seem smart enough to realize that isn't the case.

I'm completely missing how his example of a Word document that can only be opened by approved users on approved hardware within the corporation is supposed to be a bad thing.

Honestly, that sounds pretty fantastic. I've been using 3rd party tools/extensions to do this sort of thing in corporate and government environments for years, but having the attestation go all the way down to the hardware level is a big value-add, especially with so much ransomware/spyware/extortion/espionage going on these days.

Can someone please explain to me how the author might see this level of security as a bad thing?


He doesn't say that it's useless, but that it's too much hassle to use even for someone who works in security.

It works in companies because you don't get a choice; if you did, most people wouldn't use it.


Users being lazy about security doesn't excuse companies from being lazy about security. I don't know if the latter is true in this case, but the line of thinking you have presented is surely flawed.

From the article: "Proprietary security software is an oxymoron -- if the user is not fundamentally in control of the software, the user has no security."

I could not agree more.


The letter on their site is dumb. It's the completely wrong approach to security. They implicitly assume that v5 will be perfect and won't be exploited right out of the gate.

They would be better off focusing on securing their existing site. Log EVERYTHING, make sure you don't have any way to inject SQL, and make sure that if anyone breaks out server-side they can't get to anything useful. Just basic stuff.
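The SQL-injection point above comes down to one habit: never interpolate user input into query strings; let the driver bind it as data. A minimal sketch using Python's stdlib sqlite3 module (the table and values here are hypothetical, purely for illustration):

```python
import sqlite3

# In-memory database with a made-up users table for demonstration.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT, email TEXT)")
conn.execute("INSERT INTO users VALUES (?, ?)", ("alice", "alice@example.com"))

# A classic injection attempt as the "user-supplied" value.
user_input = "alice' OR '1'='1"

# UNSAFE (don't do this): string formatting lets the input rewrite the query.
#   query = f"SELECT email FROM users WHERE name = '{user_input}'"

# SAFE: the ? placeholder binds user_input as a literal value, never as SQL,
# so the OR-clause trick is just an oddly named user that matches nothing.
rows = conn.execute(
    "SELECT email FROM users WHERE name = ?", (user_input,)
).fetchall()
print(rows)  # [] -- the injection attempt finds no matching row
```

The same placeholder discipline applies to any driver (psycopg, mysql-connector, JDBC prepared statements); only the placeholder syntax differs.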

That said, they don't owe anyone anything. It's all volunteer, but if you are going to do it do it well.


This is a great example of security nihilism. "This tool can't protect against every possible attack from every possible adversary, therefore it is useless."

Building safe, secure products at scale for real populations is a process of balancing multiple equities and addressing the most pressing and realistic threat scenarios. This always means building security protections that have theoretical failure modes. The real art is in trying to make those failures as graceful as possible while educating your huge, diverse set of users on the security properties of the product and in what situations they can rely upon it.

Doing this well is still something the entire industry needs to work on, but giving it a shot and building practical protections for real people is always a better option than throwing up your hands and giving up.


"It's friday and I want to go home" :)

But seriously, I'm sure it was made by people who just didn't stop to think about security: "it works, so we're done here." Then, as a business, you're not going to try to fix it if the software already works. That would be pure cost.


They put in no safeguards. It was designed to give developers access to personal information.

What more do you want?

Criminal negligence is still criminal.


It's only not a security problem if the file never leaves that single-user system. Which was definitely not a safe assumption, even then.

Yep.

One of the most annoying habits of computer professionals when talking about security is how we object to every idea by showing how a stupid/lazy end-user could render it useless.

It's not that users will never do that: it's that users can't get into secure habits if we paralyse ourselves into not providing reasonable tools.


This seems more like laziness hiding behind the guise of "security". Such a bad user experience is inexcusable.

If they're relying on multiple weak pieces of information like this for security, they still aren't secure, and now they just created a huge pain in the ass for any user of their system, as they have to somehow know all the pieces of information which are supposed to be secret. Huge-pain-in-the-ass security doesn't tend to work very well...

This is a valid response if you ask me.

There is a culture of half-arsedness in some businesses where they don't respect users' security and privacy requirements. This is partially down to plain old incompetence, but in my experience it's usually down to the fact that if doing something properly and testing it properly doesn't add business value, then it's not done. At the risk of pissing people off here: that culture is prevalent amongst startups.

They screwed up, they're getting sued. They should have tested it properly.

If this was a public organisation that left everyone's files in an open skip overnight they'd get sued too.


They didn’t call it secure, as per your initial quote. They say it is designed to have a small attack surface. You failed to acknowledge that security means different things in different contexts. Besides, it’s a free offering that clears up insecurities other offerings have. If you want something to be more secure, you can point out flaws you find in the intended way (filing issues), which might help improve the situation. Calling it out the way you did (probably without trying the tool, and even more likely without substantial knowledge of better approachable alternatives in the space) doesn’t help at all.

Right. And it does not discuss those tradeoffs. Ignoring that every time steps like these were taken (Java applets, ActiveX, WebGPU, Outlook's enhanced emails) it was a security disaster is not a good sign.

There is some mention of risks in footnote 3, focusing on privacy, but that's not at all sufficient. And placing it in a footnote is telling on its own.

