
> Another concept that I don't understand is that the USA's social security number has to be kept secret or otherwise your identity can be stolen. How is that even possible? Doesn't your employer need it?

I think adopting this framing is what makes it really bad. Your identity cannot be stolen. The whole concept of "identity theft" is bullshit intended to shift blame. It just so happens that some entities are incompetent at verifying people's identity. That shouldn't even be your problem: you have no influence whatsoever on how others verify identities, so you should not in any way be responsible for dealing with the consequences when someone thinks you owe them something just because they believed someone else's claim to be you.




> some entities are incompetent at verifying people's identity

Some entities are incompetent at verifying identity because some people are very loud about making sure a modern ID verification system doesn't get built, because the ability to commit fraud is a civil right or something.

We need .gov smartcards. Or at least a .gov OAuth provider. Instead, we are stuck in the dark ages of shared-secret numbers (SSN, credit card, etc.) and scans/faxes of easily photoshopped printed cards.
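
To make the contrast concrete: a shared-secret number only proves that someone, somewhere, once learned the number, whereas an attestation signed by an identity provider can be verified by anyone and forged only by whoever holds the provider's private key. A minimal sketch in Python with the third-party `cryptography` package; the provider, the attestation format, and the claim are all invented for illustration, not a description of any real system:

    # Hypothetical: the identity provider signs an attestation once; any
    # relying party (bank, employer, ...) can verify it against the
    # provider's published public key. No shared secret changes hands.
    from cryptography.exceptions import InvalidSignature
    from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

    provider_key = Ed25519PrivateKey.generate()   # lives inside the provider
    provider_pub = provider_key.public_key()      # published for everyone

    attestation = b'{"subject": "Jane Doe", "claim": "is over 18"}'  # made-up format
    signature = provider_key.sign(attestation)

    # Relying-party side:
    try:
        provider_pub.verify(signature, attestation)
        print("attestation is genuine")
    except InvalidSignature:
        print("forged or tampered attestation")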

There is an argument that this should be a private responsibility. For interactions on an existing account, this makes some sense - banks should be shipping hardware tokens, for example.

The big issue with identity theft is criminals opening new accounts under other people's identities, and this is a serious problem because the government will enforce that debt against people who didn't actually incur it. IMO it is the government's responsibility to demand better proof of authenticity before signing up to enforce it.


> We need .gov smartcards. Or at least a .gov OAuth provider.

That's not really a good solution either, though, because that requires trust in an essentially unverifiable system and the entity producing it.

> banks should be shipping hardware tokens, for example.

No, they absolutely should not. That's like saying banks should send out their own staff to handle signing things for their customers, and then insisting that the government enforce whatever that staff signed against the customer. That's a completely broken security model.


What do you propose?

I'm not sure I'm really proposing anything; it's a hard problem. Maybe a more decentralized, web-of-trust-like identity system would be a good long-term goal?

As for authenticating orders to your bank: You should be able to use any compatible product/software you like to sign orders to your bank with your private key. The bank should not have the ability to fake orders to themselves (see also the Wells Fargo fiasco).
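
A rough sketch of what that could look like, in Python with the third-party `cryptography` package (the order string and the assumption that the bank stores exactly one registered public key per account are made up for illustration):

    # Sketch: the customer signs a payment order with a key the bank never sees;
    # the bank accepts the order only if it verifies against the public key
    # registered for that account.
    from cryptography.exceptions import InvalidSignature
    from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

    customer_key = Ed25519PrivateKey.generate()   # stays on the customer's device
    registered_pub = customer_key.public_key()    # what the bank has on file

    order = b"2024-05-01 transfer 100.00 EUR to DE00 0000 0000, ref 42"
    signature = customer_key.sign(order)

    # Bank side: the bank cannot produce this signature itself, so it also
    # cannot fake orders to itself.
    try:
        registered_pub.verify(signature, order)
        print("order accepted")
    except InvalidSignature:
        print("order rejected")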


> You should be able to use any compatible product/software you like to sign orders to your bank with your private key.

If .gov smartcards are a pipe dream, this is a pipe dream^10. Private companies will throw a fit if they can't lock people in.


> web-of-trust-like identity system

The most important property of a government-level identity system is that it is extremely difficult to obtain any identity other than the one you were born with. It seems inevitable in a web of trust that fraud rings would emerge to manufacture identities for those looking to escape debts, criminal convictions, etc by some combination of tricking and bribing people to sign authentications.

> You should be able to use any compatible product/software you like to sign orders to your bank with your private key

I'm not sure you should be able to use, say, a poorly written IE extension on your unpatched Windows XP machine. Something federated would be great, where any manufacturer can technically make something compatible, but it has to meet a FIPS standard or something.

Keys could be generated onboard, and then you upload your public key or something.

We're getting way ahead of ourselves - banks are extremely hesitant to use anything better than secret numbers. I'd rather have a shitty 2FA implementation than that.


> It seems inevitable in a web of trust that fraud rings would emerge to manufacture identities for those looking to escape debts, criminal convictions, etc by some combination of tricking and bribing people to sign authentications.

Which could possibly be counteracted by attaching a certain amount of liability to a signature? Also, you can potentially detect fraud rings. But, as I said: I am not really proposing anything.

> I'm not sure you should be able to use, say, a poorly written IE extension on your unpatched Windows XP machine.

Yes, you should, absolutely. Not only is it impossible to enforce anything else, but that's just your own responsibility, just as locking your own home or car or whatever is your own responsibility.

> Something federated would be great, where any manufacturer can technically make something compatible, but it has to meet a FIPS standard or something.

Federated? You mean an open standard? Yes, that would be the idea. But none of the FIPS crap; that never works. Certification only prevents improvements, security fixes, and the like, and usually just guarantees a minimum level of security that's worse than what you'd get without it.

> Keys could be generated onboard, and then you upload your public key or something.

No, keys are generated however the customer wants to generate them. The customer supplies a public key to the bank, and it's the customer's responsibility to keep the private key secure. If they think a smartcard from a specific vendor is the solution they trust, that's fine, more power to them. If someone else trusts their own software on an air-gapped Raspberry Pi more, they should be able to do that.
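
A minimal sketch of that split, in Python with the third-party `cryptography` package (the assumption that the bank accepts a bare PEM-encoded public key is mine, for illustration): the private key is generated on whatever device the customer trusts, and only the public half is ever handed over.

    # Sketch: key generation happens wherever the customer chooses (smartcard,
    # air-gapped Raspberry Pi, ...); only the public key leaves that device.
    from cryptography.hazmat.primitives import serialization
    from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

    private_key = Ed25519PrivateKey.generate()   # never leaves the customer's device

    public_pem = private_key.public_key().public_bytes(
        encoding=serialization.Encoding.PEM,
        format=serialization.PublicFormat.SubjectPublicKeyInfo,
    )
    print(public_pem.decode())   # this PEM blob is all the bank needs to store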

> We're getting way ahead of ourselves - banks are extremely hesitant to use anything better than secret numbers. I'd rather have a shitty 2FA implementation than that.

I wouldn't. The more technically complicated the authentication system is, the harder it is to make people, and especially courts, understand what the failure modes are, and thus who should be liable when something goes wrong. Lists of random numbers are relatively easy to understand (especially the fact that the bank obviously knows the "secret" numbers too and thus cannot really prove that it got them from you).
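
To make that parenthetical concrete, here is a small sketch (Python's standard-library `hmac` plus the third-party `cryptography` package; the order string and the secret are made up): with a shared secret, the bank can compute exactly the same proof the customer can, so the proof says nothing about who produced it, whereas a signature can only come from the private-key holder, even though that distinction is much harder to explain to a court.

    # With a shared secret (SSN / TAN-list style), both sides can compute the
    # same tag, so it proves nothing about who actually sent the order.
    import hashlib
    import hmac

    from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

    order = b"transfer 100.00 EUR, ref 42"

    shared_secret = b"123456"
    customer_tag = hmac.new(shared_secret, order, hashlib.sha256).digest()
    bank_tag = hmac.new(shared_secret, order, hashlib.sha256).digest()
    print(hmac.compare_digest(customer_tag, bank_tag))   # True: the bank can forge it

    # With a signature, the bank only ever holds the public key, so only the
    # customer can produce a valid signature.
    customer_key = Ed25519PrivateKey.generate()
    signature = customer_key.sign(order)                 # only the key holder can do this
    customer_key.public_key().verify(signature, order)   # anyone can check it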



That is just an idiotic idea, especially the voting part.

Wonderful. A downvote and a "that's stupid" comment.

Is there a reason you can't articulate your views on a system that is live and working successfully?


I can, and it isn't. It cannot be.

First of all, I wrote above you in this thread, regarding smart cards for government ID:

> That's not really a good solution either, though, because that requires trust in an essentially unverifiable system and the entity producing it.

So, maybe you want to address that instead of just pointing out that people are in fact using a system that obviously has this problem that I already mentioned.

As for why electronic voting, or electronic counting of votes, is a terrible idea (I would have thought everyone on here knew that by now): it's impossible to audit. Elections are the failsafe of a democracy: they have to be able to remove from power a government that is trying to prevent its own removal, and they decide control over huge amounts of resources. Therefore you cannot base their reliability on trust in a small minority; you need a system that is very hard to corrupt, even by the government. A government server counting votes is the exact opposite of that.

See also: https://www.youtube.com/watch?v=w3_0x6oaDmI

The fact that, so far, no one has seen any problems completely misses the point. First of all, the whole system is set up such that it's really difficult to find out whether something went wrong or was manipulated (that's just the nature of electronic voting). Secondly, most elections aren't all that problematic: in times of peace and prosperity, there usually isn't much contention over the results. What makes a voting system good is that it stays reliable and trusted in times of political unrest.


Thank you. I appreciate you taking the time to explain your view.

Further, I concede that you are correct as far as voting goes. That doesn't mean the card is useless. It serves other purposes quite well, even if imperfectly.


Well, but does it? How do you know?

Voting is obviously the biggest problem, but the other uses have similar problems, especially in terms of how you could ever figure out whether it's actually not working well. You say that it serves other purposes quite well. How do you actually know that? Is it just because no one has demonstrated yet that, I dunno, the smartcards have a backdoor that is actively being used to sign stuff in other people's names? How would you know if that were the case? How would you convince a judge that you didn't sign some document when they ask you to explain why their computer tells them that you signed it?


> How would you convince a judge that you didn't sign some document when they ask you to explain why their computer tells them that you signed it?

How would you convince a judge that someone was holding a gun to your head as you signed a document?

How would you convince a judge that you didn't sign something when they ask you to explain why the signature on the document matches yours exactly?


The thing is that both of those scenarios are things judges can be expected to understand perfectly well and have some clue how to evaluate. Also, in both cases there is generally a reasonable risk that such an attempt leaves some form of evidence, which deters people from even trying it.

With a smartcard, there is nothing of that sort. It's just a bunch of electronic numbers and "the black box says you signed it!"; there isn't really anything there to investigate. Plus, judges can in general be expected not to have even the slightest clue how to evaluate claims about IT security, so more than likely you'll end up with judges simply accepting the government-mandated assumption that whatever the black box says is to be trusted.

It's the same fundamental problem as with electronic elections: The actual process is necessarily completely removed from human perception/observation, and therefore ultimately must be trusted blindly if it's not to be rejected outright.

