>This strikes me as a matter of semantics; does it really matter if I'm targeted whether they hacked my account or hacked Google?
I think it is incredibly important. If your information is put at risk due to bad practices by Google/Yahoo/Apple/Facebook/whomever, that's a problem to be taken up with the company. If you use insecure passwords and someone is able to access your information that way, then the problem is with your passwords, not with the platform.
>Think harder. Who has the root access to the servers holding the data?
As far as I'm aware, no one. Like I said, from my experience, accessing personal data and user information as an engineer required a lot of red tape and approval from 'the powers that be', and violating those rules would get you fired faster than anything else.
>Could the existing infrastructure and data segregation ever change? How many external checks and balances are in play that can't be manipulated by internal forces (i.e. is there anything stopping Google, or holding Google accountable if their data protection policies change)?
Here I agree with you, probably not (or very little). They obviously have public privacy policies, but you have no proof that they abide by those, and I don't know (and doubt that) they get audited or whatnot to make sure that those policies are followed. Which is why being an employee made me more comfortable. If nothing else, it meant I'd know ;)
> [...] which is not an attack on Google's infrastructure
This strikes me as a matter of semantics; does it really matter if I'm targeted whether they hacked my account or hacked Google?
> I'm honestly not sure if there is a single individual at the company who had that power.
Think harder. Who has the root access to the servers holding the data? Could the existing infrastructure and data segregation ever change? How many external checks and balances are in play that can't be manipulated by internal forces (i.e. is there anything stopping Google, or holding Google accountable if their data protection policies change)?
> It doesn’t matter. When we talk about “having access to user data,” we’re talking about a human being able to look at, e.g., the user logs of a former lover and doing something bad with it.
Google is one of the few companies known to have caught, fired, and publicly named an employee who did something like this.
Most tech companies about which people don't routinely raise this type of concern have far weaker security controls against (and detection systems for) this threat model than Google does.
That said:
> But that’s not the real danger of Google having this data. The danger is in ML. Google’s entire business model is predicated upon using information about you to change your behavior, and to sell on the market predictions of your future behavior.
I'm of two minds about this. I'm not thrilled about how much data Google combines and unnecessarily insists on collecting in order to allow things like the Google Assistant and Google Maps to provide full functionality. At the same time, many of Google's assistance and search services are better than their competitors' exactly for this reason. I primarily wish they were more transparent in this area, with fewer dark patterns and more user control, rather than forcing users to pick between excessive data sharing and inadequate access to Google services.
Disclosure: I have worked for Google in the past, but not since early 2015. I certainly am not speaking for them here.
>Do you trust every single person at Google, and every single person at every third party company Google shares your data with, now and in perpetuity, to never abuse the data collected on you? Personally, my circle of trust is not that large.
(Have worked at Google in the past, may in the future, am not currently.) You say this as though anyone at Google (or Microsoft or whatever) can go in and search for 'falcolas' and look through your GPS history.
I'm honestly not sure if there is a single individual at the company who had that power. I honestly think that the best thing Google could do is publicize their internal training and documents on personal information, because the regulations and such made me a lot more comfortable with giving Google, the amorphous entity, my data; no individual person is going to be looking at that data.
>, not Google (stories abound of individual GMail accounts being hacked).
One of these is not like the others, unless you're talking about something I'm not aware of. Hacking an individual GMail account requires guessing/taking someone's password, which is not an attack on Google's infrastructure (unlike the Yahoo, Sony, Apple, etc. examples); it's an attack on a bad password.
> I don't think you can accuse Google of not taking security seriously.
I don't think they take my security, and the security of my data, seriously. They seem to care very much about their security.
> They are making a security tradeoff that allows governments and insiders access to private data. This is a legitimate tradeoff to make
No, at this point I don't believe that it is legitimate, any more than it's legitimate to sell an oven which will explode if the temperature dial is set above 600° ('just don't set it that high!').
Yes, there are people at Google who work very hard to secure Google's data; there are people at Google (e.g. Adam Langley) who care a lot about users' data. There may even be people at Google who are working very hard to change its course on user privacy.
But Google, the company, does not take the security of user data against privacy threats seriously: if it did, it would use a better architecture (note that Apple doesn't take user-data security seriously, either, since they can MITM any time they want; nor does Mozilla, nor does Microsoft: no organization's hands are clean, so far as I can tell).
Do you trust every single person at Google, and every single person at every third party company Google shares your data with, now and in perpetuity, to never abuse the data collected on you? Personally, my circle of trust is not that large.
Totalitarian surveillance is here. In the West. Secret document releases aside, it's too easy to do for me to imagine a state actor not doing it.
Data breaches of differing severities occur every day, at nearly every company. I would have thought Yahoo was big enough and smart enough to avoid it; but no. Not Yahoo, not Sony, not security contractors, not credit bureaus, not Apple (à la the celebrity photo leaks), not Google (stories abound of individual GMail accounts being hacked).
> I mean, it is pretty ironic to have Google fire someone for intruding on someone's privacy...
Not ironic at all. I remember seeing a Googler comment that misusing personal data was one of the most serious violations there.
And it makes sense, given all the data they collect. So far, there have been no significant leaks or incidents of that kind, and people trust them with their data for that reason. A leak would have disastrous effects, with loss of trust and lawsuits costing billions. Considering Google's scale, it might even affect politics.
>I know this is an unpopular opinion here, but I personally think that you shouldn't mess with people's shit unless they invite you to
Here's the problem:
Last job I had, I was told I had to open a google account, because the company used google docs.
Mandatory, I was told. Company policy. I passive-aggressively opened one called <company name>_temp, and deleted it when I finished the job, but I wasn't going to risk the job itself by flat out refusing.
Does a company have the legal authority to compel its employees to open accounts that require third-party agreements? I don't know; I'm not a lawyer, and it's probably country specific. But it's not really relevant, because even if the legal answer is no, you can't realistically start a lawsuit against your own employer.
Now, Google does have a bounty program, but we used dozens of pieces of software at that company from small providers who did not. As luck would have it, none of them were account based information vacuums, but they could have been, and if they had been, I'd have been at their mercy when it came to security. It would have been that or my job.
My unpopular opinion is that the software industry needs way more regulation. We crash-test cars, we should crash-test software. I definitely support impromptu third party pentesting, because it's currently the only way I find out about lazy companies who don't take my security seriously, particularly ones that I am compelled to use.
They sure as hell never call themselves out on it.
> But somehow accounts get unbanned if they get enough attention... so this does not seem to be a problem.
Having 10 highly paid, long-tenured engineering employees who can look at small parts of a user's account data is clearly better than having 10,000 call center workers able to access users' private data.
The end result is high profile incidents get handled in a way that it would be too risky to do for everyone.
Even with the small pool of engineers, there are incidents[1] where user data is used inappropriately. Would you make this pool larger?
> Google has a huge number of activist (and surely some corruptible) employees, and yet the incidents of users data getting out are very close to zero
Am I reading this wrong, or are you saying that activists would be more likely to leak data? Then I would wonder what kind of activists you have in mind.
Agreed that it does indeed seem possible to build a security-serious company, and that Google is (or seems to be) a good example. (Now, there are other things I don't like about Google, but I guess that's off topic.)
> How many ways can you think of would there be for your statement to be false?
Not as many as you might think.
The systems at Google may seem incredibly complicated--and they are--but when I worked there, the scenarios where somebody intercepts and exfiltrates data without your knowledge are extreme.
> If someone "higher clearance" than you decided to make you believe the above, but actually retain it somewhere in someway you weren't allowed to see.
The way this data is stored, it is designed so that access to the data is logged and the logs have various alerts / auditing procedures to catch exfiltration attempts. SREs will periodically create user data and try out clever ways of destroying or exfiltrating it to test that these controls work. The Snowden leaks also cast a long shadow over work at Google, and since then, basically, all the traffic and data in storage has been encrypted in ways that make it difficult for state level actors to surreptitiously intercept it. These systems are a bit nightmarish to design, because there are competing legal/compliance reasons why data must be retained or must be purged. For example, certain data must be retained for SOX compliance, data may be flagged as part of an ongoing investigation, data may be selected for deletion for GDPR compliance, etc.
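The logging-and-auditing pattern described above can be sketched in miniature. This is a toy illustration, not Google's actual system; all names (`read_user_data`, `APPROVED_TICKETS`, etc.) are hypothetical. The key property it demonstrates is that every read is recorded unconditionally, and an auditor can later flag reads that lacked an approval:

```python
import datetime

# Toy sketch (all names hypothetical): every read of user data goes through
# a wrapper that records who accessed what, and an auditor scans the log
# for reads that lack an approved ticket.
ACCESS_LOG = []
APPROVED_TICKETS = {("alice", "user123"): "TICKET-42"}  # approvals granted out of band

def read_user_data(engineer, user_id, store):
    """Return the user's record, logging the access unconditionally."""
    ACCESS_LOG.append({
        "who": engineer,
        "user": user_id,
        "when": datetime.datetime.utcnow().isoformat(),
        "ticket": APPROVED_TICKETS.get((engineer, user_id)),
    })
    return store.get(user_id)

def audit_unapproved():
    """Flag any logged access that happened without an approval ticket."""
    return [entry for entry in ACCESS_LOG if entry["ticket"] is None]

store = {"user123": {"email": "user123@example.com"}}
read_user_data("alice", "user123", store)    # approved access
read_user_data("mallory", "user123", store)  # no ticket -> will be flagged
print([entry["who"] for entry in audit_unapproved()])  # ['mallory']
```

A real system would also have to protect the log itself (append-only storage, separate custodians) so the person being audited can't erase their own trail, which is part of what makes these systems "nightmarish to design."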
Obviously, it is POSSIBLE that someone is still exfiltrating data, but you have hundreds or thousands of smart engineers who are trying to prevent "insider risk" and "state level actors". People within the company are a big part of the threat model, and agencies like the CIA, Mossad, KGB, etc. are also part of the threat model.
The stack may be complicated, but it's also designed with defense-in-depth to prevent people at lower levels in the stack from subverting controls at higher levels in the stack. For example, people who work on storage systems may be completely unable to decrypt the data that their storage systems contain.
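That last property, a storage layer that cannot decrypt what it holds, can be sketched as envelope-style separation of duties. This is a deliberately toy illustration (XOR with a random pad stands in for real authenticated encryption, and the class names are invented), showing only the structural point: the storage system handles opaque bytes, while keys live in a separate service it cannot reach into:

```python
import secrets

# Toy illustration, NOT a real cipher: keys live only in a separate key
# service, so the storage layer stores ciphertext it cannot decrypt.
class KeyService:
    def __init__(self):
        self._keys = {}  # key material never leaves this service

    def encrypt(self, key_id, plaintext: bytes) -> bytes:
        key = self._keys.setdefault(key_id, secrets.token_bytes(len(plaintext)))
        return bytes(p ^ k for p, k in zip(plaintext, key))

    def decrypt(self, key_id, ciphertext: bytes) -> bytes:
        key = self._keys[key_id]
        return bytes(c ^ k for c, k in zip(ciphertext, key))

class StorageLayer:
    """Sees only opaque blobs; has no access to KeyService internals."""
    def __init__(self):
        self._blobs = {}

    def put(self, blob_id, blob: bytes):
        self._blobs[blob_id] = blob

    def get(self, blob_id) -> bytes:
        return self._blobs[blob_id]

kms, storage = KeyService(), StorageLayer()
storage.put("u1", kms.encrypt("u1-key", b"user secret"))
# The storage layer never holds the plaintext; only the key service
# can turn the blob back into readable data.
recovered = kms.decrypt("u1-key", storage.get("u1"))
```

In a production design the key service would itself be access-controlled and audited, so subverting the storage layer alone gains an attacker nothing.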
If you're going to get pissy about it, it's obviously true that we are not 100% certain that data is destroyed when we say it is. But this invokes a standard for "knowing" that precludes knowing the truth of any statement which is not an analytic statement.
You don't have to believe it, even for a second, if you didn't work with the wipeout systems. That's fine. I'm not trying to convince you that wipeout works as intended, because I know that I can't provide the evidence to you.
However, you seem to be arguing that other people don't know that the wipeout systems work--that it's somehow impossible to know.
> I recognize that there are many different kinds of google users. Some folks [...] need maximum security.
(Un?)fortunately, this is not exactly true. While it's true that some folks do need "extra security", the steps under discussion here are fortunately still applicable to the general population. We as a society have decided (correctly) that leaking people's private photos, conversations, and data is an unacceptable risk, and we punish companies strongly for it. So companies cannot simply make things less secure.
Auth is a complex topic with many gotchas, and there is just no way around it. It's like saying you'd like to drive a car without a license: sure, getting the license is "hard", but if you want to drive, it's what you've got to do. Except here there are a hundred cars actively trying to crash into you and steal your goods.
> I think there is a difference between having your medical data accessed by some Bulgarian cyber mafia and having it accessed by Google.
Not if the Bulgarian cyber mafia hacks Google to steal that data. Okay, suppose Google has super-ultra security and has never been hacked (a false statement), but even so: how many companies have Google's security? I think you can barely count them on one hand. I completely expect drug cartels, for instance, to be able to hack into organizations holding user data any time they want. If Google is easily allowed access to this data, then a dozen other companies with much weaker security will also be allowed access to it.
>Curious: if anyone thinks it is bad for Google to give the police information, do you think it's bad for Google to have the information in the first place?
Yes. Having such comprehensive information stored in centralized databases on billions of users is extremely dangerous.
It's like when armies in WWI marched in straight rank and file against machine gun fire because they didn't yet know the capabilities of new technology. Though this is much more subtle.
E.g. the OPM hack => blackmail material on everyone with a US security clearance.
Anyone who can gain access to this information (via hacking, social engineering, or court order) potentially has total control over the identities of their targets.
Absolutely not. Vigilance often keeps allies who might turn on you from turning on you, and -when it fails to do that- it alerts you to their treachery. This is why I asked for evidence of Google's treachery, rather than dismissing the possibility.
In any significant conflict it is very wise to have an idea of who your stalwart allies are, and who you are able to currently rely on.
Any ally can turn on you at any time. That's human nature. However, the man who allows himself no allies is far weaker and far more susceptible to attack than the man who has some.
Like any ally, Google may one day turn on us. For the past several decades, examination of the reports from people working in a variety of positions inside the organization leads us to understand that -despite the fact that Google is a huge advertising company- it realizes that insecure computers and computer systems hurt them at least as much as they hurt everyone else. Those reports also lead us to understand that Google works really hard to ensure that the software that it relies on and the software that it produces is as secure as it can be reasonably made, given the resources Google has available.
If this confuses you, think of it this way: Google makes money by keeping the data that it collects on you (and the analysis of that data) secure and out of the hands of everyone outside of Google. Because they use commodity hardware and software in the regular course of their business, they find themselves testing and fixing issues in software and -sometimes- hardware that we all use.
Because there is no competitive advantage for them to withhold those fixes (indeed, doing so makes everyone safer and keeps them using their computers, which keeps delicious data flowing into Google's robots), Google publishes these fixes on a regular basis.
> I have not seen any evidence that they violate their own policies, even when I worked there a while ago and had internal knowledge.
Whether Google violates their policies today is the wrong question to ask. Nothing about these policies is long-term legally binding for Google and they can be changed on a whim.
While Google includes this language:
> We will not reduce your rights under this Privacy Policy without your explicit consent.
I'm not sure that covers them increasing their own rights to collect, share, and sell data.
Remember - nothing lasts forever. One day Google will be in a financially desperate situation and their investors will demand that they do anything they can to stop the losses. Meanwhile they will have a valuable trove of data on millions of people.
This is not just hypothetical. When Google decided that Google+ was a priority and that only real names should be allowed, many people were forced to de-anonymize formerly anonymous YouTube and Gmail profiles or be removed from the service.
The only real way to assure the security and privacy of data is not to collect it. The only way to ensure that the likes of Google/Apple/Facebook won't collect the data is through legislation that gives privacy policies real teeth when they're violated and gives users the power to reject changes to these policies, in whole or in part.
> Google is the destroyer of privacy. Google tracks their users at an unprecedented level in the whole history of humanity and if they do this, we now have concrete proof that they don't have values that are above their bottom line.
Sure Google stores a lot of data about me, but why should I be worried? They haven't done anything malicious with it. All user data from Google stays on Google, so how is that any different from storing personal photos on Dropbox?
>Really? How do you even know if the Google CEO read your Gmail today? What recourse do you have? None.
This is simply a risk you take with any company you become a customer of. You willingly give that company certain power over you. Rogue employees will always be able to do things that are harmful.
There are many cases of rogue employees working for Comcast and AT&T who will look up someone's IP address and find their full name and address, and harass them or spread that information. Most of the time, some number of employees need access to information like that, and eventually one of them will end up going rogue or becoming mentally unstable.
> "The funny thing about all this is that (perhaps this case aside) Google is getting a lot of bad PR because they actually tell people what they do."
Not quite, Google isn't being as honest as you're suggesting. They tried to spin the blame off onto a rogue engineer. From the article: "Google has portrayed it as the mistakes of an unauthorized engineer operating on his own and stressed that the data was never used in any Google product."
Although you're right that privacy invasions occur across the entire industry, I think that's even more reason to send the message that privacy is a real concern.