
This thread is a really interesting example of how easily humans can simultaneously hold conflicting beliefs/opinions. I'm gathering that a lot of developers and businesspeople here:

a) are very concerned about collection of their own data

b) derive material value from Crashlytics, Mixpanel and other "tracking tools" for their work

It's tricky to reconcile those two ideas.




The tracking I do of my users is good. The tracking others do to me, as their user, is bad.

As an avid amateur student of human behaviour, I appreciate your keen eye for this. I actually don't think it's tricky to reconcile the two.

The people conducting the analytics are doing it 'for the right reasons'. The people being analysed don't trust anyone to do anything 'for the right reasons' or without some amount of scope creep around the edges of 'the right reasons'. Self versus someone else. I've got the best intentions, but I don't think anyone else has.

It touches on the theory that if a company were a person, it would have the traits of a psychopath / sociopath. The ends justify the means. Grow the business, and analytics is the best way to do it, or to visualise the progress and adjust the method.

Human as part of the business: We gotta do these things to give ourselves the greatest chance of success

Human as consumer of app: Why would they possibly need access to THAT?

There are definitely multiple personalities involved. Which seems increasingly normal. Wasn't there an HN article on that recently?

The brain is a denial-machine. It enables hypocrisy, and I can only theorise that this is due to our inability to survive if that wasn't the case.


Intuitively I'd agree that "best intentions of self vs. others" is probably the mechanism behind this type of cognitive dissonance. What I find harrowing is that in all likelihood, this is true for myself as well in ways that I'm not aware of!

Your point about denial also reminded me of an article I read a couple of years ago, attempting to explain why humans deceive themselves. The soft conclusion: we deceive ourselves in order to better deceive others. In this case, we convince ourselves that we have the best intentions in order to convince others of the same. I can't find the exact article, but this one is similar and refers to the same research by Robert Trivers. It's an interesting read: https://www.scientificamerican.com/article/living-a-lie-we-d...


I'm a good example of this. I'm an ad guy and a marketer, and as such I'm much more intimately familiar than most with how this stuff works, what it collects, and how it is used. I'm also going to expand what I say beyond just the world of apps, because it really applies to anything digital these days, whether in an app or on the web; I hate thinking of those things in silos.

To be a successful marketer in this day and age, proper analytics is a non-negotiable requirement. If I was interviewing somewhere and they told me that I had to market and advertise for an app or website, but that I couldn't track things, that would be the end of that discussion. I would be set up for failure from day one. If anyone wants to make a case otherwise, I'd ask that they share their credentials as someone sufficiently experienced and qualified to make such a case, and how they would go about being successful without that data when pretty much any significant digital (or non-digital to an extent) strategy these days requires that data to measure success and optimize for it.

So there's that piece.

As an end-user, this is often a source of cognitive dissonance for me, and it has grown over the years. 8+ years ago, I had very different feelings when people talked to me about what was being tracked and how it was being used. It was less audience-centric, cross-device/channel tracking was not really a thing yet, and we didn't have anywhere near the aggressive tracking that FB and Google have today (even though some of the first signs of that were showing up publicly perhaps).

Today I'm pretty paranoid about the data out there, who has access to it, and how that data can be used, both for anti-competitive business purposes and for more nefarious uses, even unintended ones (such as via a data breach). I run NoScript, and Firefox with uBlock Origin at home and on my phone.

I personally don't have an issue with people collecting usage data for improving the product and their business, but there's a weird gray area for me when they start using that data against me for things like dynamic pricing, dark patterns, selling email hashes to cookie onboarding services for retargeting, etc. I also recognize that while I may not have much to hide, I know I'm pretty lucky in that regard compared to others who may not want to be identified by certain means, and I fully respect and appreciate their desires to remain untracked in that way.

For example, I am pretty upset at how Reddit is moving towards increased tracking and verification as they march towards heavier monetization. That's an example of a community with many people who NEED to remain untracked for safety purposes, and that data, were it to fall into the wrong hands, could prove dangerous for them. Likewise, the simple act of forcing the collection of it could turn them away from such a platform which could indirectly cause them harm (suicidal users seeking help, abuse victims, whistle-blowers, etc.).

So where do I net out with all of this and how do I sleep at night? Well, for my part I do what I can to be sensitive to protecting PII, not collecting data that I'm unlikely to ever need, and really weighing heavily the tradeoffs and risks when I implement something like the Google or FB tracking tags anywhere, and what I may pass into them. I also make an effort to set the record straight and educate people on what I know of tracking, and how best to limit collection if you are concerned, because I think it is something everyone should be educated about so they can make those decisions in an informed manner themselves.

I respect that some people hate my profession and think I'm evil, and I'm never going to win those people over, nor do I really feel the need to. But I'll say that to be competitive with marketing a product or service in this day and age, you dramatically hurt your chances of success if you DON'T have some decent tracking, and so the realities of the situation often dictate what happens in many businesses. My guess is the people who have such black and white views haven't ever been tasked with marketing a product in a true professional capacity, and if they have I'd love to hear their stories and what led them to their views.


Thanks for your input.

I think the best thing you could do here is to set your moral compass and follow it no matter what. That includes speaking up when the line between "market research" and "surveillance" is crossed under your watch. The fact that you aren't a mindless revenue robot (I have worked with them) is a good start.


One cannot follow their moral compass in an abusive industry - they cannot change their environment, the environment changes them.

The only thing OP can do is quit, but let's be honest here... they've been in the industry for a while and they're ok with what's happening.

How do I know that? Tomorrow they'll go to work and they will work on ads and tracking. Always follow what people do, not what they say. Posting on HN is cheap.


True. Thanks for keeping me honest.

The irony is that most people, including me, would be willing to sign up for limited tracking if and only if we had transparency and strong guarantees about data governance, sharing, deletion, and ultimate/final control by the person being tracked. The ads really are better. But greed is greed, and people can't help themselves.


I worked on a growth team, so I can confirm that you need data, but you can do a lot with aggregate data. For example, you can run a campaign and see how it influences high-level metrics like total page landings (using some kind of first-touch attribution model) and things like that. You don't always need individual user data points.
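For what it's worth, the aggregate-only version of first-touch attribution is simple to sketch. The event schema below (user id, timestamp, channel, converted flag) is purely hypothetical, but it shows the key point: per-user data is only used transiently to find each user's first channel, and the output is a per-channel count with no individual records in it.

```python
from collections import Counter

def first_touch_report(events):
    """Attribute each conversion to the first channel that touched the user.

    `events` is a hypothetical list of (user_id, timestamp, channel, converted)
    tuples. Only aggregate per-channel counts are returned.
    """
    first_touch = {}    # user_id -> channel of that user's earliest event
    converted_users = set()
    for user_id, ts, channel, converted in sorted(events, key=lambda e: e[1]):
        first_touch.setdefault(user_id, channel)
        if converted:
            converted_users.add(user_id)
    # Aggregate: count conversions per first-touch channel.
    return dict(Counter(first_touch[u] for u in sorted(converted_users)))

events = [
    ("u1", 1, "search_ad", False),
    ("u1", 2, "email", True),       # u1 converts; first touch was search_ad
    ("u2", 1, "social", False),
    ("u2", 3, "social", True),      # u2 converts; first touch was social
    ("u3", 1, "search_ad", False),  # u3 never converts
]
print(first_touch_report(events))  # {'search_ad': 1, 'social': 1}
```

In a real pipeline you would also drop or hash the user ids after the first-touch lookup, so the per-user intermediate state never outlives the report.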

I agree that most of what I care about is typically aggregate anonymous (to me) data.

The nuance here though is that this data is often NOT anonymous or aggregate to the 3rd parties providing this tracking and collecting this data. Google and FB absolutely apply this data to an individual profile level. So while I might only see the anonymous aggregate data, my decision to include their tracking means I am making a decision on behalf of my users/visitors to share that data to parties for whom it will not be anonymous or aggregate, and it isn't a decision I take lightly.

There is obviously legal protection in the form of Privacy Policies and ToS, but I feel there's an implicit social contract here as well.


> I am making a decision on behalf of my users/visitors to share that data to parties for whom it will not be anonymous or aggregate, and it isn't a decision I take lightly.

A suggestion: that decision should be evaluated under the assumption that:

1) data doesn't go away (any data collected or sent to a 3rd party is usually permanent)

2) theft and accidental leaks happen, and

3) we don't know the worst ways data - of any type - can be abused, because those techniques haven't been invented yet (powerful analysis techniques are being invented at an incredible rate).

The combination of these properties means that collecting and storing data creates unbounded risk. At any point in the future someone might invent a truly horrific way to abuse the stored data that was collected perhaps decades earlier.

Humans are used to information being transient. Information decayed over time as memories were forgotten; paper, parchment, etc. decayed over time. Books had to be copied or they risked being lost forever when the library burned. Claude Shannon's digital signals fundamentally changed all of that, as they made it possible to preserve information perfectly and automatically. Unfortunately, human intuition hasn't caught up to the idea of permanent data.

The question "Should I trust $THIRD_PARTY with this data?" misses the full nuance of what is actually being risked. A better question is "Should we trust $THIRD_PARTY, and anybody who buys/steals/subpoenas/etc. it from $THIRD_PARTY, with this data? What if they have analysis capabilities far more advanced than current techniques?"


> My guess is the people who have such black and white views haven't ever been tasked with marketing a product in a true professional capacity

This is so, so true and in such a broad way. I've had this thought gnawing at me for ages: the people who are doing seemingly evil things (think James Comey, Ajit Pai) have likely been tested in ways I never have. Who knows how I would handle being in their situation? I don't think it excuses or absolves true wrongdoing, but it does give me some sympathy, or at least reasonable doubt of malice. This is an elementary concept, but it never seems to get mentioned.

To be clear, I'm not one of the people who categorically hates your profession. I'm an advocate of privacy, I avoid most types of social media, use uBlock, etc. But I can see it being hard to figure out where the line is, and how to not cross it. I've never worked in the consumer/media/ads world and had to face the "what to track" dilemma head-on.


You can turn that argument on its head: you've never worked in consumer/media/ads because it's an abusive industry which self-selects for certain types of individuals that don't care about the privacy of others.

Nowadays they notice the public's displeasure and feel obliged to pay some lip-service, right before going back to abusing the trust of their customers.

I understand it's hard for marketing professionals to resist with so much pressure coming at them. That's why I think they should be supported by laws forbidding their abusive practices. Then it should be much easier to say no. Bonus: shostack wouldn't be bothered by all that cognitive dissonance.


If you really cared about privacy you would have quit your job and would have found a decent way of making a living. You’re just making excuses and you’re trying to elicit sympathy.

It costs next to nothing to add a toggle in an app which disables analytics. If the company’s too lame to have an on-boarding screen where they ask the user for permission, they could even hide it in the settings.
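The "toggle" really is a small amount of code. A minimal sketch of the idea, assuming a hypothetical `backend_send` callable standing in for whatever SDK call would actually ship the event (this is illustrative, not any real analytics library's API):

```python
class Analytics:
    """Analytics wrapper gated by a user-controlled setting (opt-in by default off)."""

    def __init__(self, enabled=False, backend_send=print):
        self.enabled = enabled      # off until the user explicitly opts in
        self._send = backend_send   # placeholder for a real SDK's send call

    def set_enabled(self, enabled):
        # In a real app this would be wired to a settings-screen toggle.
        self.enabled = enabled

    def track(self, event, **props):
        if not self.enabled:
            return False            # event is dropped, never leaves the device
        self._send({"event": event, **props})
        return True

sent = []
analytics = Analytics(enabled=False, backend_send=sent.append)
analytics.track("screen_view", name="home")  # dropped: no opt-in yet
analytics.set_enabled(True)                  # user flips the toggle
analytics.track("screen_view", name="home")  # now recorded
print(len(sent))  # 1
```

The design point is that the gate sits in front of the send call, so a disabled toggle means nothing is collected at all, rather than collected and discarded server-side.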

And yet almost no companies do any of the above, because the only thing they care about is money. Crushing these abusive marketing efforts with regulation is the only workable solution, we've already seen what the industry's best effort looks like with "Do Not Track".


I've seen apps that asked me to submit a crash report. That seems like a better solution than monitoring all the data all the time.

But I don't think it is REALLY about improving the app.


Everyone has bills to pay, I understand. You're not doing anything really evil like joining the SS.
