What if we embrace the fact that this quickly becomes more and more Kafkaesque, and run with it?
Company A has the statement on their front page, maybe somewhere unobtrusive like the footer: "We received one or more NSLs today for customer data"
Now, should it ever become a true statement, what then? Leaving it up would be a crime. Removing it might also be a crime. What is poor Company A to do, Judge? We're just trying to comply with this law...
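For what it's worth, the usual engineering answer to that bind is a warrant canary: a dated statement that gets re-published on a fixed schedule, so the signal is a *stale* canary rather than any affirmative act of removal. A minimal sketch of the monitoring side, assuming a hypothetical canary.txt whose first line starts with an ISO date (real canaries are also PGP-signed; this only checks freshness):

```python
# canary_check.py -- minimal freshness check for a hypothetical warrant canary.
# Assumes canary.txt begins with an ISO date, e.g.:
#   2024-01-15 We have received no NSLs for customer data to date.
# Real deployments would also verify a PGP signature; that is omitted here.
from datetime import date, timedelta
import sys

MAX_AGE = timedelta(days=30)  # the canary must be re-published monthly

def is_fresh(path: str = "canary.txt") -> bool:
    with open(path) as f:
        stamp = f.readline().split()[0]  # leading ISO date on the first line
    return date.today() - date.fromisoformat(stamp) <= MAX_AGE

if __name__ == "__main__":
    if is_fresh(sys.argv[1] if len(sys.argv) > 1 else "canary.txt"):
        print("canary is fresh")
    else:
        sys.exit("CANARY STALE -- assume the worst")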
Preach! What we ultimately need is simply a law that makes it illegal for companies to collect or store any data for marketing, tracking, or resale. No opt-ins, no exceptions. Unfortunately we can't fully kill targeted or unsolicited advertising at the root because of freedom-of-speech issues, but we can eliminate all the data it depends on.
I hadn't read the law yet (thanks for the link), but I don't think it solves any problems at all, and it has the potential to create plenty of new ones. The devil is in the details, and people already have the power to only use services that allow data exporting.
You're attempting to force companies to behave in a pro-social manner, but if a company never wanted to behave that way, we'll have just given its lobbyists another attack surface to use to kill competitors.
I'll withhold judgement until I see how this plays out; it could end up being a great thing. The issue with laws isn't that they can't help; the issue is that laws that end up hurting almost never go away.
What I'd like to see come out of this is for corporations to view customers' collected data as a liability, not as an asset. Then maybe they would think twice before collecting and storing unnecessary customer data. Or, if they have to collect it, they would expire it as soon as possible. This is somewhat hampered, of course, by regulations that may require hanging on to data.
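To make "data as liability" concrete: the mechanical version of this is a scheduled purge job with a hard retention window. A minimal sketch with SQLite, where the table name, column, and 90-day window are illustrative only, not any particular regulation's requirement:

```python
# purge_expired.py -- a sketch of treating stored data as a liability:
# a scheduled job that hard-deletes customer records past a retention window.
# The table name, column, and 90-day window are illustrative only.
import sqlite3

RETENTION_DAYS = 90

def purge_expired(db_path: str = "customers.db") -> int:
    conn = sqlite3.connect(db_path)
    with conn:  # commits on success, rolls back on error
        cur = conn.execute(
            "DELETE FROM customer_events "
            "WHERE collected_at < datetime('now', ?)",
            (f"-{RETENTION_DAYS} days",),
        )
        deleted = cur.rowcount
    conn.close()
    return deleted  # how much liability this run shed
```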
I am no longer convinced that even that is enough when the relationship is nowhere near symmetric. Network effects, psychological manipulation via ads, sunk-cost fallacies, bait-and-switch, etc. all mean that companies have plenty of options for pressuring users into agreeing to terms they don't really want. Yes, if everyone acted rationally that wouldn't work. But people are not rational beings. And you certainly can't make others, whose choices influence you, act rationally.
So ultimately, when it comes to company-user relationships, nothing short of making some rights, including privacy, inalienable will prevent this shit. That's why we need laws like the GDPR and similar ones in other regions. Corporations have shown again and again that if there is a profit to be made from abusing users, they will find a way to get away with it.
Ignoring the enormous argument over whether what I am about to say would be legal under the US's particular construction of the law...
Is it time to consider whether companies whose primary business is the collection, summation, synthesis, and repurposing of information should be de facto illegal?
It's not that having the government do this is meaningfully better, but I'm convinced it's a better overall path. The public seems more wary of government and more willing to engage in a serious discussion over its ethics. At the very least, it would give the public more oversight of how such data is collected, stored, embargoed (or not), and used. The level of abstraction and fuzzing permitted by letting a non-governmental entity like Palantir or Facebook hold the data (whether as collector or aggregator), with governments merely as users, seems too difficult to regulate, or even to discuss transparently, because claims of national security and secrecy are nested within claims of trade secret and intellectual property in ways that create a moat around effective oversight.
So seriously...if we made this business model illegal, what do we lose? What exceptions would need to exist?
I agree, but I can't imagine a world where the internet works any differently than it currently does under your scenario.
E.g., right now (or at least a few years ago) companies could basically do anything with your data. And they did. It's getting worse, too, with advanced techniques for identifying individuals across website boundaries, etc.
That is what is spawning these sorts of debates, laws, etc. So my question to you is: while I agree that we have to assume malice (for ease of discussion), we can't actively allow or encourage malice, right? If we do nothing, do we just accept that they do who-knows-what with our data?
I.e., I think we mostly agree that what is going on with our data right now is bad. So don't we have to do something? What do you see as the right solution?
I think there needs to be some kind of "responsible and clear disclosure" laws that require companies to very clearly and overtly disclose what data they're collecting and how they're using it.
Some kind of standardized "label" (something like the standardized nutrition facts on food products) that is easy for consumers to read and comprehend, not buried in pages of paperwork or a 20-page TOS / Privacy Policy.
The legal problem is that everyone is "consenting" to data collection by accepting a TOS, which protects companies and makes it "legally acceptable". The real problem that needs solving is that consumers are rarely aware of what they're consenting to. Companies might not hoover up and sell as much data if they were required to clearly tell everyone they're doing it.
Basically, let's get rid of this idea that agreeing to a 20-page TOS / Privacy Policy is legally binding when < 1% of people actually read what they're agreeing to.
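To illustrate, a "label" like that could even be machine-readable, so browsers could render it uniformly. A sketch of what one might contain, where every field name is my own invention rather than any existing standard:

```python
# privacy_label.py -- a hypothetical machine-readable "nutrition facts" label
# for data practices. No real standard defines these fields; this is a sketch
# of what a mandated disclosure format might look like.
import json

PRIVACY_LABEL = {
    "data_collected": ["email", "ip_address", "page_views"],
    "purposes": ["billing", "product_analytics"],
    "sold_to_third_parties": False,
    "shared_with": ["payment_processor"],
    "retention_days": {"email": None,        # kept while the account exists
                       "ip_address": 30,
                       "page_views": 365},
    "opt_out_url": "https://example.com/privacy/opt-out",
}

print(json.dumps(PRIVACY_LABEL, indent=2))
```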
Nope – not explicitly; that would be something within _their_ terms of service, since it's their data. Their own customers would have to hold them accountable.
I know that's a cop-out – but I can't imagine a system where we could enforce some kind of downstream compliance.
Things like GDPR are a good way to make companies accountable and I look forward to that becoming more broadly accepted.
I don't think lawmakers have thought through the ramifications. Here are a few:
Way too hard to enforce: the definition of 'customer data' is going to be a constantly moving target. Does every click count? How about aggregated clicks important for general product optimization?
What constitutes 'selling' user data? Very few companies actually sell your data; instead, they place ads based on your data. Will that be banned as well? Many companies, including Google, would have to significantly change their pricing model if so... yet apparently that would be illegal.
The solution to this problem, of course, would be data protection laws with actual teeth:
Make it a crime to collect, store, or distribute personal data without the written, explicit consent of those from whom you collect it, insofar as the data is not absolutely necessary for the expected conduct of business (i.e., billing information when you buy something), and give people the right to a listing of the data stored about them and to demand its deletion at any time (both rights are sketched below). Then jail the CEO of the next company to violate said law and auction the company off to the highest bidder. Boy, would people start collecting only necessary data and respecting their users’ privacy.
As an added bonus, all the crappy business ideas that only work because of targeted advertising would go downhill, too, ridding the net of gossip news, irrelevant apps and whatever else one doesn’t like :)
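In implementation terms, the two rights above are small. A minimal Flask-style sketch: Flask is a real library, but the routes, the in-memory "database", and the hard-coded user are hypothetical, and authentication is deliberately omitted:

```python
# subject_rights.py -- a minimal sketch of the access and deletion rights above.
# Flask is real; the routes, in-memory store, and hard-coded user id are
# hypothetical, and real authentication is deliberately omitted.
from flask import Flask, jsonify

app = Flask(__name__)
DB = {"user-123": {"email": "a@example.com", "page_views": 42}}  # stand-in store

@app.get("/me/data")
def export_my_data():
    # Right of access: return everything stored about the requesting user.
    return jsonify(DB.get("user-123", {}))

@app.delete("/me/data")
def delete_my_data():
    # Right to erasure: drop everything not legally required (e.g. invoices).
    DB.pop("user-123", None)
    return "", 204

if __name__ == "__main__":
    app.run()
```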
Maybe we should then pass laws restricting what companies can do with our data? What kinds of algorithms they can run if they want to be treated as a platform rather than a publisher?
This is exactly the kind of thing that the Obama administration's NSTIC policy aims to create. I feel like a number of companies (Google, Facebook, your bank specifically) already have plans to do this.
Looking forward to the inevitable "Kagi sells user data" investigative report. The simple truth is that unless they make themselves explicitly liable to users if they are ever caught doing this, the irresistible urge to monetize user data will tempt even the most well-meaning firm into sin. The ONLY solution is to make 'sinning' an existential threat to the firm. IANAL, but I believe this can be done if ownership agrees to voluntarily enter into something like a fiduciary arrangement with their users. This means writing a EULA that does not minimize the firm's risk but instead increases it in specific, meaningful ways. Increasing client risk is something attorneys are absolutely allergic to, and they will argue up and down about why the client shouldn't do it. But if customers know and care about such a step, it could be a valuable PR and marketing move that demonstrates REAL integrity.
As it is, we only believe that "users paying for services" will protect against data exfiltration because of very naive reasoning, or, perhaps more accurately, praying.
It strikes me that a company like Kagi should be able to craft a legally enforceable agreement with its customers which expressly forbids the company from selling ads and conducting surveillance.
The agreement could be carefully written by a skilled lawyer to define the things Kagi cannot do, the proof customers must present in order to proceed with a valid lawsuit, and even the maximum damages that the customer may sue for.
In that case, if Kagi was found at some point to be using customer data for these purposes, it could be sued very easily and by many parties.
People are calling for regulation for data privacy. In the meantime, Kagi can create its own regulation it will hold itself to for the benefit of its users, can it not?
I am inclined to agree, but I would argue that most businesses do not aim to preserve the rights of their customers (think of the data collection performed by Google, Facebook, and hundreds of ad agencies on the Internet), and I think this kind of thinking can be quite dangerous in a political context.
I like that - but how would you enforce it? Wouldn't it actually just deter acquirers whose real interest is the acquisition's customer data?
This is becoming a real problem really fast. Resumes are automatically trashed because of keywords, employers sharing histories and data of previous workers is just around the corner, credit decisions, insurance costs and access to certain services will be denied based on opaque (and often wrong) ML algorithms, personalized price gouging in web stores is in the making, and who knows what's next.
What we need is a law preventing companies from treating customers differently unless there's a strong reason (e.g., pregnant women, age restrictions) and they specify exactly why. We also need to ban companies that collect and trade personal data. Of course that's a mammoth task, and the easiest way to achieve the goal is to create technological interlocks preventing leakage of personal data while busting the biggest data-trading companies.