> Someone needs to solve online identity and reputation (in a privacy-preserving way) so that you can accumulate trust on one site/service and carry that over to another.
The problem is that most of the existing ways of transferring reputation (or the appearance of it) between contexts result in opportunities for arbitrage: celebrity endorsements, scientists supporting theories outside of their field, con artists leveraging social proof escalation, phishing, etc.
We've seen some of the strictest mechanisms of reputational transference leveraged for illicit purposes.
Everything that makes reputation and trust transfers useful and convenient for users creates a huge attractive nuisance for illicit actors, from state level APTs on down to 419 scammers and everyone in between.
Instead of transmitting obviously fake data, record the genuine data of other users (randomly and anonymously, of course) and retransmit it. It poisons the association while being much more difficult to fuzz out.
The proposed "solution" would only exacerbate the problem. The author is basically proposing that cheaters can not only push articles onto the front page, but are also rewarded by boosting each other's trust scores. Why is that good?
There are tons of solutions to this, largely in the domain of statistics and machine learning. They're flawed, but communication is not a precise beast.
I think we'll end up moving away from shallow signifiers of trustworthiness to reputational networks and evidence someone has invested into an identity or entity.
That's not to say it's a solved problem, even in the real world with thousands of years of battle tested strategies.
A very simple example would be a web browser where I could blacklist chronically-unhelpful sites, and share that metadata among friends.
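A rough sketch of what I mean, in Python (the export format, the friend-feed idea, and the threshold are all invented for illustration, not an existing browser feature):

```python
# Sketch of a shareable personal site blocklist. All names and formats
# here are hypothetical; this is just to show the shape of the idea.
import json
from collections import Counter


class Blocklist:
    def __init__(self):
        self.sites = set()

    def block(self, domain: str):
        self.sites.add(domain.lower())

    def export_json(self) -> str:
        # What you would hand to a friend.
        return json.dumps(sorted(self.sites))

    @staticmethod
    def import_json(data: str) -> "Blocklist":
        bl = Blocklist()
        bl.sites = set(json.loads(data))
        return bl


def merged_warnings(friend_lists, threshold=2):
    """Flag a domain once enough friends have independently blocked it."""
    counts = Counter(d for bl in friend_lists for d in bl.sites)
    return {domain for domain, n in counts.items() if n >= threshold}


mine = Blocklist()
mine.block("chronically-unhelpful.example")
friends = [mine, Blocklist.import_json(mine.export_json())]
print(merged_warnings(friends))  # {'chronically-unhelpful.example'}
```

The point of the threshold is that one friend's grudge doesn't blocklist a site for everyone; it only propagates once several people you trust agree.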
You could have anonymity without losing data. Assign everybody a random id. Do whatever data mining you need to, based on id number, to see how credible this person's opinion is.
I think this system can work ... if it's not abused. Intelligent analysis of the data will keep it from being abused.
You may actually be able to find inverse correlations among some of your more aggressive aspiring manipulators, where a down-vote from them actually ends up being a vote of support.
Let them do their worst. Statistics will out them, eventually.
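To make the inverse-correlation idea concrete, here's a toy sketch (the data and names are made up): learn a signed weight per voter from how often their votes agreed with outcomes you trust, so a reliably adversarial voter ends up with a negative weight and their down-vote counts as support.

```python
# Toy sketch: learn a signed weight per voter from historical agreement
# with trusted outcomes. Purely illustrative; data and names are invented.

def voter_weight(votes, trusted_outcomes):
    """votes and trusted_outcomes are parallel lists of +1/-1.
    Returns a weight in [-1, 1]; negative means the voter is
    reliably anti-correlated with what turned out to be good."""
    agreements = [v * o for v, o in zip(votes, trusted_outcomes)]
    return sum(agreements) / len(agreements)


def score_item(item_votes, weights):
    """Weighted score: a down-vote from a negative-weight voter adds support."""
    return sum(weights.get(voter, 0.0) * vote for voter, vote in item_votes)


weights = {
    "honest_user": voter_weight([+1, -1, +1, +1], [+1, -1, +1, +1]),   # +1.0
    "manipulator": voter_weight([-1, +1, -1, -1], [+1, -1, +1, +1]),   # -1.0
}
print(score_item([("honest_user", +1), ("manipulator", -1)], weights))  # 2.0
```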
As for the ethics of it: if the system is smart enough to avoid abuse, I think it has merit. When is ignorance ever the right answer? If there's data to be had as to whether you have, as an old boss of mine used to say, "a leak in your payroll", why is it evil to try to determine that?
I really think that this could work. It reminds me of a "Web of Trust" system, where you have a few known trustworthy individuals, and the reputation propagates out from them.
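Something in the spirit of TrustRank: start trust on a few hand-picked seeds and let it flow outward along endorsements, decaying at each hop. A simplified sketch over a toy endorsement graph (the graph, names, and constants are invented):

```python
# Toy sketch of trust propagating outward from a few trusted seeds,
# web-of-trust / TrustRank style. Graph and parameters are made up.

def propagate_trust(endorses, seeds, damping=0.85, rounds=20):
    """endorses maps user -> list of users they endorse.
    Trust starts on the seeds and flows along endorsements,
    decaying by `damping` at each hop."""
    trust = {u: (1.0 if u in seeds else 0.0) for u in endorses}
    for _ in range(rounds):
        incoming = {u: 0.0 for u in endorses}
        for u, targets in endorses.items():
            if not targets:
                continue
            share = trust[u] / len(targets)
            for t in targets:
                incoming[t] += damping * share
        # Seeds keep their base trust; everyone else only has what flowed in.
        trust = {u: (1.0 if u in seeds else 0.0) + incoming[u] for u in endorses}
    return trust


graph = {
    "alice": ["bob"],      # trusted seed
    "bob": ["carol"],
    "carol": [],
    "spammer": ["spammer2"],  # disconnected from the seeds, stays at zero
    "spammer2": [],
}
print(propagate_trust(graph, seeds={"alice"}))
```

Accounts that nobody reachable from a seed has endorsed simply never accumulate any trust, which is the whole appeal of seeding from known-good individuals.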
I wonder if some algorithm using the ratio of given to received +1s/circles/posts/etc. could be used to curb this kind of practice, by effectively converting +1s from legitimate users into +2s, +3s, and so on.
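A rough sketch of that ratio idea (the formula and the clamping thresholds are invented for illustration): an account that has earned roughly as many +1s as it hands out gets extra weight, while an account that only gives +1s to a clique and never receives any is damped toward zero.

```python
# Sketch of weighting a user's outgoing +1 by their receive/give ratio.
# The exact formula and thresholds are invented for illustration.

def plus_one_weight(received, given):
    """Established accounts' +1s count as +2, +3, ...;
    accounts that only hand out +1s and never earn any are damped."""
    if given == 0:
        return 1.0
    ratio = received / given
    return max(0.1, min(3.0, ratio))  # clamp so no account becomes all-powerful


print(plus_one_weight(received=300, given=120))  # established user -> 2.5
print(plus_one_weight(received=2, given=500))    # +1 farm -> 0.1
```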
By punishing their connections when they misbehave.
Or at least, that's how you make it not matter. If spam-bot hands out reputation to spim-bot and then you get a bunch of garbage from spim-bot, derate everything coming from anyone that has given either of them reputation (and perhaps ignore their attestations also).
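A crude sketch of that derating rule (account names are hypothetical): anyone who handed reputation to an account later caught spamming gets their own outgoing reputation discounted, and repeated bad calls stack.

```python
# Crude sketch: discount anyone who gave reputation to an account that
# later turned out to be a spam source. Names are hypothetical.

def derate_endorsers(endorsed_by, spam_accounts, penalty=0.25):
    """endorsed_by maps account -> set of accounts that vouched for it.
    Returns a multiplier per voucher; it stacks multiplicatively,
    so repeated bad calls cut deeper."""
    multiplier = {}
    for spammer in spam_accounts:
        for voucher in endorsed_by.get(spammer, set()):
            multiplier[voucher] = multiplier.get(voucher, 1.0) * penalty
    return multiplier


endorsed_by = {"spim-bot": {"spam-bot", "naive-user"}}
print(derate_endorsers(endorsed_by, spam_accounts={"spim-bot"}))
# {'spam-bot': 0.25, 'naive-user': 0.25} -- everything they say now counts for a quarter
```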
This is disgusting. A younger version of myself fell for this nonsense - it's embarrassing to admit, but I think it's worthwhile to point out that there are ordinary people who get caught up in it.
This kind of thing really breaks down trust in online communications, which I suppose in hindsight means I should have been less trusting in the first place.
I'd like to think real name policies (like Facebook's) would deal with this sort of thing, but clearly they don't, and they cause a whole host of other problems. What's the solution here?
I already do this. Block and ignore sources that have proven to be untrustworthy. There should be a mechanism to share this, but I am pretty sure the more powerful would try to eliminate it.
I just wish we would implement a system where users vouch for each other in order to use the platform. A sort of web-of-trust to stamp out (or at least temporarily punish) whole areas of the social graph that are being used for manipulation and abuse.
I doubt that it is a problem yet, but if I were in charge of building a solution to said problem, I would try to build a distributed trust system where bad nodes could be flagged and that flag spread to the rest of the network. Those that trust your node would lower the trust ranking of the flagged nodes; the more flags against them, the lower the ranking the rogue node would get.
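A toy sketch of that flag-spreading idea (the data structures and the decay factor are invented): each node keeps its own view, a flag from a trusted accuser discounts the flagged peer, and the more independent flags arrive, the lower the rogue node sinks.

```python
# Toy sketch of distributed flagging: each node discounts a flagged peer
# in proportion to how much it trusts the nodes doing the flagging.
# Everything here (structure, decay factor) is invented for illustration.

class Node:
    def __init__(self, name):
        self.name = name
        self.base_trust = {}   # peer -> score in [0, 1] before any flags
        self.flags = {}        # flagged peer -> set of accusers

    def set_trust(self, peer, score):
        self.base_trust[peer] = score

    def receive_flag(self, flagged, accuser):
        self.flags.setdefault(flagged, set()).add(accuser)

    def ranking(self, peer):
        """Base trust, discounted once per flag in proportion to how much
        we trust each accuser: more flags, lower ranking."""
        score = self.base_trust.get(peer, 0.5)
        for accuser in self.flags.get(peer, set()):
            score *= 1.0 - 0.5 * self.base_trust.get(accuser, 0.0)
        return score


n = Node("me")
n.set_trust("friend", 0.9)
n.set_trust("other_friend", 0.6)
n.set_trust("rogue", 0.8)
n.receive_flag("rogue", "friend")
n.receive_flag("rogue", "other_friend")
print(n.ranking("rogue"))  # 0.8 * (1 - 0.45) * (1 - 0.30) = 0.308
```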
My gut is it transfers easily. Not clear on what to do about it.