
One of the reasons people use social media is to get a feel for the general environment of opinion, even when we disagree with it. We want to know what "people are saying" and understand the rapidly-changing boundaries of what is socially acceptable. That's why I follow people on Twitter and Facebook whom I don't trust at all. I would not want to use a decentralized trust system to consign myself to an echo chamber, even if it is full of believable, truthful people.

But the system TimJRobinson describes is flexible: you don't have to simply filter out the less trustworthy posts. You could instead flag them somehow as low-trust.

Right now the major social sites are designed to amplify the voices of users who produce content that drives engagement, even though the most engaging content tends to be offensive or inaccurate. That's why the people I least trust on Facebook always show up at the top of my feed: I sometimes engage with them by telling them I think their posts are inaccurate.

So I can see your system being used not just as a filter for what users see in their feed, but as a feedback mechanism for people's posts.

I think a more responsible social site would optimize for positive outcomes, not just engagement: employing algorithms and techniques that promote accuracy, quality, and civility. I think decentralized moderation could be one of those techniques.




The problem of people using the system to say things I don't like won't be "solved" by letting everyone have the system help them find whoever they tell it they trust.

Using a trust / reputation system to prevent wrongthink means it has to be a centralized reputation system, where anyone too far from the median (or who the Cabal doesn't like, if you design it to not trust the general userbase) gets shut down.

Also, why would a decentralized system be able to have a stronger effect than the efforts Twitter and Facebook are currently making to flag or block wrongthink? If those efforts are "half measures", what sort of "full measure" would this reputation system impose? And if it's really distributed, how would that be imposed?


Imagine social networks where you couldn't just make 100 bots. Through some low-friction mechanism, the people you trusted most would be identified to you as such, and by extension the people they most trusted would be visually indicated to you. Settings would allow you to completely filter out people who aren't within a certain degree of trust.

Right now adversaries attempt to warp what people believe by pushing them to make connections to more extreme elements; this would be less possible if the social network were built in a way that exposes fake or low-trust profiles.

The web of trust would resemble PageRank more than modern-day Facebook. For example, Bill Kristol may be trusted by people like Ezra Klein to such a high degree as to not just bring him within your bubble, but to be part of a social network feed that would provide a blend of opinions.
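
To make the PageRank analogy concrete, here's a minimal sketch (in Python, with made-up names, weights, and thresholds) of how trust could propagate a few hops out from you and then act as a visibility filter:

    # Toy PageRank-style trust propagation over a personal trust graph.
    # All names, constants, and thresholds are illustrative assumptions.

    def personalized_trust(trust_edges, me, damping=0.85, iterations=30):
        """trust_edges: dict mapping user -> list of users they trust."""
        users = set(trust_edges) | {u for vs in trust_edges.values() for u in vs}
        scores = {u: 0.0 for u in users}
        scores[me] = 1.0
        for _ in range(iterations):
            new_scores = {u: 0.0 for u in users}
            new_scores[me] += 1.0 - damping        # always "teleport" some trust back to me
            for user, trusted in trust_edges.items():
                if not trusted:
                    continue
                share = damping * scores[user] / len(trusted)
                for t in trusted:                  # spread my trust along outgoing edges
                    new_scores[t] += share
            scores = new_scores
        return scores

    # Example: I trust alice and bob; alice also trusts a pundit I don't follow.
    edges = {
        "me": ["alice", "bob"],
        "alice": ["bob", "pundit"],
        "bob": ["alice"],
        "pundit": [],
    }
    scores = personalized_trust(edges, "me")
    visible = {u for u, s in scores.items() if s > 0.05}   # only show users above a trust threshold
    print(sorted(visible))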

I'm not saying this is easy to get right, but it's necessary if we want open platforms like Twitter to survive gamification by nation-states and brand marketers.


Maybe these social networks could be managed by the community, a bit like HN. Here, if you post crap, people can downvote your crap to the bottom so that no one sees it.

If you want to be seen and read, you need to post quality content and gain reputation. People with bad reputations will slowly lose visibility.

It might not work, but it clearly seems that quality is a prime factor on HN, and it's one of the reasons why I trust HN more than any other social network.


This would become hard to manage as you expand to users you "trust" but don't agree with. Or users who have extremely good views on the one topic they spend most of their time talking about, but whose few rare posts about other topics express extremely bad views. Users with high-trust networks also become a target for spam and lowbrow accounts: anything they get to engage with will be boosted exponentially.

I think this comes back to the “focus on bad behaviors, not bad actors” discussion, but in reverse: focusing on “good actors” (“trusted” users) assumes people have fixed patterns, when there’s a lot of nuance and variability to be taken into account.


I'm starting out on developing my first social web app, and the question of trust and credibility keeps coming up in my mind. My web app would be worthless without allowing users to input content into the system, but giving users unrestricted ability to input content would quickly fill the site with noise. I need to have a good way to identify not only spammers and users who might try to game the system, but users who may inadvertently put in noise/bad data. Likewise, I also want both the system and other users to be able to implicitly and explicitly recognize those users that should be trusted, based on the value of the content they have submitted.

I have seen the following approaches:

- voting system (e.g. Digg, HackerNews), wherein the trust is at the content-item level and can be influenced by positive or negative voting - once the net vote passes a certain positive threshold, the content is made more visible to the community (a toy sketch of this, together with feature unlocking, follows the list).

- feedback system (e.g. eBay), wherein the trust is at the user level and can be influenced by other users

- feature unlocking (e.g. StackOverflow, HackerNews), wherein users need to meet certain usage goals in the system in order to have access to features that are deemed to require higher trust

- human review, wherein content is submitted for approval and reviewed by an admin/maintainer user. This approach certainly doesn't scale, but may be useful in initially "seeding the system".
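
For the voting and feature-unlocking approaches above, here is a toy sketch of the kind of thresholds involved (the numbers are arbitrary placeholders, not anything Digg, HN, or StackOverflow actually use):

    # Combining a content-level vote threshold with user-level feature
    # unlocking. The constants are my own illustrative guesses.

    VISIBILITY_THRESHOLD = 3      # net votes before content is promoted
    DOWNVOTE_UNLOCK_KARMA = 100   # karma needed before a user may downvote

    def is_promoted(upvotes, downvotes):
        return (upvotes - downvotes) >= VISIBILITY_THRESHOLD

    def can_downvote(user_karma):
        return user_karma >= DOWNVOTE_UNLOCK_KARMA

    print(is_promoted(upvotes=5, downvotes=1))   # True: promoted to the community
    print(can_downvote(user_karma=42))           # False: feature still locked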

My post is short on details, but I'd like to keep the discussion as open as possible within the domain of trust in a social app.

I'm looking for suggested readings, anecdotes, experiences, etc. What has worked in your app? What exactly was the problem of "trust" in your social application, and how did you solve it?


I've considered this several times before, with similar concerns. The solution that seems like it would mitigate these problems is to have contextual social ratings, rather than global ratings. If I indicate the people that I trust, and transitively trust, then I can see indications of misbehaviour from my social network, rather than "some abusive asshole is rating people poorly for not indulging him enough".

Unfortunately, this reminds me of PGP's web of trust, and that never really took off. It may be that it failed entirely due to other issues (PGP's terrible UI; encryption being hard and nobody caring), but it's not a great sign.


I just wish we would implement a system where users vouch for each other in order to use the platform. A sort of web-of-trust to stamp out (or at least temporarily punish) whole areas of the social graph that are being used for manipulation and abuse.

While I agree with most of the comments that it can be easily abused, if the bots can be kicked out, this (trying to get rid of echo chambers and highlighting posts from people with different viewpoints recognized by the algorithm) would be a better way for all social media to work than what we have now.

In the current situation (showing the most-liked posts), the only common ground is good-looking people (TikTok/Instagram/YouTube Shorts) and the most outrage-inducing posts, full of lies to get more likes (Twitter/YouTube).


I think the main thing lacking on modern platforms is a quick, reliable way to verify the reputation of any given content creator.

First off, it obviously needs to be relative reputation, not centralized. I'm thinking of some sort of cross-platform web of trust, where you publicly endorse some friends, some content creators, some investigative journalists, etc. You also publicly repudiate sources of information that you don't find credible. This means that if anyone goes off the rails there's a visible trail of distrust, and that information is de-prioritized in your network.

This also has a bit of Sybil resistance built in, because the upvotes or downvotes of thousands of bots aren't relevant at all to your trust graph unless friends of friends actually endorse some of those bots or something. Still, it would probably also be good to have some burnable staking mechanic. Everyone needs to put in a dollar, or 10 dollars, or something to participate (which could be removed at any time), and people who clearly violate terms of use would have their stake burned, and all of their endorsements or repudiations would be invalidated.
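
A rough sketch of how "relative reputation" could be computed purely from your own endorsement graph, so that strangers' and bots' votes carry no weight (all names, depths, and weights here are hypothetical):

    # A source's score depends only on endorsements/repudiations reachable
    # from *my* trust graph, so unknown bot accounts carry no weight.
    # Names, depth, and the 0.5 decay are illustrative assumptions.

    def relative_reputation(me, endorsements, repudiations, source, depth=2):
        """endorsements/repudiations: dict user -> set of users they endorse/repudiate."""
        score, frontier, weight, seen = 0.0, {me}, 1.0, {me}
        for _ in range(depth):
            next_frontier = set()
            for user in frontier:
                if source in endorsements.get(user, set()):
                    score += weight
                if source in repudiations.get(user, set()):
                    score -= weight
                next_frontier |= endorsements.get(user, set()) - seen
            seen |= next_frontier
            frontier, weight = next_frontier, weight * 0.5   # friends-of-friends count half
        return score

    endorse = {"me": {"alice"}, "alice": {"journalist"}, "bot1": {"bot2"}}
    repudiate = {"alice": {"tabloid"}}
    print(relative_reputation("me", endorse, repudiate, "journalist"))  # 0.5, via alice
    print(relative_reputation("me", endorse, repudiate, "bot2"))        # 0.0, bots ignored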

I feel like eventually someone is going to have to build something like this. Incentives are hard to get right, but if by some miracle it works, and the trust graph takes off, then lots of these problems just go away. Imagine something like DNS that you could just query to see how much you should trust some chunk of information. It would be a game changer.


IMO it boils down to how to give users a 'reputation score' in a fair way that lets in new users but detects 'rude people' so that their negativity can be filtered out. It's the most challenging problem in social media today, I think, because there are so many jerks out there making life miserable for the 'good people'. A workable approach may be using Facebook logins, so that people can't as easily hide behind anonymity, but then again I hate forcing users to be on Facebook just to use some other thing. Perhaps there is a business to be made here, in a company that can validate that this is a 'real' person, eliminate anonymity, and perhaps 'shame people into being nice online'. That aspect of it, of course, comes down to how to verify true identity online.

Another option is a trust network. We already have that within the social networks but not across them. I also don't mind trusting a bot that has a cohesive brand across the networks - there's no need for it to be human as long as I can untrust it.

I've been thinking that the only way to get around the bad-actor (or paid agent) problem when dealing with online networks is to have some sort of distributed trust mechanism.

I feel like manually curated information is the way to go; you just have to find some way to filter out all the useless info and marketing/propaganda. You can't crowdsource it, because that opens up avenues for gaming the system.

The only solution I can think of is some sort of transitive trust metric that's used to filter what's presented to you. If something gets by that shouldn't have (bad info/poor quality), you update the weights in the trust network that led to that action so they are less likely to give you that in the future. I never got around to working through the math on this, however.
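
Something like the following, maybe - a toy version of a transitive trust filter where flagging bad content weakens every edge in the chain that delivered it (the weights and penalty factor are just placeholders):

    # Trust is a weight in [0, 1] on each edge of my trust graph, and a
    # chain's trust is the product of its edge weights. The 0.5 penalty
    # factor is an illustrative assumption.

    trust = {("me", "alice"): 0.9, ("alice", "blogger"): 0.8}

    def chain_trust(path):
        score = 1.0
        for edge in zip(path, path[1:]):
            score *= trust.get(edge, 0.0)
        return score

    def penalize(path, factor=0.5):
        """Something bad got through via this chain: weaken every edge that let it in."""
        for edge in zip(path, path[1:]):
            if edge in trust:
                trust[edge] *= factor

    path = ["me", "alice", "blogger"]
    print(chain_trust(path))   # 0.72 -> shown to me
    penalize(path)             # I flag the blogger's post as bad info / poor quality
    print(chain_trust(path))   # 0.18 -> likely below my visibility threshold next time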


Yes, I've also been thinking about this a lot, and reached similar conclusions.

One of the main issues I can't think my way around is privacy. I don't want my full and true trust network to be publicly available, although I might provide a more detailed map to my trusted peers. I might also "lie" about a given weight depending on who asks, for diplomatic purposes (perhaps you have a close friend who also happens to be rather gullible).

A working system is going to take a lot of effort on behalf of each user to correctly and accurately annotate their own trust graph, on an ongoing basis - perhaps this is just too impractical.

I performed a slightly unethical experiment many years ago, in which I created an entirely fake Facebook account (back when people actually used Facebook) and slowly sent out friend requests to personal acquaintances. All it takes is one or two to get started; every subsequent person sees "n mutual friends" and is more likely to accept the random request. It snowballed from there, and eventually I had "infiltrated" a non-trivial portion of my own social network with an entirely fictional persona.

(Ethics note: I only sent friend requests to people I was already friends with on my real account - so I wasn't obtaining any new private information I wouldn't otherwise have had access to - and I didn't perform any interactions beyond sending friend requests.)

Any kind of trust network is going to need to deal with this sort of infiltration - and I'm not sure how.

And another thing - trust is bought and sold all the time. Social media influencers sell a small fragment of their trust level every time they do a paid endorsement. If there's some kind of explicit trust network, people will pay others to obtain a higher trust level. Is there anything we can do about that?


For me that would be completely reasonable, but it isn't necessary. You could have many decentralized filters, like automatically blocking people who are blocked by a certain percentage of the people I follow, and many similar mechanisms.
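
For example, that first filter might look something like this (the 30% threshold is arbitrary):

    # Hide anyone blocked by more than some fraction of the people I follow.
    # The threshold and example data are illustrative assumptions.

    def should_hide(candidate, my_follows, blocks, threshold=0.3):
        """blocks: dict mapping user -> set of users they block."""
        if not my_follows:
            return False
        blockers = sum(1 for f in my_follows if candidate in blocks.get(f, set()))
        return blockers / len(my_follows) > threshold

    follows = ["alice", "bob", "carol"]
    blocks = {"alice": {"troll"}, "bob": {"troll"}}
    print(should_hide("troll", follows, blocks))    # True: 2/3 of my follows block them
    print(should_hide("newbie", follows, blocks))   # False: nobody I follow blocks them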

I've thought about this a lot. Currently, my preferred solution to the problem of Sybil attacks in decentralized social networks is a reputation system based on a meritocratic web of trust.

Basically it would work something like this: By default, clients hide content (comments, submissions, votes, etc) created by new identities, treating it as untrusted (possible spam/abusive/malicious content) unless another identity with a good reputation vouches for it. (Either by vouching for the content directly, or by vouching for the identity that submitted it.) Upvoting a piece of content vouches for it and increases your identity's trust in the content's submitter. Flagging a piece of content distrusts it and decreases your identity's trust in the content's submitter (possibly by a large amount, depending on the flag type), and in other identities that vouched for that content. Previously unseen identities are assigned a reputation based on how much the identities you trust (and the identities they trust, etc.) trust or distrust that unseen identity.
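
A very rough sketch of how a client might implement that vouch/flag bookkeeping (the constants and names are illustrative assumptions, not a spec):

    # Toy client-side bookkeeping for the vouching scheme described above.
    # The scoring rules and constants are illustrative assumptions.

    from collections import defaultdict

    my_trust = defaultdict(float, {"alice": 1.0, "bob": 0.6})   # identities I trust
    vouches = defaultdict(set)                                   # identity -> who vouched for them

    def reputation(identity):
        """Unseen identities inherit reputation from whoever I trust vouching for them."""
        return my_trust[identity] + sum(my_trust[v] for v in vouches[identity])

    def upvote(content_author):
        vouches[content_author].add("me")
        my_trust[content_author] += 0.1     # vouching raises my trust in the submitter

    def flag(content_author):
        my_trust[content_author] -= 0.5     # distrust the submitter...
        for voucher in vouches[content_author]:
            my_trust[voucher] -= 0.2        # ...and whoever vouched for them

    vouches["newcomer"].add("alice")
    print(reputation("newcomer") > 0)   # True: alice vouches, so their content is shown
    flag("newcomer")
    print(my_trust["alice"])            # 0.8: alice's vouch cost her some of my trust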

The advantage of this system is that it not only prevents Sybil attacks, but also doubles as a form of fully decentralized, community-driven moderation.

That's the general idea anyway. The exact details of how a system like that would work probably need a lot of fleshing out and real-world testing in order to make them work effectively.


I agree. A proper solution should probably work with the notion of a trust network, i.e. an extension of a social graph.

Well, you need to define your objectives. No system is robust to a failure of all its actors. If every user (and even developer) is ill-intentioned, no system will give good results. So we need some "hopeful" (and accurate) assumptions.

One might be that the typical user can sensibly elect a few individuals to trust -- these could range from developers (who are a natural choice for trust) to activists and publicly visible individuals (even close friends). Then presumably you could adopt their trust model (such individuals could be roots in independent, conservative trust webs/graphs). I think a very large number of such webs might be computationally expensive, but hopefully you'd be able to find someone you trust or start your own independent graph (if you trust no one, you'd effectively lose all anti-SEO measures, I guess). This very naturally leads to a decentralized reputation system!


Your idea only works for tiny networks and doesn’t scale.

Enjoy your completely unfiltered social network. It’s not my problem if you can’t see any benefit whatsoever to any kind of trust filtering.


Even with some defects or imperfections anything is better than what we have now - which is basically nothing.

I think the way I'd think about this is to imagine a small community, such as a town of, say, 5000 people or so. While you cannot know each person individually, you can know of people by reputation. People do earn rep over time, and they can burn rep. It is true that some people will be unfairly downscored, or unfairly upscored - but I'm not really trying to argue about those fine-grained situations. What I'm trying to argue for is simply distinguishing the very bad actors, acting out of pure malice, who inject fake news, media, and 'yellow journalism' into human conversations.

True, some real people will be downscored (I prefer to think of this as downscoring bad actors rather than 'blocking' them). And true, an AI can 'sound very human' - but an AI or a bad actor will struggle to build up a reputation over time. An AI can't shake hands with you; it is harder for it to prove it is human... Other bad actors will presumably burn their reputations if they spit out a series of offensive, misleading, false, inflammatory, or toxic posts...

Note I am not necessarily advocating for crypto per se as a way to establish social trust graphs (à la PGP or, say, Keybase), but I am arguing that there are other options that the OP did not raise. I mostly want to see a wider discussion around ways to filter malicious media other than either "centralized systems" or "small social clubs". I'm not necessarily saying it has to be a cryptographic solution... but I do think there are more ways to get what we want.

