
How well things work very much depends on the network and the channels. And short of having no moderators/channel operators at all, I don't see how this is a problem that can even be solved: people can always get drunk on power and go completely crazy. What would your proposal be to solve this?



For sure, any network without any moderation is going to quickly fall into chaos. I'm working on a post on decentralized personalized moderation at the moment which should be out in the next day or two :)

Generally, channels are independently moderated. The moderation is done by someone in the channel, often just the person who started the channel.

This distributed moderation scales well even on large networks, because the people doing the moderation are in the channel, reading it and keeping up with the conversation.

However, because the bar for becoming a channel operator is low, it's not a good idea to give operators full ban access.
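
A minimal sketch of what limiting operator power could look like, assuming a tiered permission model (the Role tiers and action names here are my own illustration, not any particular network's design):

    from enum import Enum, auto

    # Hypothetical permission tiers for channel staff. Ordinary operators
    # get reversible tools (mute, kick); the irreversible one (ban) is
    # reserved for the channel founder, since the bar to become an op is low.
    class Role(Enum):
        USER = auto()
        OPERATOR = auto()
        FOUNDER = auto()

    PERMISSIONS = {
        Role.USER:     set(),
        Role.OPERATOR: {"mute", "kick"},         # reversible, low stakes
        Role.FOUNDER:  {"mute", "kick", "ban"},  # full ban access
    }

    def can(role: Role, action: str) -> bool:
        return action in PERMISSIONS[role]

    assert can(Role.OPERATOR, "kick")
    assert not can(Role.OPERATOR, "ban")  # ops can cool things down, not nuke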


Yeah, this is what I mean when I say moderation isn't "solved", and maybe it can't be. I personally think the issue is that so many big companies try to control what gets shown. Ideally it should be up to the local communities what they want and don't want to see, which is something a decentralized system can offer.

Indeed, the question is what should be done, or how should it have been done differently? Should YT have grown more slowly?

FWIW, I don't think money is the limiting factor here. If there were a known good solution, even one that took $10B to implement, I think it would be done.

You could pay X thousand (more) moderators, but there are drawbacks to a human solution as well. First is the fact that, since human judgment is involved, rules will be applied inconsistently, and probably with some bias.

Second is the fact that moderation is mentally exhausting work. You can observe the cyclical nature of this balance in the news over the past few years. An article about moderators with PTSD? Well, automation is the solution. Automation fails in a lot of edge cases? Well, why not hire armies of people to do it for you?

I don't think there is an obvious solution here, no matter how much it costs.


Hi, thanks for the question. This is of course a headache for me. When I started, I just worked on a concept that I thought would be fun to have for online entertainment. Obviously, as a single person I can't moderate everything, so even if I wanted strong centralized moderation, it's not really possible. I also believe in free speech, and that by definition means allowing people to express their opinions even when you don't like them.

Moderation will be handled by the owners and moderators of each channel. You can also hide individual posts, or all posts by a particular user, yourself. I do have some tools to ban IPs and users from the whole site, or to delete all of their posts in case of spam or illegal content. I hope that will be enough. Right now it's not really a worry because there are no users, so I will focus on that :)
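
For illustration, the two layers described above (local hiding by each user, plus a small set of site-wide admin tools) might look something like this; all class and method names here are hypothetical, not the actual implementation:

    # Layer 1: each user filters their own view; nothing is deleted globally.
    class Client:
        def __init__(self):
            self.hidden_posts = set()
            self.hidden_users = set()

        def hide_post(self, post_id):
            self.hidden_posts.add(post_id)

        def hide_user(self, user_id):
            self.hidden_users.add(user_id)

        def visible(self, post):
            return (post["id"] not in self.hidden_posts
                    and post["author"] not in self.hidden_users)

    # Layer 2: the site owner only intervenes for spam or illegal content.
    class SiteAdmin:
        def __init__(self):
            self.banned_ips = set()
            self.banned_users = set()

        def ban(self, user_id, ip):
            self.banned_users.add(user_id)
            self.banned_ips.add(ip)

        def accept_post(self, user_id, ip):
            return user_id not in self.banned_users and ip not in self.banned_ips

    c = Client()
    c.hide_user("troll42")
    assert not c.visible({"id": 1, "author": "troll42"})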

In my opinion, the best solution is human moderators. The untrustworthy collective is too unreliable.

The impact of moderation always cuts both ways. The more you insert yourself, the higher the chance that your own stupidity warps the entire communication channel for the worse, leading to shrinkage and echo chambers. And people don't vanish; chances are they're already building the next Tower of Babel a few IPs over and getting up to who knows what.

The merits of proof of work should be discussed for the specific scenario. If it allows for reputationless discussion and throwaway accounts, how high is the cost really, compared to ban-evasion problems, moderation overhead, and the resulting attack surface that eats more resources while the channel deteriorates? Otherwise, low-impact emergency brakes for idiots might be a reasonable solution. It's not much different from timed bans.
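
To make the cost question concrete, here is a minimal hashcash-style sketch: a post is only accepted if its hash has a given number of leading zero bits, so a throwaway identity pays CPU time per message instead of building reputation. The difficulty value is arbitrary and would need tuning per scenario:

    import hashlib
    from itertools import count

    DIFFICULTY_BITS = 18  # arbitrary; tune so one post costs roughly a second

    def mint(message: str) -> int:
        """Find a nonce so sha256(message:nonce) has the required number of
        leading zero bits. This is the sender's per-post cost."""
        for nonce in count():
            digest = hashlib.sha256(f"{message}:{nonce}".encode()).digest()
            if int.from_bytes(digest, "big") >> (256 - DIFFICULTY_BITS) == 0:
                return nonce

    def valid(message: str, nonce: int) -> bool:
        """Verification is a single hash: essentially free for the channel."""
        digest = hashlib.sha256(f"{message}:{nonce}".encode()).digest()
        return int.from_bytes(digest, "big") >> (256 - DIFFICULTY_BITS) == 0

    nonce = mint("hello, channel")
    assert valid("hello, channel", nonce)

The asymmetry is the point: minting costs the sender hundreds of thousands of hash attempts on average, while verifying costs the channel exactly one.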


A large issue I see with this is moderation. Discord, Slack, Reddit, and so on all have appointed moderators for every community who keep users in check according to community standards. With this idea, there are no "owners" of any of these chatrooms, so are you going to moderate them centrally? That is a lot of work and requires a lot of people; just think about how much work it is to moderate a single Twitch chatroom for spam, let alone an unlimited number of them.

Would love to hear if you have any ideas on solving that. Besides that, I like the idea.


Central moderation is a deeply flawed model that always ends with everyone being unhappy with the platform.

When was the last time we said that the Internet Protocol, or routers and hubs worldwide, need "sex workers and black groups"? It sounds like nonsense, because it's just a platform, a technology.

Well, this is how social networks should be too. A social network should allow multiple communities to form and let each community moderate itself. There's no other viable way I can think of.

The platform should only step in under dire circumstances, when the integrity of the platform itself is being violated.


The problem is that the costs of content moderation are not linear. You are not dealing with a few thousand trolls; you're dealing with bot farms impersonating possibly over a million accounts, huge networks operated by just a few dozen people.

Automating that away is the only path to being on equal footing. If you introduce any human element, it will not only be a bottleneck, but the cost could be large enough to bankrupt even the largest companies.
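
A toy version of the kind of automation that helps here: instead of judging accounts one at a time, group them by a shared behavioral fingerprint (below, just identical text in a short time window) and act on the whole cluster. Real systems use far richer signals (IP overlap, timing, graph structure); everything here is illustrative:

    from collections import defaultdict

    WINDOW = 60            # seconds; arbitrary
    CLUSTER_THRESHOLD = 5  # accounts per identical message; arbitrary

    def find_bot_clusters(posts):
        """posts: iterable of (timestamp, account_id, text).
        Returns sets of accounts that posted the same text in the same
        window; the per-account cost of review drops to per-cluster."""
        buckets = defaultdict(set)
        for ts, account, text in posts:
            buckets[(ts // WINDOW, text)].add(account)
        return [accounts for accounts in buckets.values()
                if len(accounts) >= CLUSTER_THRESHOLD]

    posts = [(t, f"bot{t % 7}", "BUY NOW!!!") for t in range(30)]
    posts.append((10, "alice", "anyone else see this spam?"))
    print(find_bot_clusters(posts))  # one cluster of the 7 bot accounts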


Anyone who has done moderation for any active forum can tell you how nightmarish it can be. The more popular the platform, the more problems you have.

Anecdotally, a friend who streams on Twitch also mods for a few streamers, and they have issues even when a stream is in subscriber-only mode, simply because anonymity and distance from those being affected empower people to do bad things.

Edit: I'm really surprised there aren't companies springing up to provide these services, seeing how most of these activities are required by law.

However, before bemoaning what they are paid, just go look at your local 911 operators, who are government employees. Just because we think the job should pay more doesn't mean others do.


A version of NNTP where groups were automatically moderated, with strong controls in place to weed out spammers and trolls, could work. The moderation would have to be baked into the core of the protocol, however, not bolted onto the side as some weird half-measure. Once a community is large enough, you either moderate it or you get another /b/.
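
For context, classic NNTP already has a weak form of this: articles in a moderated group are only propagated if they carry an Approved: header, but nothing authenticates that header. A baked-in version might require a verifiable moderator credential on every article. A rough sketch of that check, using HMAC for brevity where a real protocol would use public-key signatures, with all names invented:

    import hashlib
    import hmac

    # Each moderated group publishes a key; relays only accept articles
    # bearing a valid tag from the group's moderator. Unmoderated groups
    # behave as before. Key distribution is hand-waved here.
    GROUP_KEYS = {"comp.lang.misc": b"moderator-secret-key"}  # illustrative

    def approve(group: str, article: bytes) -> str:
        """The moderator's approval step: tag the exact article bytes."""
        return hmac.new(GROUP_KEYS[group], article, hashlib.sha256).hexdigest()

    def relay_accepts(group: str, article: bytes, approval: str) -> bool:
        key = GROUP_KEYS.get(group)
        if key is None:
            return True  # unmoderated group: accept as before
        expected = hmac.new(key, article, hashlib.sha256).hexdigest()
        return hmac.compare_digest(expected, approval)

    art = b"Subject: hello\n\nFirst post."
    tag = approve("comp.lang.misc", art)
    assert relay_accepts("comp.lang.misc", art, tag)
    assert not relay_accepts("comp.lang.misc", art, "forged")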

I've been thinking hard about decentralized content moderation, especially around chatrooms, for years. More specifically because I'm building a large, chatroom-like service for TV.

I think it's evident from Facebook, Twitter, et al that human moderation of very dynamic situations is incredibly hard, maybe even impossible.

I've been brewing up strategies for letting the community itself moderate, because a machine really cannot "see" whether content is good or bad in context.

While I think that community moderation will inevitably lead to bubbles, it's a better and more organic tradeoff than letting a centralized service dictate what is and isn't "good".
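
One way to make "the community moderates itself" concrete is per-room threshold hiding: members vote, each room sets its own tolerance, and clients filter locally, so the central service never decides what is "good". A toy sketch, with the scoring rule and all names invented for illustration:

    # Each room picks its own hide threshold; the service only relays scores.
    class Room:
        def __init__(self, hide_below: int):
            self.hide_below = hide_below  # the community's own tolerance
            self.scores = {}              # message_id -> net votes

        def vote(self, message_id: str, delta: int):
            self.scores[message_id] = self.scores.get(message_id, 0) + delta

        def visible(self, message_id: str) -> bool:
            return self.scores.get(message_id, 0) >= self.hide_below

    strict = Room(hide_below=0)
    lax = Room(hide_below=-10)
    for room in (strict, lax):
        room.vote("msg1", -3)
    print(strict.visible("msg1"), lax.visible("msg1"))  # False True

The bubble tradeoff mentioned above falls out directly: two rooms with different thresholds see different slices of the same content.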


I believe all platforms do moderation of some sort. It sounds like what you are proposing is that nobody be allowed to do any moderation. How do you think that would work?

A network effect won't protect them when they eventually screw up and start banning their own star content creators, if they haven't already. Their power of moderation can only be exercised as long as thought leaders agree they are good moderators.

Network effects aren't magic. If an opinion is banned on Twitter then the network that supports it will have to form somewhere else.


Between bad actors abusing DMCA takedowns and brigading mobs, it seems that automated (or semi-automated) moderation is absolutely a no-go at scale. Incidentally, automation is also why these networks (YouTube/Facebook/Twitter/etc.) can reach such massive scale with basically zero human moderation (i.e. cheaply). Remember when we had forums, and a few unsung heroes kept our communities clean and tidy(-ish)? Even here, I'm pretty sure @dang is basically reading HN and cleaning it up all day.

Doing that kind of human moderation at scale with semi-automated systems is quite literally impossible, but until the courts start sanctioning social networks for their ad-hoc censorship (if that is even enforceable), we are sadly unlikely to see a change.


I don't think you can make that assertion so easily. For example, would it still be true if there were one capable moderator per user on the network?

Well, one way forward is to define the moderator's role more clearly. This is, in essence, a society that no one runs. So a good way to deal with it would be a little bit of tech, as I described above, and some more moderating...

I think it's a good idea as long as someone (maybe PG) moderates who gets to do it.
