1. It wouldn’t be 5 minutes. This could easily take hours to resolve.
2. Even with human support, companies like this would continue to be incentivised not to share what exactly triggered the ban in order to protect their systems from spam and abuse. You can’t provide services to billions of people without automation.
3. Human reviews would still be performed by humans. You’re replacing an imperfect system with another imperfect system, except now things are slower and cost more.
Fully agreed - I wasn't suggesting they review every manual decision, but do it as part of a proper appeals process. Google, for instance, is likely seeing abuse on a scale where they have to have automated bans - for everyone's benefit! - and there's nothing fundamentally wrong with that as long as you can escalate to a real human.
Edited the comment to reflect this - thanks!
I worked on an abuse prevention system in the past and know the challenges very well, except my company actually put in the effort to respond to every appeal and compensate affected customers for their troubles.
Yes, humans are expensive, and spammers will try to game the appeals process, too - but it's simply a cost of doing business.
Similar to arguments about getting human support from Google being impossible... There is a very simple solution: allow users to pay a modest amount, say €25, to get 10 minutes of real human support. This will immediately rule out people spamming at scale. The other side of this equation is that if the ban is upheld, they must justify it in human terms, beyond "the ML said it's bad". A positive judgement could then be used to train the ML to be more lenient on this user going forward, at least when it's a borderline case.
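To make that feedback loop concrete, here's a minimal Python sketch of the paid appeal plus per-user leniency adjustment; the threshold, bonus, and function names are all made up for illustration, not any real platform's system.

```python
# Hypothetical sketch of the paid-appeal feedback loop described above; the
# threshold, bonus, and function names are made up for illustration.

APPEAL_FEE_EUR = 25         # refundable fee that gates the human review queue
BAN_THRESHOLD = 0.90        # model score above which an automated ban fires
TRUST_BONUS = 0.05          # extra leniency granted after a human overturns a ban

user_trust = {}             # user_id -> margin earned from overturned bans

def should_ban(user_id: str, model_score: float) -> bool:
    """Automated decision, made more lenient for users with successful appeals."""
    effective_threshold = BAN_THRESHOLD + user_trust.get(user_id, 0.0)
    return model_score >= effective_threshold

def record_appeal(user_id: str, ban_overturned: bool) -> int:
    """A human reviewer's judgement feeds back into future automated decisions.

    Returns how much of the appeal fee is refunded.
    """
    if ban_overturned:
        # Give the model less benefit of the doubt about this user next time.
        user_trust[user_id] = user_trust.get(user_id, 0.0) + TRUST_BONUS
        return APPEAL_FEE_EUR
    # If the ban is upheld, the reviewer must write a human-readable reason
    # (not modelled here) and the fee is kept to deter appeal spam.
    return 0
```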
If you're banning thousands of people, deleting thousands of tweets, and updating thousands of people as routine customer support, it might take time for someone to review what you're doing.
Mmm, well, there you're changing your argument on me ^_^;
Okay, let's talk about that instead. Problems: false positives (from whose perspective?), low staff-to-customer ratio.
Potential solutions:
Recruit volunteer staff
Pros: More staff.
Cons: Some of them are going to be evil.
Let users report people they don't want to hear from (as you point out, this is how it will work)
Pros: Reduces staff tasks
Cons: Again, might be used for evil
So, essentially what we seem to be looking at with the above problems is a trust issue. If you give people binary access to total power, even on a probabilistic basis as per the report function, then it's going to be abused.
Potential solutions:
Give people more limited forms of power than banning and not banning.
Pros: Varies.
Cons: Fewer deterrents?
How might we do that?
Potential solutions: Let people ban others from their accounts.
Pros: No longer have to worry about people who just don't like someone banning them.
Cons: Loses a lot of the social deterrent effect; people who joke about rape probably don't care about continuing to talk to the person they're attacking anyway.
Cons: Doesn't let you network with people who are likely to share your values, so you'll get exposed to attacks anyway.
(This seems to be the current state of affairs - I don't really use twitter so I don't know, but suggestions on pages seem to imply it.)
If we solve the second problem there, the first one becomes less of an issue. At this point it starts to look like a networking and evidence problem. Suppose we have ban groups that you can join or leave as you please, to make abuse less likely, and have each individual user's decisions absolutely override any group-level decision for their account (there's a rough sketch of this after the questions below).
Okay, what are the potential problems with that?
How would you vote?
If someone who's banned from one person's account gets banned from all of them, then as the group grows, the power of any individual within that group will increase.
Require more than one person to make a decision to get rid of someone?
How do we stop groups of thugs just voting to shut someone up?
Choose a representative sample? Say by forwarding the reported post to three people within the group and having them all sign off on it.
Downside is you duplicate work - upside is you reduce the potential for them all to be evil - and if the group itself is evil then it's not a group that people would want to be part of anyway.
How do you keep trolls out? Perhaps if the group consistently votes against your reports, your reports stop being received?
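Putting the ban-group and three-reviewer ideas together, here's the rough Python sketch referred to above, assuming a toy model where group bans apply by default, a member's own decision always overrides the group, and a report only becomes a group ban when three randomly chosen members all sign off. Every class and function name here is illustrative, not a real system.

```python
# Toy model of a ban group: group-level bans apply by default, each member's
# personal decision overrides the group, and a report needs three reviewers.
import random

class BanGroup:
    QUORUM = 3

    def __init__(self, members: set):
        self.members = members
        self.group_banned = set()
        self.overrides = {}  # member_id -> {target_id: True (ban) or False (explicit allow)}

    def is_banned_for(self, member_id: str, target_id: str) -> bool:
        """A member's own decision always wins over the group's."""
        personal = self.overrides.get(member_id, {}).get(target_id)
        if personal is not None:
            return personal
        return target_id in self.group_banned

    def review_report(self, reporter_id: str, target_id: str) -> bool:
        """Forward the report to three members; all must agree to issue a group ban."""
        pool = list(self.members - {reporter_id, target_id})
        reviewers = random.sample(pool, min(self.QUORUM, len(pool)))
        votes = [ask_member(r, target_id) for r in reviewers]
        if len(votes) >= self.QUORUM and all(votes):
            self.group_banned.add(target_id)
            return True
        return False

def ask_member(member_id: str, target_id: str) -> bool:
    """Placeholder: in reality this would notify the member and await their vote."""
    return False
```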
==========================
I don't know, admittedly this isn't my area and this is like ten minutes thought. But it doesn't seem to me like an absolutely unsolvable problem as much as it seems a bit tricky.
I don't think you can claim a company treats people's accounts with much respect when it allows automated processes to bulk-ban them by the hundreds and offers no appeal process other than making a PR nightmare for it on blog sites.
1. Doing this could basically produce the same result: the cost of more support could easily force them to start charging, or to restrict who can be a content creator. Neither of us has numbers, so it's all guesses, but I'd be willing to wager that spending a TON of money on hiring and training new CSRs wouldn't really change all that much, especially when telling someone explicitly "you were banned because X" is almost never a good idea. All this would do (in my opinion) is create a much more expensive, slower, and more annoying version of the same problem.
2. I think this would help, but it's not going to prevent mentally ill people from being mentally ill. They aren't going to say "oh well this removal was justified and was consistent with the others", they are going to find a perceived wrongdoing and will latch on to that, because they are mentally ill.
3. I've come to the conclusion that this is literally impossible. You can't be politically neutral. People aren't politically neutral, and therefore the things they create or moderate can't be politically neutral (whether they mean to do it or otherwise). Even algorithms that are created by people can show biases.
I mostly doubt that, without the current underlying system, companies would act the same way. In this case, I doubt they'd all have as much of a push to ban the same people at the same time, for example.
It's also a rather unconvincing argument when there are so many blatant instances of service abusers getting away with it on platforms that can afford very talented employees. In short, whatever it is they're doing is already quite ineffective. While in theory it could be a little less effective if we knew what they were doing, it's also possible that they could be a lot more effective if they changed what they're doing and were transparent about it. A hierarchical reputation system (vouching or invite-style) would solve many issues in many domains, for instance; its main downside is during hyper-growth phases where you need onboarding to be as frictionless as possible. But for a big, established company like eBay, I think requiring a new account to be vouched for by an existing account, which takes on some risk if the new one turns abusive, would be quite doable.
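Here's a rough sketch of what such a vouching scheme could look like, assuming a simple chain where each new account is backed by an existing one and abuse penalizes the chain of backers with decaying weight; all names and numbers here are assumptions, not eBay's or any real platform's system.

```python
# Minimal sketch of a vouching/invite-style reputation chain, under the
# assumptions stated above; not any existing platform's design.

class Account:
    def __init__(self, account_id: str, voucher: "Account | None" = None):
        self.account_id = account_id
        self.voucher = voucher          # who vouched for this account (None for root accounts)
        self.reputation = 1.0           # drops when this account, or accounts it vouched for, abuse

def create_account(account_id: str, voucher: Account, min_rep: float = 0.5) -> Account:
    """New accounts must be vouched for by an existing account in good standing."""
    if voucher.reputation < min_rep:
        raise PermissionError("Voucher's reputation is too low to vouch for new accounts")
    return Account(account_id, voucher)

def penalize_abuse(abuser: Account, penalty: float = 0.5, decay: float = 0.5) -> None:
    """Abuse hurts the abuser fully and the chain of vouchers with decaying weight."""
    node, weight = abuser, 1.0
    while node is not None:
        node.reputation = max(0.0, node.reputation - penalty * weight)
        node, weight = node.voucher, weight * decay
```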
At least in your IG example the ban is finite. I don't want the law to be used so bluntly, but I'd really prefer it if all bans had to be time-limited, even if only technically, where exponential scaling for repeat offenses pushes the duration past an expected human lifespan.
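As a quick illustration of that exponential scaling, doubling a base duration per repeat offense keeps every ban technically finite while quickly exceeding any human lifespan; the seven-day base below is an arbitrary assumption.

```python
# Illustrative arithmetic only: bans stay finite but grow past a human lifespan.
BASE_DAYS = 7

def ban_duration_days(offense_count: int) -> int:
    """7, 14, 28, 56, ... days; the 33rd offense already exceeds 80 million years."""
    return BASE_DAYS * 2 ** (offense_count - 1)

# e.g. ban_duration_days(5) == 112 days; ban_duration_days(40) is astronomically large but finite.
```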
The platform should ban the user, with an explanation written by a human being of which rule they broke. And no, "our shiny new machine learning algorithm flagged your account" doesn't cut it. Arbitration should be an option. I could write a lot more, but hopefully you get the idea.
I think the underlying problem here which will have to be addressed at some point is that there's no recourse and often no real communication when a company decides to ban a person or delete content.
There needs to be a consistent freedom-of-information style mechanism where someone can request a review of a decision by a human, and a non-generic explanation, accompanied by a chance to download a copy of the deleted content.
I don't have a strong feeling on the banning of the account. I know companies typically can't comment for legal reasons, but there may be good reasons for this, sometimes there may not be.
The problem is that people invest in these accounts. They buy the right to some content and that is taken away from them. In some cases they also invest socially in the account, like an email address.
I don't think companies can have it both ways. They can either not ban accounts like this, or they can ban accounts and refund any purchased licences, and provide the ability to transfer data out and set up redirections as necessary (an auto-responder for email perhaps?).
I was under the impression that bans were automated, and appeals were handled by humans. I think it would be better if ban “suggestions” were automated and actual bans always came from a human.
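A minimal sketch of that "automation suggests, a human decides" split, assuming a toy review queue; the threshold and field names are made up, not any real platform's design.

```python
# Toy flow: the model can only enqueue suggestions; a human issues the ban.
from queue import Queue
from typing import Optional

SUGGEST_THRESHOLD = 0.8
review_queue: "Queue[dict]" = Queue()   # suggestions awaiting a human moderator

def suggest_ban(user_id: str, model_score: float, reason: str) -> None:
    """The model may only enqueue a suggestion; it never bans directly."""
    if model_score >= SUGGEST_THRESHOLD:
        review_queue.put({"user_id": user_id, "score": model_score, "reason": reason})

def human_decision(approve: bool, written_explanation: str) -> Optional[dict]:
    """A ban is only issued when a human approves it and writes the explanation."""
    case = review_queue.get()
    if approve:
        return {"banned": case["user_id"], "explanation": written_explanation}
    return None
```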
I accept that big tech services will occasionally have to ban people for rules violations. But to be banned and not told why is absolutely inexcusable.
Whatever the reasoning behind these suspensions, I think it's very rude and unprofessional to simply suspend accounts without giving some warning and some time for the user to comply. Most of these popular online services, including the tech giants, seem to prefer this rude approach. Not having any reliable human support makes the problem worse. If you can't provide human support, at least give people a few days to comply and then suspend.