All the bad actors will do is steal someone else's credentials. Just like they do now. You still won't figure out who they are. They'll set up networks of fake names, fake friends, fake businesses, fake pages, fake comments, fake posts. They're not going to just roll over and start doing shady shit in their own names.
I don't think that bad actors are the problem you think they are. They still exist in our society and we manage.
The issue in my view hinges upon making it easier for people to choose who they interact with online, including tools for delegating those decisions to zero or more third parties.
We'll need a good decentralized identity system first.
Reasonably so. Sybil attacks still happen with some frequency.
I've also noted that a new account with little visible connection to an old one can re-establish social connections fairly quickly. After getting booted off G+ at one point (I was accessing the site via Tor), I decided to tweak the entire notion of a trusted identity (having already gone pseudonymous) by naming my new account "The Real Slim Shady". I managed to re-acquire most of my core 20-40 followers over a few days.
(Being able to corroborate the connection from other sites helped, though many people twigged on who TRSS was even before checking.)
I've also seen known users turn hostile in bad ways over time. Much as I've had the experience of knowing people who've ended up committing murder, or serious financial crimes. Vetting is a hard problem.
The conclusion I always seem to come to is that you eventually want to be able to trust those within your network. I believe that requires some initial openness: you prove who you are to a personally selected authority you decide to trust, who verifies your identity and then lets you communicate and engage under your real name or one-to-many pseudonyms. If your actions violate whatever rules exist for that network, the consequences can range from a temporary timeout ("sit in the corner and think about what you did"), to a permanent ban, to reporting the person to the appropriate authorities if society considers the behaviour criminal. We can't operate on inherent fear and distrust of every chain of command; there is good in the world. We must, of course, stand strong together and stand guard against abuse of power, inauthentic behaviour, and so on, within those chains of command.
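A minimal sketch of what that might look like, assuming a single self-chosen authority that verifies a person once and then hands out pseudonym tokens. The names, the HMAC scheme, and the ban logic are all invented for illustration, not a worked-out identity protocol:

```python
import hmac, hashlib, secrets

# Hypothetical sketch: a self-chosen "authority" verifies a person once,
# then issues pseudonym tokens. A ban applies to the underlying verified
# identity, which takes down every pseudonym derived from it.

class IdentityAuthority:
    def __init__(self):
        self._key = secrets.token_bytes(32)   # authority's signing secret
        self._pseudonyms = {}                 # token -> verified identity
        self._banned = set()                  # banned verified identities

    def issue_pseudonym(self, verified_identity: str, display_name: str) -> str:
        """Bind a pseudonym to a verified identity without exposing it."""
        token = hmac.new(self._key,
                         f"{verified_identity}:{display_name}".encode(),
                         hashlib.sha256).hexdigest()
        self._pseudonyms[token] = verified_identity
        return token

    def is_in_good_standing(self, token: str) -> bool:
        identity = self._pseudonyms.get(token)
        return identity is not None and identity not in self._banned

    def ban(self, token: str) -> None:
        """Ban the person behind a pseudonym; all their pseudonyms die with it."""
        identity = self._pseudonyms.get(token)
        if identity:
            self._banned.add(identity)

authority = IdentityAuthority()
alias = authority.issue_pseudonym("alice@verified.example", "The Real Slim Shady")
print(authority.is_in_good_standing(alias))   # True
authority.ban(alias)
print(authority.is_in_good_standing(alias))   # False
```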
How will users of the Gab network know whether they are dealing with INSERT_NATION_NAME agents, paid and incentivized to manipulate them and cause unrest in the societal structure we call democracy? Perhaps they're not even thinking about it or its consequences; perhaps they're simply happy to have a helping hand.
From what we've seen, it takes very few bad actors to infiltrate a nation's foundational structures once a population is disenfranchised enough, once we're disconnected and disengaged enough, and that has been accelerated by the cheap economies of scale the internet affords.
In the end, what I hope is that the security services of democracies are being intelligent and allocating resources to fully and carefully infiltrating these groups (carefully, so as not to escalate the situation) in order to know who is involved. This of course raises the question of potential overreach, which must be addressed not because it makes it easier to discover who the bad actors are, but because bad actors may themselves someday gain access to these systems.
One option is to do what illicit sites have done forever: require an invite, such that the bad deeds of the invitee reflect on the inviter. Doesn't require real names on the site, but the inviter knows who they are inviting.
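A rough sketch of how that inviter accountability could be tracked; the strike threshold and the account names are made up purely for illustration:

```python
from collections import defaultdict

# Hypothetical sketch: every account records who invited it, so bad deeds
# reflect back on the inviter without requiring real names anywhere.

invited_by = {}                      # account -> inviter
strikes = defaultdict(int)           # inviter -> number of invitees banned

def register(account: str, inviter: str) -> None:
    invited_by[account] = inviter

def ban(account: str) -> None:
    inviter = invited_by.get(account)
    if inviter is not None:
        strikes[inviter] += 1        # the bad deed reflects on the inviter
        if strikes[inviter] >= 3:    # arbitrary threshold for illustration
            print(f"{inviter} loses invite privileges")

register("spammer1", "carol")
register("spammer2", "carol")
register("spammer3", "carol")
for account in ("spammer1", "spammer2", "spammer3"):
    ban(account)                     # the third ban trips carol's threshold
```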
Circles turned out to be immensely painful to manage, for numerous reasons. The easiest approach was often to simply nuke them all and start over.
Surprisingly, this was remarkably non-lossy. One of the more interesting episodes I had on G+ was when my primary profile was blocked (authentication issues) and I created a new one. In a fit of pique I named that "The Real Slim Shady", and started connecting up with a few key contacts. Within a few days I'd certainly managed to restore my core list of about 100 or so connections. Several people commented on how clear it was to them that this was in fact the same individual.
(I did confirm the association through several other sites, which helps.)
It also revealed the value of quashing even a small number of sources of major noise.
This episode gave me a few insights on the nature of identity and trust. It also reminds me of how rapidly even massively-damaged cities and countries (war, natural disaster, etc.) tend to recover, especially if essential culture remains intact. Perhaps not entirely per their previous trajectory, but often with remarkably little long-term impact. This contrasts with trying to raise a specific region up out of poverty, a lack of institutions and infrastructure, and often a low-trust / high-corruption culture.
Methods to deal with malicious actors in your system:
- Require toilsome identity verification: things that are in short supply, are difficult to get, and uniquely identify a person. Examples include a phone number, credit card, driver's license, mailed letter, etc.
- Require a referral, both for accounts and for new packages. Don't allow a signup unless the user has a referral code generated by another user with good karma. This isn't fool-proof, as a user who does get an account can then generate more accounts. But it makes it easier to revoke them en masse, and forces users to be more scrupulous about who they refer, as you can block the referrer as well as the malicious user.
- Require community review, both of new users, and new packages. New users/packages are sent to a mailing list of moderators and somebody has to approve it, but someone who notices a problem can also reject it, even after it's been mistakenly approved. Slower, but there's more eyeballs to spot malicious stuff. This is more common among open source projects. (Growing your moderator list is important, as well as automating as much of the review process as possible; obviously PyPI currently has a shortage of moderators, but they should have a huge number of them by now!)
- Don't allow new users to do certain things until enough time has passed or they have accrued enough karma points. That may mean requiring fixed bugs, answered questions, etc.: work which benefits the community and which most malicious actors won't invest the time and effort in. Again not fool-proof, but it definitely increases the time and difficulty of a successful attack (see the sketch after this list).
- Captchas. These can obviously be worked around, but are a minimum-effort way to avoid bot signups.
For better defense-in-depth, combine multiple methods.
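As a concrete, entirely hypothetical illustration of the time/karma gating idea above (the thresholds are invented, not anything PyPI or any other registry actually uses):

```python
import time

# Hypothetical sketch of gating sensitive actions behind account age and karma.
# Thresholds are invented for illustration; a real system would tune them.

MIN_ACCOUNT_AGE = 7 * 24 * 3600      # one week, in seconds
MIN_KARMA = 50

def may_publish_package(created_at: float, karma: int) -> bool:
    """New accounts must wait and contribute before doing risky things."""
    old_enough = (time.time() - created_at) >= MIN_ACCOUNT_AGE
    trusted_enough = karma >= MIN_KARMA
    return old_enough and trusted_enough

# A brand-new account with no contributions is blocked from publishing:
print(may_publish_package(created_at=time.time(), karma=0))               # False
# A week-old account that has fixed bugs and answered questions is not:
print(may_publish_package(created_at=time.time() - 8 * 24 * 3600, karma=75))  # True
```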
It might be neither easy nor impossible. You could imagine a reputation system that depends on what others think of you. Start the network with just people the developers know personally. Admit new people when enough existing members vouch for them. Sometimes fakes will slip in, but they'll sometimes get sniffed out, and the people who let them in punished somehow (temporarily blocked? Delete photos at random? Idk) strictly enough that people generally don't want to let in bad actors.
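One hedged sketch of how that vouching scheme might look; the vouch threshold and the suspension penalty are placeholders, not a worked-out design:

```python
from collections import defaultdict

# Hypothetical sketch: applicants are admitted only after enough existing
# members vouch for them, and vouchers share the blame if a fake gets in.

VOUCHES_REQUIRED = 3

members = {"dev_alice", "dev_bob", "dev_carol", "dev_dave"}   # seeded by the developers
vouches = defaultdict(set)                                    # applicant -> vouching members
suspended = set()

def vouch(member: str, applicant: str) -> bool:
    """Record a vouch; admit the applicant once the threshold is met."""
    if member not in members or member in suspended:
        return False
    vouches[applicant].add(member)
    if len(vouches[applicant]) >= VOUCHES_REQUIRED:
        members.add(applicant)
        return True
    return False

def flag_as_fake(applicant: str) -> None:
    """Punish everyone who let the fake in (here: a temporary suspension)."""
    members.discard(applicant)
    suspended.update(vouches[applicant])

vouch("dev_alice", "newcomer")
vouch("dev_bob", "newcomer")
vouch("dev_carol", "newcomer")        # third vouch admits them
flag_as_fake("newcomer")              # later: the vouchers get suspended
```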
As others have stated below, this does in fact become a cat-and-mouse Sybil-attack scenario where the barrier to entry isn't high enough to stop a bad actor from creating many accounts. Online identity and reputation has to be tied to more than just an email address.
All social media sites are susceptible to this simple hack:
1. Set up IRC channel.
2. Wait for friends to arrive.
3. When there are ten, post new content to the social media site.
4. Post the link in the IRC channel.
5. You and 1-2 close friends downvote everything else.
All of the algorithms I've seen so far are susceptible to this.
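To make that concrete, here is a toy score-based ranking (deliberately not any real site's algorithm) showing what a handful of coordinated votes does:

```python
# Toy illustration only; no real site's ranking algorithm is shown here.
# With a naive score of upvotes minus downvotes, ten coordinated friends
# plus a couple of downvoters can push fresh content past organic posts.

posts = {
    "organic_post_a": {"up": 8, "down": 1},
    "organic_post_b": {"up": 6, "down": 0},
    "coordinated_post": {"up": 0, "down": 0},
}

# The IRC group arrives: ten upvotes for their post, a few downvotes elsewhere.
posts["coordinated_post"]["up"] += 10
for other in ("organic_post_a", "organic_post_b"):
    posts[other]["down"] += 3

def score(post):
    return post["up"] - post["down"]

ranking = sorted(posts, key=lambda name: score(posts[name]), reverse=True)
print(ranking)   # the coordinated post now outranks the organic ones
```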
I do not engage in this practice for many reasons, some ethical and some practical.
On an ethical level, it's toying with people's faith in society; some trolls argue successfully that this faith is misplaced and should be shaken; I counterargue that without strong signaling to that effect, it becomes vandalism.
The practical reason is that anyone who trolls around these sites can see who is engaging in this practice and rapidly realize they're behaving badly. I remember the crucifixion of davisreis666 on Reddit as an example.
Would this be surprising to anyone who has spent time on Facebook, Twitter, Reddit, or 4chan? Social networks seem to have become where people go to release their id and indulge in hostile behavior that would not be accepted in meatspace.
I suspect that the researcher knew that this was possible when she allowed herself to be talked into turning off the no-approach radius. I assume she wanted to see and record where this went. I wonder how easy it would be to reinstate the circle of protection? Is there a simple “shields up” command to restore it? What is the effect on an avatar already in your space?
Or, to have meaningful conversation, you'll have to register (that is, identify yourself) and then be under scrutiny for trollish behavior. That at least minimizes bad actors; you'll never get rid of all of them.