I think to a high degree it depends on whether the individuals could realistically have prevented new instances given their resources (e.g. access, head count, number of other high-priority tasks, etc.)
I would consider the scenario where it's not just one person/group: you could have more than one person trying to cheat the system in favor of their favorite tribe, or even just trolls practicing their scripting skills.
Maybe. We're talking about a hypothetical power struggle, one aspect of which is that one group subverts the computing resource of another group. Even if at some point the second group realizes this has happened (potentially difficult when not only their decompilation tools but also their intragroup communications are subject to interception and alteration by the first group) there's no guarantee that they will ever regain control of those resources. The first group may be able to leverage their control enough to severely impact the second group's prospects in every aspect of the struggle.
More generally, when groups of people are engaged in a power struggle, there's no guarantee that things will return to the status quo ante.
Wow, that's crazy! I didn't even think about that, but I can totally see how it could be possible. I imagine that the only reliable way to prevent it would be to have multiple redundant systems that operated via consensus.
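A minimal sketch of what "multiple redundant systems that operate via consensus" could look like: run the same computation on independent replicas and only accept an answer a quorum agrees on. (Hypothetical illustration; `replicas` here are just plain functions standing in for independent systems, and the names are made up.)

```python
from collections import Counter

def consensus_result(replicas, inputs, quorum=2):
    """Run the same computation on independent replicas and accept
    the answer only if at least `quorum` of them agree."""
    results = [replica(inputs) for replica in replicas]
    answer, votes = Counter(results).most_common(1)[0]
    if votes >= quorum:
        return answer
    raise RuntimeError("no consensus: replicas disagree")

# A single compromised replica is outvoted by the honest majority:
honest = lambda x: x * 2
subverted = lambda x: x * 2 + 1
print(consensus_result([honest, honest, subverted], 21))  # → 42
```

The point being that the first group would then have to subvert a majority of the replicas, not just one.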
Of course not. If instance A does not like instance B and blocks it, there is no reason why there can't be an instance C that federates with both A and B.
If these three were the only instances in the world then, yes, A could be so strict that it also blocks C because of its communication with B. But since there are hundreds of these instances, and since there is no limit on how many can be created, this will not be a problem.
Admins can be dictators on their own instance, but there are a lot of admins to choose from. If a big instance goes sour, it is very easy to migrate to another one.
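To make the A/B/C point concrete, here's a toy sketch (not any real federation protocol, just illustrative names) showing why blocking is pairwise: A blocking B says nothing about whether C can talk to either of them.

```python
# Each instance keeps its own blocklist; a block only affects that pair.
blocklists = {
    "A": {"B"},   # A has blocked B
    "B": set(),
    "C": set(),   # C blocks no one
}

def federates(x, y):
    """Two instances exchange messages only if neither blocks the other."""
    return y not in blocklists[x] and x not in blocklists[y]

print(federates("A", "B"))  # False: A blocks B
print(federates("C", "A"))  # True
print(federates("C", "B"))  # True: C still reaches B
```

Unless A explicitly adds C to its own blocklist too, C stays connected to both sides.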
How well has this been characterised when some participants are actively malicious rather than wrong?
Reason I ask is, you can absolutely bet that states will attempt to cause interesting failure modes in other states’ A.I. — imagine if self-driving cars had a literal blind spot for the fifteen senators most aggressive towards [rolls dice] Agrabah?
I think for big orgs there could be a two-man rule system like they use in minuteman missile control, two designated members have to both agree to delete a repo or anything seriously irreversible.
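A rough sketch of how such a two-man rule might be enforced in software: the destructive action refuses to run unless two distinct designated members have approved it. (Hypothetical names and function throughout; a real system would verify identities cryptographically, not via a string list.)

```python
# Members designated to approve irreversible actions (made-up names).
AUTHORIZED = {"alice", "bob", "carol"}

def delete_repo(repo, approvals):
    """Perform an irreversible action only with two distinct
    authorized approvers; duplicates count once."""
    approvers = set(approvals) & AUTHORIZED
    if len(approvers) < 2:
        raise PermissionError("two distinct authorized approvals required")
    print(f"deleting {repo} (approved by {sorted(approvers)})")

delete_repo("prod-infra", ["alice", "bob"])      # runs
# delete_repo("prod-infra", ["alice", "alice"])  # raises: one person twice
```

The `set` intersection is what makes it a genuine two-*person* rule rather than a two-*click* rule.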
Who would win: the willpower of one person who's got a dozen other responsibilities, or a team of a hundred experts whose full-time job is to break said willpower? Especially when the latter only have to win once and the former has to win every single time. The power imbalance here is so severe that it's like offering 'just don't get shot' as an answer to gun violence.
Good point; cancelling concurrent actions that have started other concurrent actions is extremely complicated. I don't think humans can solve it efficiently in real life either. There is usually a mess left behind when plans are reverted or changed significantly.
Yes! But in a way that works via consensus and that doesn't establish unnecessary hierarchies. For example, you could have people coming in voluntarily agree to the penalties and have the primary stakeholders vote on penalties.
You would still need just one participant; it's the computation part that would be open to any number of participants, in order to reduce the possibility of collusion.
The roughest problems are those that emerge in interfaces between individual tribes within an organization. Information loss can be very easy and very pernicious in those cases.