
I think to a high degree it depends on whether the individuals could realistically prevent new instances given their resources (e.g. access, head count, number of other high-priority tasks, etc.)



I would consider the scenario where it is not just one person or group: you could have more than one person trying to cheat the system in favor of their favorite tribe, or even just trolls practicing their scripting skills.

Maybe. We're talking about a hypothetical power struggle, one aspect of which is that one group subverts the computing resource of another group. Even if at some point the second group realizes this has happened (potentially difficult when not only their decompilation tools but also their intragroup communications are subject to interception and alteration by the first group) there's no guarantee that they will ever regain control of those resources. The first group may be able to leverage their control enough to severely impact the second group's prospects in every aspect of the struggle.

More generally, when groups of people are engaged in a power struggle, there's no guarantee that things will return to the status quo ante.


Yeah, I figured that. I imagine it's a bit challenging to model though.

You'd have to be running every instance simultaneously and have them all observing and reacting to each other.

Also gets into some interesting "morality" type of questions. As in if an instance cheats to get ahead, are there consequences?


Wow, that's crazy! I didn't even think about that, but I can totally see how it could be possible. I imagine that the only reliable way to prevent it would be to have multiple redundant systems that operated via consensus.
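The "multiple redundant systems operating via consensus" idea can be sketched as a simple majority vote over replica outputs (a hypothetical toy model; real systems use protocols like Raft or PBFT, and the function and quorum values here are illustrative, not from the thread):

```python
from collections import Counter

def consensus_result(replica_outputs, quorum):
    """Return the value reported by at least `quorum` replicas, else None.

    A compromised minority of replicas cannot change the outcome as long
    as the honest replicas still meet the quorum.
    """
    counts = Counter(replica_outputs)
    value, votes = counts.most_common(1)[0]
    return value if votes >= quorum else None

# Three redundant replicas; one has been subverted.
outputs = ["transfer-denied", "transfer-denied", "transfer-approved"]
print(consensus_result(outputs, quorum=2))  # -> transfer-denied
```

The point is that subverting one system silently is no longer enough; an attacker has to compromise a quorum of independent systems at once.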

Unless relays come together and agree to block someone or something. As long as there are enough relays, this is improbable.

Of course not. If instance A does not like instance B and blocks it, there is no reason why there can't be an instance C that federates with both A and B.

If these three were the only instances in the world then, yes, A could be so strict that it also blocks C because of its communication with B. But since there are hundreds of these instances, and since there is no limit on how many can be created, this will not be a problem.
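The A/B/C argument is really a graph-connectivity claim: a block only severs an instance's own edges, not paths through third parties. A hypothetical sketch (the instance names and link table are illustrative):

```python
# Each instance lists the peers it federates with. A blocks B, so there
# is no direct A<->B edge, but both still federate with C.
federates = {
    "A": {"C"},
    "B": {"C"},
    "C": {"A", "B"},
}

def can_reach(src, dst):
    """Search the federation graph for any path from src to dst."""
    seen, frontier = {src}, [src]
    while frontier:
        node = frontier.pop()
        for peer in federates.get(node, ()):
            if peer == dst:
                return True
            if peer not in seen:
                seen.add(peer)
                frontier.append(peer)
    return False

print(can_reach("A", "B"))  # True: A's block doesn't isolate B from the network
```

So a single admin's block decision only disconnects their own instance; the rest of the graph stays connected as long as some instance keeps federating with both sides.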

Admins can be dictators on their own instance, but there are a lot of admins to choose from. If a big instance goes sour, it is very easy to migrate to another one.


Correct, though I'd argue that if your new resource is hostile, it will not be productive for either party to work together.

How well has this been characterised when some participants are actively malicious rather than wrong?

Reason I ask is, you can absolutely bet that states will attempt to cause interesting failure modes in other states’ A.I. — imagine if self-driving cars had a literal blind spot for the fifteen senators most aggressive towards [rolls dice] Agrabah?


I think for big orgs there could be a two-man rule system like they use in Minuteman missile control: two designated members must both agree before deleting a repo or doing anything seriously irreversible.
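A two-man rule is straightforward to enforce in software: gate the irreversible action behind two distinct approvals from a designated set. A hypothetical sketch (class and member names invented for illustration):

```python
class TwoManRule:
    """Run an irreversible action only after two distinct designated
    members have approved it (inspired by missile-launch two-man rules)."""

    def __init__(self, designated):
        self.designated = set(designated)
        self.approvals = set()

    def approve(self, member):
        if member not in self.designated:
            raise PermissionError(f"{member} is not a designated approver")
        self.approvals.add(member)

    def execute(self, action):
        if len(self.approvals) < 2:
            raise RuntimeError("two distinct approvals required")
        self.approvals.clear()  # approvals are single-use
        return action()

ctl = TwoManRule(designated={"alice", "bob", "carol"})
ctl.approve("alice")
ctl.approve("bob")
ctl.execute(lambda: print("repo deleted"))
```

Because `approvals` is a set, one person approving twice still counts as a single approval, which is the whole point of the rule.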

Who would win: the willpower of one person who's got a dozen other responsibilities, or a team of a hundred experts whose full-time job is to break said willpower? Especially when the latter only has to win once and the former has to win every single time. The power imbalance here is so severe that it's like offering 'just don't get shot' as an answer to gun violence.

Good point, cancelling concurrent actions that started other concurrent actions is extremely complicated. I don't think humans can solve it efficiently in real life. There is usually a mess left behind if plans are reverted or changed significantly.

Is such a conflict likely to occur with a single user?

Who determines sufficiency (well, a jury here…)?

If an ethnic conflict occurs in a relatively obscure region are they going to spin up new moderators on the AWS spot market?

These things aren’t perfect or fast.


Yes! But in a way that works via consensus and that doesn't establish unnecessary hierarchies. For example, you could have people coming in voluntarily agree to the penalties and have the primary stakeholders vote on penalties.
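The "stakeholders vote on penalties" idea can be sketched as a simple majority check over opt-in participants (a hypothetical illustration; the function and threshold are assumptions, not from the thread):

```python
# Participants opt in to the penalty schedule when they join; a proposed
# penalty applies only if a strict majority of stakeholders votes for it.
def penalty_passes(votes):
    """votes: mapping of stakeholder -> True (for) / False (against)."""
    in_favor = sum(votes.values())
    return in_favor * 2 > len(votes)  # strict majority

votes = {"ann": True, "ben": True, "cho": False}
print(penalty_passes(votes))  # -> True
```

This keeps enforcement flat: no single admin decides, and ties fail, so the default is no penalty.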

You still would need just one participant; it's the computation part that would be open to any number of participants in order to reduce the possibility of collusion.

Everything that can be done competes, to some extent, for the same allocated resources.

The roughest problems are those that emerge in interfaces between individual tribes within an organization. Information loss can be very easy and very pernicious in those cases.

This depends on multiple factors, and even when it's true I don't see the harm in trying to avoid these issues before conflicts happen.

A system of checks and balances overseen by several humans can have orders of magnitude lower error rates, though.
