Wow, that's crazy! I didn't even think about that, but I can totally see how it could be possible. I imagine that the only reliable way to prevent it would be to have multiple redundant systems that operated via consensus.
To go one step further: have three or more identical, firewalled systems running and have them vote on every calculation, with the votes sent to a hardwired switch that prevents the system from advancing if there's a disagreement and cuts power to the RAM if there are too many disagreements in a row. The chances of any compromise hitting all the systems within the same nanosecond would be infinitesimal.
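To make that concrete, here's a rough Python sketch of the voting loop I have in mind. The replica functions, the unanimity requirement, and the "three strikes" cutoff are all made-up parameters for illustration, not a real hardened design:

    # Sketch of N-modular redundancy with a vote on every step.
    # Replica functions, disagreement threshold, and the "cut power"
    # action are placeholders, not a real hardened design.
    from collections import Counter

    def run_step(replicas, inputs):
        """Run one calculation on every replica and return the top result."""
        results = [replica(inputs) for replica in replicas]
        winner, votes = Counter(results).most_common(1)[0]
        agreed = votes == len(results)   # unanimous, as proposed above
        return winner, agreed

    def voting_loop(replicas, steps, max_disagreements=3):
        disagreements = 0
        state = 0
        for step_input in steps:
            value, agreed = run_step(replicas, (state, step_input))
            if not agreed:
                disagreements += 1
                if disagreements >= max_disagreements:
                    raise SystemExit("too many disagreements in a row: cut power")
                continue              # hardwired switch: do not advance on disagreement
            disagreements = 0
            state = value
        return state

    # Three "identical" replicas; in reality each would be an isolated, firewalled box.
    replicas = [lambda x: x[0] + x[1]] * 3
    print(voting_loop(replicas, [1, 2, 3]))   # -> 6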
From a systems perspective, the observed phenomenon is that the system enters a state where agents no longer meaningfully exchange information and further attempts at information exchange paradoxically result in reduced consensus.
This isn't a problem unique to HN, but is one that HN has systematically tried to avoid. I have no answer, but am taking time to frame and pose the question properly, so that we can consciously acknowledge the phenomenon and address it properly.
One might speculate that if people could systematically avoid entering such a state, things such as war wouldn't be possible; I imagine PG would agree.
I think for big orgs there could be a two-man rule system like they use in Minuteman missile control: two designated members both have to agree before deleting a repo or doing anything else seriously irreversible.
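As a toy sketch (the approver names, the class, and the flow here are invented purely for illustration, not any real tool's API), the gist in Python would be:

    # Toy two-man rule: a destructive action only runs once two distinct,
    # designated approvers have signed off. Names and flow are illustrative only.
    DESIGNATED = {"alice", "bob", "carol"}

    class TwoManRule:
        def __init__(self, action_name):
            self.action_name = action_name
            self.approvals = set()

        def approve(self, member):
            if member not in DESIGNATED:
                raise PermissionError(f"{member} is not a designated approver")
            self.approvals.add(member)

        def execute(self, action):
            if len(self.approvals) < 2:
                raise PermissionError(f"'{self.action_name}' needs two distinct "
                                      f"approvals, has {len(self.approvals)}")
            return action()

    gate = TwoManRule("delete repo")
    gate.approve("alice")
    gate.approve("bob")
    gate.execute(lambda: print("repo deleted (irreversible)"))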
The Two Generals Problem is a great thought experiment showing that guaranteed distributed consensus is impossible when communication between nodes can fail.
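Here's a toy Python illustration of the lossy channel that makes it unsolvable; the loss probability and the number of rounds are arbitrary numbers I picked:

    # Tiny illustration of the Two Generals setup: messages over a lossy channel.
    # However many acknowledgments are exchanged, the sender of the last message
    # can never know whether it arrived, so guaranteed agreement is impossible.
    import random

    def lossy_send(msg, loss_prob=0.3):
        """Deliver msg with some probability; return None on loss."""
        return msg if random.random() > loss_prob else None

    def exchange(rounds=5):
        delivered = 0
        for i in range(rounds):
            if lossy_send(f"ack {i}") is None:
                # The chain breaks here; the sender cannot distinguish
                # "message lost" from "other side never replied".
                return delivered
            delivered += 1
        return delivered

    print(exchange())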
I totally agree that checks and balances -- in the abstract, some form of all-to-all coordination between the different components of the system -- are probably the easiest way to healthcheck any system like this. Otherwise you fall into "multipolar traps" [1] where each actor, working independently and acting game-theoretically optimally, can throw the system into undesirable states.
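To show the trap part concretely, here's a toy prisoner's-dilemma-style payoff table (my own numbers, not taken from [1]) where each actor's individually optimal move leaves everyone worse off than coordinating:

    # Toy "multipolar trap": two actors each pick their individually optimal
    # (dominant) move, and the joint outcome is worse for both than if they
    # had coordinated. Payoffs are the standard prisoner's dilemma, used here
    # only as an illustration.
    PAYOFFS = {  # (my move, their move) -> my payoff
        ("cooperate", "cooperate"): 3,
        ("cooperate", "defect"):    0,
        ("defect",    "cooperate"): 5,
        ("defect",    "defect"):    1,
    }

    def best_response(their_move):
        """Pick the move that maximises my payoff given what they do."""
        return max(("cooperate", "defect"), key=lambda m: PAYOFFS[(m, their_move)])

    # Whatever the other side does, "defect" is individually optimal...
    assert best_response("cooperate") == "defect"
    assert best_response("defect") == "defect"
    # ...yet mutual defection (1, 1) is worse for everyone than coordination (3, 3).
    print(PAYOFFS[("defect", "defect")], "<", PAYOFFS[("cooperate", "cooperate")])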
Checks and balances should be implemented. But are they adequately present in the actually-existing implementation of the system?
Maybe. We're talking about a hypothetical power struggle, one aspect of which is that one group subverts the computing resource of another group. Even if at some point the second group realizes this has happened (potentially difficult when not only their decompilation tools but also their intragroup communications are subject to interception and alteration by the first group) there's no guarantee that they will ever regain control of those resources. The first group may be able to leverage their control enough to severely impact the second group's prospects in every aspect of the struggle.
More generally, when groups of people are engaged in a power struggle, there's no guarantee that things will return to the status quo ante.
That seems reasonable. I can think of many human/organizational/rule reasons that would force the breakup of a monolith representing many separate-but-related concerns.
Such an org will have horrible politics, as all those autonomous people fight with each other over who calls the shots and who controls non-existent resources.
There is handling uncertainty, and there is organizing a workplace to maximise uncertainty. You described the latter.
Mirroring doesn't deal with this because there's always a single target whose elimination disrupts the whole system: some time will pass before another person puts a target on their back, and there's the problem of figuring out which successor is legitimate if there are several.
Having both sides agree to that and not cheat would be an unprecedented amount of coordination; you could probably write a sociology paper just from this observation :).