Hacker News

Wow, that's crazy! I didn't even think about that, but I can totally see how it could be possible. I imagine that the only reliable way to prevent it would be to have multiple redundant systems that operated via consensus.



To go one step further, have three or more identical firewalled systems running and have them vote on every calculation, with the votes sent to a hardwired switch that prevents the system from advancing if there's a disagreement, and cuts power to the RAM if there are too many disagreements in a row. The chances of the same compromise hitting all the systems within the same nanosecond would be infinitesimal.
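For concreteness, here's a toy Python sketch of that voting scheme. The class name, the unanimity requirement, and the disagreement threshold are all made up for illustration; a real lockstep system would do this in hardware.

```python
from collections import Counter

class VotingSupervisor:
    """Hypothetical sketch: compare each calculation from N redundant
    systems, refuse to advance on any disagreement, and halt (cut power)
    after too many consecutive disagreements."""

    def __init__(self, max_consecutive_disagreements=3):
        self.max_bad = max_consecutive_disagreements
        self.bad_streak = 0
        self.halted = False

    def step(self, results):
        if self.halted:
            raise RuntimeError("power to RAM cut; system halted")
        value, n = Counter(results).most_common(1)[0]
        if n == len(results):       # unanimous: advance with this value
            self.bad_streak = 0
            return value
        self.bad_streak += 1        # disagreement: stall this step
        if self.bad_streak >= self.max_bad:
            self.halted = True
        return None
```

Usage would look like `VotingSupervisor().step([42, 42, 42])`, which returns `42` only because all replicas agreed; any split vote stalls the step instead.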

Collusion among multiple persons would break that, whereas a verifiable distributed system would be unaffected.

To wax philosophical on this topic:

From a systems perspective, the observed phenomenon is that the system enters a state where agents no longer meaningfully exchange information and further attempts at information exchange paradoxically result in reduced consensus.

This isn't a problem unique to HN, but is one that HN has systematically tried to avoid. I have no answer, but am taking time to frame and pose the question properly, so that we can consciously acknowledge the phenomenon and address it properly.

One might speculate that if people could systematically avoid entering such a state, things such as war wouldn't be possible; I imagine PG would agree.


Unless relays come together and agree to block someone or something. As long as there are enough relays, this is improbable.

How would such a joint-coordinated effort work? Is such collusion possible at this scale?

I think for big orgs there could be a two-man rule, like the one used in Minuteman missile launch control: two designated members must both agree before deleting a repo or doing anything similarly irreversible.
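As a rough sketch of what that two-man rule could look like in software (the class, names, and single-use-approval policy are all invented for illustration):

```python
class TwoManRule:
    """Hypothetical guard for irreversible actions: two distinct
    designated members must both approve before the action runs."""

    def __init__(self, designated):
        self.designated = set(designated)
        self.approvals = set()

    def approve(self, member):
        if member not in self.designated:
            raise PermissionError(f"{member} is not a designated approver")
        self.approvals.add(member)   # a set, so one person can't count twice

    def execute(self, action):
        if len(self.approvals) < 2:
            raise PermissionError("two distinct approvals required")
        self.approvals.clear()       # approvals are single-use
        return action()
```

So `guard.approve("alice")` alone is not enough; only after a second distinct member approves does `guard.execute(delete_repo)` go through.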

The Two Generals Problem is a great thought experiment that shows guaranteed distributed consensus is impossible when communication between nodes can fail.

https://en.wikipedia.org/wiki/Two_Generals%27_Problem


Which would likely go alongside it; the only thing it would MAYBE allow is for everyone to continue to play "in one big agreed-on world".

I suspect it's unlikely and better handled by just using a database and a trusted group but it's the only idea I have been able to come up with.


So, let the two software systems fight with each other during emergencies and hope it somehow results in something positive?

That sounds like a terrible idea.


Yes, as long as...

I totally agree that checks and balances -- in the abstract, some form of all-to-all coordination between the different components of the system -- are probably the easiest way to healthcheck any system like this. Otherwise you fall into "multipolar traps" [1] where each actor, working independently and acting game-theoretically optimally, can throw the system into undesirable states.

Checks and balances should be implemented. But are they, in the actually-existing implementation of the system?

[1] https://slatestarcodex.com/2014/07/30/meditations-on-moloch/
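The multipolar-trap dynamic can be made concrete with a prisoner's-dilemma-shaped payoff matrix (the numbers below are the standard illustrative ones, not from the linked essay): each actor's individually optimal move is dominant, yet when both actors play it, the system lands in a state that is worse for everyone.

```python
# (a_choice, b_choice) -> (a_payoff, b_payoff); classic PD shape
PAYOFF = {
    ("coop",   "coop"):   (3, 3),
    ("coop",   "defect"): (0, 5),
    ("defect", "coop"):   (5, 0),
    ("defect", "defect"): (1, 1),
}

def best_response(options, their_choice, me_is_a):
    """The game-theoretically optimal move for one player,
    given the other player's choice."""
    def payoff(mine):
        key = (mine, their_choice) if me_is_a else (their_choice, mine)
        return PAYOFF[key][0 if me_is_a else 1]
    return max(options, key=payoff)

# "defect" is each player's best response to anything...
assert best_response(("coop", "defect"), "coop", me_is_a=True) == "defect"
assert best_response(("coop", "defect"), "defect", me_is_a=True) == "defect"
# ...yet mutual defection pays (1, 1), worse for both than (3, 3).
```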


Uh... any single party who owns more than 50% of the computation can basically mess everything up, unless I'm mistaken?

So maybe you mean 3 independent parties... but what's to stop those 3 from colluding for their own benefit, etc.?

Anyway, you need a hell of a lot more than 3 to run it.
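The >50% intuition can be quantified with the gambler's-ruin estimate from the Bitcoin whitepaper: an attacker with hash share q trying to overtake a chain that is z blocks ahead succeeds with probability (q/p)^z when q < p, and with certainty once q >= p. The function below is a sketch of that formula; it ignores the Poisson refinement in the paper.

```python
def catch_up_probability(attacker_share, deficit):
    """Gambler's-ruin estimate of an attacker with fraction
    `attacker_share` of total hash power ever overtaking a chain
    that is `deficit` blocks ahead (simplified, per the whitepaper)."""
    q = attacker_share
    p = 1 - q
    if q >= p:
        return 1.0              # a majority attacker always catches up
    return (q / p) ** deficit

print(catch_up_probability(0.51, 6))   # majority: certain
print(catch_up_probability(0.10, 6))   # small minority: vanishing odds
```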


Maybe. We're talking about a hypothetical power struggle, one aspect of which is that one group subverts the computing resource of another group. Even if at some point the second group realizes this has happened (potentially difficult when not only their decompilation tools but also their intragroup communications are subject to interception and alteration by the first group) there's no guarantee that they will ever regain control of those resources. The first group may be able to leverage their control enough to severely impact the second group's prospects in every aspect of the struggle.

More generally, when groups of people are engaged in a power struggle, there's no guarantee that things will return to the status quo ante.


That seems reasonable. I can think of many human/organizational/rule reasons that would force the breakup of a monolith representing many separate-but-related concerns.

Or coordinate them so that doesn't happen?

This “big” problem is solved by avoiding unnecessary global consensus, when agent-to-agent consensus is all that is necessary.

See the Holo / Holochain projects.


Such an org will have horrible politics, as all those autonomous people fight with each other over who will call the shots and who will control non-existent resources.

There is handling uncertainty, and there is organizing a workplace to maximize uncertainty. You described the latter.


Yes, I've heard of that, but there is also the effect of thousands of arbitrations happening at once. https://www.google.com/search?q=multiple+arbitration+attack&...

Mirroring doesn't deal with this, because there's always a single target whose elimination disrupts the whole system: some time will pass before another person puts a target on their back, and if there are several successors there's the problem of working out which one is legitimate.

Having both sides agree to that and not cheat would be an unprecedented amount of coordination; you could probably write a sociology paper just from this observation :).
