What I hear in your posts is an assertion that "technology developers proactively deciding to make some kind of specific privacy/security compromise that they think is the right one" is undesirable.
My point is that "technology developers proactively deciding to make some kind of specific privacy/security compromise that they think is the right one" is unavoidable, since any attempt to "avoid" that choice by leaving options open simply means proactively making a different compromise, one under which societies that want a different tradeoff (e.g. that the technology must ensure those options are impossible) don't get the tradeoff they want. Opening one door closes others; in this space one does not simply "add options" without implicitly trading off others.
IMHO the larger thread here is a technical discussion about how we should design payment systems, of which privacy preservation is one factor that's not axiomatic. Your presumption that these systems must be privacy-preserving, and as powerful on that front as possible, looks exactly like a technology developer proactively deciding to make some kind of specific privacy/security compromise that they think is the right one. And not really a compromise, but a point at one extreme of that tradeoff scale, which means that societies who want effective anti-fraud measures, anti-tax-evasion measures, and the ability to recover funds through legal means without the cooperation of the wallet-holder don't get them. IMHO it's quite clear that societies generally do want a tradeoff like that, as evidenced by all the laws they have chosen to pass. Societies in general are not willing to trade off these factors to gain privacy preservation, and privacy-preserving payment systems get made only because developers proactively decide to make specific tradeoffs that they personally think are the right ones.
Leaving aside differences in values and the "ought" part, I also fundamentally disagree with the factual points in your first paragraph. I disagree that it's vastly harder to turn a security-preserving system into a privacy-preserving one than to turn a privacy-preserving system into a security-preserving one. IMHO many decades and far, far more effort have gone into building security-preserving systems than privacy-preserving ones, and we still haven't succeeded at the former, as evidenced by all the loopholes in electronic transfers and cash controls that still leave enormous room for money laundering. Cash is probably the counterexample: a technology that's somewhat "powerful" w.r.t. privacy preservation and has proven very, very hard to "selectively weaken" effectively.
I'd say that "selectively weakening" a powerful privacy-preserving system is definitely not easier; I'd say it's pretty much impossible. If a society chooses the ability to track specific transactions (e.g. money laundering or drug trades), then a selective weakening does not achieve that goal, and even a 99% weakening does not achieve it. If multiple channels are available, achieving it requires that all channels be privacy-breakable, since otherwise the malicious transactions would all get funneled through the privacy-preserving channel. Society can (and should!) apply privacy-breaking selectively, but if the technology can't ensure that these specific transactions are privacy-breakable despite some fraudster trying to prevent that, then society doesn't get the tradeoff it wants. That goal requires closing 100% of the loopholes, which IMHO is harder than ensuring that some transactions are privacy-preserving; after all, you can have privacy preservation even if many transactions are non-private, since with multiple channels available you can choose to use only the privacy-preserving ones.
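To make the funneling point concrete, here's a toy model (entirely my own sketch; the channel setup and the `detection_rate` helper are hypothetical, not anyone's actual proposal): a rational illicit actor always picks the least traceable channel available, so overall traceability collapses to that of the single weakest channel, no matter how many channels are fully traceable.

```python
# Toy model: why "selective weakening" fails while one private channel remains.
# A rational illicit actor always uses the least traceable channel; a naive one
# picks channels uniformly at random. All numbers here are illustrative.

def detection_rate(channel_traceability, actor_is_rational=True):
    """Fraction of illicit transactions that can be traced.

    channel_traceability: per-channel probability (0..1) that a transaction
    on that channel can be traced.
    """
    if actor_is_rational:
        # The actor funnels everything through the weakest channel.
        return min(channel_traceability)
    # Naive actor: spreads transactions evenly across all channels.
    return sum(channel_traceability) / len(channel_traceability)

# 99 of 100 channels are fully traceable, one is fully private.
channels = [1.0] * 99 + [0.0]

print(detection_rate(channels))                           # rational actor: 0.0
print(detection_rate(channels, actor_is_rational=False))  # naive actor: 0.99
```

Weakening 99% of channels still yields a 0% trace rate against a rational adversary, which is the asymmetry the paragraph above describes.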
The second disagreement is with the statement "Using this approach means that society gets the largest range of technology to choose from." My point is that societies may reasonably want choices which require that other choices be impossible; providing extra options is not necessarily a net benefit, as the mere availability of option A can deny the society other choices. For example, if a society decides that a certain class of payments should be taboo (for whatever reason), then providing the extra option of an uncensorable channel does not mean a larger variety of choices. It means denying the society the one choice it wanted (to ensure that taboo payments don't get made) just to provide another choice that it did not want as much.
Perhaps that's the proper framing of this dichotomy? Maximum-choice-of-policy-outcomes is incompatible with maximum-choice-of-technology-options; there's a tradeoff.
Rather than writing thousands of words about what society should do, my point is much simpler. To give an analogy: a private payment technology can be a powerful system like a general purpose web browser, or it can be like a more hobbled system that can only browse pre-configured web pages and image formats chosen by the developers. Both options are viable, however: the problem comes up when your use case (or your country’s use case) calls for a “browser” with a very specific set of restricted functionality that is greater than the functionality offered by the hobbled system that’s available.
In this case you have a choice. You can ban all browsers entirely. You can use the hobbled browser that doesn’t meet your needs because that’s what’s available. You can try to add new functionality to the hobbled browser, which can be a hard path to follow. Or you can hope that instead of a hobbled browser, there is a more powerful full-featured browser that you can strip down to have the specific set of features you want. In the best case this is easy: just a matter of changing a configuration file. In the worst case maybe you have to snip away some code. To continue this silly analogy: it’s vastly easier to remove PDF support from Chromium than to write a brand new PDF renderer into software that chose not to have one.
Obviously I’m going to have to ask you to take my word that in this case a powerful privacy-preserving payment system stands in for a full-featured browser, and it’s relatively easier to “strip (privacy) features away” from a strong privacy system than to make a weak system more private. I’ve spent a good chunk of my life thinking about this exact problem, so I feel confident making this case.
The remaining point you raise is essentially the following: the mere existence of privacy-preserving payment systems deprives societies of choice. This holds in the same way that the existence of, say, Chromium or Firefox makes it impossible for some country to plausibly mandate a browser that can only visit selected web sites or use file formats chosen by the government.
The best thing I can say about this argument is: tough luck. Better (centralized) privacy-preserving payment technologies exist. You can’t make them go away, any more than you can hope that Chromium or Firefox will stop existing. If the success of your preferred system depends on the non-existence of other technology, and that technology is already out there, then you need a better plan.