
I would argue that cryptography wouldn't violate that aspect. The intent isn't to violate human rights or civil liberties, even though it can certainly be used that way. So too can a text editor or a printer driver or scheduling software. I think you're being a bit broad.

To me, it reads fairly clearly as a stand against code whose specific intent is unethical, not its potential and unrelated uses.




Yeah, I get the intent argument, but I don't think it has a good separation of concerns, for lack of a better term. It's written in such a way as to state that if I know the software will be used for bad purposes, then I should not write it. And as you stated, even a text editor could be interpreted that way.

I could even turn this around in a different way: let's assume I decide that I want to help catch bad people. I do this by writing some software that helps deanonymize connections. This can be used to help stop DDoS attacks, track down sex traffickers, etc. So its intent is good, but it will also be used by Iran, Syria, Russia, China, the USA, et al. to track down dissidents.

I believe this software would be unethical by these standards because of its potential misuse/abuse, even though it was written with good intent! So is it unethical or not?

The reason I dislike that rule is that it places responsibility for the use/abuse on the developer, when it's really the operator of the software who is at fault. Right?


> The reason I dislike that rule is that it places responsibility for the use/abuse on the developer, when it's really the operator of the software who is at fault. Right?

I think that in practice I disagree with this. Certainly final moral responsibility lies with whoever misuses a tool, but on a practical level people should be aware of the primary or predictable results of their work.

If you create a cryptography system and release it publicly, you should be aware that it will be used by people hiding unethical things. If you build a deanonymization tool, you should be aware that it will be used for surveillance by people with ill intent. These are statistical certainties: if your tool is good, it will be used in these ways, and you can't claim surprise when it happens. It's like the stochastic terrorism question, where you can't know what an individual will do but you can easily predict that a system will produce violence somewhere.

None of this means you shouldn't do those things. Inventing TNT wasn't evil just because it's been used for violence, and when it comes to something like cryptography there's a real case for "this will be built eventually, so we have to live with it". With tracking, I sometimes feel the moral question is greyer, especially since the results are system-dependent and not 'inevitable'.

So yes, build these things, and accept that they'll be used for all sorts of purposes. But do consider the risks, and be aware of the degree of harm. Building a tracking system that Iran might use someday is very different from building one for Iran, knowing what will be done with it.

