
> Are you saying that you want to pay to be provided with harmful text

The existence of “harmful text” as a concept is a bit silly, but let's not dwell on it.

The answer to your question is that I want to be able to generate whatever the technology is capable of. Imagine if Microsoft Word threw an error whenever you tried to write something against modern dogmas.

If you wish to avoid seeing harmful text, I think that market is well served today. I can't imagine there not being, at the very least, a checkbox to enable output filtering for whatever ideas you think are harmful.
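Such an opt-in filter is trivial to sketch client-side. Everything below (the function name, the blocklist, the `enabled` flag standing in for the checkbox) is purely illustrative, not any vendor's API:

```python
def filter_output(text: str, blocklist: set[str], enabled: bool = False) -> str:
    """Redact lines containing blocked terms, but only when the user
    has opted in (the hypothetical checkbox). Matching is a naive
    case-insensitive substring check, purely for illustration."""
    if not enabled:
        return text
    kept = []
    for line in text.splitlines():
        lowered = line.lower()
        if any(term in lowered for term in blocklist):
            kept.append("[filtered]")
        else:
            kept.append(line)
    return "\n".join(kept)
```

The point being that the filtering side of the market is the easy side; it's the unfiltered generation that people are arguing over.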




> The scariest to me is on professional standards and ethics

OK. What standards and ethics would you recommend we adhere to?

First, do no harm? One, define harm. If I'm depriving a corporation of money by automating something for consumers, is that harm? And if I write a clone of netcat, am I liable for its use in nefarious port scanning?
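For what it's worth, the core of a netcat-style port check fits in a few lines of standard-library code; this is a generic sketch, not a reconstruction of netcat itself, which is exactly why the liability question is so slippery:

```python
import socket

def port_open(host: str, port: int, timeout: float = 1.0) -> bool:
    """Return True if a TCP connection to (host, port) succeeds
    within the timeout. Usable for a legitimate health check or
    for nefarious scanning; the code itself can't tell which."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False
```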

Thou shalt not write bugs? That would be awesome, were it possible. I can't think of many developers who wouldn't want to get to this point, assuming of course it didn't result in a 1 month project plan to write a sorting algorithm.

http://googleresearch.blogspot.com/2006/06/extra-extra-read-...
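Assuming that link is the well-known "Extra, Extra" post about nearly all binary searches being broken, the bug is the midpoint computation overflowing in fixed-width-integer languages like Java and C. A sketch of the safe form (Python's arbitrary-precision ints can't actually overflow, so this is for illustration only):

```python
def binary_search(a: list, target) -> int:
    """Return the index of target in sorted list a, or -1 if absent."""
    low, high = 0, len(a) - 1
    while low <= high:
        # The naive mid = (low + high) // 2 can overflow in languages
        # with fixed-width integers once low + high exceeds the maximum;
        # low + (high - low) // 2 computes the same midpoint safely.
        mid = low + (high - low) // 2
        if a[mid] < target:
            low = mid + 1
        elif a[mid] > target:
            high = mid - 1
        else:
            return mid
    return -1
```

A bug that sat in "proven correct" textbook code for decades makes the point nicely: "thou shalt not write bugs" is not an achievable standard.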


>So it seems legit to me to ask whether those people questioned the technology they were developing, within their moral framework

I mean, it's a useless question of pedantry. It rarely changes the outcome.

I cannot say that I am a free speech absolutist; many people who write software are, and in addition believe that software is speech. Which particular rules of this framework do you want to apply? I think greedy people who value money above all else are terrible, but they exist and create software regardless of your abhorrence of it.


> Could you point me to a quote taken in context that does that? I just reread the document and fail to see where the manifesto questions an individual's capacity to do a tech job. I'm honestly trying but fail to see the offense.

I'd second this. I've read it a few times and can't see the problem. To be explicit, I'm not saying there is no problem, I'm asking for help to see it.


> Ultimately it's not your content it's the publisher's content.

This is dangerous reasoning, which leads to sad things like TPM.

> When this content is altered without your knowledge who will you get upset at? Because it's without your knowledge and so the only assumption would be it must be the content publisher who did this.

The phone supports two SIM cards plus Wi-Fi, so if somebody were to replace all pro-Lisp articles with pro-Java ones, I would just switch connections.


> Do you think that's an appropriate response?

No, because it is not going to make much difference.

> Sorry, but I still see this as kneejerk reactions to spectacularly unlikely scenarios of "bad things happening" being proposed and regulated by people who don't care about reducing other people's freedom because it won't affect them personally.

Fully agreed on that one.

> I'm still unsure what you're suggesting "shouldn't be allowed" here?

This software has a ton of possibilities for bad use; I just threw out the first one I could think of, and there is a whole raft of others.

> Open sourcing computer vision projects? Publishing on github?

No, it's inevitable. But there is currently no framework for how to deal with these things. Just because you can doesn't always mean you should. There are plenty of things I could do that are legal but don't have a net-positive effect on the society we live in, and I think the ability to build these systems comes with some responsibility.

> Me? I'm 100% for publishing this (and similar) projects - because the tech is already out there and being used. Pretty much every tow truck and repo man has had this tech running for 5+ years, and almost nobody knows.

Yes, but they are limited in quantity and enough of a quantitative change is a qualitative change.

> (I know, let's ban _ideas!_... (Sorry, that's way snarkier than intended...))

I think I beat you to that:

https://twitter.com/jmattheij/status/670367390828535808


> Then a demand for membership payment

You make it sound like machine-based text truncation is unacceptable.

But that's software developers' job, dude... Should we not do machine-based stuff at all because from time to time a harmless awkward situation pops up?


> where do morals come into play here?

I want to be able to check whether the software does anything harmful for me, and to be able to fix it and adapt it to my needs. Proprietary software effectively curtails the possibility of doing that.

Say I am allergic to nuts. Fortunately, when I buy some processed food I can check easily whether it has nuts or not. Now imagine living in a world where food makers sneakily put nuts in their products, in order to "enhance the user experience". And not only that, but they took great efforts to hide this information from the consumers. After all, most people are not allergic, so no big deal here. I would say that this is immoral.


> I just do not agree with the idea that if all are doing it, there must be some deep truth beneath.

Hypothetically, let's say you design a system where, for every 1000 legitimate users accessing it, 999 fail to understand it well enough to gain their legitimate access.

Would you still be comfortable telling the people that paid for the system that it's not the system that's wrong, but the users?


> Each technologist has a choice whether or not to work on these systems. The answer should always be NO.

Thank you for putting it this way. People come up with all kinds of rationalizations to work on such things in exchange for a little more money.

They need to be shamed.


>How about we stop pushing these alternatives that ensure creators never get paid?

My client, my computers, my rules.


> My problem is I don’t like or agree with the people picking the targets, but that’s a whole other argument.

People pretend they can't possibly know how things will get used, but this has never been true, and never been less true than today.

http://tech.mit.edu/V105/N16/weisen.16n.html


> Anything that nags you about the software you used last week?

Last week I had to log into some systems using the software from companies convicted of criminal offences, which include monopolism, bribery, fraud, harassment, racism, sexism, digital trespass/computer misuse, amongst other things.

It nags my conscience a little, because my use of these systems effectively lends my support to criminals, which I do not want to do.

Unfortunately my choices are curtailed because these criminal organisations have insinuated themselves into essential private and government services, and their actions as monopolists have killed alternatives.

This seems ethically wrong. I believe it's my right not to be forced to support people who in the eyes of society have been deemed to inflict objective and real harm upon others. So, I look forward to laws that uphold that right to choose ethically.

I wonder if anyone else has ethical issues with software they need to use?


>I think a similar system would be a reasonable compromise here as well.

How would something like you're suggesting work?

Under what circumstances would it be appropriate to moderate and under what other circumstances would it be inappropriate? Just as importantly, who gets to decide what's appropriate/inappropriate?

And to whom would this apply? Section 230 applies to everyone, including you, me, HackerNews and your great aunt Sally.

If we limit section 230's applicability, who decides where it should apply and where it shouldn't?

I'm not objecting to your suggestion, I'd just like to understand how you see such a 'compromise' being implemented.

I'd welcome your thoughts!


> I just don't want to have to worry about what will become this politically incorrect in the future when I'm writing code and documentation.

Then don't say/write bigoted things. Honestly, it's like asking people not to make spelling or grammar mistakes in documentation. It's not that hard.


> There is no really opt out if everybody implements techno-feudalistic software patterns.

I have bad news for you: https://www.gnu.org/philosophy/right-to-read.en.html


> Does anyone have any argument for why this right would be a bad thing?

For one thing, that is how you get viruses. This is amusingly similar to the mask refusal arguments.

> People would get bad software on their phones, but last I checked, this is happening already

"My proposal would lead to bad things, but there already are bad things" is a terrible argument.


> The problem is only education.

I don't want to be educated to use something, and I've been a developer for 20+ years.

I wouldn't consider using any service that the people meant to use it (as in company staff) can't be bothered to use.

I can't really recommend Bitwarden Send when pasting files in a self hosted Mattermost chat room would suffice for internal use.


> I actually think it's a great idea for any developer.

> With that said, it's obviously not in the end user's best interest

I think you mean, "a great idea for developers who don't mind abusing their users"


> The internet has its own morality that is somewhat untethered to legality: use what you can get, pay for what you feel you should.

I see you chose more of a passive tone here, as though you're just a cog in the machine, totally helpless and unable to take action on your own.

The Internet is what people make of it, in part based on the sum of individual choices.

> And you can scream

Not screaming.

> it just ads to the Streisand effect

Possibly. Maybe it alerts GitHub devs to the problem and they try to come up with a solution. If font creators read it, they can work with repo owners to get things in order. For my part, it pointed out a problem that I can take steps to mitigate in the future with my own repos.

Anyways, obviously I can't stop you or anyone else from taking digital goods at will. But in part, I wish that people would be more transparent about their motives instead of talking themselves into thinking that they're not actually doing something wrong.

Just say "I take things that don't belong to me because I deserve to have them, it's easy to do so, and I won't get in trouble". At least it'd be an honest assessment.

