What are you talking about? You need to cite specific individuals. I'm one of the people who is skeptical of the ethics of training a huge LLM on code without the authors' permission, but I also think this is an appropriate move by Microsoft. It aligns the incentives appropriately.
But for folks that are negative on both counts, maybe they've just learned their lesson from decades of watching Microsoft take the low road over and over again.
On: Lerna adds clause to MIT license blocking certain ...
I respect their decision and rights, but I don't really understand this move. I also believe it sets a potentially dangerous precedent within the developer community.
I personally don't agree with certain statements, immigration policy, ICE, etc., but I'm more taken aback by this.
Coding is becoming easier and will increasingly include more of the general population (which is a good thing). This means it's about to become much more diverse with regard to religion, political beliefs, personal morals, citizenship, etc.
I don't mean this in the political/philosophical sense. I mean soon people will start showing up in Github/twitter comments, contributing pull requests, with a genuine interest in coding, who look like people you personally dislike. Maybe they are wearing a Trump t-shirt in their profile photo, but their code is great. Are you going to reject their pull request or ignore their comments?
Governments & company policies change frequently. There's also an unlimited combination of potential beliefs, moral stances, crimes by an unlimited number of people and companies. At what point would you decide to add or remove amendments to your license?
I also feel it's hypocritical to use a product owned by Microsoft (github), while calling them out in your license by name. I mean, are you protesting Microsoft or aren't you?
How do you know that an upstream dependency you are benefitting from wasn't created by one of these companies?
To highlight the humor of this line of thinking, why not block oppressive regimes, serial killers (>= 6 people, <6 are ok), certain religious groups with worse principles than ICE?
I just don't think this makes its case well, though there is a case to be made.
I have worked at places where one person should never have the ability to push unreviewed code into master, because the company was huge and already successful and it would have been a security risk.
I have also worked with coworkers who fetishized the process part of stuff and wanted to create all sorts of rules with high overhead to solve entirely hypothetical problems from their imagination.
Ah. This is that thing called "tone-policing." I frankly have no idea if you believe the same fundamentals that I do; they generally fall under the ideas of "free software." Those ideas are frequently abused by large companies, and I feel it is important to call such bad behavior out for what it is, especially when professionals may have been lulled into going along with it.
I'm fairly certain the "politeness" or whatever it is you're calling for works against my goals here. I could be wrong about that, but either way I don't think what you're railing against is nearly as important as the bigger idea I presented initially.
Someone committing poor quality LLM-generated code and deeming it appropriate for review could create equally bad, if not worse, handwritten code. By extension, anyone who merges poor quality LLM code could merge equally poor handwritten code. So ultimately it comes down to their judgement and to trust in the contribution process. If poor quality code ended up in the product, then it's the process that failed. Just because someone can hit you with a stick doesn't mean we should cut down the trees; we should educate people to stop hitting others with sticks instead.
"Banning LLM content" is in my opinion an effort spent on the wrong thing. If you want to ensure the quality of the code, you should focus on ensuring the code review and merge process is more thorough in filtering out subpar contributions effectively, instead of wasting time on trying to enforce unenforceable policies. They only give a false sense of trust and security. Would "[x] I solemnly swear I didn't use AI" checkbox give anything more than a false sense of security? Cheaters gonna cheat, and trusting them would be naive, politely said...
Spam... yeah, that is a valid concern, but it's also something that should be solved at the organizational level.
First off, I'm not policing anything; how would I even have the authority to do that?
Secondly, you object to my presuming I know what his responsibilities are. I don't claim to know what he codes. However, I'm able to make a pretty good guess about the scope of a MSFT senior developer's job, and this kind of thing is not even close, unless it's a special case worked out in advance with others. It wasn't worked out in advance in this case, because he says that he made the comments before getting feedback from more senior people on his team.
Finally, I assume some of the downvotes have mistakenly conflated my comments with trying to hide information behind corporate walls.
I advocate hiding nothing - and fully support the new generation of companies who believe in transparency and ethics.
Being transparent and ethical has nothing whatsoever to do with letting developers try to make legal decisions when it's not their area of expertise. It also doesn't mean that a company shouldn't work together across departments to try to decide the right thing to do.
Good management and coordination between roles don't preclude transparency or doing the right thing in any way, shape, or form.
> it can prevent continued poor behavior which impacts others
I highly doubt that it would do anything to prevent that continued behavior. I cannot imagine a situation in which there is not a better and more appropriate solution for mitigating poor behavior than telling people "don't participate in this field". At best you will alienate them in such a way that alternative approaches become less feasible, and at worst you convert bad behavior from passively to actively malicious. I'd be curious if you have a concrete example of when you think this approach has been beneficial in any context.
Note, I consider revoking credentials (e.g. disbarring an attorney or revoking a medical/engineering license) to be something entirely different, as there is an active endorsement that needs to be terminated. The analogue in this case would be removing a package from a package/dependency listing service or manager.
Taken away by whom and on what basis? I would dispute that you need the ability to take away other individuals' ability to lawfully write software. That ability is bound to be abused for political reasons (which is also what it looks like when people have reasonably different ethical systems and one imposes his by force).
Anyway, the issue at hand is bad corporate behavior, not bad programmers. I don't see why we need to start licking our chops about the prospect of forming a blacklist against individual programmers.
Developers shouldn't have this kind of control unless your product is for developers. I say this as a developer.
The article mentions it only works if you hire insanely smart people, but that's also not a sufficient condition. You can have really amazing programmers, but if they don't have a strong understanding of your demographic and your goals, they're not "smart" in the way that's most relevant to this particular sort of decision.
I also dislike the phrase to begin with. Asking forgiveness rather than permission is often more effective, but that doesn't mean it isn't also irresponsible and a breach of trust. In an environment that doesn't claim this sort of non-regulation (i.e. most of them), if your integrity sells for the value of one commit and the risk of breaking or misdirecting your employer's project, you have more fundamental problems than potential bugs in production.
Saying they are not enforceable and saying that they are completely risk-free for a potential hirer who wants to offer a nominal payment for coding exercises are two different things, especially considering that conflict-of-interest laws are enforceable.
As for the ethics of it, I find it odd that you are fine with working for someone who uses something you are vehemently against, but hey, to each their own.
So it looks like Glyph Lefkowitz's "extremist" opinion on software ethics http://glyf.livejournal.com/46589.html was completely right. When a program does something the user doesn't want, the programmer is in the wrong. Programmer is to user as lawyer is to client. We need a recognized and binding way for programmers to submit to this code of ethics.
Those same quibbles applied to the policy before the addition of the LLM section: how does the NetBSD project detect if I copy & paste a bunch of code from my day job into a patch submission (and then lie about it)? Obviously, they can't. I, personally, don't feel like it's a failure of the policy if it relies on your contributors acting in good faith, because:
a) many people are acting in good faith, and their behavior will change as a result of this policy;
b) if someone wants to be a jerk and use an LLM after they were told not to, and is at some later time found out, it makes it easier for the org to act quickly and in a fair and consistent manner;
c) [more speculative as to the motives of the NetBSD project] normative statements by well-regarded institutions are useful in setting an example for other organizations to follow, so there is some political utility regardless of the practical efficacy of these rules.
I'm a little disappointed to see the original PR locked due to lphilips54's inconsistent statements.
While I think the OSS community should be polite and inclusive, I also think that we are all poorer if we ignore contributions due to author behavior. I'm confident that many authors have abhorrent political views and actions. While we should not elevate them as role models, there are times it's reasonable to just use the code.
I agree with a lot of this. Without some kind of official sanctioning, programmers lack a mandate to push back on management, which makes it much harder. On the other hand, programming is such a diverse field that any kind of sanctioning is bound to be inappropriate for the vast majority of programmers. Kind of like making an electrical engineer know about civil engineering to get sanctioned as an "engineer" (I know this happens, but that doesn't make it very right; there are even many different kinds of EE and so on...). Heck, when I was at Microsoft, we were subjected to yearly generic ethics training that was very difficult for us researchers to relate to.
I would prefer some kind of universal professional ethics and responsibilities that wouldn't be related to your specific profession. Heck, being ethical applies to mathematicians, economists, journalists, musicians, programmers, engineers, and so on.
This policy isn't meant to be enforceable; it's to signal to contributors (who may not have even considered the legal implications of using LLM codegen tools) that LLM codegen is not allowed because of the legal risk. It's an addendum to the existing guideline "Do not commit tainted code to the repository", which was already practically unenforceable, and it's intended to address a real and new phenomenon.
People not associated with the leadership of the project shouldn't be making decisions about what licensing models are acceptable or not in the first place.
Yeah, I like it when people I work with, directly or indirectly, play fair. When they don't, I feel I have every right to criticize them for it, which is what I've been doing.
I believe Microsoft is doing the web (and me, as a developer for the web) a disservice, and I'd like them to stop damaging the web (and me).
It's your prerogative. Nobody said anything about "exfiltrating." Showing someone X doesn't mean that X isn't owned by its owner anymore. These aren't semantic arguments.
You're going to have a hard time expressing what exactly the violation is when everyone has signed the same NDAs and CIIAs! Like, if you work for Google and I work for Google, you can fathom that I can look at code you've written for Google.
Okay, now if I only "work" for Google, which like 200,000 people do, and they've signed the same exact agreements as you... It's your prerogative!
> criminally or civilly
It's your prerogative! You can leetcode instead!
I am not saying this is you. But there are many developers, arguably the majority, who complain tirelessly about the status quo of interviewing. And given one opportunity after another to change that status quo - doing the stuff Simon says, doing the stuff I am saying - they make no effort! They vamp about how it's impossible.
They think their social local mobile trip planning startup website code is sensitive. They think their 15 layers of Dagger dependency injection is sensitive. It's not. If you want to change the status quo from leetcoding, you're going to have to screen share a "diff" here or there, and show people concretely what the hell you've been doing for a year at BigCo or UnicornCo. I cannot predict the future and I cannot generalize, but in my experience, the likelihood of criminal, civil, or even the far more realistic reputational and cultural repercussions is extremely small.
All I am saying is, the reason this hasn't happened yet is that most people spend a year at BigCo and UnicornCo doing nothing of any substance. I mean, maybe that's going to change. But it's really tough; nobody is being honest about how truly, tremendously mismanaged and ZIRP-fueled large employers are. It hasn't been this way forever, but it has been for at least 10 years, and it means some crazy things have happened in the job market that made no sense.