Hacker News

Someone committing poor-quality LLM-generated code and deeming it appropriate for review could create equally bad, if not worse, handwritten code. By extension, anyone who merges poor-quality LLM code could merge equally poor handwritten code. So ultimately it comes down to the merger's judgement and to trust in the contribution process. If poor-quality code ended up in the product, then it's the process that failed. Just because someone can hit you with a stick doesn't mean we should cut down the trees — we should educate people to stop hitting others with sticks instead.

"Banning LLM content" is, in my opinion, effort spent on the wrong thing. If you want to ensure the quality of the code, focus on making the code review and merge process more effective at filtering out subpar contributions, instead of wasting time trying to enforce unenforceable policies. Such policies only give a false sense of trust and security. Would an "[x] I solemnly swear I didn't use AI" checkbox provide anything more than a false sense of security? Cheaters gonna cheat, and trusting them would be naive, to put it politely...

Spam... yeah, that is a valid concern, but it's also something that should be solved at the organizational level.




Well, Torvalds says in the interview, ‘we already have tools such as linters and compilers which speed up the work we do as part of software development’.

I get the impression he agrees this road to LLM content is inevitable, but he also emphasises the role of the reviewer, who makes the final decision.


Cheaters are gonna cheat, but filtering out the honest/shameless LLM fans is still an improvement. And once you do find out that the cheaters lied, you have a good reason to ban them. Win/win.
