
As a software engineer, all I have to do is think that my colleagues wrote the autopilot code, and that would keep me from ever turning the feature on.



Yep, the world isn't always black and white. If I were to use Copilot to autocomplete the code I would've written myself (which is VERY often the case), I don't think it'd fall under this policy unless I made it overly obvious as some kind of act of rebellion.

I'd think we'd have already seen this with GitHub Copilot. There was an interview I was part of late last year where the candidate had Copilot turned on during the live coding, and he didn't turn it off even after it was obvious that's what he was using. What surprised me more was that I seemed to think this was a bigger deal than everyone else did. Like, why come up with these elaborate tests when the candidate is just going to use autocomplete the whole time?

Maybe programming in another few years will just be glorified autocomplete and little more.

And perhaps testing people on how to write code was a mistake to begin with. It's one thing to write code, but reading code is another.


I thought it'd just be logistics. This is highly specialized OS code, and I barely trust Copilot to do more than the simplest autocompletes. No way 99.9% of AI-generated code would pass a proper peer review.

I'm surprised that so much of the discussion around Copilot has centered around licensing rather than this.

You're basically asking a robot that stayed up all night reading a billion lines of questionable source code to go on a massive LSD trip and then use the resulting fever dream to fill in your for loops.

Coming from a hardware background, where you often spend 2-8x of your time and money on verification vs. the actual design, it seems obvious to me that Copilot as implemented today will either provide no value (best case), be a net negative (middling case), or be a net negative that you don't realize until you've spent a few years surrounding yourself with a minefield (worst case).

Having an "autocomplete" that can suggest more lines of code isn't better, it's worse. You still have to read the result, figure out what it's doing, and figure out why it will or will not work. Figuring out that it won't work could be relatively straightforward, as it is today with normal "here's a list of methods" autocomplete. Or it could be spectacularly difficult, as it would be when Copilot decides to regurgitate "fast inverse square root" but with different constants. Do you really think you're going to be able to decipher and debug code like that repeatedly when you're tired? When it's a subtly broken block of code rather than a famous example?

That Easter example looks horrific, but I can absolutely see a tired developer saying "fuck it" and committing it at the end of the day, fully intending to check it later, and then either forgetting or hoping that it won't be a problem rather than ruining the next morning by attempting to look at it again.

I can't imagine ever using it, but I worry about new grads and junior developers thinking that they need to use crap like this because some thought leader praises it as the newest best practice. We already have too much modern development methodology bullshit that takes endless effort to stomp out, but this has the potential to be exceptionally disastrous.

I can't help but think that the product itself must be a PSYOP-like attempt to gaslight the entire industry. It seems so obvious to me that people are going to commit more broken code via Copilot than ever before.


Now if only someone could figure out a magic word that would stop Copilot from being trained on my code.

That's why everybody programs in bare Notepad, with no syntax highlighting or autocomplete or live error notifications or linting or any other quality-of-life feature.

What you say here is true. Writing code is not remotely the hardest part of software engineering. But that does not mean that there is zero value in making it easier.

Shifting gears is not remotely the hardest part of safely driving a car. Yet automatic transmissions are a nice feature for tons of people.

I do not understand this dismissal based on the fact that copilot does not completely revolutionize software engineering in a way that no other product has ever come close to doing.


Who cares though? Just use an IDE or Copilot or something.

That's basically where my gut went when I read the headline: the same is true of a junior engineer, or really any engineer who hasn't had to think about it, and we don't promote their code directly to prod either (if we can avoid it).

Copilot shouldn't be able to generate code destined for prod without review any more than any line of code written by a human should.


To me it's an added level of attention required: having to be extra careful (as I'd be when reviewing a junior's code) with any suggestion Copilot makes... even if it's only wrong 10% of the time, I need to be on the lookout for lines that would just cause a bug down the line. Ended up disabling it.

> If you allow students to use copilot, you're handing out certificates or diplomas to people who can't code.

I've given a couple hundred technical interviews over the last 10 years. This has been the case for a long, long time—even from fancy colleges.


Mine has. I work for a big tech company, and the team which approves or denies third-party software denied our request to use Copilot. The given reason was that we don't want it using our code to train its algorithms. I'm not sure how true that is, but it's what we've been told for the moment.

Yeah, I'm even gonna judge anyone who says Copilot does not make them productive. Like, what code could you possibly be writing that Copilot is not autocompleting properly for you? Yeah, if you don't know what to write, then Copilot can't autocomplete it for you.

This is my primary reason, after IP abuse, for disdaining Copilot. We’re still going to need engineers, and I’d rather we make our languages and tooling more expressive than turn into prompt engineers.

My favorite part of Copilot is when it auto-completes a call to a function that does exactly what I need to do, like magic!

Except that function doesn't exist and never did.
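Something like this, to make it concrete (a made-up illustration: str_trim_left() is invented, which is exactly the point; the call reads plausibly, but nothing defines it):

    #include <stdio.h>

    /* The kind of suggestion I mean (hypothetical):
     *
     *     char *clean = str_trim_left(input);   // looks real; doesn't exist
     *
     * It reads like a standard helper, so it's easy to accept, and then the
     * compiler (or a runtime, in a dynamic language) tells you otherwise.
     * What you end up writing yourself instead: */
    static const char *trim_left(const char *s)
    {
        while (*s == ' ' || *s == '\t')
            s++;
        return s;
    }

    int main(void)
    {
        printf("[%s]\n", trim_left("   hello"));
        return 0;
    }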

LLMs don't know what they don't know, so they just make something up because they have to say something. The danger is that most people don't understand that's how they work and don't know when to call BS.

This is where I think companies have a responsibility: to ensure that _every_ response has a disclaimer that the answer from their AI could be right or completely wrong, and that it's up to the user to figure out which, because the AI can't at the moment.


I don't use Copilot and know a handful of Fortune 50 companies that won't use it either. Developers do silly things like hard coding credentials when they're testing, as a very basic example, and it's hard to imagine how this is safe.

I think the point is that if you cannot or do not know how to code, you cannot confirm what co-pilot is doing, especially when it comes to complex topics like drawing context from natural language online using machine learning.

100% agree. I have a coding job, and although co-pilot comes in handy for autocompleting function calls and generating code that would be an obvious progression of what needs to be written, I would never let it generate swaths of code based on some specification, or even let it implement a moderately complex method or function, because, as I have experienced, what it spits out is absolute garbage.

Copilot is just a prototype. Imagine in 10-20 years: software engineers as we know them will be obsolete.

And if you had been given an IDE with Copilot from a young age, you also wouldn't be told you needed to try it...