Hacker News

OK, I tried using both to create snippets of code for some obscure cases. ChatGPT passed with flying colors. Gemini kept referring me to the documentation and telling me I should consult it myself. It plainly sucks.



It’s yet another needless and error-prone abstraction that only works if it’s able to read human-written code in the first place. I’d say it’s a pretty uninteresting, possibly dangerous, and ultimately distracting abstraction. My experience with ChatGPT was that it could never tell me something that was actually correct, and I was asking it straightforward questions.

I've been using ChatGPT for very basic stuff, and it's 50/50 whether it works even there. It fails 100% of the time on complex stuff.

It's super useful for finding out how things work, though. Asking it how to use an SDK to do something works really well; it even surfaces things I'd searched the docs for repeatedly.

But for complex logic that I'm too lazy to write myself, it literally just wastes my time. I spent an hour trying to get it to produce something workable, modifying its code to make it run, and ended up just writing my own version from scratch.


Every time I've tried chatgpt I've been shocked at the mistakes. It isn't a good tool to use if you care about correctness, and I care about correctness.

It may be able to regurgitate code for simple tasks, but that's all I've seen it get right.


I've tried a couple of times to use ChatGPT on a coding assignment (because... if I can avoid doing it myself, all the better, right?) and both times I got garbage and ended up doing the assignment myself.

I used it and then immediately stopped using it. The experience is even worse than just copy-pasting code into ChatGPT on your own.

Yep. ChatGPT is like having a junior engineer confidently asking to merge broken garbage into your codebase all the time. It adds negative value for anyone who knows what they’re doing.

Can you expand on your statement that ChatGPT cannot write code? That doesn't match my experience with it.

ChatGPT can pretty easily generate documentation from code though :p

As someone who tried to feed a 50-line codebase into ChatGPT only to watch it fail with very hard errors (no, I don't mean it answered incorrectly — it crashed), I am not very impressed.

ChatGPT is an amazing tool. It has really increased the speed of my coding. Why do you hate it?

I usually code in more esoteric bits of tech and problem domains, which I didn't expect ChatGPT to do well in, but I tried it at work on a standard backend stack (Java, Ivy, Ant) and it was absolutely _terrible_. It kept making stuff up and I kept correcting it. I cannot understand how people are using it for work?!

I've had very similar experience. After seeing all the hype of people claiming they have written all kinds of interesting things in code with ChatGPT I gave it a try.

I was never able to get it to do what I wanted. It often made calls to nonexistent browser functions. I would tell it the function didn't exist; it would then rewrite the code, but it still wouldn't be exactly correct.

Sometimes it was useful for doing something I've never explored, as I could get hints for how I might do it, but the accuracy was terrible.
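That hallucination pattern can be sketched in Python (the broken name below is a hypothetical stand-in, not a real API): the generated call looks plausible, but the attribute simply doesn't exist, and a cheap `hasattr` check catches it before you waste time debugging.

```python
import os.path

# A plausible-looking name a model might invent (hypothetical illustration):
model_suggested = "exist"
# The function that actually exists in the stdlib:
real_name = "exists"

# Cheap sanity check before trusting generated code:
print(hasattr(os.path, model_suggested))  # False - the name is made up
print(hasattr(os.path, real_name))        # True  - this one is real
```

The same check works for any module a model cites, which makes it a quick first filter on generated snippets.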


And herein lies the issue with ChatGPT: it can generate functioning code, but it can also lie through its nonexistent teeth about it. Using ChatGPT (or Copilot) can feel like pair-programming with a very talented developer who loves to bullshit.

Had an interview with a company the other day, and I stated that I loved using ChatGPT to view problems from another perspective and to quickly write boilerplate code. They ended up hiring me, so I guess they didn't mind? At the end of the day it's just another tool, exactly like your IDE's autocompletion.

ChatGPT gives me garbage code unless I ask it politely not to. No joke. Usually the first attempt is pure garbage and I have to call it out as such and then it’s like, “you’re right! Here’s the updated code”. No idea why it can basically never get it right the first time. I also find that it can be quite redundant and offer two distinct solutions morphed into one mutant answer which will turn the undiscerning 1x developer into a -10x developer. But hey, it still saves me time. Sometimes..

I’ve been using ChatGPT to do my job for a couple of months now. You have to be very specific with requirements you give it, and you do have to test the code of course, but at worst it gets me 90% of the way there in seconds instead of the hours or days it would take me to write that code by hand.

Eventually, everyone is going to know how effective it is at writing code, and I think companies will expect ~10x the output from developers for the same salary and same 40-hour workweek. It’s disheartening.


They gave us the go-ahead at work last week so I tried to use ChatGPT today on a task. It failed, and failed, and failed, over and over again. I kept prompting it, telling it where it was wrong, and why it was wrong, and then it would say something trite like, "Got it! Let's update the logic to reflect $THING_I_SAID_TO_IT..." and then it would just regurgitate the exact same broken code over and over again.

It was an absolute waste of my time. I quite like it when I need to get some piece of weird information out of it about the Python standard library or whatever but whenever I try to get it to do anything useful with code it fumbles the bag so hard.
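The "weird information about the Python standard library" use case is real. A generic illustration of the kind of gotcha it tends to explain well (not tied to the commenter's actual question): `itertools.groupby` only groups *consecutive* equal keys, so unsorted input silently fragments the groups.

```python
from itertools import groupby

data = ["apple", "avocado", "banana", "apricot"]
first_letter = lambda w: w[0]

# Without sorting, the trailing "apricot" lands in a second 'a' group:
unsorted_groups = [(k, list(g)) for k, g in groupby(data, key=first_letter)]
# -> [('a', ['apple', 'avocado']), ('b', ['banana']), ('a', ['apricot'])]

# Sorting by the same key first gives the grouping you probably wanted:
sorted_groups = [(k, list(g)) for k, g in groupby(sorted(data), key=first_letter)]
# -> [('a', ['apple', 'apricot', 'avocado']), ('b', ['banana'])]
```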


My experience is that ChatGPT consistently showed me API functions that don’t exist. But I tend to do very obscure things in embedded, things that can be tricky or impossible to google for. At first I had high hopes, because the shape of the code is good and the logic looks good, but when I go to look up the details of the parts I actually need help with, they’re just made-up functions — like placeholders I might write until I can go back and implement them.

Where it’s shone for me is writing code where I couldn’t be bothered to piece together ideas from several distinct tutorials. It’s fantastic for writing code that you can basically google for, even if you’d otherwise have to read several disparate pages to get a clear picture of what to do.


I have tried to get useful code out of ChatGPT with GPT-4 and so far have been unable to find a case where it was easier to verify that ChatGPT was right than to just do it myself. This was for toy examples. For more complicated things it's downright useless.

Everyone seems to be raving about its ability to write code. What am I doing wrong?

