rastilin,

What's with the massive outflow of scaremongering AI articles now? This is a huge reach, like, even for an AI scare piece.

I tried their exact input, and it works fine in ChatGPT: it recommends a package called "arangojs", which seems to be the correct package and has been around for 1841 commits. That's been the pattern for literally every article explaining how scary ChatGPT is because of "X": I try "X" myself, and it works perfectly fine with no issues that I've seen.

Waltekin,

What so many people don't understand: LLMs like ChatGPT are nothing but statistical engines. They break their incoming text into tokens and see which tokens usually follow which others. When they generate output, they just roll the dice: after tokens A, B, and C, a D usually comes next.

The point is: they have no understanding. If their training data included a good code example, they might regurgitate it. If their training data included broken code, they may regurgitate that. Or they could mix it all together and produce something weird. It's a lottery, based on what they sucked out of StackOverflow and other places.
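
To make that concrete, here's the dice-rolling idea as a toy sketch in Python (the context and counts are invented for illustration; real models learn probabilities over subword tokens instead of using a lookup table):

    import random

    # Toy next-token table: pretend these counts came from training text.
    # After the context ("the", "cat"), "sat" followed 70 times, "ran" 20, "is" 10.
    next_token_counts = {
        ("the", "cat"): {"sat": 70, "ran": 20, "is": 10},
    }

    def sample_next(context):
        counts = next_token_counts[context]
        tokens = list(counts)
        weights = [counts[t] for t in tokens]
        # "Roll the dice": pick a follower in proportion to how often it was seen.
        return random.choices(tokens, weights=weights)[0]

    print(sample_next(("the", "cat")))  # usually "sat", sometimes "ran" or "is"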

Baketime,

I've found that I can't trust ChatGPT to write much code unless it's something very basic. That can still be useful, for example if I want to see "hello world"-type code for something I haven't used before, or quickly build some kind of common template code.

But for anything more substantial I've tried, I've really had to hold its hand through the problem: point out where it's wrong, tell it what part of the problem it's ignored, tell it which libraries/functions it's hallucinated. I mostly end up leading it through what I would have done anyway.

scarcer,

It's given incorrect and out-of-date recommendations before. It can also mix up information between two similar but independent libraries.

EqMinMax,

You guys actually use ChatGPT?! Weird.

Seriously, it's the most hyped trash technology of the 21st century.

Spy,

ChatGPT and similar LLMs don't really "know" anything. They can only predict what the answer should look like. This means they can't be trusted for much, and their answers should be reviewed before being used, because anything they produce will sound correct by default.

johnkbin,

Tried getting ChatGPT to write a simple Python JSON parser and it butchered it. After a few attempts I gave up and just read the documentation lol
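
For comparison, if the goal was just parsing JSON rather than writing a parser from scratch, the documentation points at the stdlib, which is about this short (the sample data here is mine, not the original prompt's):

    import json

    # Parsing JSON with the stdlib: json.loads for strings, json.load for file objects.
    text = '{"name": "example", "items": [1, 2, 3]}'
    data = json.loads(text)
    print(data["name"], sum(data["items"]))  # -> example 6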

activepeople,

And the devs copy+pasting code from it are probably aware that it doesn't know anything, and that it's likely synthesizing something based on StackOverflow, which they were happily copy+pasting from just a few months ago.

If the libraries ChatGPT suggests work ~80% of the time, this leaves an opportunity for someone to provide a "solution" the other 20%.
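
Which is why a quick existence/age check before installing anything a model suggests is cheap insurance. A sketch against the public npm registry (the package name is just the one from upthread; age and release history are weak signals, since a freshly squatted package also "exists"):

    import json
    import urllib.error
    import urllib.request

    def npm_package_info(name):
        # The npm registry returns package metadata, or a 404 if the name is unregistered.
        url = f"https://registry.npmjs.org/{name}"
        try:
            with urllib.request.urlopen(url) as resp:
                return json.load(resp)
        except urllib.error.HTTPError as e:
            if e.code == 404:
                return None
            raise

    info = npm_package_info("arangojs")
    if info is None:
        print("package not found -- don't install blindly")
    else:
        # An old creation date is some evidence against a freshly squatted name.
        print("first published:", info["time"]["created"])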

eddythompson,

This is not my problem though.

When I get code from ChatGPT/Copilot, there is always the chance that it's a complete hallucination. When a change comes from a human, I assume they at the very least weren't hallucinating code. They can still be wrong, of course, but I don't start from "oh, this is all probably nonsense".

Haus,

This is pretty much my experience. It did a pretty good job with the grunt work of setting up a Qt UI in Python, but something like 5/20 imports were wrong.
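
For reference, this is the kind of boilerplate it was being asked for, with imports that actually resolve (assuming the PySide6 binding, which the comment doesn't specify):

    import sys

    # The import paths are exactly the part the model kept getting wrong.
    from PySide6.QtWidgets import QApplication, QLabel, QMainWindow

    class MainWindow(QMainWindow):
        def __init__(self):
            super().__init__()
            self.setWindowTitle("Hello")
            self.setCentralWidget(QLabel("Hello from Qt"))

    app = QApplication(sys.argv)
    window = MainWindow()
    window.show()
    sys.exit(app.exec())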
