Placeholder answer behavior

Earlier this week I discussed an example of ChatGPT giving ‘placeholder’ answers in lieu of real answers. Below is an example of what that looks like. I could swear this didn’t used to happen, but it basically just ‘doesn’t’ answer your question. I’m interested in how often other people see this behavior.

https://lemmy.world/pictrs/image/b24d9e10-6fbb-415e-b74f-7b11abaa5930.png
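For anyone who hasn’t hit this, here’s a hypothetical sketch of the kind of placeholder answer being described (the function name and comment are made up, not from the screenshot): instead of working code, you get a skeleton with a TODO where the logic should be.

```python
# Hypothetical example of a 'placeholder' answer: a function
# skeleton with a comment standing in for the actual logic.
def process_data(data):
    # TODO: Add your data processing logic here
    pass

# The stub does nothing, so calling it just returns None.
result = process_data([1, 2, 3])
print(result)
```

The point is that nothing here actually answers the question asked; the hard part is left as a comment.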

200ok,

Fun fact: Those placeholder comments are sometimes called pseudo-code.

TropicalDingdong,

Yeah, well, I’m not paying for pseudo-code. It’s a waste of my time and a waste of a prompt. It’s also not an answer to the question being asked.

magiccupcake,

I see this a lot too. I think ChatGPT anticipates the task being difficult?

I just try to be more specific, or have it expand its placeholder.

TropicalDingdong,

Yeah, I’m kinda guessing it’s a cost-cutting measure to reduce work on the part of the LLM.

magiccupcake,

Yeah, I’ve noticed lately that it likes to be lazy.

chatgpt@lemmy.world