Placeholder answer behavior
Earlier this week I discussed an example of ChatGPT giving "placeholder" answers in lieu of real answers. Below is an example of what that looks like. I could swear this didn't used to happen, but it basically just doesn't answer your question. I'm interested in how often other people see this behavior.
https://lemmy.world/pictrs/image/b24d9e10-6fbb-415e-b74f-7b11abaa5930.png