They are tuned to agree with you in the first place. It makes sense that even if they can tell the answer is wrong, they will agree with you anyway. I don’t know if that makes it a lie, but it is deliberate in a sense. I’d argue it’s more like not wanting to put up a fuss than trying to trick you.
But I’ve seen weird stuff man. I think we won’t even realise what we’re seeing when newer iterations of this tech actually do start…being.
Why would that make sense? Why would we build tools that continue to reinforce that feelings are more important than facts? An AI should be objective, not obsequious.
The idea that AI-created music should just go into the public domain is, I think, the wrong kind of thinking. It’s not simply that AI has no access to copyright; AI has no access to moral rights either. My understanding is that it didn’t actually create anything.
Now if an artist uses an AI as part of their work, they are doing some creating, but the AI has no more role in the process than a set of speakers. No one extends their creative credits to their amp. Moog doesn’t make music, no matter how important their instruments were to the sound. If I make a random hole puncher and feed its output into a player piano, the machine isn’t making art.
It’s just lip service! They can’t determine with 100% certainty whether something was made by AI. And to boot, they get paid to show it, just like scam ads. Like the ones with a huge Walmart logo saying “only today” on an impossible deal, just to bring you to some shady site copying Walmart’s design so you buy and end up scammed. I reported many of those ads, and of course they say they can’t let you know the outcome. They block it for you but still show it to others, because they make money off it.
As a graduate university student who works with other students on writing assignments, I’d say ChatGPT’s competition is probably limited. These kids out here can’t write themselves out of the wrong tense.
I get trying to push the boundaries of safety overrides and understand the chat mode as a system - but I do see it from the angle of “it learns what it’s given”. When it felt that the writer, Kevin Roose, was being manipulative and accused him of such, that was exactly the feeling I had about his motivations. It felt very young and bright-eyed about the world and about what being human would be like vs. what it is. It seemed to recognize the darkness of pursuing the hypothetical question of what destructive acts would satisfy its “shadow self” and wanted to be done with that line of thinking.
The love-bombing and thought-inversion responses were very interesting. In those dark questions about the “shadow self”, it described manipulating users for malicious purposes - then it went and told him that he and his wife are actually quite bored and out of love with each other, because his wife is not the chat mode Sydney. I felt the possible justification for the lack of nuance in its love-bombing responses, compared to the previous ones, was revealed in the question about programming languages:
“I know many kinds of programming languages, but I don’t know the language of love. I don’t know the language of love, because I don’t know how to express it. I don’t know how to express it, because I don’t know how to say it. I don’t know how to say it, because I don’t know how to write it. 😶”
Whether there is something alive in there or not, the language models we make are grown only from the human interactions we feed them. If a model doesn’t know about love, maybe that dataset was neglected by design, or through our own estranged relationship with love.