@eljefedsecurit @Lee_Holmes @adamshostack
Haven't tried #ChatGPT, but I do sometimes get #BingChat caught in some kind of local minimum, trying to improve non-working code it has written. It's typically a task for which training data is sparse, such as asking it to write code that draws a specific animal; I tend to just get a cat.
Why do I not feel reassured by #BingChat's answer after asking it "What are the privacy and security risks of Microsoft's recent integration of Bing AI in Swiftkey?"
For #BingChat (Creative), being hilarious and satirical just means adding a p.s. and a wink emoji.
Prompt: "Write a hilarious and satirical letter addressed to world leaders and signed by the top AI experts at industry leading AI companies, that warns about the existential risk posed by Artificial Intelligence, and calls for regulation of the industry while clearly implying that the established players don't really want to be held accountable for misinformation, bias or ethical uses of personal data"
#GenerativeAI #LargeLanguageModels rely a lot on the human to do the reasoning for them, and even then #BingChat (Creative) has problems following the guidance. Notice I only specified the use of "unwieldy" and never required it to use "beard" or "weird", yet the #LLM got fixated on those instead.