Why do I not feel reassured by #BingChat's answer after asking it "What are the privacy and security risks of Microsoft's recent integration of Bing AI in Swiftkey?"
@bornach My take:
Because it's echoing the "typical" tone of writing on such topics rather than saying anything actually meant to reassure.
That said, it's worth noting that you also didn't learn whether any of the concerns raised in the generated text are actually being addressed.
#BingChat (Creative)'s idea of "hilarious and satirical" is just tacking on a p.s. and a wink emoji
Prompt: "Write a hilarious and satirical letter addressed to world leaders and signed by the top AI experts at industry leading AI companies, that warns about the existential risk posed by Artificial Intelligence, and calls for regulation of the industry while clearly implying that the established players don't really want to be held accountable for misinformation, bias or ethical uses of personal data"
#GenerativeAI #LargeLanguageModels rely heavily on the human to do the reasoning for them, and even then #BingChat (Creative) has trouble following the guidance. Notice I only specified the use of "unwieldy" and never required it to use "beard" or "weird", yet the #LLM got fixated on those instead.