Generative AI answers are essentially worthless without full attribution of their sources. It's not just a matter of giving proper credit to those sources; attribution lets users easily click through for more detail and, crucially, judge the veracity of the AI's answers, both in total and in their individual parts.
An AI answer might be accurate in most respects, leading users to assume it's accurate in all respects, yet be wrong about a single critical detail, with potentially fatal consequences for someone who assumed 100% accuracy.
Disastrous. Not hyperbole. Will the firms take responsibility for damages done by errors in their AI answers?
@lauren do you like the way #Bingchat cites sources for its answers?
To me it very often exposes just how flimsy the sources can be. But it also offers rabbit holes to learn more. Anyway at least this level of citation seems totally reasonable to expect from any bot.
Prompt Injection: Marvin von Hagen explains how he tricked Bing Chat
Marvin von Hagen found a remarkably clever prompt for Bing Chat: it made the bot reveal its maker's instructions. In a talk, the student explains the trick.
I asked it to write a Python script that, given a graph with no more than 5 edges per vertex, returns the length of the longest path that visits each vertex no more than once. Then I lifted the edge-count restriction.
In both cases it claimed polynomial time complexity for solving an NP-hard problem.
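For contrast, here is a minimal sketch of what a correct solver actually looks like: an exhaustive DFS over all simple paths. It is necessarily worst-case exponential, since longest path is NP-hard even on bounded-degree graphs, so the degree-5 restriction buys nothing. The adjacency-dict representation is my own choice for the example.

```python
def longest_path_length(adj):
    """Length (in edges) of the longest simple path in a graph.

    adj: dict mapping each vertex to an iterable of its neighbours.
    Exhaustively explores every simple path via DFS, so the running
    time is exponential in the worst case -- as expected for an
    NP-hard problem, with or without a per-vertex edge bound.
    """
    best = 0

    def dfs(v, visited, length):
        nonlocal best
        best = max(best, length)
        for w in adj.get(v, ()):
            if w not in visited:
                visited.add(w)
                dfs(w, visited, length + 1)
                visited.remove(w)

    for v in adj:          # try every vertex as a starting point
        dfs(v, {v}, 0)
    return best
```

On a path graph 0-1-2-3 this returns 3; on a triangle it returns 2. Any script claiming polynomial time for the general case is, barring a P = NP proof, wrong.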
What is it about #BingChat (creative) and nursery rhymes? When asked to write one it seems to tap into some egalitarian morality that the #GPT has acquired from the training data that was scooped up from the Internet.
A nursery rhyme about "billionaires and their technology empires" seems to be a quite neutral prompt, and yet the #GenerativeAI #LargeLanguageModel output this:
Tried a number of differently worded "allegory" and "nursery rhyme" prompts for #BingChat (creative), but this "Aesop's fable" one seems to have yielded something on point. Again, it doesn't seem to have done an Internet search to generate its output.