Tried a number of differently worded "allegory" and "nursery rhyme" prompts for #BingChat (creative), but this "Aesop's fable" one seems to have yielded something on point. Again, it doesn't seem to have done an Internet search to generate its output.
Generative AI answers are essentially worthless without full attribution for the sources of the information. It's not just a matter of giving proper credit to those sources, but permitting users to easily click through for more details and especially to determine the veracity of the AI answers, both in total and in their individual parts.
An AI answer might be accurate in most respects, leading users to assume it's accurate in all respects, yet be wrong about even one critical aspect — an error that could result in injury or death to someone assuming 100% accuracy.
Disastrous. Not hyperbole. Will the firms take responsibility for damages done by errors in their AI answers?
@lauren do you like the way #BingChat cites sources for its answers?
To me it very often exposes just how flimsy the sources can be. But it also offers rabbit holes to learn more. Anyway, at least this level of citation seems totally reasonable to expect from any bot.
If I ask it to 'copyedit' a paragraph, it does a great job, making small tweaks like a spell checker 👍
But give it 3-4 paragraphs to rework and it changes the text so thoroughly that my 'voice' is completely gone.
If I ask it to just compose something from a prompt, it's insipid silliness. I mean, it's reasonable, just so generic it offers no value. It's not just Bard; I've also tried #ChatGPT with the same effect.
@scottjenson I have found that the products specifically tuned to help you write are slightly better than the ones that are generic chat spaces; see the #BingChat compose mode in #MicrosoftEdge and also the Write tool from #DeepL.