You already know not to take an AI chatbot seriously. But there may be reason to be even more cautious. New research has found that many AI systems have already started to deliberately present human users with false information. Science Alert explains why "AI developers do not have a confident understanding of what causes undesirable AI behaviors like deception." https://flip.it/ZbnJtj #Science #AI #ArtificialIntelligence #Chatbot #Tech
In another chapter of “what crazy things has AI done this week,” the Catholic advocacy group Catholic Answers has had to swiftly defrock the AI priest it unveiled just a few days ago, after some strange conversations were shared online.
In a particularly awkward exchange with @futurism about the legitimacy of his identity, the chatbot claimed to be a real member of the clergy living in Assisi. “Yes, my friend,” Father Justin is reported to have said, “I am as real as the faith we share.”
Ever since Bruce Ruxton died in 2011, Australians have struggled to find a way to make ANZAC Day cringeworthy, manipulatively sentimental, insensitive, attention-grabbing, disrespectful, inauthentically mythologised, and downright embarrassing.
At last, we have all that and more — thanks to the Queensland government's Virtual Veterans AI guide to World War I. Hurrah to the Queensland government and the brave techbros at TalkVia AI who merged the horrors of war and the horrors of an "AI" LLM chatbot. *slow clap*
Lots of people who work in #AI have, in their head, an idea about what sort of interaction with an #LLM might give them pause. The thing that might make them start to suspect that something interesting is happening.
Here's mine:
User: Tell me a cat joke.
LLM: Why did the cat join a band? He wanted to be a purr-cussionist.
I asked a fully-censored pornocalypse-enabled commercial AI chatbot to write me a limerick about a man who ate a mango and... this isn't bad, actually!
A fellow named Stan, quite the gourmand,
Bit into a mango, so grand!
The sweet, juicy flesh,
Made a tropical mesh,
And left sticky contentment in hand.
Researchers studying free speech have produced a report analysing six major chatbots, to find out whether they embrace international free speech standards.
As reported by @FastCompany, the team found that AI chatbots regularly censor topics that companies deem to be controversial, thanks to "vague and broad" policies.
As AI's integration into all areas of our lives increases, the researchers discuss the wider implications for the right to access information when chatbots decide what content they will and won't generate.
Seriously? They thought they'd save work by using AI, and now serving the unemployed is getting even more expensive? Not surprising in my opinion, but maybe I just keep contradicting their hopes?! There's really no question which side is naive and gullible!
»Austria's AMS chatbot more expensive – but less bias«
Asked #Copilot (formerly #BingChat) a familiar riddle, but with the numbers changed to make it impossible. It generated the same solution, substituting in the new numbers, and ended up with a nonsense claim:
"I have an empty opaque bag. I put two apples and one banana in the bag. I either remove the banana or I remove one apple. I then remove all remaining fruits from the bag. Is it possible to tell what is in the bag now?"
💬 NYC’s government chatbot is lying about city laws and regulations | Ars Technica
「 To cite just one example, the bot said that NYC buildings "are not required to accept Section 8 vouchers," when an NYC government info page says clearly that Section 8 housing subsidies are one of many lawful sources of income that landlords are required to accept without discrimination 」
Artificial intelligence: this Excel spreadsheet shows you how GPT models work
Ever wanted to understand the basics of an AI? A rather unusual Excel spreadsheet (weighing in at over 1 GB) now walks you through it, step by step
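The spreadsheet reportedly lays out a GPT model's computations cell by cell. As a rough illustration of the kind of arithmetic involved, here is the scaled dot-product attention step at the heart of a transformer, in plain Python (function names and shapes are illustrative, not taken from the spreadsheet):

```python
import math

def softmax(xs):
    """Numerically stable softmax over a list of floats."""
    m = max(xs)
    exps = [math.exp(x - m) for x in xs]
    s = sum(exps)
    return [e / s for e in exps]

def attention(queries, keys, values):
    """Scaled dot-product attention: each query attends over all keys,
    and the output is the attention-weighted mix of the value vectors.
    In a spreadsheet, every score, weight, and sum here is one cell."""
    d = len(keys[0])  # key dimension, used for the 1/sqrt(d) scaling
    out = []
    for q in queries:
        # similarity of this query to every key, scaled
        scores = [sum(qi * ki for qi, ki in zip(q, k)) / math.sqrt(d)
                  for k in keys]
        weights = softmax(scores)
        # weighted sum of value vectors, one component at a time
        out.append([sum(w * v[j] for w, v in zip(weights, values))
                    for j in range(len(values[0]))])
    return out
```

A real GPT model stacks many of these attention layers (plus feed-forward layers and learned weight matrices), which is why the spreadsheet runs to over a gigabyte.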
I'm taking a grad school class for my #MSIT degree at #PurdueGlobal - it is a remote class, so we are required to make posts on a forum, then post two responses to others.
Annnd, one of my cohort obviously used AI to churn out a say-nothing initial post summarizing an article.
"The implementation of the BI Agile starts from the start, which should be considered a key element of the successful BI project."
So, I posted on his thread, asking him to explain his summary.
But I'm a little mad about it. I'm working my butt off for an A, and this dude is pretty obviously using AI.
Another example from his summary: "Attestation carried out by practitioners, with long term professional expertise, will make possible the confirmation of effectiveness of this process in enterprise business environment that is continuously changing." 🙄
I asked him, in the forum, about a couple of his ridiculous theses, as if he had written them himself.
The {tidychatmodels} #rstats 📦 “provides a simple interface to chat with your favorite AI chatbot from R. It is inspired by the modular nature of tidymodels where you can easily swap out any ML model for another one but keep the other parts of the workflow the same.” Current support for OpenAI, Mistral.ai, and Ollama. By Albert Rapp on GitHub https://tidychatmodels.albert-rapp.de/ #GenAI #chatbot #LLM #LLMs #AI @rstats
On that CNET thing in the last boost, my first thought was "this is gonna make search even more useless" and… yeeeep "They are clearly optimized to take advantage of Google’s search algorithms, and to end up at the top of peoples’ results pages"
One potentially informative thing reporters following up on that #NYC #AI #Chatbot story could do is #FOIA (or whatever the NY equivalent is) communications related to the acquisition and deployment. Who pushed for this in the first place? What did #Microsoft promise? What sort of quality / acceptance testing was done? Did anyone, anywhere along the line raise concerns that it would give out bad, potentially illegal advice?
DuckDuckAssistant is back!