"The biggest question raised by a future populated by unexceptional A.I., however, is existential. Should we as a society be investing tens of billions of dollars, our precious electricity that could be used toward moving away from fossil fuels, and a generation of the brightest math and science minds on incremental improvements in mediocre email writing?" (From an NYT article. See original thread.)
What is a parliamentary group? How is election fraud prevented? When is the European election? If you're unsure about the answers to these questions, just ask our WahlBot! Simply type in a question and start chatting: https://kurz.bpb.de/wahlbot
Asked #Copilot (formerly #BingChat) a familiar riddle, but with the numbers changed to make it unanswerable. It generated the same stock solution with the new numbers substituted in, ending up with a nonsense claim:
"I have an empty opaque bag. I put two apples and one banana in the bag. I either remove the banana or I remove one apple. I then remove all remaining fruits from the bag. Is it possible to tell what is in the bag now?"
I think people underestimate what the impact will be of an AI companion, at the current GPT-4 level, that you can talk with without a lag of several seconds, that you can personalise, that has memory, etc. GPT-5 isn't necessary to change the landscape and impact of AI. #AI #ChatGPT #Chatbot
You already know not to take an AI chatbot seriously. But there may be reason to be even more cautious. New research has found that many AI systems have already started to deliberately present human users with false information. Science Alert explains why "AI developers do not have a confident understanding of what causes undesirable AI behaviors like deception." https://flip.it/ZbnJtj #Science #AI #ArtificialIntelligence #Chatbot #Tech
In another chapter of “what crazy things has AI done this week,” the Catholic advocacy group Catholic Answers has had to swiftly defrock the AI priest it unveiled just a few days ago, after some strange conversations were shared online.
In a particularly awkward exchange with @futurism regarding the legitimacy of his identity, the chatbot claimed to be a real member of the clergy, who lived in Assisi. “Yes, my friend,” Father Justin is reported to have said, “I am as real as the faith we share.”
Ever since Bruce Ruxton died in 2011, Australians have struggled to find a way to make ANZAC Day cringeworthy, manipulatively sentimental, insensitive, attention-grabbing, disrespectful, inauthentically mythologised, and downright embarrassing.
At last, we have all that and more — thanks to the Queensland government's Virtual Veterans AI guide to World War I. Hurrah to the Queensland Government and the brave techbros at TalkVia AI who merged the horrors of war and the horrors of an "AI" LLM chatbot. slow clap
Using AI tools to compose messages to friends might not be the best choice, especially if the friend learns the AI was involved, recent research indicates. The study found that participants judged a fictional friend who used AI to craft messages as putting less sincere effort into the relationship.
Lots of people who work in #AI have, in their head, an idea about what sort of interaction with an #LLM might give them pause. The thing that might make them start to suspect that something interesting is happening.
Here's mine:
User: Tell me a cat joke.
LLM: Why did the cat join a band? He wanted to be a purr-cussionist.
I asked a fully-censored pornocalypse-enabled commercial AI chatbot to write me a limerick about a man who ate a mango and... this isn't bad, actually!
A fellow named Stan, quite the gourmand,
Bit into a mango, so grand!
The sweet, juicy flesh,
Made a tropical mesh,
And left sticky contentment in hand.
Researchers studying free speech have published a report analysing six major chatbots, to find out whether they uphold international free-speech standards.
As reported by @FastCompany, the team found that AI chatbots regularly censor topics that companies deem to be controversial, thanks to "vague and broad" policies.
As AI becomes more deeply integrated into all areas of our lives, the researchers discuss the wider implications for the right to access information when chatbots decide what content they will and won't generate.
On that CNET thing in the last boost, my first thought was "this is gonna make search even more useless" and… yeeeep "They are clearly optimized to take advantage of Google’s search algorithms, and to end up at the top of peoples’ results pages"
One potentially informative thing reporters following up on that #NYC #AI #Chatbot story could do is #FOIA (or whatever the NY equivalent is) communications related to the acquisition and deployment. Who pushed for this in the first place? What did #Microsoft promise? What sort of quality / acceptance testing was done? Did anyone, anywhere along the line, raise concerns that it would give out bad, potentially illegal advice?
💬 NYC’s government chatbot is lying about city laws and regulations | Ars Technica
「 To cite just one example, the bot said that NYC buildings "are not required to accept Section 8 vouchers," when an NYC government info page says clearly that Section 8 housing subsidies are one of many lawful sources of income that landlords are required to accept without discrimination 」
Scientists Reveal Why Using ChatGPT To Message Your Friends Isn’t a Good Idea (scitechdaily.com)
DuckDuckAssistant is back!