As I'm teaching this term for the first time since the #AI #chatbot revolution, I feel I need to add a course policy on use of generative AI for coursework. Here's what I've come up with...
If you want to know why people don't trust #OpenAI or Microsoft or Google to fix a broken faux-#AGI #chatbot #LLM, consider that using suicidal teens for A/B testing was regarded as perfectly fine by a Silicon Valley "health" startup developing "#AI"-based suicide prevention tools.
(Aside: This is also where we get when techbros start doing faux-utilitarian moral calculus instead of just not doing obviously unethical shit.)
So, Chair Khan, are we going to finally do something about manufacturers dangerously exaggerating the capabilities of #AI-equipped automated driving systems?
Or nah?
Because this has been going on for nearly 10 years now.
You know... one almost has to admire the absurdity of how a goddamn #chatbot finally blew the top off of primordial safety issues that have long existed in the "AI" industry.
Do you know any corners of the internet where one can find interactive fiction, like choose-your-own-adventure books, that has been digitized in a structured format that a program can traverse? The idea would be to build a chatbot that displays a paragraph, launches a poll, and the channel members vote to choose the next paragraph. It would be a way to replay the Loup Solitaire saga co-op.
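The mechanism described above (post a paragraph, poll the channel, advance to the winning section) can be sketched in a few lines. A minimal sketch in Python, assuming a hypothetical structured format where each numbered section carries its text and a mapping from choice labels to target sections — all names and sample data here are invented for illustration:

```python
from collections import Counter

# Hypothetical digitized gamebook: each numbered section has its text
# and a mapping from choice labels to the section they lead to.
BOOK = {
    1: {"text": "You stand at the monastery gates.",
        "choices": {"take the forest path": 2, "follow the road": 3}},
    2: {"text": "The forest closes in around you.", "choices": {}},
    3: {"text": "The road is long but safe.", "choices": {}},
}

def run_poll(section_id, votes):
    """Tally channel members' votes and return the winning next section.

    `votes` is a list of choice labels as cast by members; invalid votes
    are ignored, and ties break toward whichever leading choice appears
    first in the section's choice list (deterministic for a bot).
    """
    section = BOOK[section_id]
    tally = Counter(v for v in votes if v in section["choices"])
    if not tally:
        return None  # no valid votes: stay on the current section
    top = max(tally.values())
    for label in section["choices"]:  # first-listed choice wins ties
        if tally.get(label) == top:
            return section["choices"][label]

# One round of play: post the paragraph, collect votes, advance.
current = 1
print(BOOK[current]["text"])
current = run_poll(current, ["take the forest path", "follow the road",
                             "take the forest path"])
print(BOOK[current]["text"])  # the forest path wins 2-1
```

A real bot would replace the `votes` list with the results of a platform poll (Discord reactions, Mastodon polls, etc.), but the traversal logic stays this simple as long as the book is available in a structured, machine-readable form.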
It was not the students' use of a #ChatBot that was the problem; the problem was that they were using material found on the internet that was itself created by a hallucinating chatbot and published without verification!
This is a type of model collapse we will be dealing with in the near future, and not just at universities.
The number of mental illnesses in Germany is rising, and the waiting lists for therapy places are long. Can AI chatbots that counsel those affected help here? And what are the risks? By Lara Kubotsch.
What is a parliamentary group? How is election fraud prevented? When is the European election? If you're unsure about the answers to these questions, ask our WahlBot! Just type in a question and start chatting: https://kurz.bpb.de/wahlbot
So I just used Google's #Bard #chatbot for the first time. #School and #education are now completely different for our kids. We're going to need to change the ENTIRE system immediately. I just had Bard create a five-paragraph, cited report about the Great Pyramid -- something that took my 6th grader two weeks to research and write -- and it's perfect. Took 5 minutes for me to figure out how to use and prompt it to get what I was looking for: a way better version of what he will be turning in. The traditional #English and/or #Writing coursework should be considered dead from a functional aptitude perspective. Those subjects should be associated/grouped with #art electives. Take that time in #class and dedicate all of it to teaching them about how the #internet and #ai work.
The British arm of delivery service #DPD has taken the artificial intelligence (#AI) feature of the chatbot on its website offline. A frustrated customer had the #chatbot use AI to write a poem about what he saw as the delivery service's poor service. The poem got thousands of likes on X.
"The biggest question raised by a future populated by unexceptional A.I., however, is existential. Should we as a society be investing tens of billions of dollars, our precious electricity that could be used toward moving away from fossil fuels, and a generation of the brightest math and science minds on incremental improvements in mediocre email writing?" (From an NYT article. See original thread.)
LinkedIn announces a GPT-4-powered #AI #chatbot aimed at being a "job seeker coach", available to Premium users, and says the platform now has 1B+ members.
Of course the airline is liable if a chatbot gives customers bad information - the same as if an employee sticks a lower price on an item by mistake, or a sale sign is posted too early, or a price scanner makes an error. Arguing otherwise is ridiculous.
Better testing pre-deployment might have helped prevent this, but there's no guarantee. LLMs may not be human, but they can be unpredictable and imperfect. https://wapo.st/49m97Ta #chatbot #chatbots #LLM #LLMs #GenAI
Gosh, #ChatGPT is really outstanding when compared to Google's Bard and Meta's Llama 2!! The subtlety, accuracy, and reasoning of OpenAI's #chatbot are years ahead of the rest of the pack. It's just great. #AI #GenerativeAI
I think people underestimate what the impact will be of an AI companion, at the current GPT-4 level, that you can talk with without a lag of several seconds, that you can personalise, that has memory, etc. GPT-5 isn't necessary to change the landscape and impact of AI. #AI #ChatGPT #Chatbot
DuckDuckAssistant is back!
AI chatbots become more sycophantic as they get more advanced (archive.is)
If a person says they believe an objectively false statement, AIs tend to agree with them – and the problem seems to get worse as models get bigger