They are completely missing the point of their product. #ChatGPT and other #AI #chatbots are not our friends, and we don’t want them to be!
(If you spend a lot of time “talking to” chatbots, please go outside where people are. Seriously!)
It is a tool we use for super annoying tasks: write an ALT tag for an image, fix my shell script, translate something. And we still have to double-check everything it does. We can’t and we won’t trust it. (1/2)
"The biggest question raised by a future populated by unexceptional A.I., however, is existential. Should we as a society be investing tens of billions of dollars, our precious electricity that could be used toward moving away from fossil fuels, and a generation of the brightest math and science minds on incremental improvements in mediocre email writing?" (From an NYT article. See original thread.)
"Chatbots share limited information, reinforce ideologies, and, as a result, can lead to more polarized thinking when it comes to controversial issues, according to new Johns Hopkins University–led research. The study challenges perceptions that chatbots are impartial and provides insight into how using conversational search systems could widen the public divide on hot-button issues and leave people vulnerable to manipulation."
'Artificial intelligence chatbots are facing more scrutiny after an investigative report showed how they can be easily used to devise deceptive election campaigns. The report, conducted by Nieuwsuur in collaboration with AI Forensics, revealed how chatbots from tech giants Google and Microsoft provided strategies aimed at manipulating voters during the upcoming European Parliament elections.'
#AI #GenerativeAI #SLMs #Microsoft #ChatBots #Phi3: "How did Microsoft cram a capability potentially similar to GPT-3.5, which has at least 175 billion parameters, into such a small model? Its researchers found the answer by using carefully curated, high-quality training data they initially pulled from textbooks. "The innovation lies entirely in our dataset for training, a scaled-up version of the one used for phi-2, composed of heavily filtered web data and synthetic data," writes Microsoft. "The model is also further aligned for robustness, safety, and chat format."
Much has been written about the potential environmental impact of AI models and datacenters themselves, including on Ars. With new techniques and research, it's possible that machine learning experts may continue to increase the capability of smaller AI models, replacing the need for larger ones—at least for everyday tasks. That would theoretically not only save money in the long run but also require far less energy in aggregate, dramatically decreasing AI's environmental footprint. AI models like Phi-3 may be a step toward that future if the benchmark results hold up to scrutiny.
Phi-3 is immediately available on Microsoft's cloud service platform Azure, as well as through partnerships with machine learning model platform Hugging Face and Ollama, a framework that allows models to run locally on Macs and PCs."
Lots of people who work in #AI have, in their head, an idea about what sort of interaction with an #LLM might give them pause. The thing that might make them start to suspect that something interesting is happening.
Here's mine:
User: Tell me a cat joke.
LLM: Why did the cat join a band? He wanted to be a purr-cussionist.
#AI #GenerativeAI #ChatBots #Healthcare #WHO: "The World Health Organization is wading into the world of AI to provide basic health information through a human-like avatar. But while the bot responds sympathetically to users’ facial expressions, it doesn’t always know what it’s talking about.
SARAH, short for Smart AI Resource Assistant for Health, is a virtual health worker that’s available to talk 24/7 in eight different languages to explain topics like mental health, tobacco use and healthy eating. It’s part of the WHO’s campaign to find technology that can both educate people and fill staffing gaps with the world facing a health-care worker shortage.
'After reports last week that a New York City chatbot powered by generative artificial intelligence was providing residents with incorrect information on a range of topics — from housing rights to worker protections — Mayor Eric Adams has said he doesn’t intend to take the tool offline.'
"An artificial intelligence-powered chatbot meant to help small business owners in New York City has come under fire for dispensing bizarre advice that misstates local policies and advises companies to violate the law"
Dark Visitors - A List of Known AI Agents on the Internet
Insight into the hidden ecosystem of autonomous chatbots and data scrapers crawling across the web. Protect your website from unwanted AI agent access.
#AI #GenerativeAI #Research #Science #Chatbots #LLMs: "This new review, led by William Agnew, who studies AI ethics and computer vision at Carnegie Mellon University, cites 13 technical reports or research articles and three commercial products; all of them replace or propose replacing human participants with LLMs in studies on topics including human behavior and psychology, marketing research or AI development. In practice, this would involve study authors posing questions meant for humans to LLMs instead and asking them for their “thoughts” on, or responses to, various prompts.
One preprint, which won a best paper prize at CHI last year, tested whether OpenAI’s earlier LLM GPT-3 could generate humanlike responses in a qualitative study about experiencing video games as art. The scientists asked the LLM to produce responses that could take the place of answers written by humans to questions such as “Did you ever experience a digital game as art? Think of ‘art’ in any way that makes sense to you.” Those responses were then shown to a group of participants, who judged them as more humanlike than those actually written by humans."
Hello #GetFediHired! I'm now independent and associated with a cooperative of other independents, available to work with you for your Customer Service/Support/Experience/Success needs!
We published the third installment in our open educational resources, in Romanian, about #chatbots and the hidden dangers that lie in our interactions with these algorithms. The dangers are amplified by our growing sense of loneliness and isolation.
“We’re 15 months into the #AI revolution and yet nearly all that the “intelligent” #chatbots do is the same as on day 1: summarise documents, provide simple suggestions, write or rewrite documents, answer questions, write simple code”