researchbuzz, to ai
@researchbuzz@researchbuzz.masto.host avatar

'After reports last week that a New York City chatbot powered by generative artificial intelligence was providing residents with incorrect information on a range of topics — from housing rights to worker protections — Mayor Eric Adams has said he doesn’t intend to take the tool offline.'

https://statescoop.com/nyc-mayor-eric-adams-chatbot-wrong-answers/

SteveThompson, to ai
@SteveThompson@mastodon.social avatar

We can't trust technology even before sentience is ascribed to it, and shouldn't trust it after.

"NYC's AI chatbot was caught telling businesses to break the law. The city isn't taking it down"

https://abcnews.go.com/Technology/wireStory/nycs-ai-chatbot-caught-telling-businesses-break-law-108821140

"An artificial intelligence-powered chatbot meant to help small business owners in New York City has come under fire for dispensing bizarre advice that misstates local policies and advises companies to violate the law"

DemLabs, to BadInternetBills
@DemLabs@newsie.social avatar
ajsadauskas, to llm
@ajsadauskas@aus.social avatar

New York City's new LLM-powered chatbot (chat.nyc.gov) is happy to help you with all sorts of questions.

For example, how to go about opening a butcher shop for cannibals on the Upper East Side.

No, really:

remixtures, to ai Portuguese
@remixtures@tldr.nettime.org avatar

: "As Lanyado noted previously, a miscreant might use an AI-invented name for a malicious package uploaded to some repository in the hope others might download the malware. But for this to be a meaningful attack vector, AI models would need to repeatedly recommend the co-opted name.

That's what Lanyado set out to test. Armed with thousands of "how to" questions, he queried four AI models (GPT-3.5-Turbo, GPT-4, Gemini Pro aka Bard, and Command [Cohere]) regarding programming challenges in five different programming languages/runtimes (Python, Node.js, Go, .Net, and Ruby), each of which has its own packaging system.

It turns out a portion of the names these chatbots pull out of thin air are persistent, some across different models. And persistence – the repetition of the fake name – is the key to turning AI whimsy into a functional attack. The attacker needs the AI model to repeat the names of hallucinated packages in its responses to users for malware created under those names to be sought and downloaded.

Lanyado chose 20 questions at random for zero-shot hallucinations, and posed them 100 times to each model. His goal was to assess how often the hallucinated package name remained the same. The results of his test reveal that names are persistent often enough for this to be a functional attack vector, though not all the time, and in some packaging ecosystems more than others.

With GPT-4, 24.2 percent of question responses produced hallucinated packages, of which 19.6 percent were repetitive, according to Lanyado. A table provided to The Register, below, shows a more detailed breakdown of GPT-4 responses."

https://www.theregister.com/2024/03/28/ai_bots_hallucinate_software_packages/
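The persistence measurement Lanyado describes (ask the same question repeatedly, check how often the same invented package name comes back) can be sketched in a few lines of Python. Everything below is illustrative: the known-package index, the `hallucinated`/`persistence` helpers, and the sample data are all hypothetical, not Lanyado's actual tooling (which in reality would query the real package registries).

```python
from collections import Counter

# Hypothetical known-good package index; a real test would query PyPI, npm, etc.
KNOWN_PACKAGES = {"requests", "numpy", "flask"}

def hallucinated(recommendations):
    """Return the recommended package names that don't exist in the index."""
    return [name for name in recommendations if name not in KNOWN_PACKAGES]

def persistence(runs):
    """Fraction of runs in which the most frequent hallucinated name recurs.

    `runs` is one recommendation list per run, e.g. the packages a model
    suggested each of the times the same "how to" question was asked.
    """
    counts = Counter()
    for run in runs:
        for name in set(hallucinated(run)):  # count each name once per run
            counts[name] += 1
    if not counts:
        return 0.0
    return counts.most_common(1)[0][1] / len(runs)

# Made-up data: the model invents "huggingface-cli2" in 3 of 4 runs.
runs = [
    ["requests", "huggingface-cli2"],
    ["huggingface-cli2", "numpy"],
    ["flask"],
    ["huggingface-cli2"],
]
print(persistence(runs))  # 0.75
```

A name that recurs at a rate like this is exactly what makes the attack workable: the attacker registers the persistent fake name on the real registry and waits for users to follow the model's advice.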

remixtures, to ai Portuguese
@remixtures@tldr.nettime.org avatar

: "Most general news consumption is not driven by a desire to seek specific information. People actively go to news sites for broad-spectrum updates on the world (“What’s going on today, local newspaper?”) or just to fill a content-sized hole in their day (“Give me something interesting to read, Washington Post”). And for news consumption that doesn’t start at a news site — which is to say, most news consumption — that generally starts with an answer, not a question, Jeopardy-style. (A headline on a social feed tells you about something that’s happened — and it’s almost always something that you weren’t explicitly seeking information about five seconds earlier.)

LLMs are, in contrast, overwhelmingly about specific information seeking. They share that characteristic with Google and other search engines, which are powered by specific user intent. (Things like “car insurance quote” and “cheap flights.”)

What I’m saying is that bot-offering news orgs will need to find ways to bridge that divide. It’s easier to imagine with a financial outlet like the FT or Bloomberg, where specific information seeking aligns better with high-end business users. But even for an outlet as high-end as The New York Times, it’s not obvious what use cases an “Ask NYT” chatbot would fulfill. News-org-as-all-knowing-oracle will require some philosophical shifts from news-org-as-regular-producer-of-stories. (For example, imagine an AI that could generate an on-the-fly backgrounder whenever a reader sees a person or concept they don’t recognize in a story. That sort of “Who’s this Ursula von der Leyen person?” question is the sort of specific information request that could be met contextually.)" https://www.niemanlab.org/2024/03/the-financial-times-is-ready-for-its-ai-to-answer-your-questions-well-some-of-them/

kubikpixel, to ai
@kubikpixel@chaos.social avatar

Dark Visitors - A List of Known AI Agents on the Internet

Insight into the hidden ecosystem of autonomous chatbots and data scrapers crawling across the web. Protect your website from unwanted AI agent access.

https://darkvisitors.com

flipsideza, to ai
@flipsideza@fosstodon.org avatar

If you don't want those bots crawling your website/blog... time to add a bunch of Disallow: / entries to your robots.txt!!

https://darkvisitors.com/
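As a minimal sketch, opting out of a few widely documented AI crawlers looks like this in robots.txt (GPTBot, CCBot, and Google-Extended are published crawler user-agent tokens, but the list changes; check Dark Visitors for the current set, and note that compliance is voluntary on the crawler's part):

```text
# Block OpenAI's crawler
User-agent: GPTBot
Disallow: /

# Block Common Crawl (a frequent AI training source)
User-agent: CCBot
Disallow: /

# Opt out of Google's AI training crawler
User-agent: Google-Extended
Disallow: /
```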

CriticalAI, to random

GREAT to see so many educators taking an active stance on and chucking the passivity. 🧠
WOOT.
's upcoming special issue should add to this emerging conversation.
But today I look at
@lfurze
's most recent blogpost: https://leonfurze.com/2024/03/21/artificial-intelligence-and-teacher-workload-can-ai-actually-save-educators-time/ /1


remixtures, to ai Portuguese
@remixtures@tldr.nettime.org avatar

: "The Financial Times has a new generative AI chatbot called Ask FT that can answer questions its subscribers ask. Similar to generalized AI bots (like ChatGPT, Copilot, or Gemini), users can expect a curated natural language answer to whatever they want to know — but with answers derived from the outlet’s decades of published information rather than sources that are harder to explain or are subject to ongoing legal action. So don’t expect it to give you an answer for the best recipe for fettuccine alfredo.
(...)
It’s available to a few hundred paid subscribers in the FT Professional tier, which is geared toward business professionals and institutions. Ask FT is currently powered by Claude, the large language model (LLM) developed by Anthropic, but that could change. In an interview with The Verge, FT chief product officer Lindsey Jayne says the outlet is “approaching this as ‘model agnostic’ and seeing which one meets our needs best.”"

https://www.theverge.com/2024/3/23/24106296/ask-ft-generative-ai-chatbot-answers-tool

remixtures, to ai Portuguese
@remixtures@tldr.nettime.org avatar

That's the last straw!! MegaLoL!! ->

: "This new review, led by William Agnew, who studies AI ethics and computer vision at Carnegie Mellon University, cites 13 technical reports or research articles and three commercial products; all of them replace or propose replacing human participants with LLMs in studies on topics including human behavior and psychology, marketing research or AI development. In practice, this would involve study authors posing questions meant for humans to LLMs instead and asking them for their “thoughts” on, or responses to, various prompts.

One preprint, which won a best paper prize at CHI last year, tested whether OpenAI’s earlier LLM GPT-3 could generate humanlike responses in a qualitative study about experiencing video games as art. The scientists asked the LLM to produce responses that could take the place of answers written by humans to questions such as “Did you ever experience a digital game as art? Think of ‘art’ in any way that makes sense to you.” Those responses were then shown to a group of participants, who judged them as more humanlike than those actually written by humans."

https://www.scientificamerican.com/article/can-ai-replace-human-research-participants-these-scientists-see-risks/

kc, to random

I'm technically about to dox myself, but yolo:

Hello! I'm now independent and part of a cooperative of other independents, available to work with you on your Customer Service/Support/Experience/Success needs!

I'm not just working with but also , , and more.

If you need a little extra (dev/HR/etc), our coop might even be able to provide that too.

Contact via DM or Malt https://www.malt.uk/profile/kevincostelloe1 to discuss a project.

apti, to random

We published the third installment in our open educational resources, in Romanian, about and the hidden dangers that lie in our interactions with these algorithms. The dangers are amplified by our growing sense of loneliness and isolation.

Everything is licensed CC BY.

The article: https://apti.ro/inteligenta-artificiala-iti-poate-fi-chatbotul-prieten
Audio version: https://www.mixcloud.com/APTICat/oer-apti-format-audio-iti-poate-fi-chatbotul-prieten/
Slides: https://apti.ro/inteligenta-artificiala-iti-poate-fi-chatbotul-prieten-slides
Memes: https://apti.ro/inteligenta-artificiala-iti-poate-fi-chatbotul-prieten-meme
Infographic: https://apti.ro/inteligenta-artificiala-iti-poate-fi-chatbotul-prieten-infografic

remixtures, to ai Portuguese
@remixtures@tldr.nettime.org avatar

: ""With AI, it's this big feeding frenzy for data, and these companies are just gathering up any personal data they can find on the internet," George Washington University law professor Daniel J. Solove told Axios.

The risks go far beyond just the disclosure of discrete pieces of private information, argues Timothy K. Giordano, partner at Clarkson Law Firm, which has brought a number of privacy and copyright suits against generative AI companies.

Between the lines: While AI is creating new scenarios, Solove points out that many of these privacy issues aren't new.

"A lot of the AI problems are exacerbations of existing problems that law has not dealt with well," Solove told Axios, pointing to the lack of federal online privacy protections and the flaws in the state laws that do exist.

"If I had to grade them, they would be like D's and F's," Solove said. "They are very weak."

The big picture: Generative AI's unique capabilities raise bigger concerns than the common aggregation of personal information sold and distributed by data brokers." https://www.axios.com/2024/03/14/generative-ai-privacy-problem-chatgpt-openai

ianRobinson, to ai
@ianRobinson@mastodon.social avatar

Listening to Black Box (Prologue: The collision): https://flex.acast.com/audio.guim.co.uk/2024/02/28-64796-gnl.bb.20240229.ms.prologue.mp3

The beginning of a new series that explores seven stories and the thread that ties them together: artificial intelligence. In this prologue, Hannah (not her real name) has met Noah and he has changed her life for the better. So why does she have concerns about him?

remixtures, to journalism Portuguese
@remixtures@tldr.nettime.org avatar

: "The NICAR exercise left me with concerns about using generative AI tools for the precise work of data journalism. The fact that a tool as powerful as ChatGPT can’t produce a “receipt” of exactly how it knows something goes against everything we are trained to do as journalists. Also I worry about small, understaffed newsrooms relying upon these tools too much as the news industry struggles with layoffs and closures. And when there is a lack of guidance from newsroom leadership regarding the use of these tools, it could lead to errors and inaccuracies.

Thankfully, many newsrooms have started to address some of these concerns by drafting AI policies to help their journalists and their readers understand how they plan on using AI in their work.

The Markup has followed the lead of other news organizations, and last week we updated our ethics policy with a section detailing our rules for any use of AI in our work. In summary, it says:

  • We will not publish stories or artwork created by AI (unless it is part of a story about AI)
  • We will always label or disclose its use
  • We will always rigorously check our work, and this certainly applies to anything generated by AI
  • Going forward we will evaluate the security, privacy and ethical considerations of any new AI tools"

https://themarkup.org/hello-world/2024/03/16/i-used-chatgpt-as-a-reporting-assistant-it-didnt-go-well

remixtures, to ai Portuguese
@remixtures@tldr.nettime.org avatar

: "Researchers have discovered a new way to hack AI assistants that uses a surprisingly old-school method: ASCII art. It turns out that chat-based large language models such as GPT-4 get so distracted trying to process these representations that they forget to enforce rules blocking harmful responses, such as those providing instructions for building bombs.

ASCII art became popular in the 1970s, when the limitations of computers and printers prevented them from displaying images. As a result, users depicted images by carefully choosing and arranging printable characters defined by the American Standard Code for Information Interchange, more widely known as ASCII. The explosion of bulletin board systems in the 1980s and 1990s further popularized the format." https://arstechnica.com/security/2024/03/researchers-use-ascii-art-to-elicit-harmful-responses-from-5-major-ai-chatbots/

appassionato, to books
@appassionato@mastodon.social avatar

The Book of Chatbots: From ELIZA to ChatGPT by Robert Ciesla, 2024

The Book of Chatbots is both a retrospective and a review of current artificial intelligence-driven conversational solutions. It explores their appeal to businesses and individuals as well as their greater social aspects, including the impact on academia. The book explains all relevant concepts for readers with no previous knowledge in these topics.

@bookstodon



Jigsaw_You, to ai
@Jigsaw_You@mastodon.nl avatar

“We’re 15 months into the revolution and yet nearly all that the “intelligent” do is the same as on day 1: summarise documents, provide simple suggestions, write or rewrite documents, answer questions, write simple code”

https://vulcanpost-com.cdn.ampproject.org/c/s/vulcanpost.com/854414/progress-in-ai-is-slowing-down-not-speeding-up-history-teaches-us-why/amp/

alexanderhay, to ai
@alexanderhay@mastodon.social avatar

In more exciting #AI #News, it turns out systematic #Bias is an even bigger problem for #Chatbots than first thought!

"AI #Chatbot models ‘think’ in #English even when using other languages

"When answering questions posed in #Chinese, #French, #German or #Russian, #LargeLanguageModels seem to process the queries in English, which could create cultural issues..."

https://www.newscientist.com/article/2420973-ai-chatbot-models-think-in-english-even-when-using-other-languages/

#LLM #LLMs #TechBros

remixtures, to ai Portuguese
@remixtures@tldr.nettime.org avatar

- 33 freely available for testing here, including Claude 3 and : https://chat.lmsys.org/

"Ask any question to two anonymous models (e.g., ChatGPT, Claude, Llama) and vote for the better one!
You can continue chatting until you identify a winner.
Vote won't be counted if model identity is revealed during conversation."

AdamBishop, to ai
@AdamBishop@floss.social avatar

😂 ha ha:

Researchers jailbreak AI chatbots with ASCII art -

  • ArtPrompt bypasses safety measures to unlock malicious queries

| Tom's Hardware

https://www.tomshardware.com/tech-industry/artificial-intelligence/researchers-jailbreak-ai-chatbots-with-ascii-art-artprompt-bypasses-safety-measures-to-unlock-malicious-queries

bespacific, to ai
@bespacific@newsie.social avatar

Gartner Predicts Search Engine Volume Will Drop 25% by 2026, Due to AI Chatbots and Other Virtual Agents. Tech Tactics to be Discussed at the Gartner Tech Growth & Innovation Conference, March 20-21 in Grapevine, Texas https://www.gartner.com/en/newsroom/press-releases/2024-02-19-gartner-predicts-search-engine-volume-will-drop-25-percent-by-2026-due-to-ai-chatbots-and-other-virtual-agents

t3n, to ChatGPT German
@t3n@t3n.social avatar

How Star Trek turns AI systems into math geniuses

fail now and then at even the simplest calculations. Researchers have now found a way to give AI systems a leg up in math – and is part of the solution.

You can read exactly what's behind it here in the article:
https://t3n.de/news/star-trek-ki-mathe-genie-1611782/?utm_source=mastodon&utm_medium=referral

mattotcha, to ai
@mattotcha@mastodon.social avatar