kellogh, to random
@kellogh@hachyderm.io avatar

Does anyone have experience + feedback with using LLMs on your technical docs as a customer-facing help chatbot? How effective is it? What are the caveats?
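
For context, most of these bots are retrieval-augmented rather than fine-tuned: the docs are embedded once, the closest chunks are pulled per question, and the model is told to answer only from them. A minimal sketch, where the model names, the chunking, and the answer helper are my assumptions, not anything from this thread:

```python
# Rough shape of a docs-grounded help bot: embed the docs once, retrieve the
# closest chunks for each question, and tell the model to answer only from them.
# Model names and chunking are placeholders.
import numpy as np
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set
doc_chunks = ["How to reset your password: ...", "Billing FAQ: ..."]  # your docs, pre-split

def embed(texts):
    resp = client.embeddings.create(model="text-embedding-3-small", input=texts)
    return np.array([d.embedding for d in resp.data])

chunk_vectors = embed(doc_chunks)

def answer(question: str) -> str:
    qv = embed([question])[0]
    # cosine similarity against every chunk, keep the top 3 as context
    sims = chunk_vectors @ qv / (np.linalg.norm(chunk_vectors, axis=1) * np.linalg.norm(qv))
    context = "\n\n".join(doc_chunks[i] for i in np.argsort(sims)[-3:])
    resp = client.chat.completions.create(
        model="gpt-4o-mini",
        messages=[
            {"role": "system",
             "content": "Answer only from the provided docs. If they don't cover it, say so."},
            {"role": "user", "content": f"Docs:\n{context}\n\nQuestion: {question}"},
        ],
    )
    return resp.choices[0].message.content
```

The usual caveats are bad retrievals and confident answers when the docs are silent, so it helps to log which chunks each answer was built from.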

hirad, to llm
@hirad@hirad.it avatar

I downloaded and tried Dolphin 2.7 Mixtral 8x7b today. So far, it's performing well. It's not too slow, averaging around 7.5 tokens per second. Tomorrow, I plan to test its performance on different topics. Perhaps I should also compare it to the other models I've tried.
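
For reference, a throughput number like that can be measured directly; a rough sketch assuming llama-cpp-python, with the model file name as a placeholder:

```python
# Time one generation and divide by the number of completion tokens reported.
import time
from llama_cpp import Llama

llm = Llama(model_path="./dolphin-2.7-mixtral-8x7b.Q4_K_M.gguf", n_ctx=4096)

start = time.perf_counter()
out = llm("Explain TCP slow start in two sentences.", max_tokens=256)
elapsed = time.perf_counter() - start

generated = out["usage"]["completion_tokens"]
print(f"{generated} tokens in {elapsed:.1f}s -> {generated / elapsed:.1f} tokens/s")
```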

mjgardner, to ChatGPT
@mjgardner@social.sdf.org avatar

“Users speak of ChatGPT as ‘hallucinating’ wrong answers — LLMs make stuff up and present it as fact when they don’t know the answer. But any answers that happen to be correct were ‘hallucinated’ in the same way.” — @davidgerard, https://davidgerard.co.uk/blockchain/2023/06/03/crypto-collapse-get-in-loser-were-pivoting-to-ai/

kellogh, to llm
@kellogh@hachyderm.io avatar

Does anyone out there know of a vector DB that runs in-process with the application, like SQLite but for vector stores?
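
To illustrate what "in-process" means here: the vectors just live in your application's memory and search is a brute-force similarity scan. A toy sketch, not a library recommendation; embedded stores add persistence and an approximate-nearest-neighbour index on top of the same idea:

```python
# Minimal in-process vector "store": vectors sit in the app's own memory and
# search is an exact cosine-similarity scan. Fine for small collections.
import numpy as np

class TinyVectorStore:
    def __init__(self, dim: int):
        self.vectors = np.empty((0, dim), dtype=np.float32)
        self.payloads = []

    def add(self, vector, payload):
        v = np.asarray(vector, dtype=np.float32)
        self.vectors = np.vstack([self.vectors, v / np.linalg.norm(v)])
        self.payloads.append(payload)

    def search(self, query, k=5):
        q = np.asarray(query, dtype=np.float32)
        sims = self.vectors @ (q / np.linalg.norm(q))
        top = np.argsort(sims)[::-1][:k]
        return [(self.payloads[i], float(sims[i])) for i in top]
```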

ianRobinson, to llm
@ianRobinson@mastodon.social avatar

Is anyone else in the “trough of disillusionment” part of the Generative AI hype cycle yet? The stuff they output is not good. It takes longer to fix it than to write articles from scratch, IMHO.

Maybe it's a function of caring about the results.

https://fortune.com/2023/08/01/can-ai-chatgpt-hallucinations-be-fixed-experts-doubt-altman-openai/

schizanon, to ChatGPT
@schizanon@mas.to avatar

It seems to me that the main problem with ChatGPT and other LLMs is context. Each new conversation with them is a clean slate, and the longer a conversation goes on, the slower and more confused they seem to get. I presume taking the context into account means extra processing time and storage on their part, but moreover they just don't provide a very good interface for communicating with the model about a long-lived project. This is critical for real work.
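
For what it's worth, that matches how the chat APIs behave: nothing persists server-side between calls, so the client re-sends the whole conversation every turn and the prompt keeps growing. A rough sketch assuming the OpenAI Python SDK, with a placeholder model name:

```python
# Every call re-sends the entire history; there is no server-side memory of
# the project, so long sessions get slower and eventually hit the context limit.
from openai import OpenAI

client = OpenAI()
history = [{"role": "system", "content": "You are helping with a long-lived project."}]

def ask(user_msg: str) -> str:
    history.append({"role": "user", "content": user_msg})
    resp = client.chat.completions.create(model="gpt-4o-mini", messages=history)
    reply = resp.choices[0].message.content
    history.append({"role": "assistant", "content": reply})
    return reply  # trimming or summarising the growing history is left to the caller
```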

kegill, to ukteachers
@kegill@mastodon.social avatar

The question isn't "should we" integrate generative AI into the classroom ... the question is "how".

Using algorithmic tools (“AI”) like ChatGPT and MidJourney in the classroom: a guide for teachers

https://wiredpen.com/2023/09/20/using-algorithmic-tools-ai-like-chatgpt-and-midjourney-in-the-classroom-a-guide-for-teachers/

gabrielesvelto, to llm
@gabrielesvelto@fosstodon.org avatar

Am I the only one to think that this article (and the cited research) is basically stating the obvious?

https://arstechnica.com/information-technology/2023/09/ai-language-models-can-exceed-png-and-flac-in-lossless-compression-says-study/

An LLM has access to an enormous amount of data - a dictionary if you will - which makes it absolutely obvious that it can beat existing compression algorithms. If you have access to an enormous dictionary on the decompression side you can achieve much better compression by looking up large chunks of data there instead of storing them.
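
The shared-dictionary point can be shown with plain zlib, which already compresses better when both sides hold a preset dictionary; the paper itself codes against model probabilities, so this is only the intuition, with made-up sample strings:

```python
# If compressor and decompressor share a dictionary, repeated material is
# referenced rather than stored; an LLM's weights act as a huge learned
# "dictionary" in the same sense.
import zlib

dictionary = b"the quick brown fox jumps over the lazy dog. " * 20
message = b"the quick brown fox jumps over the lazy dog, twice."

plain = zlib.compress(message, 9)

c = zlib.compressobj(level=9, zdict=dictionary)
with_dict = c.compress(message) + c.flush()

# the decompressor needs the same dictionary
assert zlib.decompressobj(zdict=dictionary).decompress(with_dict) == message

print(len(message), len(plain), len(with_dict))  # the dictionary version is much smaller
```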

senficon, to llm
@senficon@ohai.social avatar

I’m always baffled by the suggestion that an LLM could write my articles for me. That only works if plenty of people have written a very similar article already, which raises the question of why I should write it in the first place.

shermozle, to llm
@shermozle@aus.social avatar

Spicy Autocomplete is such a great putdown for LLMs.

Chapeau @pluralistic

(Context: https://pluralistic.net/2023/11/27/10-types-of-people/#taking-up-a-lot-of-space )

fullfathomfive, to ai
@fullfathomfive@aus.social avatar

A lot of people have responded to my Duolingo post with things like "Never work for free," and "I would never donate my time to a corporation.” Which I completely agree with.

But here's the thing about Duolingo and all of the other companies like it. You already work for them. You just don’t know it.

On Duo, I thought I was learning a language. Participating in the community by helping other learners and building resources seemed like part of the process.

Luis von Ahn, the CEO of Duolingo, was one of the creators of CAPTCHA, which was originally supposed to stop bot spam by getting a human to do a task a machine couldn’t do. In 2009 Google bought reCAPTCHA and used it to get humans to proofread the books they were digitising (without permission from the authors of those books, btw). So in order to access much of the web, people had to work for Google. Most of them didn’t know they were working for Google - they thought they were visiting websites.

This is how they get you. They make it seem like they’re giving you something valuable (access to a website, tools to learn a language), while they’re actually taking something from you (your skills, your time, your knowledge, your labour). They make you think they’re helping you, but really you're helping them (and they’re serving you ads while you do it).

Maybe if people had known what CAPTCHA was really for they would’ve done it anyway. Maybe I still would’ve done all that work for Duo if I’d known it would one day disappear from the web and become training data for an LLM ...

... Or maybe I would’ve proofread books for Project Gutenberg, or donated my time to citizen science projects, or worked on an accessibility app, or a million other things which genuinely improve people’s lives and the quality of the web. I didn’t get an informed choice. I got lured into helping a tech company become profitable, while they made the internet a shittier place to be.

How many things are you doing on the web every day which are actually hidden work for tech companies? Probably dozens, or hundreds. We all are. That’s why this is so insidious. It’s everywhere. The tech industry is built on free labour. (And not just free – we often end up paying for the end results of our own work, delivered back to us in garbled, enshittified form).

And it’s a problem that’s only getting worse with AI. Is that thoughtful answer you gave someone on reddit or Mastodon something that will stay on the web for years, helping people in future with the same problem? Or is it just grist for the LLMs?

Do you really get a choice about it?

ai6yr, to ai

VentureBeat: "ChatGPT goes ‘off the rails’ with gibberish answers" https://venturebeat.com/ai/chatgpt-goes-off-the-rails-with-gibberish-answers/

danilo, to llm
@danilo@hachyderm.io avatar

AI haters can feel how they need to feel, but I spent the day prototyping the most complex web thing I have ever built

and despite the brain fog, limited energy and low motivation endemic to this time of year...

it's WORKING

One of my biggest technical questions is already de-risked, and from this foundation I can iterate and clarify the product vision

An LLM is a neuroprosthesis.

ianRobinson, to Podcast
@ianRobinson@mastodon.social avatar

Listening to The Ezra Klein Show (What if Dario Amodei Is Right About A.I.?): https://nytimes.com/2024/04/12/opinion/ezra-klein-podcast-dario-amodei.html

Back in 2018, Dario Amodei worked at OpenAI. And looking at one of its first A.I. models, he wondered: What would happen as you fed an artificial intelligence more and more data? He and his colleagues decided to study it, and they found that the A.I. didn’t just get better with more data; it got better exponentially.

forteller, to llm
@forteller@tutoteket.no avatar

Does anyone know of an LLM that I can easily run locally on my Linux PC to do simple text cleanup: removing unnecessary line breaks, deleting a word repeated twice in a row, capitalizing names, etc.?
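
One hedged sketch of doing that against a locally running model via Ollama's HTTP API; the model name is whatever you have pulled, and none of this is a specific recommendation:

```python
# Send the messy text plus cleanup instructions to a local model served by
# Ollama (default port 11434) and return the cleaned version.
import requests

def cleanup(text: str, model: str = "llama3") -> str:
    prompt = (
        "Clean up the following text: join lines broken mid-sentence, "
        "remove words repeated twice in a row, and capitalise proper names. "
        "Return only the cleaned text.\n\n" + text
    )
    resp = requests.post(
        "http://localhost:11434/api/generate",
        json={"model": model, "prompt": prompt, "stream": False},
        timeout=300,
    )
    return resp.json()["response"]
```

The line-break and duplicate-word parts can also be done deterministically with a couple of regexes if you'd rather not trust a model with the edits.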

luciedigitalni, to random
@luciedigitalni@aus.social avatar

"But first, it’s helpful to think about the purpose the utopian hallucinations about AI are serving. What work are these benevolent stories doing in the culture as we encounter these strange new tools? Here is one hypothesis: they are the powerful and enticing cover stories for what may turn out to be the largest and most consequential theft in human history. Because what we are witnessing is the wealthiest companies in history (Microsoft, Apple, Google, Meta, Amazon …) unilaterally seizing the sum total of human knowledge that exists in digital, scrapable form and walling it off inside proprietary products, many of which will take direct aim at the humans whose lifetime of labor trained the machines without giving permission or consent."

https://www.theguardian.com/commentisfree/2023/may/08/ai-machines-hallucinating-naomi-klein

matthewskelton, to llm
@matthewskelton@mastodon.social avatar

"the real-world use case for large language models is overwhelmingly to generate content for spamming"

Excellent article by Amy Castor

https://amycastor.com/2023/09/12/pivot-to-ai-pay-no-attention-to-the-man-behind-the-curtain/

cigitalgem, to ML
@cigitalgem@sigmoid.social avatar

Have a look at the USENIX ;login: interview featuring me and the BIML LLM work.

https://berryvilleiml.com/2024/03/15/rik-farrow-interviews-mcgraw-for-login/

deirdresm, to ai
@deirdresm@hachyderm.io avatar

This is the single best explanation (long!) I've read about why LLMs are a con. Great piece from @baldur.

https://softwarecrisis.dev/letters/llmentalist/

chikim, to llm
@chikim@mastodon.social avatar

Mark Zuckerberg on Llama 3: Apparently Meta stopped training Llama-3-70b before convergence and decided to move on to Llama 4, meaning they could have kept training and made it smarter! A multimodal and multilingual Llama-3-70b and a bigger context window are also coming. https://youtu.be/bc6uFV9CJGg

ai6yr, to ai

Besides "AI is cool", Adobe's $4.99/month AI for Adobe Acrobat does what for me, exactly?

KathyReid, to ai
@KathyReid@aus.social avatar

You know the Rabbit R1 device was created by a bunch of male software nerds, because they named an electronic device "rabbit", non-ironically.

🐰
