happyborg, to Futurology
@happyborg@fosstodon.org avatar

I mean, wut?

User: make a list of things I might find you useful for

Llama: Sure, I'd be happy to help you with that. Please provide me with a list of things or tasks you would like assistance with and we can work together on them.

It took nearly a minute of hammering my eight CPU cores to come up with that. 🤷‍♂️

I think we're safe for the foreseeable future. Don't listen to the twerps who are just shilling.

thomasapowell, to ChatGPT
@thomasapowell@fosstodon.org avatar

Filter approaches couldn't secure shit web apps from injections 100%, so why do people think they'd work here!? Wired has a very accessible article about just this: https://www.wired.com/story/ai-adversarial-attacks/

Toxicity is a training failure. The ugly data in the model simply can't be band-aided over with a filter, full stop. The REAL solution is to not have toxic training data in the model in the first place, which means no quick big-web-scrape training method, but rather quite a lot of hard work. Why not try that?

metin, to ai
@metin@graphics.social avatar

When generative AI is trained with AI-generated data, it becomes degenerat(iv)e AI.

kellogh, to ChatGPT
@kellogh@hachyderm.io avatar

this isn’t rocket science. if you put information in, it will come out. an exploit might not exist today, but it’s only a matter of time before it’s common. training data, prompts, RAG-injected info…all of it needs to follow basic security principles. https://not-just-memorization.github.io/extracting-training-data-from-chatgpt.html

bortzmeyer, to llm French
@bortzmeyer@mastodon.gougere.fr avatar

An IETF side meeting (so not an official meeting): "Discussion on the potential of LLMs in the networking domain" https://github.com/wangtz19/ietf-side-meetings

Judging from the badges, it seems especially popular in China?

ainmosni, to programming
@ainmosni@berlin.social avatar

So, my trial just expired, and while it did cut down on some typing, it also made me feel like the quality of my code was lower, and of course it felt dirty to use it considering that it's a license whitewashing machine.

I don't think I'll be paying for it; the results aren't worth it.

kjr, to llm
@kjr@babka.social avatar

OK... a nice list of LLMs...
Not really correct, though: with a couple of exceptions, the models listed as open source are not really open source but open weight; the code used to produce them and the training materials are not public.
Especially funny is how Mistral AI sometimes reacts against the claims that they are open source (sometimes even aggressively).

"The best large language models (LLMs) in 2024"

https://zapier.com/blog/best-llm/

jake4480, (edited ) to tumblr
@jake4480@c.im avatar

Yeah, you're really gonna see which companies are just gonna allow the AI to scrape all their stuff now. I'm a copyleft/creative commons kinda guy. But if you have art that you don't want stolen, the answer is simple.

MAKE YOUR OWN WEBSITE and put your art there (edit: and use that Glaze type of stuff on your art that wrecks AI, just to be sure)! Neocities is SO easy to set up! Or your own domain and hosting via porkbun, GoDaddy (non-WordPress) - anything at all other than proprietary/walled stuff!

tayledras, to ai
@tayledras@mastodon.social avatar

I completely agree with this.

I've seen several respected luminaries argue that LLMs are not "true AI" or "strong AI" because they're based on large training sets and predictive behavior. They argue that humans and animals are not taught from such large language models or datasets.

What are education and experience, if not Large Learning Models based on the teaching of schools, universities, and books?

https://www.freethink.com/robots-ai/embodied-ai

kjr, to llm
@kjr@babka.social avatar

It is difficult to understand how Meta, a company that handles multilingual big data, used almost only English data to train Llama 2: only 2% non-English data, and 8.3% of unknown-language or non-language data (such as code).
Even for internal use at the company, that doesn't meet their needs.

Meta Warns Its Latest Large Language Model ‘May Not Be Suitable’ for Non-English Use

https://slator.com/meta-warns-large-language-model-may-not-be-suitable-non-english-use/

upol, (edited ) to ai
@upol@hci.social avatar

🎯 Explainable AI suffers from an epidemic. I call it Explainability Washing.

💡Think of it as window dressing—techniques, tools, or processes created to provide the illusion of explainability but not delivering it

Let's use this hyped example from OpenAI. The title is sensational: "Language models can explain neurons in language models."

But is that the case? Let's dig in. 👇

https://openai.com/research/language-models-can-explain-neurons-in-language-models


1/n

troed, to llm
@troed@ioc.exchange avatar

I see lots of posts here on Mastodon where people state that today's "AI" (LLMs) have no use, waste energy and are just doing copyright infringement on a vast scale.

I don't get it.

I just put together "summarize.sh" - a bit of glue between some open source and self-hosted LLMs. It takes a Youtube URL as its only parameter, and outputs a summary in text of the important parts of the spoken words in the video.

That is, I run yt-dlp, Whisper, and finally Mixtral 8x7b. And I no longer need to sit through someone yapping on for a few minutes to tell me what should've been a short blog post.

Example output from a 4 minute video:

"The text describes a video tutorial on how to reset a Corsair keyboard when it's not working properly. The keyboard in question has three white flashing lights at the top and is experiencing issues with its RGB lighting and key input. To reset the keyboard, the user should unplug the USB cables from the computer, hold down the escape key, and then plug the USB cables back into the computer while still holding down the escape key. After releasing the escape key, the keyboard's lights should flash, indicating that it has been reset. The tutorial notes that this method has worked for other Corsair keyboards as well."

How is this not a great thing to have?
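The post doesn't include the script itself, but the pipeline it describes could be sketched roughly like this in Python (tool names and flags are assumptions; a local Mixtral is assumed to be reachable via the `ollama` CLI):

```python
import subprocess

def download_cmd(url, out="audio.m4a"):
    """yt-dlp command to grab only the audio track (flags are assumptions)."""
    return ["yt-dlp", "-x", "--audio-format", "m4a", "-o", out, url]

def transcribe_cmd(audio, model="base"):
    """whisper command to produce a plain-text transcript."""
    return ["whisper", audio, "--model", model, "--output_format", "txt"]

def summarize_prompt(transcript):
    """Prompt asking the local LLM for a summary of the spoken words."""
    return ("Summarize the important parts of this video transcript:\n\n"
            + transcript)

def summarize(url):
    """Glue: download audio, transcribe it, then summarize with a local LLM."""
    subprocess.run(download_cmd(url), check=True)
    subprocess.run(transcribe_cmd("audio.m4a"), check=True)
    with open("audio.txt") as f:
        transcript = f.read()
    # Assumes a self-hosted Mixtral served by the `ollama` CLI.
    result = subprocess.run(["ollama", "run", "mixtral:8x7b"],
                            input=summarize_prompt(transcript),
                            capture_output=True, text=True, check=True)
    return result.stdout
```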

ai6yr, to ai

There's absolutely no way this won't go wrong, somehow. Full speed ahead! /s

ianRobinson, to llm
@ianRobinson@mastodon.social avatar

Google Bard now supports image upload and can describe the images. The alt text in the attached image was generated by Google Bard via a "describe this image" tab. Of course, for real-world use I'd remove the stuff about what French toast and Coca-Cola are, and fix the errors (the white tablecloth is wrong).

This functionality will be useful.

matthewskelton, to ai
@matthewskelton@mastodon.social avatar

🔮 Prediction: "small" language models with what we might call a "micro corpus" of text will be more useful to most organisations than a general purpose LLM like ChatGPT.

GenAI models trained on a small, focused body of materials so they are highly domain specific, all provided as a service, with the model private to each organisation.

kristenhg, to ai
@kristenhg@mastodon.social avatar

One of my former (and very long-term) freelance gigs, How Stuff Works, has replaced writers with ChatGPT-generated content and also laid off its excellent editorial staff.

It seems that going forward, when articles I wrote are updated by ChatGPT, my byline will still appear at the top of the article with a note at the bottom of the article saying that AI was used. So it will look as if I wrote the article using AI.

To be clear: I did not write articles using ChatGPT.

davidshq, to Quotes
@davidshq@hachyderm.io avatar

PKD writing presciently about our current information age?

"It is like information theory; it is noise driving out signal. But it is noise posing as signal so you do not even recognize it as noise...If you can float enough disinformation into circulation you will totally abolish everyone’s contact with reality, probably your own included."

  • Philip K. Dick, The Transmigration of Timothy Archer

kellogh, to generativeAI
@kellogh@hachyderm.io avatar

i wish there were some generic solution to DAGs. langchain is a heavily OOP API. it's great if you want to swap component B for component A, but you're pretty much on your own if you want programs to flow differently. e.g. chat is treated entirely differently than question-answer, and creating a new flow means getting neck-deep in internal langchain APIs. i wish the components were loosely coupled so i could quickly rearrange the flow
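For illustration, the loosely coupled style the post wishes for might look like plain functions composed directly, so a new flow is just a new composition (a hypothetical sketch, not langchain's actual API; the retrieval and LLM are stand-ins):

```python
def retrieve(question, docs):
    """Naive keyword retrieval stand-in for a real retriever component."""
    words = question.lower().split()
    return [d for d in docs if any(w in d.lower() for w in words)]

def build_prompt(question, context):
    """Assemble a prompt from retrieved context."""
    return f"Context: {' '.join(context)}\nQuestion: {question}"

def answer(prompt, llm=lambda p: "stub answer"):
    """Call the model; `llm` is a stand-in callable."""
    return llm(prompt)

# question-answer flow: compose the pieces one way...
def qa(question, docs):
    return answer(build_prompt(question, retrieve(question, docs)))

# ...chat flow reuses the same pieces, just prepends history.
def chat(history, question, docs):
    prompt = "\n".join(history) + "\n" + build_prompt(
        question, retrieve(question, docs))
    return answer(prompt)
```

Because each step is an ordinary function, rearranging the DAG means rewriting one composition rather than subclassing framework internals.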

bortzmeyer, to llm French
@bortzmeyer@mastodon.gougere.fr avatar

I'm downloading a large language model, can you believe it?

schizanon, (edited ) to ai
@schizanon@mas.to avatar

Netlify's new tool allows you to create a new website using ChatGPT and have it automatically deployed to Netlify.

"Simply describe the website you want to build, and not only will it author the markup, scripts, and style sheets, it will deploy it to Netlify for you too!"

https://www.netlify.com/blog/introducing-netlify-drop-chatgpt-plugin/

BenjaminHan, to llm
@BenjaminHan@sigmoid.social avatar

1/

If a powerful #LLM is told that “Daphne Barrington is the director of A Journey Through Time”, it would surely be able to answer the question “Who is the director of A Journey Through Time?”, right? Well, according to a recent paper [1], not quite (screenshot).

#NLProc #NLP #KnowledgeGraph #Reasoning #Papers

TNLNYC, to ChatGPT
@TNLNYC@mastodon.social avatar

Hofstadter is right:

https://www.theatlantic.com/ideas/archive/2023/07/godel-escher-bach-geb-ai/674589/

Current AI is just autocomplete on steroids.

ChatGPT and other models do not generate anything other than a guess at what the next word in a paragraph ought to be. And no original ideas.

gael, to random French
@gael@mastodon.social avatar

From regulation to Pizzas 🍕: the agent experiment 🤖

I did a quick experiment having an AI chat with itself as Alice and Bob... 👧👦

At each step, agent Alice answers agent Bob (and vice versa) along with the reinjection of recent context.

I made the discussion start by questioning the need to regulate AI large language models.

Video excerpt below + thread 👇

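The turn-taking loop described above could be sketched like this (a minimal illustration with a stand-in `llm` callable and a hypothetical context-window size; the real experiment's prompts and model are not shown in the post):

```python
def two_agent_chat(llm, opener, turns=4, window=4):
    """Alternate Alice/Bob turns; each reply sees a window of recent context.

    `llm(speaker, context)` is a stand-in for a real model call that answers
    as `speaker`, given the reinjected recent transcript `context`.
    """
    names = ["Alice", "Bob"]
    transcript = [f"Alice: {opener}"]
    for t in range(turns):
        speaker = names[(t + 1) % 2]               # Bob replies first
        context = "\n".join(transcript[-window:])  # reinject recent context
        reply = llm(speaker, context)
        transcript.append(f"{speaker}: {reply}")
    return transcript
```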

Ruth_Mottram, to llm
@Ruth_Mottram@fediscience.org avatar

Thoughtful thread from @ct_bergstrom on how Google is getting worse, due in large part to AI-generated content.

I'm going to promote it here again (ht @anderspuck). I've been using it for several months. It's brilliant: no ads, no tracking, but you pay a small monthly subscription fee. I do not know how they plan to deal with the pollution, though. https://fediscience.org/@ct_bergstrom/111197923541993424
