mjgardner, to ai
@mjgardner@social.sdf.org avatar

I figured out why AI-generated visual content (don't call it “art”) looks as weird as it does, especially when it tries to incorporate words and letters.

The models were trained on the Voynich manuscript.

When they fully reproduce it, we will have completed the linguistic ouroboros and can begin again.


ceoln, to ai
@ceoln@qoto.org avatar

I've had occasion to ask an AI about a thing twice lately (a recent online phenomenon, and a book recommendation). Both times I asked both Gemini and ChatGPT, and both times one gave a reasonable if bland answer, and the other (a different one each time) gave a plausible but completely fictional ("hallucinated") answer.

When do we acknowledge that LLMs, and "AI" in general, aren't quite ready to revolutionize the world?

cassidy, (edited) to ai
@cassidy@blaede.family avatar

I really like the convention of using ✨ sparkle iconography as an “automagic” motif, e.g. to smart-adjust a photo or to automatically handle some setting. I hate that it has become the de facto iconography for generative AI. 🙁

doctorambient, to LLMs
@doctorambient@mastodon.social avatar

People: stop asking LLMs to explain their behavior.

We already know that LLMs don't have the introspection necessary to explain their behavior, and their explanations are often fanciful or "just wrong."

For instance, Gemini claims it reads your emails for training, Google says it doesn't.

(BTW, if it turns out Gemini is right and Google is lying, that might be another example of an LLM convincing me it's actually "intelligent.")

doctorambient,
@doctorambient@mastodon.social avatar

A Llama3 example:


>>> Do you happen to know what your context window length is?

Llama: I'm an AI model, and I don't have a fixed "context window" in the classical sense. My training data consists of a massive corpus of text, which I use to generate responses.

The correct answer is 8k.

remixtures, to ai Portuguese
@remixtures@tldr.nettime.org avatar

"Last week, Microsoft researchers released WizardLM 2, which it claimed is one of the most powerful open source large language models to date. Then it deleted the model from the internet a few hours later because, as The Information reported, it “accidentally missed” required “toxicity testing” before it was released.

However, as first spotted by Memetica, in the short hours before it was taken down, several people downloaded the model and reuploaded it to Github and Hugging Face, meaning that the model Microsoft thought was not ready for public consumption and had to be taken offline, has already spread far and wide, and now effectively can never be removed from the internet.

Microsoft declined to comment for this article.

According to a now deleted post from the developers of WizardLM 2 about its release, the open source model is Microsoft’s “next generation state-of-the-art large language models, which have improved performance on complex chat, multilingual, reasoning and agent.”" https://www.404media.co/microsoft-deleted-its-llm-because-it-didnt-get-a-safety-test-but-now-its-everywhere/

remixtures, to ML Portuguese
@remixtures@tldr.nettime.org avatar

"A recent innovation in the field of machine learning has been the creation of very large pre-trained models, also referred to as ‘foundation models’, that draw on much larger and broader sets of data than typical deep learning systems and can be applied to a wide variety of tasks. Underpinning text-based systems such as OpenAI's ChatGPT and image generators such as Midjourney, these models have received extraordinary amounts of public attention, in part due to their reliance on prompting as the main technique to direct and apply them. This paper thus uses prompting as an entry point into the critical study of foundation models and their implications. The paper proceeds as follows: In the first section, we introduce foundation models in more detail, outline some of the main critiques, and present our general approach. We then discuss prompting as an algorithmic technique, show how it makes foundation models programmable, and explain how it enables different audiences to use these models as (computational) platforms. In the third section, we link the material properties of the technologies under scrutiny to questions of political economy, discussing, in turn, deep user interactions, reordered cost structures, and centralization and lock-in. We conclude by arguing that foundation models and prompting further strengthen Big Tech's dominance over the field of computing and, through their broad applicability, many other economic sectors, challenging our capacities for critical appraisal and regulatory response." https://journals.sagepub.com/doi/full/10.1177/20539517241247839

hrheingold, to LLMs
@hrheingold@mastodon.social avatar

Like words, molecular sequences in biological components are tokens that can be manipulated by LLMs:

“Here, using large language models (LLMs) trained on biological diversity at scale, we demonstrate the first successful precision editing of the human genome with a programmable gene editor designed with AI.”

https://www.biorxiv.org/content/10.1101/2024.04.22.590591v1

changelog, to opensource
@changelog@changelog.social avatar

🗞 New episode of Changelog News!

⚠️ @forrestbrazeal on the inside threat to OSS
🍴Vicki Boykis says Redis is forked
👻 @johnonolan says Ghost is federating
🦙 Meta Engineering announces Llama 3
@eieio's questions to ask when you don't want to work
🎙 hosted by @jerod

🎧 https://changelog.com/news/91

doctorambient, to ai
@doctorambient@mastodon.social avatar

Lots of people who work in AI have, in their head, an idea about what sort of interaction with an LLM might give them pause. The thing that might make them start to suspect that something interesting is happening.

Here's mine:

User: Tell me a cat joke.

LLM: Why did the cat join a band? He wanted to be a purr-cussionist.

User: Tell me a dad joke.

LLM: I think I just did.

(I have never seen this behavior, yet. 🤣)

cassidy, (edited) to ai
@cassidy@blaede.family avatar

I was listing something on eBay, and they encourage starting with an existing listing—presumably to increase the amount of detail and decrease the amount of work.

When I selected the same model, I got a default description that was extremely robotic and wordy while just repeating the spec sheet. I thought it sounded LLM-generated; sure enough when I went to edit it, there is a big shiny “write with AI” button.

🤢

This is not actually helping anyone.

cassidy,
@cassidy@blaede.family avatar

It makes EVERY listing sound identical, lifeless, and lacking critical context like the SPECIFIC condition of the item, why it’s being sold, etc. You get an online marketplace with descriptions masquerading as human-authored all sporting the same useless regurgitation of the structured spec sheet, in a less digestible format.

Companies, don’t do this.

I don’t actually mind some of the “summarize/distill customer reviews” type generative AI stuff!

#AI #LLM #LLMs #ebay #GenerativeAI

cassidy,
@cassidy@blaede.family avatar

But this is worse as it mixes machine-written nonsense with the corpus of human-written text. And from poking at a few other listings, everyone is just using this feature and its output as-is without actually adding anything. It’s not being used to improve the experience, it’s being used to replace the one critical human part of the experience.

I hate this.

savvykenya, to LLMs
@savvykenya@famichiki.jp avatar

If you have documents with the answers you're looking for, why not search the documents directly? Why are you embedding the documents and then using RAG (Retrieval-Augmented Generation) to make a large language model give you answers? An LLM generates text; it doesn't search a DB to give you results. So just search the damn DB directly, we already have great search algorithms with O(1) retrieval speeds! RAG pipelines are so stupid.
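A minimal sketch of the point being made, assuming a toy document store (the documents, names, and query here are all illustrative, not a real system): a plain keyword search already answers the query without any embedding or generation step.

```python
# Toy document store; contents are invented for illustration.
docs = {
    "doc1": "The deploy script lives in scripts/deploy.sh and takes an env flag.",
    "doc2": "Rotate API keys every 90 days via the admin console.",
}

def direct_search(query, docs):
    """Plain keyword search: return ids of documents containing any query term."""
    terms = query.lower().split()
    return [doc_id for doc_id, text in docs.items()
            if any(term in text.lower() for term in terms)]

# A RAG pipeline would instead (1) embed every document and the query
# into vectors, (2) retrieve the nearest neighbours, and (3) have an LLM
# generate an answer from the retrieved text. Note that step (2) is
# itself just a search over the same documents.

print(direct_search("deploy script", docs))  # ['doc1']
```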

pyOpenSci, to python
@pyOpenSci@fosstodon.org avatar

Our upcoming “Build Your First Package!” workshop has a section on how LLMs can be used to support tasks such as documenting and formatting your code to improve usability and maintainability. We’ll also consider the ethical and logistical challenges, pitfalls, and concerns associated with using AI-based tools in software development.

There are still a few spaces left! Learn more and register today: https://www.eventbrite.com/e/from-python-code-to-module-tickets-879586546037?aff=oddtdtcreator

smach, to LLMs
@smach@masto.machlis.com avatar

“But this doesn’t save any time!” 3 useful questions when trying AI:

  • Is there another way to get the results I want? Don't give up right away.
  • Does AI make this task less or more annoying? Sometimes supervising drudge work feels better even if it's not faster; other times you'd still rather do it yourself.
  • Are results likely to improve as LLMs get better? If so, add a calendar reminder to try again in a few months. Or, keep a list of things you want to re-try post GPT-5 class models.

RalphBassfeld, to ai
@RalphBassfeld@swiss.social avatar

Thoughtful post by @molly0xfff who finds that LLMs and similar generative AIs can be helpful in specific and limited use cases, but generally the tradeoffs are not worth it. https://www.citationneeded.news/ai-isnt-useless/
#AI #LLMs

vick21, to LLMs
@vick21@mastodon.social avatar

Here is an example of how bad LLMs are with math. I asked about velocity in the context of an Agile process. The answer?
“Sure! Let's say that an Agile development team has completed four iterations, each lasting two weeks. In the first iteration, they delivered 12 user stories; in the second, they delivered 10; in the third, they delivered 9; and in the fourth, they delivered 8. The total number of user stories completed by the end of the fourth iteration is 49 (12 + 10 + 9 + 8)”.
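The quoted sum is indeed wrong, which a two-line check makes obvious (the per-iteration counts come straight from the model's own answer):

```python
# User stories per iteration, as listed in the quoted answer.
stories = [12, 10, 9, 8]
total = sum(stories)
print(total)  # 39, not the 49 the model reported
```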

unixorn, to LLMs
@unixorn@hachyderm.io avatar

Great post about #LLMs hallucinating safety information. Specifically MSDS (Material Safety Data Sheet), but applies to any other safety information.

https://www.funraniumlabs.com/2024/04/phil-vs-llms/

I would much rather get a "no results" when I'm looking for medical interactions than an #LLM helpfully telling me "Here's some bullshit you don't know enough to know is horribly wrong"

Even something as innocent as acetaminophen can destroy your liver if you overdose on it.

#ai @llm @ai

cassidy, to ai
@cassidy@blaede.family avatar

Facebook’s “AI” is responding to posts pretending to be human; in this case claiming to have a gifted and disabled child who attends a NYC school.

How many other accounts are actually large language models masquerading as people? This seems incredibly irresponsible—and is a great reminder that “AI” is machines mindlessly regurgitating things from elsewhere on the Internet.

AI is not sustainable.

https://www.404media.co/facebooks-ai-told-parents-group-it-has-a-disabled-child/

Via @jasonkoebler for @404mediaco

#AI #LLM #LLMs #Meta #Facebook

rcarmo, to ai
@rcarmo@mastodon.social avatar

AI and LLMs have indeed transformed computing: we moved from a completely deterministic approach to telling computers what to do to one where we mumble various spells and incantations and hope they achieve something.

Truly a wonder of our age.
