marcel, to ai
@marcel@waldvogel.family avatar

Modern text generators produce randomized output with no prior planning. They resist being quality-checked by the tools and processes established in the software industry.

Given this, the results are amazing. However, companies are selling the idea that these assistants will do quality checking themselves soon™.

This is mass delusion. But hey, the perks for managers/investors are worthwhile 🤷.


https://www.theverge.com/2024/5/24/24164119/google-ai-overview-mistakes-search-race-openai

anmey, to llm
@anmey@social.anoxinon.de avatar

I think one of the biggest fears people have about AI is that it isn't as perfect as assumed, but that, like us humans, it takes the given information, assumes the most likely outcome, and presents it plausibly.

preslavrachev, to ai
@preslavrachev@mastodon.social avatar

C’mon, we’ve all been using LLMs on a daily basis for more than a year now. You’d think people would slowly start to get the idea. But no.

Why is everyone still using an LLM and expecting accurate answers? They are obviously not built for that.

You use an LLM to get yourself off your writer’s block, to get a few ideas on how to move on, not for it to solve the end equation for you. Duh!


schwinghamer, to climate
@schwinghamer@mstdn.social avatar

Hello Mastodon, I know that a lot of you discuss the high environmental cost (such as energy use and water use) of AI, and I hope that some of you could reply with authoritative publications/links regarding this problem! I want to try to convince an environmental science colleague.

grumpybozo, to llm
@grumpybozo@toad.social avatar

I’d like to trust this story, but it fails to link to its supposed source or provide enough info to find it elsewise. A few clicks around the site make me think that it may well be nothing but an AI-composed content farm. https://cosocial.ca/@kgw/112498693958537559

niclake, to ai
@niclake@mastodon.social avatar

I'd been writing a post talking about some of the more comical fuck-ups all of these AI tools have been spewing. And now I'm fucking furious.

Note: content warning for depression, self-harm, and suicide

https://niclake.me/ai

The_Tim, to ai
@The_Tim@mastodon.social avatar

"Drunk Clippy" is my new name for all LLM features like Google's "AI Overview" and OpenAI's ChatGPT. #AI #LLM #Google #OpenAI https://www.bbc.com/news/articles/cd11gzejgz4o

SirTapTap, to random
@SirTapTap@mastodon.social avatar

I'm sure they're so glad they paid 6-8 figures for Reddit data

gtbarry, to journalism
@gtbarry@mastodon.social avatar

Sam Altman's OpenAI signs content agreement with News Corp

OpenAI has signed a deal that will give it access to content from some of the biggest news publications owned by media conglomerate News Corp

https://www.reuters.com/technology/sam-altmans-openai-signs-content-agreement-with-news-corp-2024-05-22/

dvshkn, to llm
@dvshkn@fosstodon.org avatar

I gave the pizza question to Golden Gate Claude. This is truly the people's LLM. Google btfo.

alxd, to writing
@alxd@writing.exchange avatar

Given how many contests, anthologies and magazines are currently struggling with a flood of AI/LLM-generated spam, have you heard about anyone trying to fight the problem by asking specifically for stories which AI cannot easily write?

Even the best models I've tried cannot easily use themes, symbols and structures - they always come out unnatural.

Are there any specific limitations, formats that can work similarly?

paulox, to ai
@paulox@fosstodon.org avatar

At PyCon Italia 2024 Ines Montani is presenting her talk "The AI Revolution Will Not Be Monopolized: How open-source beats economies of scale, even for LLMs" 🐍

CC @pycon @ines

Follow it live now 👇
https://2024.pycon.it/en/event/the-ai-revolution-will-not-be-monopolized-how-open-source-beats-economies-of-scale-even-for-llms

Presentation of Ines' talk
Ines giving her talk
Ines with a spaCy slide

chikim, to llm
@chikim@mastodon.social avatar

Llama.cpp now supports distributed inference, meaning you can use multiple computers to speed up the response time! The network is the main bottleneck, so all machines need to be hard-wired, not connected through wifi. #LLM #AI #ML https://github.com/ggerganov/llama.cpp/tree/master/examples/rpc
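A rough sketch of what that setup looks like, based on the linked RPC example (binary names, build flags, and IPs here are assumptions; check the README for your llama.cpp version):

```shell
# On each worker machine: build llama.cpp with the RPC backend enabled
cmake -B build -DGGML_RPC=ON
cmake --build build --config Release

# Start an RPC server on each worker, listening on the wired LAN
bin/rpc-server --host 0.0.0.0 --port 50052

# On the main machine: point inference at the workers
# (hypothetical worker addresses; model layers get split across them)
bin/llama-cli -m model.gguf -p "Hello" \
  --rpc 192.168.1.10:50052,192.168.1.11:50052
```

Every token round-trips through the RPC servers, which is why a wired connection matters: wifi latency is paid on each step of generation.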

pseudonym, to ai
@pseudonym@mastodon.online avatar

More shenanigans.

https://nitter.poast.org/petergyang/status/1793480607198323196#m

Link to X/Twitter replaced with nitter link.

Tl;dr: AI suggested adding glue to pizza to make the cheese stick, sourced from an 11-year-old Reddit post.

These are all good fun to mock until someone actually gets hurt taking these responses literally.

I'm torn. I've thrown my share of shade at LLMs and the rush to shove "AI" into everything, and even for what they do well, one can argue whether it's worth the cost.

But I've seen some useful stuff too.

SirTapTap, to ai
@SirTapTap@mastodon.social avatar

well if you needed another reason to not use ChatGPT, it is now sucking in 100% of Rupert Murdoch's propaganda

look forward to ChatGPT telling you the election was stolen and only ivermectin mixed with aquarium cleaner can fix it

cheukting_ho, to llm
@cheukting_ho@fosstodon.org avatar

opening keynote by @t_redactyl on LLMs and illusions

Taffer, to ai
@Taffer@mastodon.gamedev.place avatar

In my mind, the people most likely to use "AI" for things are the ones who sort of know what they want, but don't know how to get it.

So you ask for code to do something, and the LLM spits out something glommed together from Stack Overflow posts or Reddit. How do you know it does what you wanted? How do you debug it if it doesn't work?

WanderingInDigitalWorlds, to ubuntu
@WanderingInDigitalWorlds@mstdn.games avatar

Reading about Ubuntu and nvidia’s LLM development collaboration, it seems like none of the features will be forced on end users via software updates. It seems like an opt-in situation, for which I’m thankful, whereas Microsoft and other companies are going about LLM integration wrong. Forcing users to test unsafe software is a horrible strategy.

https://ubuntu.com/nvidia

#Ubuntu #LLM

pixelate, to accessibility
@pixelate@tweesecake.social avatar

So, I know generative AI is supposed to be just the most incorrect thing ever, but I want you to compare two descriptions. "A rock on a beach under a dark sky." And: "The image shows a close-up view of a rocky, cratered surface, likely a planet or moon, with a small, irregularly shaped moon or asteroid in the foreground. The larger surface appears to be Mars, given its reddish-brown color and texture. The smaller object, which is gray and heavily cratered, is likely one of Mars' moons, possibly Phobos or Deimos. The background fades into the darkness of space."

The first one is supposed to be the pure best thing that isn't AI. Right? Like, it's what we've been using for the past like 5 years. And yes, it's probably improved over those years. This is Apple's image description. It's, in my opinion, the best, most clear, and sounds like the ALT-text that it's made from, which people made BTW, and the images it was made with, which had to come from somewhere, were of very high quality, unlike Facebook and Google which just plopped anything and everything into theirs.

The second was from Be My Eyes. Now, which one was more correct? Obviously, Be My Eyes. Granted, it's not always going to be, but goodness, just because some image classification tech is old doesn't mean it's better. And just because Google and Facebook call their image description bullshit AI doesn't mean it's a large language model. Because at this point in time, Google TalkBack does not use Gemini, but uses the same thing VoiceOver has. And Facebook uses that too, just a classifier.

Now, should sighted people be describing their pictures? Of course. Always. With care. And having their stupid bots use something better than "picture of cats." Because even a dumb image classifier can tell me that, and probably a bit more, lol. Cats sleeping on a blanket. Cats drinking water from a bowl. Stuff like that.

But for something quick, easy, and that doesn't rely on other people, shoot yeah, I'll put it through Be My Eyes.

metin, (edited ) to ai
@metin@graphics.social avatar

So… Big Tech is allowed to blatantly steal the work, styles, and with them the job opportunities of thousands of artists and writers without being reprimanded, but it takes similarity to the voice of a famous actor to spark public outrage about AI. 🤔

https://www.theregister.com/2024/05/21/scarlett_johansson_openai_accusation/

ianRobinson, to OpenAI
@ianRobinson@mastodon.social avatar

OpenAI seems to be in a bubble where they think they can do what they want without consequences. They had been trying to get Scarlett Johansson to agree to voice a ChatGPT bot for a year. She said no. They used a very similar voice anyway. Cue lawyers and weaselly backtracking. A must-read from Casey Newton. https://www.platformer.news/open-ai-scarlett-johansson-her-voice-sam-altman/

syntaxseed, to OpenAI
@syntaxseed@phpc.social avatar

OpenAI doing the very thing they are building tools to let everyone do (imitate the likeness & creative output of real people)... is exactly the signal we should all be recognising it to be.

🚩Copyright & licenses don't matter. Creators don't matter. You don't matter.

We've all been told that our ability to grind out work-value is what we exist for... but even that poor measure of our worth is fair game to those big enough to steal it.

https://www.npr.org/2024/05/20/1252495087/openai-pulls-ai-voice-that-was-compared-to-scarlett-johansson-in-the-movie-her
