ai6yr, to ai
metin, (edited ) to ai
@metin@graphics.social avatar

So… Big Tech is allowed to blatantly steal the work, styles, and with them the job opportunities of thousands of artists and writers without being reprimanded, but it takes similarity to the voice of a famous actor to spark public outrage about AI. 🤔

https://www.theregister.com/2024/05/21/scarlett_johansson_openai_accusation/

WanderingInDigitalWorlds, to ubuntu
@WanderingInDigitalWorlds@mstdn.games avatar

Reading about Ubuntu and Nvidia's LLM development collaboration, it seems like none of the features will be forced on end users via software updates. It appears to be an opt-in situation, for which I'm thankful. Microsoft and other companies are going about LLM integration the wrong way: forcing users to test unsafe software is a horrible strategy.

https://ubuntu.com/nvidia

#Ubuntu #LLM

ianRobinson, to OpenAI
@ianRobinson@mastodon.social avatar

OpenAI seems to be in a bubble where they think they can do what they want without consequences. They had been trying for a year to get Scarlett Johansson to agree to voice a ChatGPT bot. She said no. They used a very similar voice anyway. Cue lawyers and weaselly backtracking. A must-read from Casey Newton. https://www.platformer.news/open-ai-scarlett-johansson-her-voice-sam-altman/

alexanderhay, to microsoft
@alexanderhay@mastodon.social avatar
Taffer, to ai
@Taffer@mastodon.gamedev.place avatar

In my mind, the people most likely to use "AI" for things are the ones who sort of know what they want, but don't know how to get it.

So you ask for code to do something, and the LLM spits out something glommed together from Stack Overflow posts or Reddit. How do you know it does what you wanted? How do you debug it if it doesn't work?

ramikrispin, to machinelearning
@ramikrispin@mstdn.social avatar

MLX Examples 🚀

MLX is Apple's framework for machine learning applications on Apple silicon. The MLX examples repository provides a set of examples for using the framework. These include:
✅ Text models such as transformer, Llama, Mistral, and Phi-2 models
✅ Image models such as Stable Diffusion
✅ Audio and speech recognition with OpenAI's Whisper
✅ Support for some Hugging Face models

🔗 https://github.com/ml-explore/mlx-examples
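For a sense of how lightweight the text-generation side is, here is a minimal sketch using the mlx-lm package that grew out of that repository. The model name is an assumption, standing in for any MLX-converted model from the mlx-community hub.

```python
# Minimal text-generation sketch with mlx-lm (pip install mlx-lm).
# The model name below is illustrative; any MLX-converted model from
# the mlx-community hub should load the same way.
from mlx_lm import load, generate

model, tokenizer = load("mlx-community/Mistral-7B-Instruct-v0.2-4bit")
text = generate(
    model,
    tokenizer,
    prompt="Explain Apple silicon in one sentence.",
    max_tokens=100,
)
print(text)
```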

SirTapTap, to random
@SirTapTap@mastodon.social avatar

I'm sure they're so glad they paid 6-8 figures for Reddit data

niclake, to ai
@niclake@mastodon.social avatar

I'd been writing a post talking about some of the more comical fuck-ups all of these AI tools have been spewing. And now I'm fucking furious.

Note: content warning for depression, self-harm, and suicide

https://niclake.me/ai

kubikpixel, to gentoo
@kubikpixel@chaos.social avatar

Gentoo and NetBSD ban 'AI' code, but Debian doesn't – yet

The problem isn't just that LLM-bot generated code is bad – it's where it came from.

🐧 https://www.theregister.com/2024/05/18/distros_ai_code/


troed, to llm
@troed@ioc.exchange avatar

I'm worried for my tech friends.

The vitriol and, honestly, ignorance around LLM-based "AI" is starting to fill my feeds from normally sane and technologically literate people.

You should be able to see through the hype and misuse. LLMs aren't encyclopedias - they're tools that are able to manipulate data of various sorts in ways that are very similar to how humans do it.

Yes, I compare LLMs to human brains. It's not the same as saying they're conscious (yet) - but the way LLMs work is apparently in many ways similar to how our brains work.

One fascinating insight into that comes from research done on what happens to the ability of LLMs to recall information as they are exposed to larger and larger corpora. Apparently they're better at recalling the early and late information, whilst starting to lose some in the middle.

In human psychology we call that the primacy and recency effect - because our brains do the same.

LLMs are absolutely awesome for a wide variety of tasks (and we have by no means found them all). Every second you spend not understanding this is a second on the way to your own irrelevance (if these tools would aid someone in your chosen area of work) or to becoming a grumpy old person yelling at clouds.
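As an aside, that recall claim is easy to probe yourself. Below is a rough sketch of a "lost in the middle" test: bury a single fact at different depths in filler text and check whether the model still surfaces it. The ask_model function is a hypothetical stand-in for whatever LLM client you use.

```python
# Hypothetical probe for the "lost in the middle" effect.
# ask_model is a placeholder: wire it to any LLM API you have access to.

def ask_model(prompt: str) -> str:
    raise NotImplementedError("plug in your LLM client here")

FILLER = "The sky was grey and the afternoon passed without incident. " * 200
FACT = "The access code for the vault is 4921. "
QUESTION = "What is the access code for the vault?"

for depth in (0.0, 0.5, 1.0):  # fact at the start, middle, and end of context
    cut = int(len(FILLER) * depth)
    prompt = FILLER[:cut] + FACT + FILLER[cut:] + "\n\n" + QUESTION
    answer = ask_model(prompt)
    print(f"fact depth {depth:.0%}: recalled = {'4921' in answer}")
```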

cheukting_ho, to llm
@cheukting_ho@fosstodon.org avatar

Opening keynote by @t_redactyl - LLMs and illusions

hypolite, to llm

How would anyone trust the products these people put forth? They aren't working on making LLMs more accurate (spoiler alert: they can't, by design); they're working to make them more appealing to companies targeting unsuspecting consumers. By any means necessary.


RE: mastodon.social/users/nixCraft…

grumpybozo, to llm
@grumpybozo@toad.social avatar

I'd like to trust this story, but it fails to link to its supposed source or provide enough info to find it otherwise. A few clicks around the site make me think that it may well be nothing but an LLM-composed content farm. https://cosocial.ca/@kgw/112498693958537559

pixelate, to accessibility
@pixelate@tweesecake.social avatar

So, I know generative AI is supposed to be just the most incorrect thing ever, but I want you to compare two descriptions. "A rock on a beach under a dark sky." And: "The image shows a close-up view of a rocky, cratered surface, likely a planet or moon, with a small, irregularly shaped moon or asteroid in the foreground. The larger surface appears to be Mars, given its reddish-brown color and texture. The smaller object, which is gray and heavily cratered, is likely one of Mars' moons, possibly Phobos or Deimos. The background fades into the darkness of space."

The first one is supposed to be the pure best thing that isn't AI, right? It's what we've been using for the past five years or so, and yes, it's probably improved over those years. It's Apple's image description: in my opinion the best and most clear, and it sounds like the alt text it was made from, which people wrote, by the way. The images it was trained on, which had to come from somewhere, were of very high quality, unlike Facebook's and Google's, which just plopped anything and everything into theirs. The second description is from Be My Eyes.

Now, which one was more correct? Obviously Be My Eyes. Granted, it's not always going to be, but just because some image classification tech is old doesn't mean it's better. And just because Google and Facebook call their image description bullshit AI doesn't mean it's a large language model: at this point in time, Google TalkBack does not use Gemini, it uses the same thing VoiceOver has, and Facebook uses that too, just a classifier.

Should sighted people be describing their pictures? Of course. Always. With care. And their stupid bots should use something better than "picture of cats," because even a dumb image classifier can tell me that, and probably a bit more, lol. Cats sleeping on a blanket. Cats drinking water from a bowl. Stuff like that. But for something quick and easy that doesn't rely on other people, shoot yeah, I'll put it through Be My Eyes. #accessibility #AI #LLM #BeMyEyes #blind
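To make the contrast concrete, here is a hedged sketch of the two approaches being compared: a classic image classifier that emits a label versus a captioning model that emits a sentence. It assumes the Hugging Face transformers package, and the model names are illustrative stand-ins, not what Apple, Google, or Be My Eyes actually run.

```python
# Sketch of classifier-style vs. caption-style image description.
# Assumes: pip install transformers pillow torch. Model names are
# illustrative, not the models any of these products use.
from transformers import pipeline

classifier = pipeline("image-classification", model="google/vit-base-patch16-224")
captioner = pipeline("image-to-text", model="Salesforce/blip-image-captioning-base")

image = "photo.jpg"  # any local image path or URL

# Classifier output: a single class label, like the terse alt text above.
print(classifier(image)[0]["label"])

# Captioner output: a full sentence, closer to the Be My Eyes description.
print(captioner(image)[0]["generated_text"])
```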

SirTapTap, to ai
@SirTapTap@mastodon.social avatar

well if you needed another reason to not use ChatGPT, it is now sucking in 100% of Rupert Murdoch's propaganda

look forward to ChatGPT telling you the election was stolen and only ivermectin mixed with aquarium cleaner can fix it

dvshkn, to llm
@dvshkn@fosstodon.org avatar

I gave the pizza question to Golden Gate Claude. This is truly the people's LLM. Google btfo.

chikim, to llm
@chikim@mastodon.social avatar
kornel, to llm
@kornel@mastodon.social avatar

There's an economic curse on Large Language Models — the crappiest ones will be the most widely used ones.

The highest-quality models are exponentially more expensive to run, and currently are too slow for instant answers or processing large amounts of data.

Only the older/smaller/cut-down models are cheap enough to run at scale, so the biggest deployments are also the sloppiest ones.

vitriolix, to llm
@vitriolix@mastodon.social avatar

Highlighted comment ftw

qurlyjoe, to llm
@qurlyjoe@mstdn.social avatar

An LLM with all the smoke and mirrors of statistical and analytic processing of the best of them, but based solely on the complete corpus of Shakespeare's writings. Ask any question and its reply will be in his words.

ianRobinson, to apple
@ianRobinson@mastodon.social avatar

Does anyone know why Anthropic isn’t in the conversation about Apple doing a deal with an LLM provider?

cheukting_ho, to llm
@cheukting_ho@fosstodon.org avatar

Have been waiting for this the whole conference - keynote by @vboykis - and don't worry about

paulox, (edited ) to italia
@paulox@fosstodon.org avatar

Closing the third day of PyCon Italia 2024, Vicki Boykis will present her keynote, "Stay Close to the Metal" 🚀

CC @pycon @vicki

Follow the live stream now 👇 https://2024.pycon.it/en/keynotes/stay-close-to-the-metal

schwinghamer, to climate
@schwinghamer@mstdn.social avatar

Hello Mastodon, I know that a lot of you discuss the high environmental cost (such as energy use and water use) of AI, and I hope that some of you could reply with authoritative publications/links regarding this problem! I want to try to convince an environmental science colleague.
