In my mind, the people most likely to use "AI" for things are the ones who sort of know what they want, but don't know how to get it.
So you ask for code to do something, and the LLM spits out something glommed together from Stack Overflow posts or Reddit. How do you know it does what you wanted? How do you debug it if it doesn't work?
So, I know generative AI is supposed to be just the most incorrect thing ever, but I want you to compare two descriptions.

The first: "A rock on a beach under a dark sky."

The second: "The image shows a close-up view of a rocky, cratered surface, likely a planet or moon, with a small, irregularly shaped moon or asteroid in the foreground. The larger surface appears to be Mars, given its reddish-brown color and texture. The smaller object, which is gray and heavily cratered, is likely one of Mars' moons, possibly Phobos or Deimos. The background fades into the darkness of space."

The first one is supposed to be the best thing that isn't AI, right? It's what we've been using for the past five years or so, and yes, it's probably improved over those years. That's Apple's image description. In my opinion it's the best and clearest of its kind, and it sounds like the ALT-text it was made from, which people wrote, by the way. The images it was made with, which had to come from somewhere, were of very high quality, unlike Facebook's and Google's, which just plopped anything and everything into theirs. The second description was from Be My Eyes.

Now, which one was more correct? Obviously, Be My Eyes. Granted, it's not always going to be, but goodness, just because some image classification tech is old doesn't mean it's better. And just because Google and Facebook call their image description bullshit AI doesn't mean it's a large language model. Because at this point in time, Google TalkBack does not use Gemini; it uses the same thing VoiceOver has. And Facebook uses that too: just a classifier.

Now, should sighted people be describing their pictures? Of course. Always. With care. And their stupid bots should use something better than "picture of cats," because even a dumb image classifier can tell me that, and probably a bit more, lol. "Cats sleeping on a blanket." "Cats drinking water from a bowl." Stuff like that.
But for something quick, easy, and that doesn't rely on other people, shoot yeah I'll put it through Be My Eyes. #accessibility #AI #LLM #BeMyEyes #blind
So… Big Tech is allowed to blatantly steal the work, styles, and therewith the job opportunities of thousands of artists and writers without being reprimanded, but it takes similarity to the voice of a famous actor to spark public outrage about AI. 🤔
OpenAI seems to be in a bubble where they think they can do what they want without consequences. They had been trying for a year to get Scarlett Johansson to agree to voice a ChatGPT bot. She said no. They used a very similar voice anyway. Cue lawyers and weaselly backtracking. A must-read from Casey Newton. #OpenAI #LLM https://www.platformer.news/open-ai-scarlett-johansson-her-voice-sam-altman/
#OpenAI doing the very thing they are building tools to let everyone do (imitate the likeness & creative output of real people)... is exactly the signal we should all recognise it to be.
We've all been told that our ability to grind out work-value is what we exist for... but even that poor measure of our worth is fair game to those big enough to steal it.
Just realized that Microsoft Recall is totally going to be sold as an enterprise product for spying on employees. MS has been looking high and low for a way to sell enterprises on AI subscriptions and this is it. They cracked the code. Fuck.
How would anyone trust the products these people put forth? They aren't working on making LLMs more accurate (spoiler alert: they can't, by design), they're working to make them more appealing to companies targeting unsuspecting consumers. By any means necessary.
The vitriol - and, honestly, ignorance - around LLM-based "AI" is starting to fill my feeds, coming from normally sane and technologically literate people.
You should be able to see through the hype and misuse. LLMs aren't encyclopedias - they're tools that are able to manipulate data of various sorts in ways that are very similar to how humans do it.
Yes, I compare LLMs to human brains. It's not the same as saying they're conscious (yet) - but the way LLMs work is apparently in many ways similar to how our brains work.
One fascinating insight into that comes from research on what happens to LLMs' ability to recall information as their input grows longer and longer: apparently they're better at recalling the early and late information, whilst starting to lose some in the middle.
In human psychology we call that the primacy and recency effect - because our brains do the same.
LLMs are absolutely awesome for a wide variety of tasks (and we have by no means found them all). Every second you spend not understanding this is a second on the way to your own irrelevance (if these tools would aid someone in your chosen area of work) or to become a grumpy old person yelling at clouds.
I guess making money somewhat honestly by having customers that actually pay for a service with at least some guarantees of privacy and safety is not as lucrative as having an open platform network where people are tricked into giving out all their data while they are spied upon for whatever reasons.
I just finished a productive Copilot session on a complex programming task. I came up with most of the algorithms, wrote a lot of the code, and had to guide it a lot throughout, but credit where due: Copilot did make small but meaningful contributions along the way.
Overall, not a pair programmer but someone useful to talk to when WFH alone on complex tasks.
Enough for Copilot to earn a ✋🏽. And I like how it responded to that. It has got that part down. 😉
MLX is Apple's framework for machine learning applications on Apple silicon. The MLX examples repository provides a set of examples for using the MLX framework. These include:
✅ Text models such as transformer, Llama, Mistral, and Phi-2 models
✅ Image models such as Stable Diffusion
✅ Audio and speech recognition with OpenAI's Whisper
✅ Support for some Hugging Face models
When you choose to use an #ML #LLM foundation model, you accept the risk management decisions made by the vendor without your input. Wonder what they are? Read this #MLsec paper from #IEEE Computer.
•This• is the compelling #LLM use case for me. If I use a translator to write messages in French, I'm not forced to come up with an initial attempt, so I lose the learning aspect of that.
If instead I put something into ChatGPT and it not only corrects but explains what my mistakes were that's a huge win in terms of learning from your mistakes.
(I still don't trust the thing 100% but it's also not a high stakes situation – I'm not engaging in a nuclear arms treaty after all 😅)
Fix your shitty autocorrect! There's no such thing as "there're," so quit putting it into my content.
And how come I get a word suggestion as I type, I click on it, and an entirely different word is inserted that wasn’t even one of the options offered - sometimes not even an English word?!
Absolutely unbelievable, but here we are: #Slack uses messages, files, etc. for building and training #LLM models, enabled by default, and opting out requires a manual email from the workspace owner.
If you pick up one of the #Nvidia Orin boards, definitely get an SSD to go along with it. While it can run off an SD card, you're going to run out of space quickly, and you'll see a performance hit on complex tasks (like running a local #LLM). #EdgeAI #AI
To those concerned about #slack now using your chats, including trade secrets, NDA stuff, etc., to train their #llm: #WTF did you expect using a third party with full content access to discuss those things? That they'd be gentlemen and not read your mail? That they somehow wouldn't try to find a way to monetize that juicy data? I am flabbergasted that people working for corporations just as immoral could have been that naive...