Want to show visitors to your site that your content is human-made and doesn't use AI? Grab my badge pack for FREE (or pay as much as you want).
The pack contains 64 badges (88x31px, in PNG and SVG) in 8 colors, with the phrases "made by a human", "drawn by a human", "human content", "written by a human", "I am not a robot", "never by AI", "human content", and "there's no AI here!"
Mozilla fires 60 people to “focus on bringing ‘trustworthy AI into Firefox.’”
Fuck you, Mozilla. No one is asking for AI in Firefox. Sadly, you’re the best we can hope for under capitalism. So if we want something better, we should look into alternative models.
Whenever you see the words “ads”, “cryptocurrency”, “blockchain”, “web 3”, or “AI”, just replace them with “farts” and you’ll know whether you want them or not.
“Can the fediverse survive without farts?”
Yes, perfectly well.
“Will farts replace people?”
I hope not.
“The European Commission embraces farts.”
That’s unfortunate.
“This new startup wants to improve your life with farts.”
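The substitution heuristic above is mechanical enough to sketch in a few lines. This is a toy illustration: the `BUZZWORDS` list and the `fartify` name are my own, not from the post.

```python
import re

# Buzzwords called out in the post; the heuristic is to swap each one
# for "farts" and see whether the sentence still sounds appealing.
BUZZWORDS = ["ads", "cryptocurrency", "blockchain", "web 3", "AI"]

def fartify(text: str) -> str:
    """Replace each buzzword (case-insensitive, whole word) with 'farts'."""
    for word in BUZZWORDS:
        text = re.sub(rf"\b{re.escape(word)}\b", "farts", text,
                      flags=re.IGNORECASE)
    return text
```

For example, `fartify("Can the fediverse survive without AI?")` yields the first question above.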
AI sparks countless questions – social, ethical, economic, and even culinary!
Our proposed #AI law classifies technologies based on risk:
⚠️ Ban on AI systems that threaten citizens, like social scoring or specific facial recognition.
⚠️ Strict regulations for high-risk AI in healthcare, education, and critical infrastructure.
⚠️ Additional transparency requirements for generative AI such as chatbots and image generators.
The proposal is in its final stage. Agreement is expected by year-end.
One of the biggest problems with the phrase "artificial intelligence" is that decades of criti-hyping sci-fi has endowed it with the meaning "simulated mind". But human technology is no closer to creating that than it was in the 1950s. As AI experts like #DavidChapman tirelessly point out, humans haven't even developed a philosophy of mind accurate enough to tell us what a simulated mind would be simulating.
#Zoom just changed their terms and conditions to allow using anyone's video and audio for training #AI, with no option to opt out. You too can help train #deepfakes!
Living with a disabled spouse, I used Zoom a lot to get through the ongoing global pandemic.
What alternatives are out there for remote teaching/meetings? #BoycottZoom
The field of #AI is such junk pseudoscience at this point. What other field has the equivalent of its Nobel prize winners going absolutely bonkers? Between him and Hinton and Yoshua Bengio (his brother has the complete opposite view, at least), clown town is getting crowded.
People are getting paid for #publishing #AI-written #books in an #author's name, when in fact the author wrote nothing and knew nothing about it happening.
Then, when the author tries to do something to stop this obvious #abuse, they are told no.
You know why some blind people are really leaning into AI to fix accessibility issues? No, not overlays that probably barely have any if/else statements in them, let alone AI, but stuff like Be My Eyes, and gasp, screen recognition in VoiceOver for iOS? Because shit sucks, and it's sucked for the last 40 years of computing history for blind people. That's why whenever we get even a bit more light, even if 20% of what an AI says is fake, that 80% gives us 80% more info than we had before.

And yeah, we should all, every single one of us, know that AI can give false info by now. Hell, Mastodon folks have been shoving that into our ears with an oversized Q-tip since the day ChatGPT came out. We get it. But hot damn, being able to point my phone out the bus window and take pictures as I'm going to work, hearing about a fire station, or a house with a dog in the yard, or that it's a sunny, clear, nice day outside even, is really freaking nice. And sure, maybe it's not a fire station. Maybe it's a courthouse, or a post office, or something else. But it's something I would never have known before, because I don't have some sighted person telling me what's around, and I wouldn't want any other human to have to do that for me.

Like, this is the thing. To get 100%, perfect info, I'd have to hire another human whose entire job is to look around and tell me, in extreme detail, what's around me. Now, sighted people of Fedi, would you want that job? Maybe for a day. Maybe for a week. But months of that? I doubt it. And that is where AI comes in. No, it ain't perfect. And the more you deviate from its training data, the less accurate it gets. And maybe eventually we'll land somewhere in the middle of what VoiceOver Recognition is and what LLMs are. But I'm just getting tired of this "OMG AI is the end of the world" rhetoric. It's really getting old.
There's this Google internal document, for example, that points out the FLOSS community is close to eating Google's and OpenAI's cake:
https://www.semianalysis.com/p/google-we-have-no-moat-and-neither
So here is my question to you:
What are the best examples of useful, small, on-device models already out there?
I've been thinking about the new Associated Press guidelines to avoid referring to #AI in ways that could imply humanness, sentience, or intent:
Don't say, "It WANTS you to enter more information," for example.
I've often used that kind of wording for computers in the past.
But precise wording matters more now: for the first time, systems that could be mistaken for being human, or for having sentience, are in wide use, and it's important not to reinforce that idea.
OC (Mildly NSFW) Welp, I guess that's it. The single photo that got my ~10-year-old, 60k+ karma Reddit account "permanently suspended."
Boffins convert typing sounds into text with 95% accuracy (www.theregister.com)
Researchers in the UK claim to have translated the sound of laptop keystrokes into their corresponding letters with 95 percent accuracy in some cases....
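The article describes an acoustic side-channel attack: the researchers reportedly trained a deep-learning classifier on spectrograms of recorded keystrokes. Below is a heavily simplified toy sketch of the underlying idea only, using synthetic tones instead of real recordings and nearest-centroid matching on FFT magnitude features instead of a neural network. Every name, frequency, and parameter here is illustrative, not from the paper.

```python
import numpy as np

SAMPLE_RATE = 16_000   # Hz; arbitrary choice for the toy example
N_FFT = 512            # samples per keystroke clip (~32 ms)

def spectral_feature(clip: np.ndarray) -> np.ndarray:
    """Magnitude spectrum of a short clip (toy stand-in for the
    spectrogram features the paper reportedly uses)."""
    windowed = clip[:N_FFT] * np.hanning(N_FFT)
    return np.abs(np.fft.rfft(windowed))

def synth_keystroke(freq: float, rng: np.random.Generator) -> np.ndarray:
    """Fake 'keystroke': a decaying tone plus noise. Real key presses
    differ by subtle resonances; here each key simply gets its own pitch."""
    t = np.arange(N_FFT) / SAMPLE_RATE
    tone = np.sin(2 * np.pi * freq * t) * np.exp(-t * 200)
    return tone + 0.05 * rng.standard_normal(N_FFT)

rng = np.random.default_rng(0)
key_freqs = {"a": 900.0, "b": 1400.0, "c": 2100.0}  # made-up key pitches

# "Training": average the feature vector per key over a few noisy samples.
centroids = {
    key: np.mean([spectral_feature(synth_keystroke(f, rng))
                  for _ in range(10)], axis=0)
    for key, f in key_freqs.items()
}

def classify(clip: np.ndarray) -> str:
    """Label a clip with the key whose centroid is closest in feature space."""
    feat = spectral_feature(clip)
    return min(centroids, key=lambda k: np.linalg.norm(centroids[k] - feat))
```

On clean synthetic tones a nearest-centroid rule like this is nearly perfect; the hard part of the real attack is that actual keystrokes on the same keyboard sound nearly identical, which is why the researchers needed a trained deep model to reach the reported 95 percent.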
AI machines aren’t ‘hallucinating’. But their makers are | Artificial intelligence (AI) | The Guardian (amp.theguardian.com)
Tech CEOs want us to believe that generative AI will benefit humanity. They are kidding themselves
OC The AI genie is here. What we're deciding now is whether we all have access to it, or whether it's a privilege afforded only to rich people, corporations, and governments.
I know a lot of people want to interpret copyright law so that allowing a machine to learn concepts from a copyrighted work is copyright infringement, but I think what people will need to consider is that all that's going to do is keep AI out of the hands of regular people and place it specifically in the hands of people and...
AI in big budget games is inevitable, say dev vets from Assassin's Creed and Everquest 2: 'Developers hate it … the money is still going to drive absolutely everybody to do it' (www.pcgamer.com)