So they're going to crapify search even more while trying to retain their #advertising edge.
WSJ reports that #Google plans to make #search more "visual, snackable, personal, and human", incorporating short videos, social media posts, and conversations with #AI.
So, the writers' guild is on strike, and one of the reasons is that they don't want to be demoted to merely polishing up crappy first drafts produced by #AI.
Just realized that I am doing the same these days. I do a LOT of code reviews, and I can tell that the quality of code has gone down for people who use #copilot or other AI tools.
AI coding tools seem to be a way to accelerate your own speed of work while pushing the burden of the boring parts onto your colleagues. 🤷
Demystifying #AI #Safety: A curated list of papers for a safer, ethical & more reliable AI
Hello Fosstodon, with exciting news! 📢
We are thrilled to announce the launch of our new #opensource project, Awesome AI Safety. Our mission is to democratize AI Safety by providing access to the latest relevant papers to make AI models more reliable, robust, explainable, ethical & accurate.
Help us achieve that goal by sharing this repository with your favorite #ML researchers 💌 https://gisk.ar/44st3le
'The first and most obvious threat is that AI-enhanced social media will wash ever-larger torrents of garbage into our public conversation.'
Yup. #technology #ArtificialIntelligence #ai
If you dabble in #AI and #LLM at all, please read this leaked analysis by a Google researcher.
It would indeed be wonderful if near-future, incremental AI training could be done cheaply and in open-source form, outside the corporate IP regime. This may lead to the widespread use of smaller, special-purpose, open-sourced models, which would genuinely democratize the benefits of the technology.
The dark side, unfortunately, is the obfuscation of source material and the outright theft of human creativity, which will be laundered through the retraining process; we will be building open-sourced models atop stolen works.
These Nvidia researchers are SO proud of creating #AI tools to generate fake faces... the scourge of Twitter's #disinformation networks. Previously, you could usually detect GAN-generated faces by the fixed eye location, but they have eliminated that "problem". I'm unaware of any other legit use for GANs, except creating fake people for social networks/LinkedIn/whatever scam you are working... https://d1qx31qr3h6wln.cloudfront.net/publications/StyleGAN3D_preprint.pdf
@ai6yr Sadly, they’re using them to replace models in ads. Specifically WOC. Meet the new “super” models. Companies can look like they support diversity without actually paying a POC. #BlackMastodon #AI
Here is a better interview with Geoffrey Hinton on PBS NewsHour, where he articulates his concerns more fully. IMO there are still huge leaps of logic in his belief that these systems have "understanding" and can be dangerous on their own, i.e. without humans programming them and setting their goals. That framing lets the tech giants creating them (and gaining enormous political power) off the hook, as if #AI had suddenly dropped from the sky.
Heads-up! The Magi leaks begin. Google Search will be changing soon, with a departure from 10 blue links -> Google plans to make search more “visual, snackable, personal, & human”, incorporating short videos, social media posts, & conversations w/AI
"Search visitors might be more frequently prompted to ask follow-up questions or swipe through visuals such as TikTok videos in response to queries." And, "Google could further emphasize forum posts & short videos..."
And more (as expected): "At its I/O conference this week, the search giant is expected to debut new features that allow users to carry out conversations w/an artificial-intelligence program, code-named “Magi,” said others familiar w/the matter."
Notice Google's focus on younger users throughout the article. This week should be very interesting with I/O announcements. :)
What are you staring at?
Move on, nothing to see here!
While humans are very gullible when it comes to language, it is easier to see how little "understanding" those LLMs have of any concept at all when you use vision. We tolerate way fewer visual errors than language errors, and the AI scammers know that.
An #LLM does not chat. It continues chat protocols with likely words.
Protocols! Including the speaker marks, which are just another type of word.
One can obviously build a chat application on top of this, in the spirit of the Mechanical Turk, just in reverse: not a human hidden inside the machine, but a machine hidden inside the "intelligence". It is generic text, not a genuine speaker.
No doubt this text continuation makes a lot of sense to us.
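To make that concrete, here is a minimal sketch of what "a chat application on top of text continuation" can look like. Everything here is hypothetical (the complete() stub stands in for any real completion model); the point is that the speaker marks are ordinary text in the prompt, not a separate channel:

```python
# Minimal sketch of "chat as text continuation" (hypothetical names throughout;
# complete() is a stub standing in for a real text-completion model).

def complete(prompt: str, stop: list[str]) -> str:
    """Stub completion function: a real model would return likely words
    that continue the prompt, stopping at any of the stop strings."""
    return " Hello! (likely words continuing the transcript)\nUser:"

def build_prompt(history: list[tuple[str, str]]) -> str:
    """Flatten the chat so far into one plain-text continuation prompt."""
    lines = [f"{speaker}: {text}" for speaker, text in history]
    lines.append("Assistant:")  # cue the model to continue as the assistant
    return "\n".join(lines)

def chat_turn(history: list[tuple[str, str]], user_message: str) -> str:
    """One 'chat' turn: append the user's line, let the model continue
    the transcript, and cut it off before it starts speaking as the user."""
    history.append(("User", user_message))
    raw = complete(build_prompt(history), stop=["\nUser:"])
    reply = raw.split("\nUser:")[0].strip()  # enforce the speaker protocol
    history.append(("Assistant", reply))
    return reply

history: list[tuple[str, str]] = []
print(chat_turn(history, "Who am I talking to?"))
```

The "assistant" exists only as a pattern in the transcript. Remove the speaker marks and nothing is left but text continuation.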
"It’s not just the public. Some of your friends at your newspaper have been a bit credulous. In my book, “Rebooting A.I.,” we talked about the Eliza effect — we called it the “gullibility gap.”

In the mid-1960s, Joseph Weizenbaum wrote this primitive piece of software called Eliza, and some people started spilling their guts to it. It was set up as a psychotherapist, and it was doing keyword matching. It didn’t know what it was talking about, but it wrote text, and people didn’t understand that a machine could write text and not know what it was talking about. The same thing is happening right now.

It is very easy for human beings to attribute awareness to things that don’t have it. The cleverest thing that OpenAI did was to have GPT type its answers out one character at a time — made it look like a person was doing it. That adds to the illusion. It is sucking people in and making them believe that there’s a there there that isn’t there. That’s dangerous. We saw the Jonathan Turley incident, when it made up sexual harassment charges.

You have to remember, these systems don’t understand what they’re reading. They’re collecting statistics about the relations between words. If everybody looked at these systems and said, “It’s kind of a neat party trick, but haha, it’s not real,” it wouldn’t be so disconcerting. But people believe it because it’s a search engine. It’s from Microsoft. We trust Microsoft. Combine that human overattribution with the reality that these systems don’t know what they’re talking about and are error-prone, and you have a problem."