@protonprivacy is sending a survey in which they ask if we want them to add #AI in their product.
As someone who uses AI on a daily basis, I beg you, please, guys, don’t.
Nobody needs AI to read or send encrypted emails, to connect to a VPN, to add an event to a calendar, to create and store credentials, to back up or share encrypted files.
Don’t become dumb followers of this sinking world.
Fix your shitty autocorrect! There’s no such thing as “there’re,” so quit putting it into my content.
And how come I get a word suggestion as I type, I tap it, and an entirely different word is inserted, one that wasn’t even among the options offered, sometimes not even an English word?!
Kevin Roose so desperately wants to live in the futures tech companies are selling that he’ll eagerly do their PR for them, buying into whatever illusions of intelligence they put in front of him so he can trick himself into believing they’ll actually be realized this time.
“I have seen the extremely restrictive off-boarding agreement that contains nondisclosure and non-disparagement provisions former OpenAI employees are subject to. It forbids them, for the rest of their lives, from criticizing their former employer.”
They are completely missing the point of their product. #ChatGPT and other #AI #chatbots are not our friends, and we don’t want them to be!
(If you spend a lot of time “talking to” chatbots, please go outside where people are. Seriously!)
It is a tool we use for super annoying tasks: write an ALT tag for an image, fix my shell script, translate something. And we still have to double-check everything it does. We can’t and we won’t trust it. (1/2)
#Adobe used images from #Midjourney to train its “ethical” #AI. Now, it’s using Ansel Adams’ name and artistic style in its tools. Adobe’s position against using unlicensed works to train LLMs was never one of principle. Rather, it already had access to copyrighted works, so “ethical” was a means to exclude competition while concurrently violating the principle anyway. https://mastodon.social/@verge/112552260437207900
From eating rocks to putting glue on pizza, Google’s AI Overviews has given us a good laugh and plenty of memes over the past week, thanks to the many hilariously inaccurate answers it has given to several search queries.
However, these clearly wrong answers are not the problem we should be focusing on, argues @FastCompany. “It’s the errors that don’t call attention to their ridiculous selves that could do the most damage to Google Search and everyone who relies on it.” Here’s more.
Sony Will Use AI to Cut Film Costs, Says CEO Tony Vinciquerra
“We are very focused on AI. The biggest problem with making films today is the expense,” Vinciquerra said at Sony’s Thursday (Friday in Japan) investor event. “We will be looking at ways to…produce both films for theaters and television in a more efficient way, using AI primarily.”
The tricky thing about being a company that I don't trust is that even when you try to clarify something, you need to be aware that specificity can sound suss.
"Adobe does not train Firefly Gen AI models on customer content."
The way this is worded does not say your content is not used to train AI models, just that it's - SPECIFICALLY - not used to train "Firefly Gen AI models."
People are afraid of all sorts of outlandish AI apocalypse scenarios, but the only one I'm scared of (besides the massive increase in CO₂ emissions) is the facilitation of mass surveillance. The only thing LLMs are really good at is summarising. The thing that your local conspiracy nut has been afraid of all these years — that everyone is being personally watched at all times — has never before been practical. Now it is.
"OpenAI is — according to multiple reports — not only looking to create its own personal computing devices, they’re considering partnering with Jony Ive and LoveFrom to do it. They’re setting themselves up to be frenemies with Apple before the first partnership is even announced."
Google went down this "frenemies" path before Android existed. Interesting take on OpenAI as the new frenemy. I buy it.
Mourners can now speak to an #AI version of the dead. But will that help with grief? Selina Sykes reports. FRANCE24's Monte Francis speaks to Tomasz Hollanek, Research Fellow at the Leverhulme Centre for the Future of Intelligence, University of Cambridge, about the benefits, risks and future developments of AI clones. #death #bereavement #mentalhealth #tech
Quelle surprise. But what's worrying is how apparently 'tech 'n meeja'-savvy young people are so easily taken in by propaganda - err, I mean 'hype and marketing'.
"...Very few people are regularly using "much hyped" artificial intelligence (AI) products like ChatGPT, a survey suggests...
"...But the study... says young people are bucking the trend, with 18 to 24-year-olds the most eager adopters of the tech..."