In a group chat about identifying mushrooms, my tech bro friend went "just use AI".
What a great example of the harm this automation does. The point of identifying something like plants or shrooms ourselves is not just in getting the answer, it's about what we learn along the way. Little details, differentiating factors, other species that look similar. And so much more.
With something like mushrooms this process of identifying (learning) is what keeps the mushroom picker alive, not just the ready answer. With the answer alone, we learn nothing. There is so much more to human decision making and processes than just the end point. To understand, we need to get there ourselves.
I would never trust an AI with my mushrooms, I would however always trust a seasoned picker who has spent a lifetime seeing all the variations that can happen. And I would still hesitate to eat something I am not myself 110% sure of.
#AI models were perplexed by a baby giraffe without spots. They're perplexed by me, too.
This article on #disability and #ableism within #GenerativeAI is more personal than I usually write. It would mean a lot to me if you read and shared it.
I know everyone is slagging #AI content generators, but I once asked an AI to generate a #meme for me in an existing format and it created the best meme ever.
This report showing car insurance quotes are more expensive in areas of greater mixed ethnicity is further evidence of how algorithms can embed discrimination and carry real-life harms.
🔴 Changes to subject access requests will make it harder to find out what data insurance companies hold on you to challenge a decision or incorrect data.
🔴 Protections from solely automated decisions are weakened.
Increasingly finding that #AI-generated images on a blog post lowers my opinion of the author — not (just) because of a principled anti-AI stance but also simply because it’s rubbish.
If you’re happy decorating your work with generated garbage I’m just going to assume your standards are that low for everything you do.
“Just take one step back and look at the hype through this lens. All the big, exciting uses for #AI are either low-dollar (helping kids cheat on their homework, generating stock art for bottom-feeding publications) or high-stakes and fault-intolerant (self-driving cars, radiology, hiring, etc.).
Every bubble pops eventually. When this one goes, what will be left behind?” —Cory Doctorow
> "Just as GitHub was founded on Git, today we are re-founded on Copilot."
Look, I respect the heck out of the technical implementation of LLMs, but let's be honest: statistically they produce average code at best, and more often code that is misunderstood or invalid. They re-implement old bugs and obfuscate programmer intent, and anyone leaning on them for more than a pair assist is making software harder for the rest of us.
Sports Illustrated has apparently been publishing AI-generated articles by “authors” who don’t exist beyond their AI-generated photos. This is a much bigger problem than most people realize, and the implications are quite dire — consider, for example, that the same thing is happening in medical and scientific journals (it is). #AI #disinformation #media #socialmedia
After a year of hype, the reality is emerging. Cloud clients aren’t buying generative AI tools because they’re expensive, inaccurate, and of unclear value. Some analysts are already warning of a coming “trough of disillusionment.”
I've been thinking about the new Associated Press guidelines to avoid referring to #AI in ways that could imply humanness, sentience, or intent:
Don't say, "It WANTS you to enter more information," for example.
I've often used that kind of wording for computers in the past.
But more precise wording matters now because it's the first time we've widely had systems that could be mistaken for being human or having sentience, and it's important not to reinforce that idea.
Sam Altman’s vision for AI proliferation will require a lot more computation and the energy to power it.
He admitted it at Davos, but he said we shouldn’t worry: an energy breakthrough was coming, and in the meantime we could just use “geoengineering as a stopgap.” That should set off alarm bells.
on OpenAI's "our whole business model is yoinking copyrighted material", and some arguments i've been hearing recently from AI advocates about rolling back the scope of copyright law
Zoom's Updated Terms of Service Permit Training AI on User Content Without Opt-Out - Lemdro.id (lemdro.id)
Zoom Video Communications, Inc. recently updated its Terms of Service [https://explore.zoom.us/en/terms/] to encompass what some critics are calling a significant invasion of user privacy.