EyeEm, a photo-sharing community once thought to be a potential challenger to Instagram, is now licensing its users’ photos to train AI models, TechCrunch reports. The company gave users 30 days to opt out. As you can imagine, many of them are upset. Read more: https://flip.it/Yv2xV3 #Tech #Technology #AI #ArtificialIntelligence #Photography
My first troublesome hallucination with a #LLM in a while: #Claude3 #Opus (200k context) insisting that I can configure my existing #Yubikey #GPG keys to work with PKINIT with #Kerberos and helping me for a couple of hours to try to do so — before realising that GPG keys aren't supported for this use case. Whoops.
No real bother other than some wasted time, but a bit painful and disappointing.
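The confusion above is easy to fall into: Kerberos PKINIT authenticates with X.509 certificates, not OpenPGP keys, so the YubiKey's OpenPGP applet (where GPG keys live) is the wrong tool; the PIV applet, which holds X.509 certs, is what PKINIT can actually consume. A minimal sketch of the certificate-based flow, using a self-signed cert as a stand-in for a CA-issued PKINIT client certificate — the principal name and paths are hypothetical:

```shell
set -eu

# Generate a client key and a self-signed certificate. In a real
# deployment the KDC's CA would issue this cert with the PKINIT
# extensions; here it only illustrates that PKINIT wants X.509
# material, not an OpenPGP key.
openssl req -x509 -newkey rsa:2048 -nodes \
  -keyout client.key -out client.pem \
  -subj "/CN=alice" -days 1

# MIT Kerberos takes the cert/key pair via the FILE: preauth option:
#   kinit -X X509_user_identity=FILE:client.pem,client.key alice@EXAMPLE.COM
# With a YubiKey you would instead go through the PIV applet's
# PKCS#11 module (path varies by distro):
#   kinit -X X509_user_identity=PKCS11:/usr/lib/libykcs11.so alice@EXAMPLE.COM

# Show the subject of the cert we just minted.
openssl x509 -in client.pem -noout -subject
```

The kinit lines are left as comments because they need a reachable KDC; the point is that every `X509_user_identity` form (FILE:, PKCS11:, PKCS12:) expects X.509 certificates, which is why GPG keys can never be wired into PKINIT directly.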
They use #OpenAI, which means my GitHub OSS has almost certainly been used in training data.
They rely on OpenAI's promise to not ingest any code that is used for "context".
They specifically do not warn that their tool could result in me violating someone else's copyright, or that it could suggest the same code to someone else, too.
Uninstall this crap, now. It's dangerous and irresponsible
Unless someone can find the related survey, this is an example of what you shouldn't do.
This story has been making the rounds. It's based on a CyberArk survey. And yet, not a single one of these posts links to the research. After much poking, I found a survey reference dating back to June 2023 🤔. Unfortunately, CyberArk's account here is not active. #AI #infosec #cybersecurity
Back in 2018, Dario Amodei worked at OpenAI. And looking at one of its first A.I. models, he wondered: What would happen as you fed an artificial intelligence more and more data? He and his colleagues decided to study it, and they found that the A.I. didn’t just get better with more data; it got better exponentially.
#AI #GenerativeAI #SyntheticMedia #DeepFakes: "Until now, all AI-generated videos of people have tended to have some stiffness, glitchiness, or other unnatural elements that make them pretty easy to differentiate from reality. Because they’re so close to the real thing but not quite it, these videos can make people feel annoyed or uneasy or icky—a phenomenon commonly known as the uncanny valley. Synthesia claims its new technology will finally lead us out of the valley.
Thanks to rapid advancements in generative AI and a glut of training data created by human actors that has been fed into its AI model, Synthesia has been able to produce avatars that are indeed more humanlike and more expressive than their predecessors. The digital clones are better able to match their reactions and intonation to the sentiment of their scripts—acting more upbeat when talking about happy things, for instance, and more serious or sad when talking about unpleasant things. They also do a better job matching facial expressions—the tiny movements that can speak for us without words.
But this technological progress also signals a much larger social and cultural shift. Increasingly, so much of what we see on our screens is generated (or at least tinkered with) by AI, and it is becoming more and more difficult to distinguish what is real from what is not. This threatens our trust in everything we see, which could have very real, very dangerous consequences." https://www.technologyreview.com/2024/04/25/1091772/new-generative-ai-avatar-deepfake-synthesia/
The internet is in decay [due to AI]. According to my guest today, Nilay Patel, this isn’t just a blip, as the big platforms figure out how to manage this. He believes that A.I. content will break the internet as we know it.
@Snowshadow I think painting it as a binary is the first mistake one can make on the path of criticising #ai
AI does not have a mind of its own. Even if Silicon Valley tech bros are adamant about building that god for themselves, what you’re looking at are tools wielded by humans.
Humans make moral choices that can be challenged. AI emulates moral choices, but it cannot be challenged because it's not conscious.
Bots should not be entitled to liberty or freedom of speech either.
I've been trialling GitHub Copilot recently at work and, having been generally skeptical of the golden mountains promised by AI hype guys, I have to say that it gave me a modest efficiency gain in some scenarios. I would miss it if it were gone, much like I would miss autocomplete.
I'll probably write up a blog for hgrsd.nl with a few thoughts of where it was helpful for me.
It gets worse. Think of it in the context of MS pushing Copilot harder than ever before with Win11 24H2. Think of it in the context of every Big Tech and major application trying to jump on the #AI bandwagon without any legislative guardrails or training wheels.