@ErikJonker “AI is for a large part opensource” means nothing. Which AI? Which tools? Which models?
Llama 2, Bard, Claude 2, GPT-4 are not open source at all.
It’s time to be lucid: these tools don’t help users, they harm the planet, and they are built on stolen material, only so that a few companies can earn more money. That’s all.
@pylapp ... we can't guarantee that this technology won't be used; that's a fantasy. A lot of it is closed source, but part of it is open source, and everybody can implement and use it on their own.
Although companies have created detectors to help spot #deepfakes, studies have found that biases in the data used to train these tools can lead to certain demographic groups being unfairly targeted.
A team of researchers has discovered new methods that improve both the fairness and the accuracy of these detection algorithms by teaching them about human diversity.
#AI #GenerativeAI #Propaganda #DeepFakes #Disinformation: "AI propaganda is here. But is it persuasive? Recent research published in PNAS Nexus and conducted by Tomz, Josh Goldstein from the Center for Security and Emerging Technology at Georgetown University, and three Stanford colleagues—master’s student Jason Chao, research scholar Shelby Grossman, and lecturer Alex Stamos—examined the effectiveness of AI-generated propaganda.
#AI #GenerativeAI #SyntheticMedia #DeepFakes: "Until now, all AI-generated videos of people have tended to have some stiffness, glitchiness, or other unnatural elements that make them pretty easy to differentiate from reality. Because they’re so close to the real thing but not quite it, these videos can make people feel annoyed or uneasy or icky—a phenomenon commonly known as the uncanny valley. Synthesia claims its new technology will finally lead us out of the valley.
Thanks to rapid advancements in generative AI and a glut of training data created by human actors that has been fed into its AI model, Synthesia has been able to produce avatars that are indeed more humanlike and more expressive than their predecessors. The digital clones are better able to match their reactions and intonation to the sentiment of their scripts—acting more upbeat when talking about happy things, for instance, and more serious or sad when talking about unpleasant things. They also do a better job matching facial expressions—the tiny movements that can speak for us without words.
But this technological progress also signals a much larger social and cultural shift. Increasingly, so much of what we see on our screens is generated (or at least tinkered with) by AI, and it is becoming more and more difficult to distinguish what is real from what is not. This threatens our trust in everything we see, which could have very real, very dangerous consequences." https://www.technologyreview.com/2024/04/25/1091772/new-generative-ai-avatar-deepfake-synthesia/
@tehstu "THIS IS SO COOL, LOOK, WE KNOW HOW TO DEEPFAKE PEOPLE USING JUST A PHOTO AND A SNIPPET OF AUDIO!" "Hey, guys, this is going to result in widespread abuse of people's images, phishing and scams, and throw lots of people out of work" "YEAH, BUT IT'S SO COOL! LOOK AT WHAT WE CAN DO!"
How to ID an #AI imposter in video, audio & text.
"If there's any doubt about a person’s video veracity, ask them to turn their head to the right or the left, or look backward. If the person complies but their head disappears on the video screen, end the call immediately."