pylapp, to ai
@pylapp@framapiaf.org avatar

Some parts of the #AI domain are #privacy nightmares.

Some parts of AI, like #GenAI, are often built on stolen copyrighted works, #opensource and #FLOSS projects, and personal material.

Big Tech companies ship hazardous software that helps create #deepfakes.

#Microsoft with #Copilot and #Recall, and #OpenAI with #ChatGPT, are among them.
Their software is a danger to privacy, copyright, and users.

Get rid of that shit!
Get rid of all these useless, AI-bullshit-marketed services!

pylapp,
@pylapp@framapiaf.org avatar

@ErikJonker “AI is for a large part open source” means nothing. Which AI? Which tools? Which models?

Llama 2, Bard, Claude 2, and GPT-4 are not open source at all.

It’s time to be lucid: these tools don’t help users, they kill the planet, and they are built on stolen materials only so some companies can earn more money. That’s all.

ErikJonker,
@ErikJonker@mastodon.social avatar

@pylapp ... we can't guarantee that this technology won't be used; that's a fantasy. A lot of it is closed source, but part of it is open source, and anybody can implement and use it on their own.

TheConversationUS, to technology
@TheConversationUS@newsie.social avatar

Although companies have created detectors to help spot #deepfakes, studies have found that biases in the data used to train these tools can lead to certain demographic groups being unfairly targeted.

A team of researchers discovered new methods that improve both the fairness and the accuracy of these detection algorithms by teaching them about human diversity.

https://theconversation.com/deepfake-detection-improves-when-using-algorithms-that-are-more-aware-of-demographic-diversity-226061
@blackmastodon

thejapantimes, to worldnews
@thejapantimes@mastodon.social avatar

An increasing number of foreign actors beyond Russia and China are seeking to influence U.S. elections, officials told a Senate committee. https://www.japantimes.co.jp/news/2024/05/16/world/politics/foreign-attempts-us-elections/

ericfreyss, to random French
@ericfreyss@mastodon.social avatar

This week I recommend listening to the "Smashing Security" podcast @smashingsecurity by @gcluley and Carole Theriault, who this week discuss, among other topics, deepfakes in the context of elections (from 14m40s).
https://player.captivate.fm/episode/0a1b002e-d858-4c94-a823-9a2f897711a7?t=880

gcluley,
@gcluley@mastodon.green avatar

@ericfreyss @smashingsecurity Thank you for your kind words!

remixtures, to ai Portuguese
@remixtures@tldr.nettime.org avatar

"AI propaganda is here. But is it persuasive? Recent research published in PNAS Nexus and conducted by Tomz, Josh Goldstein from the Center for Security and Emerging Technology at Georgetown University, and three Stanford colleagues—master’s student Jason Chao, research scholar Shelby Grossman, and lecturer Alex Stamos—examined the effectiveness of AI-generated propaganda.

They found, in short, that it works."

https://hai.stanford.edu/news/disinformation-machine-how-susceptible-are-we-ai-propaganda?utm_source=pocket_saves

thejapantimes, to worldwithoutus
@thejapantimes@mastodon.social avatar

As India's election intensifies, manipulated videos involving top aides of Prime Minister Narendra Modi have sparked police investigations and led to the arrest of several Congress party workers. https://www.japantimes.co.jp/news/2024/05/06/asia-pacific/politics/fake-videos-modi-india-election/

remixtures, to ai Portuguese
@remixtures@tldr.nettime.org avatar

"Until now, all AI-generated videos of people have tended to have some stiffness, glitchiness, or other unnatural elements that make them pretty easy to differentiate from reality. Because they’re so close to the real thing but not quite it, these videos can make people feel annoyed or uneasy or icky—a phenomenon commonly known as the uncanny valley. Synthesia claims its new technology will finally lead us out of the valley.

Thanks to rapid advancements in generative AI and a glut of training data created by human actors that has been fed into its AI model, Synthesia has been able to produce avatars that are indeed more humanlike and more expressive than their predecessors. The digital clones are better able to match their reactions and intonation to the sentiment of their scripts—acting more upbeat when talking about happy things, for instance, and more serious or sad when talking about unpleasant things. They also do a better job matching facial expressions—the tiny movements that can speak for us without words.

But this technological progress also signals a much larger social and cultural shift. Increasingly, so much of what we see on our screens is generated (or at least tinkered with) by AI, and it is becoming more and more difficult to distinguish what is real from what is not. This threatens our trust in everything we see, which could have very real, very dangerous consequences." https://www.technologyreview.com/2024/04/25/1091772/new-generative-ai-avatar-deepfake-synthesia/

SubtleBlade, to ai
@SubtleBlade@mastodon.scot avatar

Can #AI image generators be policed to prevent explicit #deepfakes of children?

As one of the largest ‘training’ datasets has been found to contain child sexual abuse material, can bans on creating such imagery be feasible?
https://www.theguardian.com/technology/2024/apr/23/can-ai-image-generators-be-policed-to-prevent-explicit-deepfakes-of-children

ai6yr, to ai

Arstechnica: "Microsoft’s VASA-1 can deepfake a person with one photo and one audio track

YouTube videos of 6K celebrities helped train AI model to animate photos in real time."
https://arstechnica.com/information-technology/2024/04/microsofts-vasa-1-can-deepfake-a-person-with-one-photo-and-one-audio-track/?utm_brand=arstechnica&utm_social-type=owned&utm_source=mastodon&utm_medium=social

tehstu,
@tehstu@hachyderm.io avatar

@ai6yr > The researchers say that, for privacy reasons, each example photo on their page was AI-generated by StyleGAN2 or DALL-E 3

Which presumably use so many stolen photos it's effectively private, right Big Tech? Problem solved!

Good grief. Who is asking for any of this bullshit?

ai6yr,

@tehstu "THIS IS SO COOL, LOOK, WE KNOW HOW TO DEEPFAKE PEOPLE USING JUST A PHOTO AND A SNIPPET OF AUDIO!" "Hey, guys, this is going to result in widespread abuse of people's images, phishing and scams, and throw lots of people out of work" "YEAH, BUT IT'S SO COOL! LOOK AT WHAT WE CAN DO!"

thejapantimes, to worldnews
@thejapantimes@mastodon.social avatar

As the technology to produce personalized videos and chatbots advances and spreads, experts worry that voters will have an increasingly difficult time distinguishing between real and synthetic messages. https://www.japantimes.co.jp/news/2024/04/18/world/politics/ai-tools-india-elections/ #worldnews #politics #india #ai #deepfakes #misinformation #fakenews

KimPerales, to ai
@KimPerales@toad.social avatar

How to ID an imposter in video, audio & text:

"If there's any doubt about the veracity of a person's video, ask them to turn their head to the right or the left, or to look backward. If the person complies but their head disappears on the video screen, end the call immediately.

It's only a matter of time before deepfakes have 3D capability; the models are improving so quickly. Don't be afraid to ask for old-fashioned 'proof of life' video evidence of authenticity."
https://www.cnbc.com/2024/04/10/how-you-can-spot-an-ai-imposter-as-deepfake-technology-goes-mainstream.html

gimulnautti, to Russia
@gimulnautti@mastodon.green avatar

A clip from a Rogozin lecture at Moscow University for future FSB recruits.

He goes as far as saying that the key to victory in a war is discrediting the opponent’s political leadership.

They are to be made to seem untrustworthy in the eyes of their own people by accusing them of… and he says this first…

ELITISM and corruption.

Are the bells ringing yet for you? Does it sound familiar, perhaps? Do you recognise anyone?

https://youtu.be/MCG62ed3J5Y?si=Vdc32Doz63SSDP3X
