nixCraft, to random
@nixCraft@mastodon.social avatar

Many people canceled their OpenAI subscriptions, or it is tough to monetize stuff created with generative AI, I guess, so Sama has come up with a new plan to use all those GPUs. They are now going after OF models. WTF, OpenAI? Are they going to allow deepfakes? This company is beyond evil 👿

mjgardner,
@mjgardner@social.sdf.org avatar

@nixCraft @yzh #OpenAI has since denied they’re getting into #AI-generated #porn or #deepfakes:

• Gizmodo: “OpenAI Says It's Not Making AI Porn”: https://Gizmodo.com/openai-says-its-not-making-ai-porn-1851467771

• Quartz: “OpenAI is staying away from #pornography”: https://Apple.news/A2qVv9XrOSUmCRI_eT8yI8A

ericfreyss, to random French
@ericfreyss@mastodon.social avatar

This week I recommend listening to the "Smashing Security" podcast @smashingsecurity by @gcluley and Carole Theriault, who this week talk about, among other things, deepfakes in the context of the elections (from 14m40s)
https://player.captivate.fm/episode/0a1b002e-d858-4c94-a823-9a2f897711a7?t=880

remixtures, to ai Portuguese
@remixtures@tldr.nettime.org avatar

"AI propaganda is here. But is it persuasive? Recent research published in PNAS Nexus and conducted by Tomz, Josh Goldstein from the Center for Security and Emerging Technology at Georgetown University, and three Stanford colleagues—master’s student Jason Chao, research scholar Shelby Grossman, and lecturer Alex Stamos—examined the effectiveness of AI-generated propaganda.

They found, in short, that it works."

https://hai.stanford.edu/news/disinformation-machine-how-susceptible-are-we-ai-propaganda?utm_source=pocket_saves

thejapantimes, to worldwithoutus
@thejapantimes@mastodon.social avatar

As India's election intensifies, manipulated videos involving top aides of Prime Minister Narendra Modi have sparked police investigations and led to the arrest of several Congress party workers. https://www.japantimes.co.jp/news/2024/05/06/asia-pacific/politics/fake-videos-modi-india-election/

remixtures, to ai Portuguese
@remixtures@tldr.nettime.org avatar

"Until now, all AI-generated videos of people have tended to have some stiffness, glitchiness, or other unnatural elements that make them pretty easy to differentiate from reality. Because they’re so close to the real thing but not quite it, these videos can make people feel annoyed or uneasy or icky—a phenomenon commonly known as the uncanny valley. Synthesia claims its new technology will finally lead us out of the valley.

Thanks to rapid advancements in generative AI and a glut of training data created by human actors that has been fed into its AI model, Synthesia has been able to produce avatars that are indeed more humanlike and more expressive than their predecessors. The digital clones are better able to match their reactions and intonation to the sentiment of their scripts—acting more upbeat when talking about happy things, for instance, and more serious or sad when talking about unpleasant things. They also do a better job matching facial expressions—the tiny movements that can speak for us without words.

But this technological progress also signals a much larger social and cultural shift. Increasingly, so much of what we see on our screens is generated (or at least tinkered with) by AI, and it is becoming more and more difficult to distinguish what is real from what is not. This threatens our trust in everything we see, which could have very real, very dangerous consequences." https://www.technologyreview.com/2024/04/25/1091772/new-generative-ai-avatar-deepfake-synthesia/

SubtleBlade, to ai
@SubtleBlade@mastodon.scot avatar

Can AI image generators be policed to prevent explicit deepfakes of children?

Now that one of the largest ‘training’ datasets has been found to contain child sexual abuse material, are bans on creating such imagery even feasible?
https://www.theguardian.com/technology/2024/apr/23/can-ai-image-generators-be-policed-to-prevent-explicit-deepfakes-of-children

ai6yr, to ai

Arstechnica: "Microsoft’s VASA-1 can deepfake a person with one photo and one audio track

YouTube videos of 6K celebrities helped train AI model to animate photos in real time."
https://arstechnica.com/information-technology/2024/04/microsofts-vasa-1-can-deepfake-a-person-with-one-photo-and-one-audio-track/?utm_brand=arstechnica&utm_social-type=owned&utm_source=mastodon&utm_medium=social

thejapantimes, to worldnews
@thejapantimes@mastodon.social avatar

As the technology to produce personalized videos and chatbots advances and spreads, experts worry that voters will have an increasingly difficult time distinguishing between real and synthetic messages. https://www.japantimes.co.jp/news/2024/04/18/world/politics/ai-tools-india-elections/

KimPerales, to ai
@KimPerales@toad.social avatar

How to ID an imposter in video, audio & text:

"If there's any doubt about a person’s video veracity, ask them to turn their head to the right or the left, or look backward. If the person complies but their head disappears on the video screen, end the call immediately.

It's only a matter of time before deepfakes have 3D capability - models are improving so quickly. Don’t be afraid to ask for old-fashioned “proof of life” video evidence of authenticity."
https://www.cnbc.com/2024/04/10/how-you-can-spot-an-ai-imposter-as-deepfake-technology-goes-mainstream.html

gimulnautti, to Russia
@gimulnautti@mastodon.green avatar

A clip from a Rogozin lecture at Moscow University for future FSB recruits.

He goes as far as saying that the key to victory in a war is discrediting the opponent’s political leadership.

They are to be made to seem untrustworthy in the eyes of their people by accusing them of… and he says this first…

ELITISM and corruption.

Are the bells ringing yet for you? Sound familiar perhaps? You recognise anyone?

https://youtu.be/MCG62ed3J5Y?si=Vdc32Doz63SSDP3X

HistoPol, to ai
@HistoPol@mastodon.social avatar

via

"OpenAI’s voice cloning AI model only needs a 15-second sample to work

Called Voice Engine, the model has been in development since late 2022 and powers the Read Aloud feature in ChatGPT."

"The AI-generated voice can read out text prompts on command in the same language as the speaker or in a number of other languages.👈"
"OpenAI told the publication the model will only be available to about 10 developers."

https://www.theverge.com/2024/3/29/24115701/openai-voice-generation-ai-model

rohini, to random
@rohini@mastodon.social avatar

Thread 🧵: My new paper just got published on Take Back The Tech: https://www.takebackthetech.net/blog/non-consensual-intimate-imagery-overview

Link with DOI: https://zenodo.org/records/10980145

This paper on non-consensual intimate images (NCII) covers a global south perspective. Stuff that is necessary, in my view:

  1. Country-specific versions of this, more urgently for the global south. For example, this resource for Latin American countries: https://acoso.online/

  2. Translations of the global and country-specific papers.

rohini,
@rohini@mastodon.social avatar

3. Consolidating practitioner knowledge and putting it to work. Perhaps training, exchanging tips and strategies, and physical and digital safety techniques. Awareness of relatively more recent threats such as stalkerware, spouseware and “pornographic” deepfakes. And more.

Time is often at a premium when supporting victims and it's important to act in a coordinated way that doesn’t harm the victim further.

  4. Interventions at the law and policy levels.
sebmeineck, to Bulgaria German
@sebmeineck@mastodon.social avatar

The EU wants to fight the "sexual abuse of children" not only via chat control. With a new directive, the Commission aims to strengthen victims and keep pace with technological developments, including "artificial intelligence" and deepfakes. An overview.

https://netzpolitik.org/2024/reaktion-auf-deepfakes-das-will-die-neue-eu-richtlinie-gegen-sexuellen-missbrauch/

reillypascal, to Musicproduction
@reillypascal@hachyderm.io avatar

"What made me put my coffee down this morning was 'synthetically generating music', which apparently now falls under 'needs to disclose'.

Them’s fighting words.

Let’s say I fiddle with some knobs on my Eurorack setup (that’s a music synthesizer if you don’t know), and decide to upload it in the background of a video. Am I now making 'altered content?'"

https://vkc.sh/youtubes-bad-ai-disclosure-policy/

paninid, to random
@paninid@mastodon.world avatar

The reason and most dudes are dismissive of the evil of .
https://mastodon.social/@taylorlorenz/112153691776246746

sesivany, to security
@sesivany@floss.social avatar

My mother-in-law just called me and asked if she should invest in the "new Bitcoin". She saw a video on FB where a famous Czech actor praised it on a talk show. Clearly a deepfake scam.

And she would definitely do it if she didn't have me. Some time ago I defended ad blockers as tools of online security and this is a real-life example that it's really the case.

Until Google, Meta and others get their act together, their ads should be blocked.

#adblocker #deepFakes #security

CelloMomOnCars, to random
@CelloMomOnCars@mastodon.social avatar

How to spot
Disinformation
Misinformation
Fake news

Useful in elections and when you come across climate change denial, covid denial, vaccine denial, etc.

https://www.eff.org/deeplinks/2024/03/protect-yourself-election-misinformation

CelloMomOnCars,
@CelloMomOnCars@mastodon.social avatar

"With AI deepfakes, a candidate’s image can be smeared, or softened. Voters can be steered toward or away from candidates — or even to avoid the polls altogether. But perhaps the greatest threat to democracy, experts say, is that a surge of AI deepfakes could erode the public’s trust in what they see and hear."

https://apnews.com/article/artificial-intelligence-elections-disinformation-chatgpt-bc283e7426402f0b4baa7df280a4c3fd

Nonilex, to ai
@Nonilex@masto.ai avatar

DHS will be the 1st in the fed govt to roll out a plan to integrate AI into a variety of uses, from to helping survivors.
DHS has seen the opportunities & risks of AI firsthand. It found a victim by using AI to create an image of the child 10yrs older. But it has also been tricked into investigations by AI-created deepfakes.


https://www.nytimes.com/2024/03/18/business/homeland-security-artificial-intelligence.html?smid=nytcore-ios-share&referringSource=articleShare&sgrp=c-cb

Rasta, to ai
@Rasta@mstdn.ca avatar

I watched a video recently and I'd recommend you watch it.
Here's the link. It's not short, but it gives a detailed picture of AI: where it is, how it started, and where it's headed faster than we can comprehend. We're all being sampled in every TikTok video produced, for AI to replicate. @jaxwxboss

https://www.youtube.com/watch?v=cB0_-qKbal4

protonmail, to random
@protonmail@mastodon.social avatar

This year is democracy's biggest test in history. Here are our seven predictions for the internet in 2024.


1/9

protonmail,
@protonmail@mastodon.social avatar

4️⃣ Disinformation campaigns

With the help of AI, deepfakes are becoming more convincing and are being used by fraudsters and official campaigns alike.

Unfortunately, regulation is constantly playing catch-up with technology. Better AI regulation, public education on media literacy, and access to trustworthy, independent media are required to combat this issue.
5/9

researchbuzz, to Youtube
@researchbuzz@researchbuzz.masto.host avatar

"A couple of months ago, [Olga] Loiek started a YouTube channel where she talks about mental health and shares her philosophies about life. However, shortly after that, she started receiving messages from followers telling her that they had seen her on Chinese social media. There, she's not Olga Loiek but a Russian woman who speaks Mandarin, loves China and wants to marry a Chinese man."

https://www.voanews.com/a/ukrainian-youtuber-finds-her-ai-clone-selling-russian-goods-on-chinese-internet-/7509009.html
