sesivany, to security
@sesivany@floss.social avatar

My mother-in-law just called me and asked if she should invest in the "new Bitcoin". She saw a video on FB where a famous Czech actor praised it on a talk show. Clearly a deepfake scam.

And she would definitely have done it if she didn't have me. Some time ago I defended ad blockers as online security tools, and this is a real-life example that they really are.

Until Google, Meta and others get their act together, their ads should be blocked.

#adblocker #deepFakes #security

researchbuzz, to politics
@researchbuzz@researchbuzz.masto.host avatar

'Donald Trump supporters have been creating and sharing AI-generated fake images of black voters to encourage African Americans to vote Republican.

BBC Panorama discovered dozens of deepfakes portraying black people as supporting the former president.'

https://www.bbc.com/news/world-us-canada-68440150

TheEuropeanNetwork, to politics

Deepfakes, cheapfakes, and Twitter censorship mar Turkey’s elections.

As the main candidates head to a runoff, disinformation is running riot on Turkish social media.

“When internet users turned to Google to search for Kılıçdaroğlu on that day, the false news was among the top suggestions made by the algorithm.”

https://www.wired.co.uk/article/deepfakes-cheapfakes-and-twitter-censorship-mar-turkeys-elections

koreapro, to random
@koreapro@federated.press avatar

South Korea's National Assembly passed a bill to ban deepfakes in campaigns, with violators facing up to seven years in prison or hefty fines.

https://world.kbs.co.kr/service/news_view.htm?lang=e&Seq_Code=182572

dw_innovation, to ai
@dw_innovation@mastodon.social avatar

"How easy is it to create convincing audio deepfakes at this point?"

"It’s trivial."

"And how would you describe the skill level needed to identify AI-generated audio?"

"Very high."

https://www.scientificamerican.com/article/ai-audio-deepfakes-are-quickly-outpacing-detection/

ogtrekker, to art

Whoever did this, I love you. 🥳🤗👏👏👏🎊🥂


ginaintheburg, to Women
@ginaintheburg@mastodon.world avatar

Laurie Segall on CNN:

The problem with deepfakes is that they no longer require tech savvy to produce. There are apps now that let you create them in a couple of clicks.

90% of deepfakes are sexual in nature and 99% of those victimize women.

We're creating a whole new generation of victims, but we're also creating a whole new generation of abusers.

hesgen, to random

"'[Tesla's] position is that because Mr Musk is famous and might be more of a target for deep fakes, his public statements are immune,' [Judge] Pennypacker wrote, adding that such arguments would allow Musk and other famous people 'to avoid taking ownership of what they did actually say and do'."

https://www.theguardian.com/technology/2023/apr/27/elon-musks-statements-could-be-deepfakes-tesla-defence-lawyers-tell-court?CMP=Share_AndroidApp_Other

TheConversationUS, (edited) to infosec
@TheConversationUS@newsie.social avatar

Beware of cheap fakes! Recent headlines focus on AI-driven #deepfakes, but low-tech #disinformation, like mislabeled videos from other wars, is far more common.

And just as hard to recognize as fake.

Here are some tips on how to recognize them from researchers @mikecaulfield and Sam Wineburg.
#Infosec #misinformation https://theconversation.com/no-youre-not-that-good-at-detecting-fake-videos-2-misinformation-experts-explain-why-and-how-you-can-develop-the-power-to-resist-these-deceptions-217793

TechDesk, to ai
@TechDesk@flipboard.social avatar

A New Jersey high schooler and victim of nonconsensual sexually explicit deepfakes spoke out this week and said “I’m here, standing up and shouting for change, fighting for laws so no one else has to feel as lost and powerless as I did." Currently, there are no federal laws banning the creation and distribution of nonconsensual sexually explicit deepfakes. However, there is a bill stalled in the House that could one day criminalize the creation of them.

https://flip.it/gwAkcw

aram, to random
@aram@aoir.social avatar

Tesla falsely claimed, in court, that incriminating video of EM being EM was deepfaked. There's a name for this tactic: The Liar's Dividend.

As @samgregory told me & @jesse when we interviewed him for our forthcoming book THE SECRET LIFE OF DATA, this is the greatest threat of deepfakes: giving autocrats & human rights abusers plausible deniability.

Tesla's use of this tactic is...problematic. Thankfully the judge ain't buying it.

https://arstechnica.com/tech-policy/2023/04/judge-slams-tesla-for-claiming-musk-quotes-captured-on-video-may-be-deepfakes/

TechDesk, to TaylorSwift
@TechDesk@flipboard.social avatar

Nonconsensual sexually explicit deepfakes of global pop star Taylor Swift went viral this week on platform X. The images were viewed over 27 million times after they were shared on Wednesday. Fortunately for the singer-songwriter, her many fans came to the rescue by mass-reporting the images as "Protect Taylor Swift" began to trend on X. But what about the everyday victims of these online attacks? NBC News reports:

https://flip.it/An27a2

#X

catileptic, to random
@catileptic@chaos.social avatar

Bogdan Manolea, my awesome @apti colleague and the person who has taught me a great deal about the legal side of digital rights, is featured on a podcast created by the Center for Independent Journalism in Romania.

The discussion is in Romanian and covers not the technical side but the legal side, the societal impact, use during elections and, most importantly, an attitude that can go a long way toward thinking critically about media that might be synthetic.

https://youtu.be/0MagaJQAtLg?si=a_ydaYG1vSSo4puL

KimPerales, to ai
@KimPerales@toad.social avatar

"How to ID an imposter in video, audio & text."

"If there's any doubt about a person’s video veracity, ask them to turn their head to the right or the left, or look backward. If the person complies but their head disappears on the video screen, end the call immediately.

It's only a matter of time before deepfakes have 3D capability - models are improving so quickly. Don’t be afraid to ask for old-fashioned “proof of life” video evidence of authenticity.
https://www.cnbc.com/2024/04/10/how-you-can-spot-an-ai-imposter-as-deepfake-technology-goes-mainstream.html

nixCraft, to random
@nixCraft@mastodon.social avatar

Many people canceled their OpenAI subscriptions, or it is tough to monetize stuff created with generative AI, I guess, so Sam Altman comes up with a new plan to use all those GPUs. They are now going after OF models. WTF, OpenAI? Are they going to allow deepfakes? This company is beyond evil 👿

mjgardner,

@nixCraft @yzh OpenAI has since denied they're getting into AI-generated porn or deepfakes:

• Gizmodo: “OpenAI Says It's Not Making AI Porn”: https://Gizmodo.com/openai-says-its-not-making-ai-porn-1851467771

• Quartz: “OpenAI is staying away from ”: https://Apple.news/A2qVv9XrOSUmCRI_eT8yI8A

matrig, to random
@matrig@mastodon.social avatar

If empathy is predicated on common values and experiences based on a shared reality and information, that ship is sailing fast

https://www.axios.com/2023/10/12/mcmahon-misinformation-cbs-deep-fakes-bfd

sebmeineck, to Bulgaria German
@sebmeineck@mastodon.social avatar

The Commission wants to fight the "sexual abuse of children" not only via chat control. With a new EU directive, the Commission aims to strengthen victims and keep pace with technological developments, including "artificial intelligence" and deepfakes. An overview.

https://netzpolitik.org/2024/reaktion-auf-deepfakes-das-will-die-neue-eu-richtlinie-gegen-sexuellen-missbrauch/

paninid, to random
@paninid@mastodon.world avatar

The reason most dudes are dismissive of the evil of deepfakes.
https://mastodon.social/@taylorlorenz/112153691776246746

reillypascal, to Musicproduction
@reillypascal@hachyderm.io avatar

"What made me put my coffee down this morning was 'synthetically generating music', which apparently now falls under 'needs to disclose'.

Them’s fighting words.

Let’s say I fiddle with some knobs on my Eurorack setup (that’s a music synthesizer if you don’t know), and decide to upload it in the background of a video. Am I now making 'altered content?'"

https://vkc.sh/youtubes-bad-ai-disclosure-policy/

gimulnautti, to Russia
@gimulnautti@mastodon.green avatar

A clip from a Rogozin lecture at Moscow University for future FSB recruits.

He goes as far as saying that the key to victory in a war is discrediting the opponent's political leadership.

They are to be made to seem untrustworthy in the eyes of their people by accusing them of… and he says this first…

ELITISM and corruption.

Are the bells ringing yet for you? Sound familiar perhaps? You recognise anyone?

https://youtu.be/MCG62ed3J5Y?si=Vdc32Doz63SSDP3X

SubtleBlade, to ai
@SubtleBlade@mastodon.scot avatar

Can AI image generators be policed to prevent explicit deepfakes of children?

As one of the largest AI ‘training’ datasets has been found to contain child sexual abuse material, are bans on creating such imagery feasible?
https://www.theguardian.com/technology/2024/apr/23/can-ai-image-generators-be-policed-to-prevent-explicit-deepfakes-of-children
