w3c, to random
@w3c@w3c.social avatar

At our recent AC meeting in Hiroshima, Japan 🇯🇵 Sophia Wu (Ant Group) spoke on "Fighting the deepfake threats."

Generative AI has made it much easier and cheaper to create new content, but it also raises trust concerns due to deepfake attacks. Using AI, fraudsters can craft fake videos from victims' images, enabling identity fraud. In her talk, Wu called for collaboration to tackle these threats with new, complementary Web standards.

🎬 Watch the video: https://youtu.be/PGBIfZSF174

gcluley, to ai
@gcluley@mastodon.green avatar

Prosecutors have filed 26 criminal charges against a political consultant responsible for a robocall that used a deepfake Joe Biden to dissuade voters from participating in New Hampshire's primary.

I wonder if we might see similar AI-powered shenanigans in the run-up to the UK's General Election?

https://edition.cnn.com/2024/05/23/politics/new-hampshire-ai-robocall-biden-charges/index.html

catileptic, to random
@catileptic@chaos.social avatar

The Romanian Ministry of Digitalization released a form that people can use to report malicious materials. What counts as malicious? Fraud, propaganda, disinformation.

We went into detail about how misguided this is, in a Romanian article over on the @apti website: https://apti.ro/ordinul-mcid-deepfake-praf-cu-iz-electoral .

But because this matter concerns me deeply, here's a breakdown in English (especially relevant if you're following the implementation): 1/4

catileptic,
@catileptic@chaos.social avatar

The Ministry doesn't have any competence to decide what propaganda & disinformation is. The end result may be that legitimate speech gets classified as malicious, or that truly harmful speech gets classified as legit. Both are dangerous.

There could be a collaboration (and knowledge sharing) between the government, civil society, and academia, under the DSA. But the Romanian Ministry is adamant about not reaching out to and involving these sectors. We also don't know who the competent authorities are. 3/4

catileptic,
@catileptic@chaos.social avatar

Last but not least, there is no transparency and no feedback.

A user doesn't find out whether the content they reported was forwarded to the relevant VLOP. We don't even get to know how many reports the Ministry processed, who they forwarded the reports to, or whether action was taken to remove the content.

The form asks citizens to provide an e-mail address, but doesn't mention why or indicate what it will be used for (hello, GDPR).

It's all just dust in the wind (of elections) 4/4

glynmoody, to random
@glynmoody@mastodon.social avatar

UK engineering firm falls victim to £20m scam - https://www.theguardian.com/technology/article/2024/may/17/uk-engineering-arup-deepfake-scam-hong-kong-ai-video "Hong Kong employee was duped into sending cash to criminals by AI-generated video call" whoops - lots more like this coming...

evawolfangel, to random German
@evawolfangel@chaos.social avatar

I've prepared something special for my talk tonight in Stuttgart on disinformation, AI, and deepfakes. I won't reveal more just yet, but it's going to be fun!

7 pm at the beautiful Stadtbibliothek. Do drop by if you're nearby - I'm looking forward to it!

https://veranstaltungen-stadtbibliothek-stuttgart.de/index.php?zielgruppe=1&zweigstelle=ZB&id=14353

Aleenaa, to news
@Aleenaa@india.goonj.xyz avatar

Did you fall for the deepfake as well?

"Couldn't make it to the MET, had to work," Katy Perry posted on Instagram, alongside a video of herself singing in the studio — as well as two photos seemingly showing her at the gala.
https://www.npr.org/2024/05/07/1249570785/katy-perry-met-gala-deepfake?utm_medium=JSONFeed&utm_campaign=news&utm_source=press.coop

@mastodonindians

christianschwaegerl, to ArtificialIntelligence
@christianschwaegerl@mastodon.social avatar


From the NYT:
“The prosecution just played the video Trump recorded on Oct. 7, 2016, when he acknowledged that he’d said the words on the ‘Access Hollywood’ tape and he apologized for them. But within months after being elected, Trump was telling people privately that he thought the tape might be a fake. If a similar tape came out today, there is almost no doubt that Trump would claim it was artificial intelligence and many voters would likely believe that.”

TechDesk, to ai
@TechDesk@flipboard.social avatar

Apple has removed a number of AI image generation apps from the App Store after they were found to be advertising the ability to create nonconsensual nude images.

Apple’s actions come following an investigation by @404mediaco, which found adverts for these apps being pushed on Instagram. The apps were only removed when 404 Media provided links to them and their related ads to Apple, indicating “the company was not able to find the apps that violated its policy itself.”

https://flip.it/rDcG7V

gcluley, to Cybersecurity
@gcluley@mastodon.green avatar

Something for the weekend?

Check out the latest "Smashing Security" podcast from yours truly and Carole Theriault, looking at Indian election deepfakery, the kindness of the Canadian rail system, Leicester's ransomware attack, and 12 Angry Men!

Thanks to our sponsors Kolide by 1Password, Vanta, and Sonrai Security for their fab support!

https://grahamcluley.com/smashing-security-podcast-369/

TechDesk, to ai
@TechDesk@flipboard.social avatar

A federal judicial panel has met in Washington, DC, to discuss the rising challenge of policing AI-generated evidence in court trials.

The eight-member panel heard from computer scientists and academics about the potential risks of AI-manipulated images and videos disrupting a trial, and will be responsible for drafting evidence-related amendments to the Federal Rules of Evidence.

Not all on the panel feel this is necessary though, writes @arstechnica, with one judge stating, “I'm not sure that this is the crisis that it's been painted as.”

https://flip.it/HMIl0g

jik, to Cybersecurity
@jik@federate.social avatar

I just got an email from USAA with the subject "Introducing Voice ID from USAA". They're rolling out a new feature allowing people to be authenticated via voiceprint analysis when they call.
What could possibly go wrong?
Oh, yeah, this:
https://www.pcmag.com/news/deepfake-software-fools-voice-authentication-with-99-success-rate
Fortunately it's opt-in, and I certainly won't be opting in.

kaiserkiwi, to microsoft
@kaiserkiwi@corteximplant.com avatar
0x0,
@0x0@corteximplant.com avatar

@kaiserkiwi In all of these scenarios where they want to make photorealistic deepfakes, I just ask myself: why?

Like, why make this? Is there any benefit in this for humanity? Because I still haven't found a possible answer.

All this will do, once it crosses a certain threshold, is create possibly false evidence to wrongly convict someone in court, and enable other horrible scenarios.

marcel, to random
@marcel@waldvogel.family avatar

If you ever wanted to see(!) the Mona Lisa rap, this is for you.

For everyone else: This will be the future of even more lifelike deepfakes.

https://arstechnica.com/information-technology/2024/04/microsofts-vasa-1-can-deepfake-a-person-with-one-photo-and-one-audio-track/

marcel,
@marcel@waldvogel.family avatar

Btw., here is Mona Lisa rapping. Fascinatingly cute, this little baddie! 🫣
(Let's see whether this video gets embedded… EDIT2: Attaching the video.)
https://vasavatar.github.io/VASA-1/video/o1.mp4
Source: https://vasavatar.github.io/VASA-1/

Video of Mona Lisa rapping. At the lower left, the original image (the one Leonardo actually painted) is shown.
