itnewsbot, to AdobePhotoshop
@itnewsbot@schleuss.online avatar

Imran Khan — victory speech from jail shows AI genius - Although the technology has previously been employed during Pakistan’s notoriously... - https://readwrite.com/imran-khan-victory-speech-from-jail-shows-ai-genius/

itnewsbot, to news
@itnewsbot@schleuss.online avatar

Further action required on deepfakes, says British self-governing island - Government officials on the Isle of Man are pushing for a broader policy on deepfa... - https://readwrite.com/further-action-required-on-deepfakes-says-british-self-governing-island/

itnewsbot, to politics
@itnewsbot@schleuss.online avatar

Robocalls made with AI-generated voices are now illegal - The Federal Communications Commission (FCC) has outlawed robocalls containing voic... - https://readwrite.com/robocalls-made-with-ai-generated-voices-are-now-illegal/

AAKL, to Cybersecurity
@AAKL@noc.social avatar

Metadata can be edited.

Google wants to fight deepfakes with a special badge https://www.popsci.com/technology/google-deepfake-ai-badge/

dw_innovation, to ai
@dw_innovation@mastodon.social avatar

"How easy is it to create convincing audio deepfakes at this point?"

"It’s trivial."

"And how would you describe the skill level needed to identify AI-generated audio?"

"Very high."

https://www.scientificamerican.com/article/ai-audio-deepfakes-are-quickly-outpacing-detection/

itnewsbot, to ai
@itnewsbot@schleuss.online avatar

Meta will label AI-generated content from OpenAI and Google on Facebook, Instagram - On Tuesday, Meta announc... - https://arstechnica.com/?p=2001313 #ai-generatedimages #machinelearning #imagesynthesis #videosynthesis #socialmedia #deepfakes #aiethics #biz #meta #ai

bsi, to TaylorSwift German
@bsi@social.bund.de avatar

Taylor Swift recently topped the trending lists of the social media platforms because of deepfaked nude photos. A dangerous phenomenon, and not just for celebrities: in theory, anyone can become the victim of such a manipulation.

We show you here how to spot deepfakes 👉 https://www.bsi.bund.de/dok/1009560

itnewsbot, to ArtificialIntelligence
@itnewsbot@schleuss.online avatar

Facebook rules allowing fake Biden “pedophile” video deemed “incoherent” - A fake video m... - https://arstechnica.com/?p=2001056

itnewsbot, to machinelearning
@itnewsbot@schleuss.online avatar

Deepfake scammer walks off with $25 million in first-of-its-kind AI heist - On Sunday, a rep... - https://arstechnica.com/?p=2000988

remixtures, to internet Portuguese
@remixtures@tldr.nettime.org avatar

"Slate: Fake nude images aren’t an entirely new issue. What’s the history of this problem?

Sophie Maddocks: There’s a historian, Jessica Lake, and she’s done some really interesting research tracing the potential origins of the creation of fake nude images. She talks a lot about the rise of photography in the late 19th century, and writes about an example of face-swapping in late-19th-century photography where images of the faces of high-society women were pasted onto nude bodies and then circulated. And not only is that one possible starting point when thinking about the history of fake nudes, it’s also an interesting starting point for how we see the creation of A.I.–generated fake nudes. Fake nudes first went viral in the online sense in 2017 with the creation of the DeepNude app where the faces of individuals were digitally pasted onto the bodies of adult film actors, almost exactly mimicking what had been done in the late 19th century with photography.

So there is a long history to this harm, but I think there is that long-standing desire to produce fake nude images—almost exclusively of women. With the rise of the internet, we’ve seen ways of creating and sharing ever more photorealistic images—until we get to the last year with the rise of video- and image-generation models that create extremely realistic imagery and A.I. tools trained on millions of images of girls and women scraped from the internet without their consent. You can either use a text prompt or an existing image to produce a very realistic fake nude.

So A.I. has increased the volume and severity of this problem on the internet.

Absolutely. In 2017, when activists and the first people affected by A.I.–assisted deepfakes, like famous actors and singers, started to raise the alarm about this issue, they really gave us a roadmap for what would happen."

https://slate.com/technology/2024/01/taylor-swift-deepfake-porn-cyber-violence-abuse-research.html?mc_cid=6ff095fc03

alexanderhay, to news
@alexanderhay@mastodon.social avatar

Whenever I hear the phrase 'move on', what I hear is 'this is inconvenient for those in authority'. Those little shits should have been expelled and prosecuted.

"...One 14-year-old girl told the NSPCC’s ChildLine service last year that a group of boys made fake explicit sexual images of her and other girls and sent them to group chats. The boys were excluded... but returned, and the girls were told to move on, which they struggled to do..."

https://www.theguardian.com/technology/2024/jan/31/inside-the-taylor-swift-deepfake-scandal-its-men-telling-a-powerful-woman-to-get-back-in-her-box

remixtures, to ai Portuguese
@remixtures@tldr.nettime.org avatar

"Microsoft has introduced more protections to Designer, an AI text-to-image generation tool that people were using to make nonconsensual sexual images of celebrities. Microsoft made the changes after 404 Media reported that the AI-generated nude images of Taylor Swift that went viral on Twitter last week came from 4chan and a Telegram channel where people were using Designer to make AI-generated images of celebrities.

"We are investigating these reports and are taking appropriate action to address them," a Microsoft spokesperson told us in an email on Friday. "Our Code of Conduct prohibits the use of our tools for the creation of adult or non-consensual intimate content, and any repeated attempts to produce content that goes against our policies may result in loss of access to the service. We have large teams working on the development of guardrails and other safety systems in line with our responsible AI principles, including content filtering, operational monitoring and abuse detection to mitigate misuse of the system and help create a safer environment for users.”"

https://www.404media.co/microsoft-closes-loophole-that-created-ai-porn-of-taylor-swift/

TechDesk, to TaylorSwift
@TechDesk@flipboard.social avatar

Nonconsensual sexually explicit deepfakes of global pop star Taylor Swift went viral this week on platform X. The images were viewed over 27 million times after they were shared on Wednesday. Fortunately for the singer-songwriter, her many fans came to the rescue by mass-reporting the images as "Protect Taylor Swift" began to trend on X. But what about the everyday victims of these online attacks? NBC News reports:

https://flip.it/An27a2

#X

claesdevreese, to ai
@claesdevreese@mastodon.social avatar

We should not be alarmist about AI and elections. But we should also not be naive and pretend that the techniques are not there and that bad actors would not deploy them. Exhibit #...

https://www.ft.com/content/bd75b678-044f-409e-b987-8704d6a704ea?sharetype=blocked

thejapantimes, to worldnews
@thejapantimes@mastodon.social avatar

The emergence of a deepfake audio of U.S. President Joe Biden in the lead-up to U.S. elections has raised alarm among experts concerned about how doctored messages like this might sway the polls. https://www.japantimes.co.jp/news/2024/01/23/world/politics/deepfake-audio-biden-us-elections/?utm_content=buffer48ba0&utm_medium=social&utm_source=mastodon&utm_campaign=bffmstdn

ppatel, to ai
@ppatel@mstdn.social avatar

Nope. No one saw this coming!

How AI has been helping criminals who use deepfakes and voice cloning for financial scams, forcing banks to invest in AI to counter fraud.

https://t.co/w8KFzHnDF0

TechDesk, to ai
@TechDesk@flipboard.social avatar

A New Jersey high schooler and victim of nonconsensual sexually explicit deepfakes spoke out this week, saying “I’m here, standing up and shouting for change, fighting for laws so no one else has to feel as lost and powerless as I did." Currently, there are no federal laws banning the creation and distribution of nonconsensual sexually explicit deepfakes. However, a bill stalled in the House could one day criminalize their creation.

https://flip.it/gwAkcw

simsus, to microsoft German
@simsus@social.tchncs.de avatar
crazy2bike,

@ubo @simsus

My view on AI remains:
The spirits we summoned, we will never be rid of.

It brings more trouble than it can ever be of use.

If you add up the costs of the above, among other things, it will even cost more economically than it saves.

br00t4c, to DadBin
@br00t4c@mastodon.social avatar

When called down to the principal's office last October, high school student Francesca Mani learned that someone had taken online pictures of her and used artificial intelligence to generate fake nudes that were then shared on social media.

https://www.cbc.ca/news/canada/education-curriculum-sexual-violence-deepfake-1.7073380
