MedievalMideast, to ai

Zoom just changed their terms and conditions to include using anyone's video and audio for training, with no option for opting out. You too can help train AIs!

Living with a disabled spouse, I used Zoom a lot to get through the ongoing global pandemic.

What alternatives are out there for remote teaching/meetings?

antygon, to ai Polish
@antygon@pol.social avatar

Oooh, a heavy-hitting campaign. No punches pulled. But that's probably how it has to be.

T-Mobile shows very directly where uploading photos to the internet can lead. Here, using children as the example…

https://youtu.be/F4WZ_k0vUDM

kaffeeringe, to random German
@kaffeeringe@social.tchncs.de avatar

I think this deepfake video from the "Zentrum für politische Schönheit" has opened a Pandora's box. If they use it for their purposes, others will do so too. Until now, that was still frowned upon. The ARD video had, AFAIK, been put into circulation by unknown persons. Now everyone will do it for everything and always say: "Yes, in our case it's also just art or satire. We only want to draw attention to something and drive the point home."

syt, to random

“What Tesla is contending is deeply troubling to the Court […] Their position is that because Mr. Musk is famous and might be more of a target for deep fakes, his public statements are immune […] In other words, Mr. Musk, and others in his position, can simply say whatever they like in the public domain, then hide behind the potential for their recorded statements being a deep fake to avoid taking ownership of what they did actually say and do.” https://www.npr.org/2023/05/08/1174132413/people-are-trying-to-claim-real-videos-are-deepfakes-the-courts-are-not-amused

netzpolitik_feed, to random German
@netzpolitik_feed@chaos.social avatar

In Spain, numerous girls report that AI-generated nude photos of them are circulating at school. The case is now becoming a national scandal, because the mothers of those affected have organized to take action against the classmates responsible.

https://netzpolitik.org/2023/deepfakes-in-spanien-gefaelschte-nacktbilder-von-maedchen-sorgen-fuer-aufschrei/

heiseonline, to news German

Spain: Teenagers spread AI-generated nude images of girls

AI-generated nude images of almost two dozen girls in Almendralejo were circulating at their schools. Ten boys and an AI app are said to be responsible.

https://www.heise.de/news/Spanien-Jugendliche-verbreiten-KI-generierte-Nacktbilder-von-Maedchen-9314057.html?wt_mc=sm.red.ho.mastodon.mastodon.md_beitraege.md_beitraege

heiseonline, to news German

AI in politics: AfD shows fake gold treasure and doesn't explain it

The AfD has inherited gold. At its party convention, a picture of it is supposedly shown, yet one man in it has six fingers. The image was generated artificially.

https://www.heise.de/news/AfD-zeigt-KI-Goldschatz-und-erklaert-es-nicht-9230703.html?wt_mc=sm.red.ho.mastodon.mastodon.md_beitraege.md_beitraege

mfriess, to ai

Effective 27 July, Zoom changed their terms of service (T&C), whereby, without any opt-out, YOU give them consent to perpetually use your content (video, audio, …) also for “training and tuning of algorithms and models”.
https://stackdiary.com/zoom-terms-now-allow-training-ai-on-user-content-with-no-opt-out/

Check for yourself:
https://explore.zoom.us/en/terms/
Compare with the version archived on 25 July by the Internet Archive:
https://web.archive.org/web/20230725013414/https://explore.zoom.us/en/terms/
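
For anyone who wants to automate that comparison, here is a rough Python sketch (requests plus the standard-library difflib; the tag stripping is deliberately crude, and the Wayback copy will carry some archive-toolbar markup into the diff):

```python
# Rough sketch: fetch both versions of the Zoom terms, strip HTML tags,
# and print a unified diff of the visible text.
import difflib
import re
import requests

CURRENT = "https://explore.zoom.us/en/terms/"
ARCHIVED = ("https://web.archive.org/web/20230725013414/"
            "https://explore.zoom.us/en/terms/")

def visible_text(url: str) -> list[str]:
    html = requests.get(url, timeout=30).text
    text = re.sub(r"<[^>]+>", " ", html)   # drop tags
    text = re.sub(r"\s+", " ", text)       # collapse whitespace
    # split into sentence-ish chunks so the diff stays readable
    return [chunk.strip() for chunk in text.split(". ") if chunk.strip()]

diff = difflib.unified_diff(
    visible_text(ARCHIVED), visible_text(CURRENT),
    fromfile="archived 2023-07-25", tofile="current", lineterm="",
)
print("\n".join(diff))
```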

tdverstynen, to ai
@tdverstynen@neuromatch.social avatar

Why shouldn’t it be illegal to make a deepfake of any real person without their certified and explicit consent? I can’t think of good arguments against it.

sebmeineck, to ai German
@sebmeineck@mastodon.social avatar

Have you read it too: Marco Buschmann is demanding labeling for AI images?

Well, selling the Buschmann quote as a "demand", as the Deutsche Presse-Agentur among others did, is nonsense. The EU recently agreed on an AI Regulation, including labeling requirements for deepfakes. There is nothing left to demand.

A facepalm for @netzpolitik_feed

https://netzpolitik.org/2024/kennzeichnung-fuer-ki-bilder-buschmann-fordert-was-laengst-beschlossen-wurde/

didleth, to ai Polish
@didleth@mastodon.social avatar

In the wake of last week's events, and the doubts of certain people overheard here and there...

"Why is spreading deepfake porn of female Sejm candidates on Twitter a real problem?"

https://writefreely.pl/didleth/h1-dlaczego-rozpowszechnianie-deepfakeporn-kandydatek-do-sejmu-na-twitterze-to

#X

paul_denton, to random French
@paul_denton@mastodon.social avatar

Bardella feigns outrage at the deepfakes of Marine Le Pen and Marion Maréchal, talks of "malicious use" and wants "regulation". Good timing: the European Parliament has just adopted a first law regulating the subject. He has been an MEP for five years…

informapirata, to eticadigitale Italian
@informapirata@mastodon.uno avatar

«Facebook has a serious problem containing the #DeepFake financial scams that exploit the faces of Italian politicians and well-known journalists.»
«Now it's the turn of Giorgia Meloni and Enrico Mentana with TgLa7; the post has already been reported to the competent authorities»

The post by @alexorlowski shows these videos running as sponsored messages (which makes Facebook not simply a victim, but an accomplice in the scam...)

@eticadigitale

https://twitter.com/alex_orlowski/status/1780007107275702705

parismarx, (edited ) to tech
@parismarx@mastodon.online avatar

After AI-generated porn of Taylor Swift spread on Twitter, the issue finally got more attention — but it’s much bigger than celebrities.

On #TechWontSaveUs, I spoke to @kattenbarge about how deepfake nudes are wreaking havoc in women’s lives.

https://techwontsave.us/episode/215_deepfake_abuse_is_a_crisis_w_kat_tenbarge

#tech #deepfake #ai #artificialintelligence

stshank, to photography
@stshank@mstdn.social avatar

Can you trust that photo? Dozens of photographers, engineers, and other experts helped me detail the issue. Digital cameras have always invented 2/3 of their data, but AI raises the stakes. I'm not as pessimistic as many, though. Do you trust your friends? https://www.cnet.com/tech/mobile/features/how-close-is-that-photo-to-the-truth-what-to-know-in-the-age-of-ai/
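
The "invented 2/3 of their data" point refers to Bayer demosaicing: each sensor pixel captures only one of the three color channels, and the camera interpolates the other two. A minimal numpy sketch of the idea, assuming an RGGB mosaic and a deliberately crude neighbor average (not any real camera's pipeline):

```python
import numpy as np

def demosaic_rggb(mosaic: np.ndarray) -> np.ndarray:
    """Naive demosaic of an RGGB Bayer mosaic: every sensor pixel measured
    only one channel, so 2 of the 3 output values per pixel are interpolated
    here by a crude 3x3 neighbor average (real cameras are far cleverer)."""
    h, w = mosaic.shape
    rgb = np.zeros((h, w, 3))
    measured = np.zeros((h, w, 3), dtype=bool)
    # Place the measured samples according to the RGGB layout.
    rgb[0::2, 0::2, 0] = mosaic[0::2, 0::2]; measured[0::2, 0::2, 0] = True  # R
    rgb[0::2, 1::2, 1] = mosaic[0::2, 1::2]; measured[0::2, 1::2, 1] = True  # G
    rgb[1::2, 0::2, 1] = mosaic[1::2, 0::2]; measured[1::2, 0::2, 1] = True  # G
    rgb[1::2, 1::2, 2] = mosaic[1::2, 1::2]; measured[1::2, 1::2, 2] = True  # B
    # "Invent" the missing values: average measured samples in each 3x3 window.
    for c in range(3):
        vals = np.pad(rgb[:, :, c], 1)
        hits = np.pad(measured[:, :, c].astype(float), 1)
        total = sum(vals[dy:dy + h, dx:dx + w] for dy in range(3) for dx in range(3))
        count = sum(hits[dy:dy + h, dx:dx + w] for dy in range(3) for dx in range(3))
        estimate = total / np.maximum(count, 1)
        rgb[:, :, c] = np.where(measured[:, :, c], rgb[:, :, c], estimate)
    return rgb

# Of a 12-megapixel sensor's 36 million RGB output values, only 12 million
# were actually measured; the other two thirds come from interpolation like this.
```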

BlumeEvolution, to hongkong German
@BlumeEvolution@sueden.social avatar

It has begun. Reports of a first live deepfake scam in the millions.

An employee in Hong Kong transferred 23 million euros to fraudsters who used deepfakes to pose as his superiors in a video conference. https://www.golem.de/news/ki-deepfake-eines-finanzchefs-ermoeglicht-millionenbetrug-2402-181858.html

glynmoody, to tv
@glynmoody@mastodon.social avatar

Iran-backed hackers interrupt UAE TV streaming services with deepfake news - https://www.theguardian.com/technology/2024/feb/08/iran-backed-hackers-interrupt-uae-tv-streaming-services-with-deepfake-news "Microsoft analysts cite reports saying disruption by group known as Cotton Sandstorm also reached audiences in UK and Canada" soon to be the new normal

Nonilex, to meta
@Nonilex@masto.ai avatar

#Meta expands #AI labeling policies as 2024 presidential #election nears
New ‘#MadeWithAI’ labels will be applied to posts the company detects have been generated by AI. Starting in May 2024, Meta will identify sounds, images & videos generated by #ArtificialIntelligence on its #social #networks amid widespread concern that a rise in “#deepfake” posts will mislead #SocialMedia #users.
#moderation #responsibility #tech #oversight #Facebook #Instagram
https://www.washingtonpost.com/technology/2024/04/05/meta-ai-labels/

tao, to ai
@tao@mathstodon.xyz avatar

The ability of AI tools to readily generate highly convincing "deepfake" text, audio, images, and (soon) video is, arguably, one of the greatest near-term concerns about this emerging technology. Fundamental to any proposal to address this issue is the ability to accurately distinguish "deepfake" content from "genuine" content. Broadly speaking, there are two sides to this ability:

  • Reducing false positives. That is, reducing the number of times someone mistakes a deepfake for the genuine article. Technologies to do so include watermarking of human and AI content, and digital forensics.

  • Reducing false negatives. That is, reducing the number of times genuinely authentic content is mistakenly dismissed as a deepfake. There are cryptographic protocols to help achieve this, such as digital signatures and other provenance authentication technology.

Much of the current debate about deepfakes has focused on the first aim (reducing false positives), where the technology is quite weak (AI, by design, is very good at training itself to pass any given metric of inauthenticity, as per Goodhart's law); also, measures to address the first aim often come at the expense of the second. However, the second aim is at least as important, and arguably much more technically and socially feasible, with the adoption of cryptographically secure provenance standards. One such promising standard is the C2PA standard https://c2pa.org/, which is already adopted by several major media and technology companies (though, crucially, social media companies will also need to buy into such a standard and implement it by default for users for it to be truly effective).
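
The "reducing false negatives" side ultimately rests on ordinary digital signatures: a publisher signs the content, and anyone with the matching public key can check that it hasn't been altered. A minimal Python sketch of that idea using the pyca/cryptography library (the file name and key handling are placeholders; C2PA itself embeds signed manifests in a file's metadata rather than signing raw bytes like this):

```python
# Minimal provenance-by-signature sketch (the idea behind C2PA-style standards).
# Assumes the pyca/cryptography package; "photo.jpg" is a placeholder file.
from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

# Publisher side: generate a key pair and sign the media bytes.
private_key = Ed25519PrivateKey.generate()
public_key = private_key.public_key()

media = open("photo.jpg", "rb").read()
signature = private_key.sign(media)  # would be shipped alongside the file

# Consumer side: verify the bytes against the publisher's public key.
def is_genuine(content: bytes, sig: bytes) -> bool:
    try:
        public_key.verify(sig, content)
        return True   # exactly what the publisher signed
    except InvalidSignature:
        return False  # altered, or signed by someone else

print(is_genuine(media, signature))                 # True
print(is_genuine(media + b"tampered", signature))   # False
```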

tinker, to infosec

So some of you might remember this post (and the subsequent demonstration on national news) of using a voice cloning tool (AI, Audio Deep Fake) by @racheltobac

Link to post: https://infosec.exchange/@racheltobac/110963070495263373

(If you haven't seen it, go watch it. Rachel is amazing.)

I'd never needed to do a similar attack before, but! I was just tasked yesterday with researching it.

Asked some friends for a turn-key solution to clone voices. Got pointed to a website. Signed up for $1 a month (first month... then it goes to $5 a month thereafter).

Pulled some audio of my mark down from a youtube interview (a podcast works great too).

Only needed a minute's worth of audio.

Uploaded it to the website for cloning.

Typed out a quick script for the voice to read.

30 seconds later, I had my cloned audio.

It was so good that it even included natural voice inflections AND!!! verbal pauses like umm's and uhh's that matched the mark's original presentation. I can't tell the difference between the cloned voice and the original person.

Y'all... voice cloning and audio deep fakes are well past the ease of "script-kiddy" level. Anyone can do it.

msquebanh, (edited ) to chinese
@msquebanh@mastodon.sdf.org avatar

Since 2022, a swarm of Chinese startups and major companies have been offering the service of creating deepfake avatars for e-commerce livestreaming. With just a few minutes of sample video and $1,000 in costs, brands can clone a human streamer to work 24/7.
https://www.technologyreview.com/2023/09/19/1079832/chinese-ecommerce-deepfakes-livestream-influencers-ai/amp/

FreakyFwoof, to ai

Want to be put to sleep by a license agreement? Here's a fake version of me reading a portion of the GNU license, set to chilled out music just for fun.
