benroyce, to ai
@benroyce@mastodon.social avatar

this is not

this is not a scene from a movie

this is simply a one-in-a-billion video, shot at the right time and the right angle by a teenager a few days ago, May 18/19

fucking amazing! positively biblical

experts say it was a fragment, a few feet wide

https://www.news.com.au/technology/science/space/blue-meteor-falls-through-the-sky-over-spain-and-portugal/news-story/7bd39d794d6a5e79feade5723e4d4787

amazing meteor video

goodbye,
@goodbye@mastodon.nl avatar

@benroyce how do you know it is real? Were you there? 🤔

pylapp, to ai
@pylapp@framapiaf.org avatar

Some parts of the #AI domain are turning into #privacy nightmares.

Some parts of AI, like #GenAI, are often based on stolen copyrighted material, #opensource and #FLOSS projects, and personal data.

Big Tech companies can ship hazardous software that helps create #deepfakes.

#Microsoft with #Copilot and #Recall, and #OpenAI with #ChatGPT, are among them.
Their software is a danger to privacy, copyright, and users.

Get rid of that shit!
Get rid of all these useless AI-marketing-bullshit services!

TheConversationUS, to technology
@TheConversationUS@newsie.social avatar

Although companies have created detectors to help spot deepfakes, studies have found that biases in the data used to train these tools can lead to certain demographic groups being unfairly targeted.

A team of researchers discovered new methods that improve both the fairness and the accuracy of these detection algorithms by teaching them about human diversity.

https://theconversation.com/deepfake-detection-improves-when-using-algorithms-that-are-more-aware-of-demographic-diversity-226061
@blackmastodon

thejapantimes, to worldnews
@thejapantimes@mastodon.social avatar

An increasing number of foreign actors beyond Russia and China are seeking to influence U.S. elections, officials told a Senate committee. https://www.japantimes.co.jp/news/2024/05/16/world/politics/foreign-attempts-us-elections/

nixCraft, to random
@nixCraft@mastodon.social avatar

Many people canceled their OpenAI subscriptions, or it is tough to monetize stuff created with generative AI, I guess, so Sama comes up with a new plan to use all those GPUs. They are now going after OF models. WTF, OpenAI? Are they going to allow deepfakes? This company is beyond evil 👿

mjgardner,

@nixCraft @yzh OpenAI has since denied they're getting into AI-generated porn or deepfakes:

• Gizmodo: “OpenAI Says It's Not Making AI Porn”: https://Gizmodo.com/openai-says-its-not-making-ai-porn-1851467771

• Quartz: “OpenAI is staying away from ”: https://Apple.news/A2qVv9XrOSUmCRI_eT8yI8A

ericfreyss, to random (French)
@ericfreyss@mastodon.social avatar

This week I recommend listening to the "Smashing Security" podcast @smashingsecurity by @gcluley and Carole Theriault, who this week discuss, among other things, in the context of the elections in (from 14m40s)
https://player.captivate.fm/episode/0a1b002e-d858-4c94-a823-9a2f897711a7?t=880

remixtures, to ai Portuguese
@remixtures@tldr.nettime.org avatar

"AI propaganda is here. But is it persuasive? Recent research published in PNAS Nexus and conducted by Tomz, Josh Goldstein from the Center for Security and Emerging Technology at Georgetown University, and three Stanford colleagues—master's student Jason Chao, research scholar Shelby Grossman, and lecturer Alex Stamos—examined the effectiveness of AI-generated propaganda.

They found, in short, that it works."

https://hai.stanford.edu/news/disinformation-machine-how-susceptible-are-we-ai-propaganda?utm_source=pocket_saves

thejapantimes, to worldwithoutus
@thejapantimes@mastodon.social avatar

As India's election intensifies, manipulated videos involving top aides of Prime Minister Narendra Modi have sparked police investigations and led to the arrest of several Congress party workers. https://www.japantimes.co.jp/news/2024/05/06/asia-pacific/politics/fake-videos-modi-india-election/

remixtures, to ai Portuguese
@remixtures@tldr.nettime.org avatar

"Until now, all AI-generated videos of people have tended to have some stiffness, glitchiness, or other unnatural elements that make them pretty easy to differentiate from reality. Because they're so close to the real thing but not quite it, these videos can make people feel annoyed or uneasy or icky—a phenomenon commonly known as the uncanny valley. Synthesia claims its new technology will finally lead us out of the valley.

Thanks to rapid advancements in generative AI and a glut of training data created by human actors that has been fed into its AI model, Synthesia has been able to produce avatars that are indeed more humanlike and more expressive than their predecessors. The digital clones are better able to match their reactions and intonation to the sentiment of their scripts—acting more upbeat when talking about happy things, for instance, and more serious or sad when talking about unpleasant things. They also do a better job matching facial expressions—the tiny movements that can speak for us without words.

But this technological progress also signals a much larger social and cultural shift. Increasingly, so much of what we see on our screens is generated (or at least tinkered with) by AI, and it is becoming more and more difficult to distinguish what is real from what is not. This threatens our trust in everything we see, which could have very real, very dangerous consequences."

https://www.technologyreview.com/2024/04/25/1091772/new-generative-ai-avatar-deepfake-synthesia/
