TechDesk, to tech
@TechDesk@flipboard.social avatar

EyeEm, a photo-sharing community once thought to be a potential challenger to Instagram, is now licensing its users’ photos to train AI models, TechCrunch reports. The company gave users 30 days to opt out. As you can imagine, many of them are upset. Read more: https://flip.it/Yv2xV3
#Tech #Technology #AI #ArtificialIntelligence #Photography

rhys, to llm
@rhys@rhys.wtf avatar

My first troublesome LLM hallucination in a while: a model (200k context) insisting that I can configure my existing GPG keys to work with PKINIT, and helping me for a couple of hours to try to do so, before realising that GPG keys aren't supported for this use case. Whoops.

No real bother other than some wasted time, but a bit painful and disappointing.

Now to start looking at PIV instead.

gimulnautti, to ai
@gimulnautti@mastodon.green avatar

Humanity will always strive to create god.

We do it because it lets us avoid accepting responsibility for our own actions.

Any kind will do, as long as we don’t have to look directly into the hard mirror of ourselves.

Today, technology is the canvas onto which we’re painting our need for a god.

Proud atheists happily building a god for themselves. What a sight to behold!

br00t4c, to ai
@br00t4c@mastodon.social avatar

Athletic director used AI to frame principal with racist remarks in fake audio clip, police say

https://thegrio.com/2024/04/27/pikesville-high-school-principal-framed-fake-ai-clip/

AAKL, to ai
@AAKL@noc.social avatar

This is how AI should be applied, not to harvest data and hound internet users with nonsense.

AI just helped us find out where Plato is buried https://www.zdnet.com/article/ai-just-helped-us-find-out-where-plato-is-buried-heres-how/

AAKL, to privacy
@AAKL@noc.social avatar

You need to know this, if you missed it yesterday.

Photo-sharing community EyeEm will license users’ photos to train AI if they don’t delete them https://techcrunch.com/2024/04/26/photo-sharing-community-eyeem-will-license-users-photos-to-train-ai-if-they-dont-delete-them/ @TechCrunch @Sarahp

davidbisset, to web
@davidbisset@phpc.social avatar

"The is not dying."

https://manuelmoreale.com/the-web-is-not-dying

"The only thing that’s slowly dying is the spirit, the mindset that made the web such a wonderful place to begin with."

Crell, to ai
@Crell@phpc.social avatar

I reached out to about their code assistant.

They verified:

  1. They use OpenAI, which means my GitHub OSS has almost certainly been used in training data.
  2. They rely on OpenAI's promise to not ingest any code that is used for "context".
  3. They specifically do not rule out that their tool could result in me violating someone else's copyright, and that it could suggest the same code to someone else, too.

Uninstall this crap, now. It's dangerous and irresponsible.

AAKL, to ai
@AAKL@noc.social avatar

Unless someone can find the related survey, this is an example of what you shouldn't do.

This story has been making the rounds. It's based on a CyberArk survey. And yet, not a single one of these posts links to the research. After much poking, I found a survey reference dating back to June 2023 🤔. Unfortunately, CyberArk's account here is not active.

Growing concern that artificial intelligence could pose new cybersecurity threats
https://www.abcactionnews.com/science-and-tech/artificial-intelligence/growing-concern-that-artificial-intelligence-could-pose-new-cybersecurity-threats @ABC

ianRobinson, to Podcast
@ianRobinson@mastodon.social avatar

Listening to The Ezra Klein Show (What if Dario Amodei Is Right About A.I.?): https://nytimes.com/2024/04/12/opinion/ezra-klein-podcast-dario-amodei.html

Back in 2018, Dario Amodei worked at OpenAI. And looking at one of its first A.I. models, he wondered: What would happen as you fed an artificial intelligence more and more data? He and his colleagues decided to study it, and they found that the A.I. didn’t just get better with more data; it got better exponentially.

remixtures, to ai Portuguese
@remixtures@tldr.nettime.org avatar

: "Until now, all AI-generated videos of people have tended to have some stiffness, glitchiness, or other unnatural elements that make them pretty easy to differentiate from reality. Because they’re so close to the real thing but not quite it, these videos can make people feel annoyed or uneasy or icky—a phenomenon commonly known as the uncanny valley. Synthesia claims its new technology will finally lead us out of the valley.

Thanks to rapid advancements in generative AI and a glut of training data created by human actors that has been fed into its AI model, Synthesia has been able to produce avatars that are indeed more humanlike and more expressive than their predecessors. The digital clones are better able to match their reactions and intonation to the sentiment of their scripts—acting more upbeat when talking about happy things, for instance, and more serious or sad when talking about unpleasant things. They also do a better job matching facial expressions—the tiny movements that can speak for us without words.

But this technological progress also signals a much larger social and cultural shift. Increasingly, so much of what we see on our screens is generated (or at least tinkered with) by AI, and it is becoming more and more difficult to distinguish what is real from what is not. This threatens our trust in everything we see, which could have very real, very dangerous consequences." https://www.technologyreview.com/2024/04/25/1091772/new-generative-ai-avatar-deepfake-synthesia/

weirdwriter, to ai

Google Made Me Ruin A Perfectly Good Website: A Case Study On The AI-Generated Internet https://theluddite.org/#!post/google-ads

ErikJonker, to ai Dutch
@ErikJonker@mastodon.social avatar

Relevant blog post: we are not ready for lethal autonomous weapons, which are a step beyond current automatically guided weapons. "Will Fearless and Tireless Robots Lead to More Terrifying Wars?"
#ai #war #autonomousweapons #geopolitics #robots @geopolitics
https://warontherocks.com/2024/04/will-fearless-and-tireless-robots-lead-to-more-terrifying-wars/?s=09

ianRobinson, to Podcast
@ianRobinson@mastodon.social avatar

Listening to The Ezra Klein Show (Will A.I. Break the Internet? Or Save It?): https://nytimes.com/2024/04/05/opinion/ezra-klein-podcast-nilay-patel.html

The internet is in decay [due to AI]. According to my guest today, Nilay Patel, this isn’t just a blip while the big platforms figure out how to manage it. He believes that A.I. content will break the internet as we know it.

#Podcast #LLM #AI

Snowshadow, to news
@Snowshadow@mastodon.social avatar

Ok, those of you who defend AI, please tell me how this will help humanity!!

Big Brother is here!! I have been telling people: don't post selfies!!

"AI detects individual’s political orientation accurately, a threat?

Study showed new threat in the digital age–AI’s ability to predict political orientation from even naturalistic images of individuals."

https://interestingengineering.com/culture/ai-detects-individuals-political-orientation-accurately-a-threat

gimulnautti, (edited)
@gimulnautti@mastodon.green avatar

@Snowshadow I think framing it as a binary is the first mistake one can make when criticising AI.

AI does not have a mind of its own. Even if Silicon Valley tech bros are adamant about building that god for themselves, what you’re looking at are tools wielded by humans.

Humans make moral choices that can be challenged. AI emulates moral choices, but it cannot be challenged as it's not conscious.

Bots should not be entitled to liberty or freedom of speech either.

drahardja, to ai
@drahardja@sfba.social avatar

Here’s a nice overview of all the anti-AI tools available to visual artists today.

https://www.theverge.com/24063327/ai-art-protect-images-copyright-generators

hgrsd, to github
@hgrsd@hachyderm.io avatar

I've been trialling GitHub Copilot recently at work and, having been generally skeptical of the golden mountains promised by the AI hype guys, I have to say that it gave me a modest efficiency gain in some scenarios. I would miss it if it were taken away, much like I would miss autocomplete.

I'll probably write up a blog post for hgrsd.nl with a few thoughts on where it was helpful for me.

#github #llm #ai #software

ralf, to infosec
@ralf@fedi.jrlenz.net avatar

Someone explain to me why this is not a horribly fucking stupid idea.

My spidey sense is going haywire.

https://wicg.github.io/file-system-access/
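
For context on what the linked spec actually grants a page, here is a minimal TypeScript sketch (not from ralf's post; the function name and the inline typing are illustrative, since the picker is not yet in TypeScript's default DOM lib). It assumes a Chromium-style implementation of showOpenFilePicker: once the user picks a file and approves a write prompt, the page can read that local file and write it back to disk.

    // Minimal sketch, assuming a browser that implements the WICG
    // File System Access picker (e.g. Chromium); illustrative only.
    type Picker = { showOpenFilePicker: () => Promise<FileSystemFileHandle[]> };

    async function editLocalFile(): Promise<void> {
      // Ask the user to pick a local file; the handle allows reading and,
      // after a permission prompt, writing straight back to the same file.
      const [handle] = await (window as unknown as Picker).showOpenFilePicker();

      const file = await handle.getFile();            // read current contents
      const text = await file.text();

      const writable = await handle.createWritable(); // request write access
      await writable.write(text + "\n// appended by a web page");
      await writable.close();                         // flush to the local file
    }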

ralf,
@ralf@fedi.jrlenz.net avatar

@adamsdesk

It gets worse. Think of it in the context of MS pushing Copilot harder than ever before with Win11 24H2. Think of it in the context of every Big Tech company and major application trying to jump on the bandwagon without any legislative guardrails or training wheels.
