PavelASamsonov, to random
@PavelASamsonov@mastodon.social avatar

The true power of #AI is not technological, but rhetorical: almost all conversations about it are about what executives say it will do "one day" or "soon" rather than what we actually see (and of course no mention of the business model, which doesn't exist).

We are told to simultaneously believe AI is so "early days" as to excuse any lack of real usefulness, and that it is so established - even "too big to fail" - that we are not permitted to imagine a future without it.

mikarv, to Futurology
@mikarv@someone.elses.computer avatar

Meta's Llama 2 license has an unusual clause whereby they withdraw your right to use the model if you allege that Llama has breached your own IP rights by being trained on your intellectual property.

afeinman, to random
@afeinman@wandering.shop avatar

HOW TO SPOT A DEEP FAKE:

  1. You can't.

Don't think you can. You can spot clumsy ones, but you've already missed a dozen others. We're past the stage where even expert practitioners can have a 100% success rate.

Instead, think about how to avoid taking action, or trusting someone, because of who they seem to be. Holding onto the fantasy that "I can spot 'em!" is harmful, and moves the onus of responsibility from collective to personal.

This is also true for other AI-generated content, of course.

matthewskelton, to llm
@matthewskelton@mastodon.social avatar

"the real-world use case for large language models is overwhelmingly to generate content for spamming"

Excellent article by Amy Castor

https://amycastor.com/2023/09/12/pivot-to-ai-pay-no-attention-to-the-man-behind-the-curtain/

cassidy, to ai
@cassidy@blaede.family avatar

“AI” as currently hyped is giant billion dollar companies blatantly stealing content, disregarding licenses, deceiving about capabilities, and burning the planet in the process.

It is the largest theft of intellectual property in the history of humankind, and these companies are knowingly and willingly ignoring the licenses, terms of service, and laws that we lowly individuals are beholden to.

https://www.nytimes.com/2024/04/06/technology/tech-giants-harvest-data-artificial-intelligence.html?unlocked_article_code=1.ik0.Ofja.L21c1wyW-0xj&ugrp=m

#AI #GenAI #LLM #LLMs #OpenAI #ChatGPT #GPT #GPT4 #Sora #Gemini

filipw, to ai
@filipw@mathstodon.xyz avatar

great article: "AI Prompt Engineering Is Dead"

sounds like "prompt engineer", the much-heralded job of the future, is no longer needed 😅

"Battle and his collaborators found that in almost every case, this automatically [AI generated] generated prompt did better than the best prompt found through trial-and-error. And, the process was much faster, a couple of hours rather than several days of searching."

🔗 https://spectrum.ieee.org/prompt-engineering-is-dead
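
A minimal, hypothetical sketch of the loop the article describes: an LLM proposes candidate prompts, each candidate is scored on a small eval set, and the best one wins. `call_llm` is a placeholder for any chat-completion client, and the eval set is a toy; none of these names come from the article.

```python
def call_llm(prompt: str) -> str:
    # Placeholder: wire this to a real LLM client. Echoing the prompt
    # keeps the sketch runnable end to end.
    return prompt

def score(prompt: str, eval_set: list[tuple[str, str]]) -> float:
    """Fraction of eval examples whose expected answer appears in the output."""
    hits = sum(expected.lower() in call_llm(f"{prompt}\n\n{q}").lower()
               for q, expected in eval_set)
    return hits / len(eval_set)

def optimize_prompt(task: str, eval_set: list[tuple[str, str]],
                    rounds: int = 10) -> str:
    # Ask the model to rewrite the current best prompt, keep improvements.
    best, best_score = task, score(task, eval_set)
    for _ in range(rounds):
        candidate = call_llm(
            "Rewrite this instruction so a model follows it more accurately. "
            f"Return only the instruction:\n{best}"
        )
        s = score(candidate, eval_set)
        if s > best_score:
            best, best_score = candidate, s
    return best

eval_set = [("What is 2+2?", "4"), ("Capital of France?", "Paris")]
print(optimize_prompt("Answer concisely.", eval_set))
```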

timbray, to cryptocurrency
@timbray@cosocial.ca avatar

It’s nauseating that the hyperscalers are crankin’ the carbon to inflate the AI bubble like there’s no tomorrow (which there won’t be, for my children, if we don’t cut back) but hey, don’t forget that Bitcoin is still in the running for the single most dangerous-to-the-planet use of computers.

https://www.theverge.com/2024/5/15/24157496/microsoft-ai-carbon-footprint-greenhouse-gas-emissions-grow-climate-pledge

#cryptocurrency #genai

abucci, to midjourney
@abucci@buc.ci avatar

Nightshade 1.0 is out: https://nightshade.cs.uchicago.edu/index.html

From their "What is Nightshade?" page:

Since their arrival, generative AI models and their trainers have demonstrated their ability to download any online content for model training. For content owners and creators, few tools can prevent their content from being fed into a generative AI model against their will. Opt-out lists have been disregarded by model trainers in the past, and can be easily ignored with zero consequences. They are unverifiable and unenforceable, and those who violate opt-out lists and do-not-scrape directives can not be identified with high confidence.

In an effort to address this power asymmetry, we have designed and implemented Nightshade, a tool that turns any image into a data sample that is unsuitable for model training. More precisely, Nightshade transforms images into "poison" samples, so that models training on them without consent will see their models learn unpredictable behaviors that deviate from expected norms, e.g. a prompt that asks for an image of a cow flying in space might instead get an image of a handbag floating in space.
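
For intuition only, here is a generic, hypothetical sketch of the embedding-shift idea, not Nightshade's actual algorithm: perturb an image within a small pixel budget so a feature encoder embeds it like a different (decoy) concept, which is what makes the sample misleading as training data.

```python
import torch

def poison(image: torch.Tensor, target_emb: torch.Tensor, encoder,
           eps: float = 0.03, steps: int = 100, lr: float = 0.01) -> torch.Tensor:
    """image: (3, H, W) in [0, 1]; target_emb: (1, D) embedding of the decoy
    concept; encoder: any differentiable image encoder returning (1, D)."""
    delta = torch.zeros_like(image, requires_grad=True)
    opt = torch.optim.Adam([delta], lr=lr)
    for _ in range(steps):
        emb = encoder((image + delta).clamp(0, 1).unsqueeze(0))
        # Pull the image's embedding toward the decoy concept...
        loss = 1 - torch.nn.functional.cosine_similarity(emb, target_emb).mean()
        opt.zero_grad()
        loss.backward()
        opt.step()
        with torch.no_grad():
            delta.clamp_(-eps, eps)  # ...while keeping the visible change small
    return (image + delta).detach().clamp(0, 1)
```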

-E

smach, to ai
@smach@fosstodon.org avatar

Generative AI bias can be substantially worse than in society at large. One example: “Women made up a tiny fraction of the images generated for the keyword ‘judge’ — about 3% — when in reality 34% of US judges are women. … In the Stable Diffusion results, women were not only underrepresented in high-paying occupations, they were also overrepresented in low-paying ones.”

https://www.bloomberg.com/graphics/2023-generative-ai-bias/
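
As quick arithmetic on the article's own numbers:

```python
# Share of "judge" images depicting women vs. the real-world share.
generated_share = 0.03  # ~3% of generated images
real_share = 0.34       # ~34% of actual US judges
ratio = generated_share / real_share
print(f"women depicted at {ratio:.0%} of their real-world rate")  # ~9%
```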

tante, (edited ) to ai
@tante@tldr.nettime.org avatar

The growing backlash against AI

While the crowd at #sxsw2024 booing a sizzle reel of people either promising the beauty of the future "AI" will bring or claiming it to be "without alternative" is funny and went viral for all the right reasons, the event speaks to a deeper shift in perception.

https://tante.cc/2024/03/18/5115/

jonippolito, to Cybersecurity
@jonippolito@digipres.club avatar

A cybersecurity researcher finds that 20% of software packages recommended by GPT-4 are fake, so he publishes a harmless package under one of the hallucinated names (15,000 code bases already depend on it) to prevent some hacker from releasing a malware version first.

Disaster averted in this case, but there aren't enough fingers to plug all the AI-generated holes 😬

https://it.slashdot.org/story/24/03/30/1744209/ai-hallucinated-a-dependency-so-a-cybersecurity-researcher-built-it-as-proof-of-concept-malware
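
A cheap defence in this spirit: check that a suggested package actually exists on PyPI before installing it. Existence alone proves nothing, since squatters can register hallucinated names (exactly the attack described above), but it catches the pure fabrications. A sketch using only the standard library and PyPI's public JSON endpoint; the second package name is made up:

```python
import urllib.error
import urllib.request

def exists_on_pypi(package: str) -> bool:
    """True if PyPI knows the package; a 404 means it was never published."""
    url = f"https://pypi.org/pypi/{package}/json"
    try:
        with urllib.request.urlopen(url, timeout=10) as resp:
            return resp.status == 200
    except urllib.error.HTTPError:
        return False

for pkg in ("requests", "totally-hallucinated-helper"):  # 2nd name is fictional
    print(pkg, "->", "exists" if exists_on_pypi(pkg) else "not on PyPI")
```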

horovits, to ai
@horovits@fosstodon.org avatar

#GenAI took out the fun part of #coding, the creation, leaving us to debug and test auto-generated code. Not fun 😕

And it seems our software has also become worse since the #GenAI era began.

From @kevlin's keynote sharing developer research and thoughts.

CatherineFlick, to LLMs
@CatherineFlick@mastodon.me.uk avatar

Just FYI, if you have older parents or other family members, set up some sort of shibboleth with them so they know what to ask you if you ever call them asking for something. These new generative models are going to be extremely convincing, and the idiots in charge of these companies think they can use guardrails to stop it being used inappropriately. They can't.
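
For the technically inclined, the shibboleth idea is a shared-secret challenge-response: the callee issues a random challenge that only someone holding the pre-agreed secret can answer, which a cloned voice alone cannot do. A toy sketch (the secret value is hypothetical; a memorised passphrase works the same way, minus the math):

```python
import hashlib
import hmac
import secrets

SHARED_SECRET = b"agreed-in-person-beforehand"  # hypothetical; share it offline

def make_challenge() -> str:
    # Callee generates and reads out a random challenge.
    return secrets.token_hex(8)

def respond(secret: bytes, challenge: str) -> str:
    # Caller answers with a keyed hash of the challenge.
    return hmac.new(secret, challenge.encode(), hashlib.sha256).hexdigest()[:8]

def verify(secret: bytes, challenge: str, response: str) -> bool:
    return hmac.compare_digest(respond(secret, challenge), response)

challenge = make_challenge()
print(verify(SHARED_SECRET, challenge, respond(SHARED_SECRET, challenge)))  # True
```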

ppatel, to LLMs
@ppatel@mstdn.social avatar

One wonders how effective translations are when done by #LLMs, since the corpus of material used to train those languages is this crap. Do we have a garbage-in, garbage-out problem?

Research Suggests A Large Proportion Of Web Material In Languages Other Than English Is Machine Translations Of Poor Quality Texts.

https://www.techdirt.com/2024/01/29/research-suggests-a-large-proportion-of-web-material-in-languages-other-than-english-is-machine-translations-of-poor-quality-texts/

jon, to random
@jon@henshaw.social avatar

Tell me you used #GenAI without telling me you used generative AI.

fenneladon, to random
@fenneladon@todon.eu avatar

Every app and site should have a "never show me synthetic content" default and option. Instead every app and site is trying to force us to prioritise their low quality unreliable literal-in-the-philosophical-sense-bullshit synthetic content. As a product manager, I'm beyond embarrassed for everyone involved in these actively user-, society-, and environment-hostile choices. 🙄
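
The option being asked for is essentially a one-line filter, assuming content carried a provenance flag. The `synthetic` field below is hypothetical; in practice it might come from C2PA-style provenance metadata:

```python
from dataclasses import dataclass

@dataclass
class FeedItem:
    text: str
    synthetic: bool  # hypothetical provenance flag

def filter_feed(items: list[FeedItem], show_synthetic: bool = False) -> list[FeedItem]:
    """Default: never show synthetic content; users may opt in."""
    return [i for i in items if show_synthetic or not i.synthetic]

feed = [FeedItem("human post", False), FeedItem("model slop", True)]
print([i.text for i in filter_feed(feed)])  # -> ['human post']
```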

ppatel, to opensource
@ppatel@mstdn.social avatar

I expected something like this after Apple's October AI effort. The potential implications are pretty significant here.

Apple partners with University of California researchers to release open-source model MGIE, which can edit images based on natural language instructions.

Apple releases ‘MGIE’, a revolutionary AI model for instruction-based image editing

https://venturebeat.com/ai/apple-releases-mgie-a-revolutionary-ai-model-for-instruction-based-image-editing/

mikarv, to random
@mikarv@someone.elses.computer avatar

While AI firms face criticism for models that contain detailed personal data about individuals, and are thinking about how to mitigate this, UK intelligence agencies (e.g. GCHQ) are seeking powers to effectively lower their oversight when building models, precisely so they can turn unstructured data into personal data about people. Their argument: AI firms can do it (whether or not legally!), so why can't they? https://www.theguardian.com/technology/2023/aug/01/uk-intelligence-spy-agencies-relax-burdensome-laws-ai-data-bpds?CMP=share_btn_tw

shortridge, to Cybersecurity
@shortridge@hachyderm.io avatar

The 2024 Verizon Data Breach Investigations Report (#DBIR) is out this morning, and I make sense of it in my new post: https://kellyshortridge.com/blog/posts/shortridge-makes-sense-of-verizon-dbir-2024/

I focused on what felt like the most notable points, from #ransomware to MOVEit to web app pwnage to #GenAI and more.

I have insights, quibbles, and hot takes as always — but the fact remains it’s our best source of empirical data on cyberattack impacts. If you’re a #cybersecurity vendor, please consider contributing data to it.

openfuture, to ai
@openfuture@eupolicy.social avatar

"AI reduces people's motivation to share works openly." - What do you think? Join our Alignment Assembly on #AI & the Commons to vote on statements like this one and add your own. Register here 👉 https://openfuture.eu/blog/alignment-assembly-on-ai-and-the-commons/ #DigitalCommons #genAI

abucci, to ai
@abucci@buc.ci avatar

Among the many reasons we should resist the widespread application of generative #AI, an important, if less concrete, one is to preserve the freedom to change. This class of method crystallizes the past and present and re-generates it over and over again. The net result, if it's used en masse, is foreclosing the future.

If you're stats-poisoned: human flourishing requires the joint distribution of the future to be different from that of the past and present. We, collectively, form a non-stationary system, and forcing the human system to be stationary is a kind of violence.

Ruth_Mottram, to ai
@Ruth_Mottram@fediscience.org avatar

Youngest has got into bordrollespil, i.e. tabletop role-playing (=> what we used to call D&D I guess?) at after school club.

Apparently the kids tried to get #AI to make pictures for their characters, but the elves all came out as "half-naked ladies" and the animals in the background were all "creepy looking. One had an actual clown face"

They ended up drawing their own.

Tell me again how #AI is going to change the world?

judeswae, to OpenAI
@judeswae@toot.thoughtworks.com avatar

"I believe that artificial intelligence has three quarters to prove itself before the apocalypse comes, and when it does, it will be that much worse, savaging the revenues of the biggest companies in tech.", predicts Ed Zitron.

https://www.wheresyoured.at/peakai/

ceedee666, to OpenAI German
@ceedee666@mastodon.social avatar

@noybeu sues #OpenAI for spreading false information.

https://noyb.eu/en/chatgpt-provides-false-information-about-people-and-openai-cant-correct-it

This is going to be interesting as it’s about the very foundation of #LLMs.

jpoesen, to firefox
@jpoesen@drupal.community avatar

https://humanaigc.github.io/emote-portrait-alive

However technically amazing #AI-generated video may be, it's going to become nearly impossible to separate fiction from fact.

We absolutely need #Firefox to develop and integrate real-time #GenAI detection of all audiovisual materials.

Because what other browser is going to bother?

(via @maeool / @greg_harvey )
