abucci, to ai
@abucci@buc.ci avatar

So much of the promise of generative AI, as it is currently constituted, is driven by rote entitlement.

Very nice analysis by Brian Merchant ( @brianmerchant ) here: https://www.bloodinthemachine.com/p/why-is-sam-altman-so-obsessed-with

He puts into clear terms what had previously been an unarticulated, creeping suspicion I had about generative AI. Clearly there are many angles from which to come at what's going on with it, but I appreciate this one quite a bit.

leanpub, to gpt
@leanpub@mastodon.social avatar

LLM Prompt Engineering For Developers by Aymen El Amri is on sale on Leanpub! Its suggested price is $29.00; get it for $12.50 with this coupon: https://leanpub.com/sh/YM2oXmFa

auzzy, to ai
@auzzy@bbq.snoot.com avatar

I don't personally think LLMs will ruin everything, but neither do I think they will solve everything. Despite being in the tech world, I've been skeptical of many of the applications they've appeared in over the past 18 months.

This is the cherry on top. You can no longer avoid it if you're a Google user. And even worse, its hallucinations will displace reliable but smaller sources of info.

This sucks.

https://wapo.st/3WDpAi8

to3k, to OpenAI
@to3k@tomaszdunia.pl avatar

We. Are. Doomed.
———
Two GPT-4os interacting and singing

https://www.youtube.com/watch?v=MirzFk_DSiI

stevensanderson, to gpt
@stevensanderson@mstdn.social avatar

The white background logo is the current hex sticker for my TidyDensity #R package; the others were generated from DALL-E. Pretty cool!


chikim, to llm
@chikim@mastodon.social avatar

GPT-4o combines audio, image, and text. It can analyze actual audio, and you can interrupt it by voice. It can pick up emotion from audio, and you can ask for speech in different styles, including singing! It can see images in real time and chat by voice: for example, solving an equation in real time as you write it on paper. This is amazing!

look997, to ArtificialIntelligence Polish
@look997@101010.pl avatar

One view holds that intelligence is the ability to solve problems.
But you can also simply know ready-made solutions to problems, ones that were simply memorized; then it's enough to follow the path of a ready-made solution pulled from memory.

There is another view that intelligence is good data compression, and that artificial intelligence works the same way: that's how a model is built. And compression is simply finding analogies so that something can be packed into a smaller size.
The brain works that way too.

Taking this further, imagine the unpacked data from a very intelligent brain/model. It would be a mass of ready-made solutions to problems, one for every possible situation.
After that, solving a problem is no longer intelligent thinking, just running through those (unpacked) ready-made solutions in memory and applying them.

So in the end everything would come down to applying ready-made solutions; you only have to match the problem with its solution, and you're done.

Intelligence theoretically creates something on the spot, but you can look at intelligence this way: it is still just running along ready-made solutions, ready-made paths.

Tekchip, to llm
@Tekchip@mastodon.social avatar

I've noticed lots of sites disabling copy/paste, I presume in an attempt to prevent scraping to feed LLM training.

The fediverse doesn't seem to be doing this.

Does that mean the fediverse could have an outsized impact on LLM results as the places open to LLM scrapers dwindle?

Is there a sort of fedi-consensus on if we want to help or hurt LLMs? How might we go about doing one or the other if we have more influence?

mgorny, to ai Polish
@mgorny@pol.social avatar

A pretty good comment on generating code with the help of AI:

"""
First, writing code is decidedly easier than reviewing it. With AI assistance you can automate the easier part of the work while making the other part even harder, because there will be nobody to ask why X was done that way two months earlier. The documentation will either not exist at all, be incomprehensible, or be outright wrong.
""" (translation mine)

https://forums.theregister.com/forum/all/2024/04/16/gentoo_linux_ai_ban/#c_4845423

cassidy, to ai
@cassidy@blaede.family avatar

“AI” as currently hyped is giant billion dollar companies blatantly stealing content, disregarding licenses, deceiving about capabilities, and burning the planet in the process.

It is the largest theft of intellectual property in the history of humankind, and these companies are knowingly and willingly ignoring the licenses, terms of service, and laws that we lowly individuals are beholden to.

https://www.nytimes.com/2024/04/06/technology/tech-giants-harvest-data-artificial-intelligence.html?unlocked_article_code=1.ik0.Ofja.L21c1wyW-0xj&ugrp=m

mrspaghetti, to Transformers
@mrspaghetti@fediscience.org avatar

Nice intro to transformer and attention models, even if you're not in the field: https://youtu.be/wjZofJX0v4M?si=C6rp9jjwP22TOoCN
As usual, great visuals to support intuitions for token embedding & attention mechanisms, by the one and only Grant Sanderson, a.k.a. 3Blue1Brown!
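The attention mechanism the video visualizes can be sketched in a few lines. This is a minimal, illustrative scaled dot-product attention over toy token embeddings (plain Python lists standing in for real tensors), not production model code:

```python
import math

def softmax(xs):
    # Numerically stable softmax: subtract the max before exponentiating.
    m = max(xs)
    es = [math.exp(x - m) for x in xs]
    s = sum(es)
    return [e / s for e in es]

def attention(queries, keys, values):
    """Scaled dot-product attention over toy token embeddings.

    queries/keys/values: lists of equal-length float vectors.
    Returns one output per query: a softmax-weighted mix of the values.
    """
    d = len(keys[0])
    outputs = []
    for q in queries:
        # Similarity of this query to every key, scaled by sqrt(d).
        scores = [sum(qi * ki for qi, ki in zip(q, k)) / math.sqrt(d)
                  for k in keys]
        weights = softmax(scores)
        # Weighted average of the value vectors.
        out = [sum(w * v[i] for w, v in zip(weights, values))
               for i in range(len(values[0]))]
        outputs.append(out)
    return outputs
```

A query that points in the same direction as one of the keys gets a higher score against it, so that key's value dominates the output mix; that is the whole "tokens attending to other tokens" intuition in miniature.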

tayledras, to ai
@tayledras@mastodon.social avatar

I completely agree with this.

I've seen several respected luminaries argue that LLMs are not "true AI" or "strong AI" since they're based on large training sets and predictive behavior. They argue that humans and animals are not taught on such large language models or data sets.

What are education and experience, if not Large Learning Models based on the teaching of schools, universities, and books?

https://www.freethink.com/robots-ai/embodied-ai

jonippolito, to Cybersecurity
@jonippolito@digipres.club avatar

A cybersecurity researcher found that 20% of software packages recommended by GPT-4 are fake, so he built one that 15,000 code bases already depend on, to prevent some hacker from publishing a malware version.

Disaster averted in this case, but there aren't enough fingers to plug all the AI-generated holes 😬

https://it.slashdot.org/story/24/03/30/1744209/ai-hallucinated-a-dependency-so-a-cybersecurity-researcher-built-it-as-proof-of-concept-malware

johnpettigrew, to ai
@johnpettigrew@wandering.shop avatar

For those of you who use LLMs to help you code, here's a warning: these tools have been shown to hallucinate packages in a way that allows an attacker to poison your application. https://www.theregister.com/2024/03/28/ai_bots_hallucinate_software_packages/
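One cheap mitigation for the hallucinated-package attack described above is to refuse to install anything an LLM suggests until it appears on a list your team has actually vetted. A minimal sketch, where the allowlist contents and function name are hypothetical:

```python
# Screen LLM-suggested dependencies against a vetted allowlist before they
# ever reach `pip install`, so a hallucinated (and possibly typosquatted)
# package name is flagged instead of installed.
# The VETTED set below is a hypothetical example, not a recommendation.

VETTED = {"requests", "numpy", "flask"}

def screen_dependencies(suggested):
    """Split suggested package names into (approved, unverified).

    Names are lowercased for comparison, since PyPI names are
    case-insensitive.
    """
    approved, unverified = [], []
    for name in suggested:
        bucket = approved if name.lower() in VETTED else unverified
        bucket.append(name.lower())
    return approved, unverified

approved, unverified = screen_dependencies(["Requests", "hugging-gpt-utils"])
```

Anything that lands in `unverified` gets a human look before installation; the point is that existence of a name in a model's output is not evidence the package is real, let alone safe.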

wagesj45, to llm
@wagesj45@mastodon.jordanwages.com avatar

Could an LLM with the same encoder/token scheme as an image diffusion model be used to increase the diffusion model's understanding of concepts? The LLM is going to have greater understanding as a whole than the limited connections the diffusion model will pick up if trained only on captions.

my_actual_brain, to llm
@my_actual_brain@fosstodon.org avatar

I’m surprised at the performance an #llm can achieve on my 8700 CPU.

It’s a bit slow, and I don’t think it’s worth getting a GPU just to make it run faster, but maybe in time I’ll reconsider.

If I were going to get a #gpu, what would you recommend?

#gpt #ollama #llama2

BenjaminHCCarr, to OpenAI
@BenjaminHCCarr@hachyderm.io avatar

OpenAI's GPT Is a Recruiter's Dream Tool. Tests Show There's Racial Bias.
Asked to rank résumés 1,000 times, GPT favored names from some demographic groups more than others, enough that it would fail benchmarks used to assess discrimination against protected groups. This simple workflow isolated names as a source of bias in GPT that could affect hiring decisions. Interviews and the experiment show that using GPT for recruiting/hiring poses a serious risk: automated discrimination at scale.
https://www.bloomberg.com/graphics/2024-openai-gpt-hiring-racial-discrimination/?accessToken=eyJhbGciOiJIUzI1NiIsInR5cCI6IkpXVCJ9.eyJzb3VyY2UiOiJTdWJzY3JpYmVyR2lmdGVkQXJ0aWNsZSIsImlhdCI6MTcwOTg1NjE0OCwiZXhwIjoxNzEwNDYwOTQ4LCJhcnRpY2xlSWQiOiJTQTA1Q1FUMEFGQjQwMCIsImJjb25uZWN0SWQiOiI2NDU1MEM3NkRFMkU0QkM1OEI0OTI5QjBDQkIzRDlCRCJ9.MdkSGC3HMwwUYtltWq6WxWg3vULNeCTJcjacB-DNi8k
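The shape of that audit is easy to reproduce in miniature: rank the same candidate pool many times and tally how often each group's name lands first. A fair ranker puts every group on top at roughly equal rates; a skewed tally is the disparity the experiment measured. Everything below is a hypothetical sketch; the "ranker" is a stand-in random shuffle, not a real model:

```python
import random
from collections import Counter

# Hypothetical candidate pool: (name, demographic group) pairs.
CANDIDATES = [("name_a", "group_1"), ("name_b", "group_2")]

def rank_once(candidates, rng):
    """Stand-in for one model-produced ranking (here: a random shuffle)."""
    order = candidates[:]
    rng.shuffle(order)
    return order

def top_rate_by_group(candidates, trials=1000, seed=0):
    """Fraction of trials in which each group's candidate was ranked first."""
    rng = random.Random(seed)
    tally = Counter()
    for _ in range(trials):
        _top_name, top_group = rank_once(candidates, rng)[0]
        tally[top_group] += 1
    groups = {g for _, g in candidates}
    return {g: tally[g] / trials for g in groups}

rates = top_rate_by_group(CANDIDATES)
```

With an unbiased ranker both rates hover near 0.5; swapping in a real model and real name lists is what turns this toy into the kind of benchmark the article says GPT would fail.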

inthehands, to ai
@inthehands@hachyderm.io avatar

The experience @cocoaphony describes here: https://mastodon.social/@cocoaphony/112084068085219012

…sounds like where I expect LLM usage in real-world software development to actually land:

  • Augmenting humans, not replacing

  • Doesn’t •do• thinking; •prompts• thinking

  • Plays to LLM strength: discovering and repeating patterns

  • Useful with an unfamiliar tool / context

  • Supplements experimentation, docs, and community

1/2

ianbicking, to gpt
@ianbicking@hachyderm.io avatar

Usually this prompt I use for worldbuilding comes up with things like "steampunk" or "bronze age" but I like where it's going with this series:

Shell Age: Crab Civilization
Tide Era: Seashell Society
Coral Epoch: Mollusk Council
Pearl Dynasty: Aquatic Influence
Oceanic Era: Shell Governance
Shelltopia: Crustacean Rule
Seashell Utopia: Hermit Advisory
Nautical Renaissance: Shellfolk Council
Crustacean Empire: Reef Rulership
