deirdresm,
@deirdresm@hachyderm.io avatar

This is the single best explanation (long!) I've read about why LLMs are a con. Great piece from @baldur.

https://softwarecrisis.dev/letters/llmentalist/

Korny,
@Korny@hachyderm.io avatar

@deirdresm
@baldur @GeePawHill
Love this bit:

> A popular response to various government conspiracy theories is that government institutions just aren’t that good at keeping secrets.

> Well, the tech industry just isn’t that good at software. This illusion is, honestly, too clever to have been created intentionally by those making it.

lewiscowles1986,
@lewiscowles1986@phpc.social avatar

@deirdresm @baldur
Can we take a moment to just appreciate the domain.

I couldn't care less if the page had human faeces smeared across it, talking about the rapture. softwarecrisis.dev is a very compelling domain name.

😂

I'm not pro-LLM, but I also don't want to be blind to the fact that this stochastic parrot has, in some limited cases, answered things that Google simply didn't. Made harder by the fact that Google is now nuts-deep into AI shilling.

lienrag,

@lewiscowles1986

Indeed.
LLMs are not a con.
Promoting LLMs as AI is the con.

The problem, of course, is that not many people understand LLMs well enough to tell the difference, and a lot of people try to abuse the confusion for personal profit.

(Remember that even the people who coined and promote the acronym SALAMI don't deny that salamis have uses; they just point out how stupid it is to consider salamis intelligent.)

@deirdresm @baldur

deirdresm,
@deirdresm@hachyderm.io avatar

@lienrag @lewiscowles1986 @baldur

LLMs are a con in and of themselves because of the lack of meaningful permission in training data sets, plus the fact they can be driven off the rails so easily.

I’ll agree that the concept isn’t inherently a con, just implemented that way.

YetAnotherGeekGuy,
@YetAnotherGeekGuy@hachyderm.io avatar

@lewiscowles1986 @deirdresm @lienrag @baldur
I’d add that if you can’t say WHY you gave the answer you did, it’s not a good answer. Moreover, it can’t create a pattern that will reliably predict the fidelity of future answers.

If one can’t audit the answers, or prove the reasoning, one has NO reason to believe it — or any other answer.

lewiscowles1986,
@lewiscowles1986@phpc.social avatar

@YetAnotherGeekGuy @deirdresm @lienrag @baldur

I disagree with

> if you can’t say WHY you gave the answer you did, it’s not a good answer.

How much information folks can give on why they chose a particular path to action is, I'd say, rarely available.

Would it be nice to have it always available? Yes. But it's not, and the world runs this way, sort of fine.
