> A popular response to various government conspiracy theories is that government institutions just aren’t that good at keeping secrets.
> Well, the tech industry just isn’t that good at software. This illusion is, honestly, too clever to have been created intentionally by those making it.
@deirdresm @baldur
Can we take a moment to just appreciate the domain?
I couldn't care less if the page had human faeces smeared across it, talking about the rapture. softwarecrisis.dev is a very compelling domain name.
😂
I'm not pro-LLM, but I also don't want to be blind to the fact that this stochastic parrot has, in some limited cases, answered things that Google simply didn't. Made harder by the fact that Google is now nuts-deep into AI shilling.
Indeed.
LLMs are not a con.
Promoting LLMs as AI is the con.
The problem, of course, is that not many people understand LLMs well enough to make the difference, and that a lot of people try to abuse the confusion for personal profit.
(remember that even the people who coined and promote the acronym SALAMI don't deny that salamis have uses; they just point out how stupid it is to consider salamis intelligent)
LLMs are a con in and of themselves because of the lack of meaningful permission in training data sets, plus the fact they can be driven off the rails so easily.
I'll agree that the concept isn't inherently a con; it's just implemented that way.
@lewiscowles1986 @deirdresm @lienrag @baldur
I'd add that if a system can't say WHY it gave the answer it did, it's not a good answer. Moreover, it can't establish a pattern that will reliably predict the fidelity of future answers.
If one can’t audit the answers, or prove the reasoning, one has NO reason to believe it — or any other answer.