mdiluz (edited)
@mdiluz@mastodon.gamedev.place

With all these stupid Google AI screenshots... The problem isn't so much that LLMs aren't really AI, or that they're not accurate. The problem is the tech companies trying to line their pockets by selling you the idea that LLMs are AI and can be accurate and trusted to answer questions.

It's a tool specifically designed to produce output that convinces you it is exactly what it was asked to be. It's fooling tech giants and they're using it to fool you.

Don't fall for it.

mdiluz
@mdiluz@mastodon.gamedev.place

I just can't get over how we literally designed a deceit engine - something that brute-forces the most believable, plausibly human-like response to a given input, yet has zero knowledge or context of what a correct or factual answer is - and are now marketing it as intelligent.

mdiluz
@mdiluz@mastodon.gamedev.place

When it's used as a tool by someone who understands what it is, it can be powerful. "How could I rephrase this paragraph?", "What API library might be used to reverse the flange polarity?" or "Can you find what seems wrong in this data?" are valid uses, because you as the user can validate the output. You may not know the answer, but you know what the right one would look like. "Is this mushroom edible?" is not, because you'd need another external source to verify the answer too.
