I_Miss_Daniel (edited)

Already starting to happen a bit.

The AI only fools us into thinking it’s intelligent because it picks the most likely text response based on what it’s read before. But the output is often confidently wrong; it’s really just a parlor trick.
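As a caricature of "picks the most likely response": a toy bigram model that just returns the continuation it has seen most often. (This is a deliberately simplistic sketch for illustration, not how a real LLM is built; the corpus and function names are made up.)

```python
from collections import Counter

# Toy "most likely next word" picker based on bigram counts
# over previously seen text (illustrative assumption, not a real model).
corpus = "the cat sat on the mat the cat ate the fish".split()
bigrams = Counter(zip(corpus, corpus[1:]))

def most_likely_next(word):
    # Pick the highest-count continuation observed after `word`.
    candidates = {b: c for (a, b), c in bigrams.items() if a == word}
    return max(candidates, key=candidates.get) if candidates else None

print(most_likely_next("the"))  # "cat" follows "the" most often in this corpus
```

The point: there is no understanding here, only frequency. Whatever was most common in the training text wins, whether or not it is true.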

Now, since it’s starting to ingest more of its own output, the definition of ‘what is the most likely response’ has been poisoned a little by ingesting those formerly wrong responses.

Add in all the blog spam, the fake-but-funny Reddit answers, etc., and the system, which doesn’t actually think, starts to get more and more deranged.
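That feedback loop can be sketched with a toy simulation (again an assumption-laden caricature, not a real training pipeline): each "generation" is fit only to samples drawn from the previous generation, so rare phrasings drop out and, once gone, can never come back.

```python
import random
from collections import Counter

random.seed(0)

# Generation 0: a small "corpus" with 8 distinct phrasings, evenly mixed.
data = list("abcdefgh") * 5
distinct = [len(set(data))]

for gen in range(200):
    counts = Counter(data)
    # "Retrain" on own output: resample from the fitted frequencies.
    data = random.choices(list(counts), weights=counts.values(), k=len(data))
    distinct.append(len(set(data)))

# The support can only shrink: once a phrasing vanishes it never returns,
# because the next generation's weights come only from surviving counts.
print(distinct[0], "->", distinct[-1])
```

This is the mechanism behind "model collapse": variety in the output distribution is monotonically non-increasing when a system trains on its own samples, and noise steadily erodes the tails.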
