WordsByWesInk

@albertcardona @bahome Too often, though, it does change meaning — sometimes subtly — by substituting not-quite-synonyms and by turning weaker but appropriate claims into stronger but false ones. I saw this in the datasets for the AI-detector bias paper (arXiv:2304.02819) and in my own informal tests. It seems to me that fluency requires understanding meaning, which LLMs can't do.
