GluWu,

LLMs only become increasingly politically correct. I would assume any LLM that isn't uncensored would return something about how that's inappropriate, in whatever way it chooses. None of those things by themselves present any real conflict, but once you introduce topics whose training data is overwhelmingly contradictory, the LLM will struggle. You can think deeply about why topics might contradict each other; LLMs can't. LLMs run on reinforcement-trained neural networks, and when that network's connections strongly route one topic away from the other, forcing the two together causes issues.

I haven't, but if you want, take just that prompt and give it to GPT-3.5 and see what it does.
