kellogh (@kellogh@hachyderm.io)

@lars this whole thing is super interesting. It sounds like the strange behavior shows up more in diffusion models (DALL-E) than in language models (GPT-4). It reminds me of the paper that found LLMs only learn logic in a single direction: e.g., if none of the training data contained "elephant not in a room", the model wouldn't know what to do and would instead pattern-match to "elephant in room".
