savvykenya,
@savvykenya@famichiki.jp

If you have documents with the answers you're looking for, why not search the documents directly? Why embed the documents and then use Retrieval-Augmented Generation (RAG) to make a large language model give you answers? An LLM generates text, it doesn't search a DB to give you results. So just search the damn DB directly, we already have great search algorithms with near-O(1) retrieval speeds! People doing this are so stupid.
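For what it's worth, the two approaches being argued about here can be sketched side by side. This is a toy illustration only: it uses bag-of-words counts as a stand-in for a real embedding model, and stops at the retrieval step (a real RAG pipeline would paste the retrieved text into an LLM prompt). The documents and function names are made up for the example.

```python
# Toy comparison: direct keyword search vs. RAG-style vector retrieval.
# Assumption: Counter-based term-frequency vectors stand in for learned
# embeddings, and no LLM is involved in this sketch.
from collections import Counter
from math import sqrt

DOCS = [
    "the parental leave policy allows twelve weeks",
    "expense reports are filed through the finance portal",
    "the office wifi password is rotated monthly",
]

def keyword_search(query, docs):
    """Direct search: return docs sharing any term with the query."""
    terms = set(query.lower().split())
    return [d for d in docs if terms & set(d.split())]

def embed(text):
    """Toy 'embedding': a term-frequency vector (stand-in for a model)."""
    return Counter(text.lower().split())

def cosine(a, b):
    dot = sum(a[t] * b[t] for t in a)
    na = sqrt(sum(v * v for v in a.values()))
    nb = sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def rag_retrieve(query, docs, k=1):
    """RAG's retrieval step: rank docs by vector similarity to the query.
    In a full pipeline the top-k text is fed to the LLM as context."""
    q = embed(query)
    return sorted(docs, key=lambda d: cosine(q, embed(d)), reverse=True)[:k]

print(keyword_search("parental leave", DOCS))
print(rag_retrieve("how many weeks of parental leave", DOCS))
```

Both calls find the leave-policy document here; the usual argument for the embedding route is that it can also match paraphrased queries that share no keywords with the document, which exact-term search cannot.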

gimulnautti,
@gimulnautti@mastodon.green

@savvykenya Most jobs are fine with mediocre answers. If a new employee needs to know something and doesn't know where to look, that's a pretty good use case for RAG.
