malwaretech,

A while back I wrote a couple of blog posts about how I think LLMs will be a net negative (at least in the near term), due to the extreme overhyping giving people unrealistic expectations of their abilities. The direct result will be an overall degradation of internet usability as people flood platforms with low-quality spam that they erroneously believe to be high quality. Previously it was fairly easy to spot someone who doesn't know what they're talking about, but now LLMs enable them to word things convincingly enough to waste the time of even domain experts.

Even now I'm still often surprised by all the creative ways people find to waste others' time. I just saw this post from one of the curl maintainers reporting that they've been receiving nonsense bug bounty reports based on LLM hallucinations, which I imagine is likely due to people trying to automate bug hunting despite lacking the understanding to confirm their findings. They reported that in one case the submission was convincing enough that they went over the code three times before concluding that no bug existed and the report was likely AI generated.

https://daniel.haxx.se/blog/2024/01/02/the-i-in-llm-stands-for-intelligence/
