doctorn

*The world 3 months ago:* AI is growing exponentially and might take over the world soon. It can do everything you can, but better, and some even seem almost sentient.

*The world today:* Turns out the large language model made to fool us tried to fool us by ‘unexpectedly’ exhibiting the behavior it was made for.



With skills like that, they really could put researchers out of a job. Humans take a lot longer to fake data sets.


LLMs are basically just really good bullshit generators (telling you what you want to hear).

Turns out that’s part of the job description for tech support agents, and some lowbrow art.

For all the other jobs people claimed AI could replace, bullshitting is antithetical to the job description.


I don’t understand why this is surprising or even unexpected. LLMs are not intelligent. They do not actually “know” anything. They are simply trained on grammar patterns and on which words, in which order, tend to make people happy. Of course it’s going to make stuff up, because it has no concept of what is real and what is made up. Those concepts don’t mean anything to it. It puts words into vaguely grammatically correct sentences. That’s it.

I’m already so tired of hearing about these things. If LLMs actually had any amazing capabilities or could change the world, they wouldn’t be sold to the public. This is all just a marketing blitz for what will probably end up like cryptocurrency: a niche thing that does one specific thing very well but otherwise isn’t generally useful.


I think the issue with this is that peer reviewers at academic journals are just regular researchers at regular institutions who are volunteers/voluntold to review things. They don't do forensic examinations of the raw datasets that come across their desk because they're not getting paid to review in the first place, and forensic data examination is a specialized skill anyway. So if the bullshit engines known as LLMs are just convincing enough and can generate supporting data that's just good enough to pass a peer reviewer's smell test, that's going to be a big problem for the whole publishing process worldwide.
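To make "forensic data examination" concrete: one classic screening technique (my illustration, not anything the thread specifies) is a Benford's-law check, which compares the leading-digit distribution of a dataset against the logarithmic distribution that many naturally occurring measurements follow. Fabricated numbers often deviate sharply from it. The function names here (`leading_digit`, `benford_deviation`) are hypothetical; this is a minimal sketch, not a substitute for real statistical forensics.

```python
import math
from collections import Counter

def leading_digit(x: float) -> int:
    """Return the first significant digit of a nonzero number."""
    # Format with 15 significant figures, then drop any leading
    # zeros and decimal points (e.g. 0.0042 -> "42" -> 4).
    s = f"{abs(x):.15g}".lstrip("0.")
    return int(s[0])

def benford_deviation(values) -> float:
    """Mean absolute deviation between the observed leading-digit
    frequencies and the Benford expectation log10(1 + 1/d).
    Larger values suggest the data does not look 'natural'."""
    digits = [leading_digit(v) for v in values if v != 0]
    n = len(digits)
    counts = Counter(digits)
    return sum(
        abs(counts.get(d, 0) / n - math.log10(1 + 1 / d))
        for d in range(1, 10)
    ) / 9

# Powers of 2 are known to follow Benford's law closely,
# while uniformly drawn numbers in [100, 999] do not.
natural_like = [2 ** k for k in range(1, 200)]
uniform_like = [float(i) for i in range(100, 1000)]
print(benford_deviation(natural_like))  # small
print(benford_deviation(uniform_like))  # noticeably larger
```

A real forensic pass would combine several such screens (digit tests, variance checks, duplicate-row detection), since any single test is easy to evade once a forger knows about it.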

I would say that academic journals are going to have to hire data scientists to verify that datasets are genuine before they're sent to reviewers, but that would require them actually spending money to do any work instead of just doing nothing while extorting researchers for billions of dollars in free money, so that's never going to happen.
