gregeganSF,
@gregeganSF@mathstodon.xyz avatar

The idea that you can take an LLM — trained on an undisclosed dataset of all kinds of text, and tweaked by human feedback for various messy goals — and then use its responses for social science “research” rather than using real people, is beyond stupid.

https://www.science.org/content/article/can-ai-chatbots-replace-human-subjects-behavioral-experiments

gregeganSF,
@gregeganSF@mathstodon.xyz avatar

Yes, it would be great to have your career possibilities determined by some fuckwit who told an LLM to pretend to be like you — according to the stereotypes it had been trained on — and then found that the LLM did poorly at a task you might otherwise be hired to do.

AlgoCompSynth,
@AlgoCompSynth@ravenation.club avatar

@gregeganSF Pseudo-science at its capital-intensive, electric-power-guzzling worst. 😡

crispytoats,

@gregeganSF the only place the category metadata could come from is the psychology journals defining the "psycholinguistic features" in the first place. classic cyclic mistake.

helge,
@helge@mas.to avatar

@gregeganSF Even worse: An LLM pretending to be an affluent westerner pretending to be you.

"Language model training data are biased toward Western, affluent people, so the consumer survey might be similarly skewed. But Israeli imagines prompting the AI to impersonate a range of consumers—or to zoom in on a particular demographic—in order to create a more representative study"

Man, they even wrote out in plain text how useless this was without realizing it. This is impressively stupid.

dpwiz,
@dpwiz@qoto.org avatar

@gregeganSF You shouldn’t be applying to a place which uses fuckwit practices anyway. The problem is self-solving.

lord_tacitus,
@lord_tacitus@mastodon.social avatar

@gregeganSF

Ah yes, the "making shit up" model of research.

mdhughes,

@gregeganSF "I named a bunch of Minecraft villagers after employees, and then evaluated their job performance to decide who to fire. Walter kept walking into the cactus farm and died, so we put the real one in a straightjacket for safety."

nickzoic,
@nickzoic@aus.social avatar

@gregeganSF Computer Scientists Stick To Computer Science Challenge.

theLastTheorist,

@gregeganSF You get to skip the IRB, get the data you want (even if it doesn't exist or you don't/shouldn't have access to it), and confirm your hypotheses about novel (maybe even impossible?) claims. I can imagine exactly the kind of researcher who's going to love this.

whorfin,
@whorfin@mastodon.social avatar

@gregeganSF
“not even wrong”

norado,

@gregeganSF
step 1. create a feedback loop
step 1. create a feedback loop
step 1. create a feedback loop
step 1. create a f...

Elizabeth3,
@Elizabeth3@toot.community avatar

@gregeganSF it could be interesting research if it’s contextualized properly. It doesn’t tell us about people (I agree with you there), but to the degree that the 20 million sources it has been trained on are representative of some kind of practical knowledge base, it does tell us something. I find it interesting that it predicts the most likely content for a prompt based on very large datasets. I do share your concerns about validity, given that we don’t know the algorithms.

TonyaCanning,
@TonyaCanning@mstdn.social avatar

@gregeganSF Step one - create LLM
Step two - do research on LLM
Step three - be amazed when the LLM you designed confirms all of your hypotheses

scottmatter,
@scottmatter@aus.social avatar

@gregeganSF

“Costly and cumbersome” are probably the least reasonable objections to doing research with real humans. Just fucking smacks of an extractive colonial mindset in which people owe researchers their participation.

Research_FTW,
@Research_FTW@sciences.social avatar

@gregeganSF
Reminds me of this, why we need the humanities and social sciences. Everyone should get some training in epistemology. http://www.duckofminerva.com/2015/04/thinking-about-the-intellectual-future-of-higher-education.html

TonyVladusich,
@TonyVladusich@mathstodon.xyz avatar

@gregeganSF looks like the usual dimwitted neuro/psycho/cognito/computo babble to me

TruthSandwich,

@gregeganSF

This is the scientific version of the streetlight effect. Just because fake people are easier to study doesn’t mean that there’s anything to be learned from the process.

ccppurcell,
@ccppurcell@mathstodon.xyz avatar

@gregeganSF this is not what Derrida had in mind when he said "there is nothing outside the text"

kurara_tequila,

@gregeganSF so glad to see you cross-post!

FinalOverdrive,

@gregeganSF It's a simulacrum of intelligence, not a simulation.
