hdkarlsen, to random

The birdplace seems to stagnate, so I'm jumping ship. A short introduction: I'm an associate professor of psychology in 🇳🇴 and work on personality psychology and psychometrics. Really just as an excuse to nerd out about R.

junesim63,
@junesim63@mstdn.social

@hdkarlsen Welcome. If you follow @feditips and @FediFollows you'll get lots of advice on the Fediverse. Also, if you put plenty of hashtags for your interests in your intro post, you'll get more followers.

ByrdNick, to psychology
@ByrdNick@nerdculture.de

We know that the task demands of cognitive tests affect scores: if one version of a problem requires more work (e.g., gratuitously verbose or unclear wording, open response rather than multiple choice), people will perform worse.

Now we have observed as much in Large Language Models: https://doi.org/10.48550/arXiv.2404.02418

The tests included analogical reasoning, reflective reasoning, word prediction, and grammaticality judgments.


ByrdNick, to psychology
@ByrdNick@nerdculture.de

Does curiosity refer to just one thing?

There seem to be two (related) notions:

Curiosity1: goal-directed information seeking — e.g., following a string of citations to find the source of a particular claim.

Curiosity2: exploratory information seeking — e.g., watching whatever explainer video is recommended next, even if it’s about a different question or topic.

Metcalfe & Jacobs: https://doi.org/10.4324/9781003009351-6


pgmj, to random

New preprint on criteria for "valid & reliable" measures, and reporting guidelines https://doi.org/10.31219/osf.io/3htzc

Also, we present an R package that simplifies psychometric (Rasch) analysis and combines analysis code with documentation: https://pgmj.github.io/raschrvignette/RaschRvign.html
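For readers new to this kind of workflow, here is a minimal sketch of what a Rasch analysis in R can look like. It uses the well-established eRm package and simulated data, not the preprint's own package (whose API I am not assuming).

```r
# Minimal Rasch analysis sketch in R using the eRm package
# (illustrative only; not the package presented in the preprint).
library(eRm)

set.seed(42)
dat <- sim.rasch(500, 10)     # simulate 500 persons x 10 dichotomous items
fit <- RM(dat)                # dichotomous Rasch model, conditional ML estimation
summary(fit)                  # item difficulty estimates with confidence intervals

pp <- person.parameter(fit)   # person ability (theta) estimates
itemfit(pp)                   # item infit/outfit statistics for quality checks
```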

bud_t, to random
@bud_t@m.ai6yr.org

I'd like to see an actual argument for the CLT. I imagine it doesn't exist.

Florida accepts conservative and Christian-backed test as alternative to the SAT : NPR
https://www.npr.org/2023/09/10/1198638538/what-to-know-classic-learning-test-florida-sat-act-colleges

jrboehnke, to random
@jrboehnke@mastodon.social

As a psychometrician, the software I most frequently recommend is actually FACTOR:
https://psico.fcep.urv.cat/utilitats/factor/Description.html

Reference: https://journals.sagepub.com/doi/10.1177/0146621613487794
Exploratory factor analysis remains a widely used tool in my research area. Most commercial software is really not great at it, and R may be too steep a learning curve for some (although I think one rarely regrets it 😅).
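For comparison, a hedged sketch of what the R route looks like: an exploratory factor analysis with the psych package on simulated data. This is not FACTOR's workflow, just a rough R analogue.

```r
# Rough EFA sketch in R with the psych package (not FACTOR itself).
library(psych)

set.seed(1)
# Simulate item responses with a simple two-factor structure (psych helper).
dat <- sim.item(nvar = 12, nsub = 300)

fa.parallel(dat, fa = "fa")              # parallel analysis to suggest the number of factors
efa <- fa(dat, nfactors = 2, fm = "ml",  # maximum-likelihood EFA
          rotate = "oblimin")            # oblique rotation (needs the GPArotation package)
print(efa$loadings, cutoff = 0.3)        # pattern loadings, small ones suppressed
```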

odr_k4tana, to random

Question for the psychometrics community: I found a paper that developed a short scale and tested it via k-means and LPA clustering (paper here: https://journals.plos.org/plosone/article?id=10.1371/journal.pone.0281021). Now, for me this is odd, as it uses two clustering techniques to assess scale quality.

But then again, this is a sociology paper, and I know that sociologists and psychologists have different world views. In case you didn't know: sociologists tend to look at groups within society or societies at large, whereas psychologists tend to see individuals, and groups as aggregates of individuals. Coming from a sociological perspective, using such clustering methods obviously makes sense.

However, I still have mixed feelings about this approach. I feel an IRT approach would be better, since k-means and LPA obviously do NOTHING to evaluate the items themselves, for example.

How do you see this? Am I completely wrong here?
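To make the item-evaluation point concrete, here is a hedged sketch contrasting what an IRT fit and a k-means clustering each give you. It is not the paper's analysis; it uses the LSAT7 example data bundled with the mirt package purely for illustration.

```r
# Hedged illustration: IRT yields item-level parameters; k-means only groups people.
# Uses the LSAT7 example data shipped with mirt, not the paper's data.
library(mirt)

dat <- expand.table(LSAT7)           # 1000 persons x 5 dichotomous items

twopl <- mirt(dat, model = 1, itemtype = "2PL")
coef(twopl, simplify = TRUE)$items   # per-item discrimination (a1) and easiness (d)
itemfit(twopl)                       # per-item fit statistics (S_X2 by default)

set.seed(7)
km <- kmeans(dat, centers = 2)       # partitions respondents into 2 clusters
table(km$cluster)                    # cluster sizes -- says nothing about item quality
```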
