BorisBarbour

@BorisBarbour@mastodon.social

Neuroscience (CNRS, ENS); pubpeer.com; peeriodicals.com; referee3.org. Views my own. Inactive @BorisBarbour on Twitter


BorisBarbour, to random

An appeal to the authority of @lakens on PubPeer (last comment of the thread). Even a one-way ANOVA is not always so simple.

https://pubpeer.com/publications/B1314681B75686D59552C160A36B82

BorisBarbour,

@lakens

I guess this is (implicitly) an example of "what biologists want"...

A test comparing means or medians without inflation of the false positive rate, for groups that can be unbalanced, small (n = 3 is common), of unequal variance (some SD ratios beyond 10 in this case), non-normal, and possibly drawn from different distributions.

Loss of power would be acceptable - that would be the authors' problem. We already have enough false positives for a generation.
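By way of illustration (my own toy simulation, not from the PubPeer thread), here is how classical one-way ANOVA can misbehave under exactly these conditions. The group sizes and the 10x SD ratio are assumptions chosen to mimic the scenario described above:

```python
import numpy as np
from scipy.stats import f_oneway

rng = np.random.default_rng(0)
alpha, n_sims = 0.05, 20_000

false_positives = 0
for _ in range(n_sims):
    # All groups share the same true mean, so the null hypothesis is true;
    # only the spreads differ. Pairing the largest SD with the smallest
    # group is the anticonservative direction for classical ANOVA.
    g1 = rng.normal(0.0, 1.0, size=12)
    g2 = rng.normal(0.0, 1.0, size=12)
    g3 = rng.normal(0.0, 10.0, size=3)  # tiny group, 10x the SD
    _, p = f_oneway(g1, g2, g3)
    false_positives += p < alpha

print(f"Empirical false positive rate: {false_positives / n_sims:.3f} "
      f"(nominal {alpha})")
```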

BorisBarbour,

@lakens

Oh, with extensions to multifactorial designs including repeated measures, please!

BorisBarbour,

@lakens

Semi-seriously, the fact that loss of power could be acceptable in the extreme cases might offer some flexibility. Simply use the largest group variance instead of trying to construct a clever pooling procedure?
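To make that concrete, a minimal sketch of one reading of "use the largest group variance" for a two-group comparison. `conservative_t_test` is a hypothetical helper, not an established or vetted procedure: it substitutes the worst-case variance for both groups and uses conservative degrees of freedom, trading power for control of false positives.

```python
import numpy as np
from scipy.stats import t as t_dist

def conservative_t_test(x, y):
    """Two-sample test using the worst-case (largest) group variance."""
    x, y = np.asarray(x, float), np.asarray(y, float)
    nx, ny = len(x), len(y)
    s2_max = max(x.var(ddof=1), y.var(ddof=1))  # largest sample variance
    se = np.sqrt(s2_max / nx + s2_max / ny)     # applied to both groups
    t_stat = (x.mean() - y.mean()) / se
    df = min(nx, ny) - 1  # conservative degrees of freedom
    p = 2 * t_dist.sf(abs(t_stat), df)
    return t_stat, p

# Tiny, unequal-variance groups of the kind discussed above.
print(conservative_t_test([1.0, 1.2, 0.9], [3.0, 8.0, -2.0, 5.0]))
```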

BorisBarbour,

@lakens

There are, err, some very low-hanging fruit elsewhere... And you might get a consulting gig at Nature!

There are scenarios - elucidating complex preclinical mechanisms - where the critical issue is not so much accurate estimation but avoiding false positives, because you have elaborate logical constructions in which multiple elements must be true for the final conclusion to hold.
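A back-of-the-envelope illustration of why chained conclusions are so sensitive to false positives (the numbers are illustrative assumptions):

```python
# If a conclusion needs k elements to all be true, and each supporting test
# has false positive rate alpha, then (assuming independent tests on true
# nulls) the chance of at least one spurious element grows quickly with k.
alpha = 0.05
for k in (1, 3, 5, 10):
    p_any_false = 1 - (1 - alpha) ** k
    print(f"k = {k:2d}: P(at least one false positive) = {p_any_false:.3f}")
```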

BorisBarbour,

@lakens

It would be so useful to have a one-stop shop: "it's true (at given alpha) if it passes this test". If it fails but the authors wish to insist, it's up to them to do more work and justify an approach with better power.

BorisBarbour,

@lakens

Of course, making it harder to publish unreliable research might not be the quickest route to universal popularity.

BorisBarbour,

@lakens

Haha - condemned to my fate, shall we say.

For sure, the average psychologist has a much better grasp of statistics than your average cell biologist (just one example field). I wrote this old blog post in reaction to frequent cases observed in glamour journals. Nothing has changed.

https://referee3.org/2019/03/09/how-to-cheat-at-stats/

It's hard to avoid the impression that in some labs people blindly click options in Prism until they get p < 0.05. Referees and editors seem completely oblivious to the issue.
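For illustration, a sketch of that pattern, with an assumed menu of three common tests standing in for Prism's actual options: running several tests on the same null data and reporting whichever gives the smallest p-value inflates the false positive rate well beyond the nominal 5%.

```python
import numpy as np
from scipy.stats import ttest_ind, mannwhitneyu

rng = np.random.default_rng(1)
alpha, n_sims = 0.05, 10_000

hits = 0
for _ in range(n_sims):
    x = rng.normal(size=8)
    y = rng.normal(size=8)  # same distribution: any "effect" is noise
    p_values = (
        ttest_ind(x, y).pvalue,                   # Student's t
        ttest_ind(x, y, equal_var=False).pvalue,  # Welch's t
        mannwhitneyu(x, y).pvalue,                # Mann-Whitney U
    )
    hits += min(p_values) < alpha  # keep whichever test "worked"

print(f"False positive rate with test shopping: {hits / n_sims:.3f}")
```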

Private
BorisBarbour,

@mwfc @bensb @academicchatter

Document on PubPeer?

BorisBarbour,

@bensb @academicchatter

Definitely worth a try, if only for editorial education. (I think the editorial enforcement procedure has been there since they introduced this policy.)

Apropos, an old blog post:

https://referee3.org/2019/12/25/data-sharing-should-be-mandatory-public-and-immediate/

BorisBarbour, to random

EiC of Science, Holden Thorp, on how to progress on matters of research integrity.

https://www.science.org/doi/10.1126/science.adk1852

I would add: just require and share the underlying data as early in the process as possible. There is nothing defamatory about data, and I'm sure many cases would resolve much more quickly with the extra sunshine.

BorisBarbour, to random

Plus ça change...

Ivan Oransky and Adam Marcus expect 100k retractions per year.

https://www.theguardian.com/commentisfree/2023/aug/09/scientific-misconduct-retraction-watch

fulelo, to random
@fulelo@journa.host

- Elon Musk says X will fund legal bills if users treated unfairly by bosses
https://www.bbc.co.uk/news/technology-66420177

BorisBarbour,

@fulelo

And if you believe that, you really will believe anything.

Sheril, to science
@Sheril@mastodon.social

Born in 1928, Vera Rubin set her sights on Princeton, but they wouldn’t accept female grad students in astronomy. So she earned her master’s from Cornell & PhD from Georgetown.

In 1965, Rubin became the 1st woman allowed to observe at the Palomar Observatory. She went on to find evidence for the existence of dark matter.

In 1993, Rubin was awarded the National Medal of Science. But curiously, she was not awarded a Nobel Prize. https://www.themarginalian.org/2016/04/18/vera-rubin-interview-women-in-science/

BorisBarbour,

@Sheril

Curiously -> shamefully, scandalously.

adredish, to random

Interesting: eLife is following through on their "publish with reviews that say the paper has serious problems". To be fair, this is exactly what eLife said they would do. So, props for that.

https://elifesciences.org/reviewed-preprints/89106#tab-content

I was interested to see how this would shake out, particularly in a field that I am not (at all) an expert in, so I went reading....

From a "I have infinite time and I like to read stuff and I'm well-trained in critical thinking", this is kinda fun. From a "I have to judge this person along with the 50 others applying for this job" or "I have to judge the biosketch along with the other dozen grants I have to review." position, I'm far less sanguine about this. It definitely means that we cannot use eLife as a simple published gate keeper anymore. I'm not sure how I will take eLife papers that I see on CVs.

On the other hand, after the last study section I was on, where some reviewers took everything listed as a "paper" (including middle author conference presentations) and some reviewers didn't count anything that wasn't a first-author product in a recognizable journal, it is clear that our metrics are really broken anyway.

(BTW, I got to this paper from science twitter, which seems to be still working well for me, which is also an interesting data point.)

BorisBarbour,

@adredish

I think the point is that it will at least be necessary to read the editorial summary of eLife papers, which can already add a bit of nuance (even if this sort of repudiation will be rare). Of course, you should also check PubPeer etc...

BTW I saw it here from several sources, so thankfully not missing Science Twitter...

Carl_Zimmer, to random
@Carl_Zimmer@mastodon.social

Spicy open peer review on the new Homo naledi papers over at eLife: https://elifesciences.org/reviewed-preprints/89125v1/reviews#tab-content

BorisBarbour,

@albertcardona @Carl_Zimmer @eLife

It will be interesting to see how this paper will be considered going forward.

lakens, to random
@lakens@mastodon.social

"Science, it is said, is self-correcting, but in the behavioral sciences especially, it corrects only very slowly." Rosenthal, 1966

BorisBarbour,

@lakens

One funeral at a time...

elduvelle, to Neuroscience
@elduvelle@neuromatch.social

Just saw a new article in “Applied Sciences”, an MDPI journal, and I would like to advise everyone not to publish there… The paper is full of inaccurate statements, and I’ve heard that MDPI is kind of predatory. Happy to hear any contradictory opinion though.

BorisBarbour,

@elduvelle

"kind of predatory"...

https://www.mdpi.com/special-issues?sort_name=title&sort_type=ASC&key=

You can navigate to

"Displaying special issues 168601-168700 on page 1687 of 1687. "

BorisBarbour,

@elduvelle

Yeah, over 168,000 special issues. And, yes, Hindawi at least have the same scam.

BorisBarbour,

@elduvelle

I don't know that. But the most profitable (open access) journals are those that spend the least on editorial oversight and quality assurance. It seems an irresistible economic force and I'm sure the majors are not immune. After all, Wiley presumably bought Hindawi for the profits.

richardsever, to random

"Agents of chaos could create thousands of AI-generated papers and submit them to scholarly journals, flooding the peer-review system. Even if 99% could be detected and rejected, the remaining 1% would erode the integrity of the literature"

Fascinating piece on how publishers can fight misinformation by @ct_bergstrom https://www.nature.com/articles/s41591-023-02411-7 1/2

BorisBarbour,

@richardsever @ct_bergstrom

Every "professional" researcher should be validated to the journal by their institution via ORCID. Then, at least, when problems are found they can be pinned - permanently - to real authors.

Citation farming is the "perfect crime" if no real authors are required. After all, it's hardly a crime to be cited.

BorisBarbour,

@ct_bergstrom @richardsever

I know, but there is an option for an institution to validate your profile, I believe. If journals required it for publication, it would happen.

eLife, to Neuroscience

New “tour de force” research demonstrates the capacity of deep brain stimulation to alleviate depression in rodents. https://elifesciences.org/articles/84693?utm_source=mastodon&utm_medium=social&utm_campaign=organic

BorisBarbour,

@eLife

Does a simple spreadsheet for the behavioural data really satisfy the eLife data availability policy?

augieray, to random
@augieray@mastodon.social

I continue to track the studies showing all the ways repeated infections harm brains, immune systems and other organs. Now, if that was the case, surely we'd see signs of it, right?

Like, if kids' brains were being harmed by ongoing COVID infections, we'd see something like math and reading scores still plunging to their lowest levels in decades, even though kids have been back in classrooms for two years, right? https://apnews.com/article/math-reading-test-scores-pandemic-school-032eafd7d087227f42808052fe447d76

(1/5)

BorisBarbour,

@augieray

^ Intriguing. Trying to conceive of confounding variables is also interesting.
