A question that came up in a recent (in-person) discussion: Has anyone compared the reproducibility or the validity of bioRxiv preprints with published journal articles? Are preprints less reliable than peer-reviewed journal articles? It would seem we have enough examples now to check this.
I don't normally post political things on social media, but there is so much discussion of negative interactions and I want to show an example of a positive solution.
The University of Minnesota has reached a negotiated agreement with campus protesters through mutually respectful dialogue with the protest leaders. The deal apparently includes opportunities to communicate divestment plans to the regents, discussions of ways to provide advocacy opportunities to students without limiting other students' individual opportunities, discussions of possible collaboration with Palestinian universities and an examination of current cross-university agreements, full disclosure of university expenditures, as well as amnesty and leniency. The protesters have agreed to remove the encampment and not to disrupt commencement ceremonies or finals. University regents and the university police have both agreed to abide by the agreement as well.
Now let's see how much the media report this as compared to the negative interactions at other schools.
The issue for me is that there are three goals of review. In my view, they are being confounded in this new system.
1. A communication to the editor to help them make the decision whether to put the paper in their journal or not. (Remember, this is the editor's choice, not the reviewer's. I have lots of examples of mismatches if anyone thinks otherwise - in both directions.)*
*Certainly, one can argue that we shouldn't be using editor-limited journals for scientific discourse. That's a different discussion.
2. Helpful guidance to an author to make the paper better.**
**Yes, I admit, I am an artist and thus a perfectionist when I produce public products - I do not put out partial products. This means that I am not about to say "I have a vague memory of X" without going to the extensive effort of checking that my memory of X is right and finding the proof (which could be a lot of work). But I am very happy to say to the author "Check X" and to the editor "If X is wrong, don't worry about it." My reviews are not public products.
3. Commentary for the public so that people can see the concerns that need to be considered when appreciating the paper.
I feel very strongly that 1 and 2 should be private, but 3 should be public. 1 and 2 are the purpose of review. 3 is the purpose of commentary. They are not the same.
PS. The reason for my initial post was that a journal I agreed to review for turns out not to have any mechanism to communicate with the editor separately. They also make the full reviews public (if the paper's author wants, which seems unfair - it should at least be at the discretion of both the author of the paper and the author of the review).
It's really unfortunate that reviews are seen as competitive and confrontational. I want my reviews to be suggestions to help the authors.
Because of that, I don't want the reviews I write made public. Reviews are for the editor to assess the author's work and for the author to make the work better. They are not commentary for the public to understand the work.
Fixing the paper is the author's job. Not mine. I want to be able to say "There was a paper a long time ago by someone like XYZ who you should go read." Or "There's a whole literature here by people like ABC to go engage with." Finding that literature is the author's job. Not mine. I want to be able to say "what if X is not linear?" or "what if Y happens?". I don't want to have to work out the math myself. That's the author's job. Not mine. None of this should be public.
Moreover, I need a process to say "I have the following concerns with my own review. I have this knowledge and not that. So you can trust me here, but not there." (Which deanonymizes the review, BTW.) I need a process where I can say to the editor, I'm concerned about A, but it's not my knowledge-base, or I think they should fix B, but it's not that important if they don't. Deciding what of that to communicate to the authors is the editor's job. Not mine.
Given the current ecosystem of publishing reviews and of preventing me from providing context to the editors separate from information sent to the authors, I am very seriously considering turning down all review requests from now on. (But it seems unfair to add publications to the ecosystem while not doing my part to review them.)
@knutson_brain @adredish But it is a competition nonetheless. Not every paper gets published, and your reviews are a key part of determining which do and which don't. Not everyone gets to stay in science, and which papers get published is a key part of determining that. I think we all have to remember, when we're writing reviews, that this is what we're doing. And we don't have to do it. If you want to do cooperative review, announce publicly that you'll give private feedback on manuscripts if people send them to you. But don't review for competitive journals if you're not comfortable with the role of determining who gets a paper published and who gets to stay in science, because that's what reviewing is about.
@neuralreckoning @knutson_brain @adredish Indeed. My attitude to publication peer review is possibly not aligned with that of any of the other participants. I'd be happy to review preprints, and I consider preprinting to be publication. (As an independent researcher with no access to an institutional library, I consider anything paywalled to be unpublished and would not cite it if I could possibly avoid it. I think paywalled papers are effectively "personal communication".)
For the one-year anniversary of the audiobook version of CHANGING HOW WE CHOOSE (The New Science of Morality), the audiobook is on sale until 30 April.
New paper published: a study of decision making and psychosis. Drift diffusion models, attractor dynamics, and patients with psychosis doing the dot pattern expectancy task. Fascinating #computationalpsychiatry work by the #NeuroPRSMH team.
G. W. Diehl, A. D. Redish (2024) Measuring excitation-inhibition balance through spectral components of local field potentials. bioRxiv, unreviewed preprint.
#Cosyne needs to decide if it is a journal or a conference. 😡 They ask for 2-page "abstracts", but then review the 2 pages as if they were a full paper. Two of the three reviews we got back included statements like "I don't understand why they didn't do X", where X was (a) Figure 5 of a 10-page paper (the "abstract" included figures 1-3) or (b) a 3-year, $500k project beyond the actual submission. The one good review was for a project that could be completely described in 2 pages. That's not an abstract.
@adredish I also feel it's disrespectful to the community to make people submit in a format that can't be reused and have a high reject rate. You're guaranteeing to waste a large amount of people's time like that.
Including a clinical tutorial from Martin Paulus, a computational tutorial from Angela Radulescu, and confirmed keynotes from Josh Gordon, Justin Baker, Roshan Cools, Damien Fair, and Anna Konova.
Save the date. Come present your work. Call for abstracts and other details to come soon.
An interesting consequence of the hypothesis that human social structures are built on "assurance" or coordination games rather than prisoner's dilemmas:
Coordination games have two stable states. If you are living in a world where everyone else is cooperating, it is in your best interests to cooperate as well. If you are living in a world of cheaters, cooperation is for suckers.
This means your perception of your community has a big impact on your own behavior.
While I agree that we do need things like @deevybee 's defense against the dark arts, @PubPeer, and the enforced "share the data as is" regulations that @BorisBarbour has been talking about, I think we also need to make sure that we CELEBRATE openness and integrity, and that we report them to the world. We do not want all of our news reports to be about fraud.
So, can I recommend a policy? For every fraud that gets reported, find a positive success to talk about. I guarantee they are out there. In fact, I bet they are so common, we don't notice them.
All the people who share their code and fix the bugs that others find. The labs that say "come on by and we'll show you how we do stuff". The people who work with others to make their data useful and not just "out there". There are lots and lots of these positive examples. I worry they get lost because they are so common. We need a hashtag for celebratory cooperation in the sciences. I'm open to suggestions.
PS. For those who don't know it, the coordination game is structured so that for player A, writing X(Y) for A's payoff when A plays X and B plays Y: C(C) > D(C) > D(D) > C(D), as compared to the prisoner's dilemma, which is: D(C) > C(C) > D(D) > C(D).
In the coordination game, it is best to do what the other player is doing. In the prisoner's dilemma it is best to defect. There are n-player extensions of this as well.
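The two-player claims above can be checked directly. A minimal sketch in Python, using illustrative payoff numbers chosen only to satisfy the two orderings (the specific values 1-4 are my assumption, not from the post), showing that the coordination game has two pure-strategy equilibria (everyone cooperates, or everyone defects) while the prisoner's dilemma has only mutual defection:

```python
# payoff[my_move][their_move] = my payoff; moves: 'C' (cooperate), 'D' (defect).
# Illustrative numbers satisfying the orderings in the post:
coordination = {'C': {'C': 4, 'D': 1}, 'D': {'C': 3, 'D': 2}}  # C(C) > D(C) > D(D) > C(D)
prisoners    = {'C': {'C': 3, 'D': 1}, 'D': {'C': 4, 'D': 2}}  # D(C) > C(C) > D(D) > C(D)

def best_response(payoff, their_move):
    """The move that maximizes my payoff, given the other player's move."""
    return max('CD', key=lambda m: payoff[m][their_move])

def pure_equilibria(payoff):
    """Pure-strategy equilibria of the symmetric game: profiles (a, b)
    where each player's move is a best response to the other's."""
    return [(a, b) for a in 'CD' for b in 'CD'
            if best_response(payoff, b) == a and best_response(payoff, a) == b]

print(pure_equilibria(coordination))  # [('C', 'C'), ('D', 'D')] - two stable states
print(pure_equilibria(prisoners))     # [('D', 'D')] - defection dominates
```

This is why perception of the community matters in the coordination game: which of the two equilibria you end up in depends on what you expect the other player to do.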
How does pre-registration get to this question? Or are you answering a different thread?
Please check back and re-read the post that started this thread. I'm saying something very different about social media and perception and the creation of community.
Hi, I was answering @BorisBarbour 's post, where he writes, quote, "In other words, researchers should be evaluated on their practices and not on the numbers of their publications and the impact factors of their journals?".
A really good paper on the nuances of "questionable research practices". Sometimes they're helpful and can actually increase reproducibility.
Reinagel, P. (2023). Is N-Hacking Ever OK? The consequences of collecting more data in pursuit of statistical significance. PLOS Biology, 21(11), e3002345.