BorisBarbour,
@BorisBarbour@mastodon.social avatar

This investigation of Ranga Dias' superconductivity publications is remarkable for multiple reasons.

https://www.nature.com/articles/d41586-024-00716-2

Nobody comes out of it well, but Nature are much more transparent about the editorial process than I can ever remember. (It's a little unclear if that was spontaneous, but, if not, the frequently claimed independence of Nature News came good.)

Thread. /1

BorisBarbour,
@BorisBarbour@mastodon.social avatar

The "research" is at times risible. Key experimental results appeared suddenly in a manuscript version upon which lab members were given a couple of hours to comment before submission to Nature.

"When the students asked Dias about the stunning new data, they say, he told them he had taken all the resistance and magnetic-susceptibility data before coming to Rochester."

Just nonchalantly sitting on proof of room-temperature superconductivity for a few years, as one does. /2

BorisBarbour,
@BorisBarbour@mastodon.social avatar

The students are definitely not the villains of the piece, but if they "did not suspect misconduct at the time" and "trusted their adviser", they seem somewhat naive under the circumstances. /3

BorisBarbour,
@BorisBarbour@mastodon.social avatar

For the first paper, Nature engaged three referees and there were three rounds of review. One referee was strongly positive, the other two did not support publication. Nature went ahead anyway.

I can't think of a previous black-and-white example where Nature have admitted allowing impact to override quality, although that has always been the tacit implication of their editorial policy. And this is exactly the result they risk with that policy. /4

BorisBarbour,
@BorisBarbour@mastodon.social avatar

Nature did try to deal with the mess when they received complaints about the data, and they ended up retracting the paper. Gone are the days of stonewalling all such problems, and good riddance.

Procedures can certainly be improved, though. Co-authors (students) had been kept out of the loop during the investigation, which they only discovered when asked if they agreed with the retraction. Obviously they should have been contacted as soon as the investigation began. /5

BorisBarbour,
@BorisBarbour@mastodon.social avatar

Incredibly, Nature then accepted another paper from Dias about a new superconducting material.

EiC Magdalena Skipper: "Our editorial policy considers every submission in its own right". That policy is shown to be dangerously naive.

Also: "decisions should be made on the basis of the scientific quality, not who the authors are." This paper was an embarrassing failure of evaluation. How do Nature aim to improve their processes? /6

BorisBarbour,
@BorisBarbour@mastodon.social avatar

Nature were not helped by the institution, the University of Rochester, which conducted four investigations; only the fourth identified any problems. During none of the first three were any of the students contacted! Simply pathetic. Nobody, including journals, should rely on (non-transparent) institutional investigations. /7

moritz_negwer,
@moritz_negwer@mstdn.science avatar

@BorisBarbour Good write-up, thanks. That's what I find most striking as well: the University of Rochester only got around to asking the grad students in the fourth investigation. What were they checking in the previous rounds, I wonder?

BorisBarbour,
@BorisBarbour@mastodon.social avatar

@moritz_negwer

Short-term and short-sighted management of reputation and finances.

nicolaromano,
@nicolaromano@qoto.org avatar

@BorisBarbour @moritz_negwer This is something I always find bizarre. Whenever universities investigate misconduct (whether data fraud, harassment, or anything else), their default position is to hide everything because of the reputational damage. I have seen this many times, and often the students are the ones who come out of it worst. I for one would applaud a university that actually engaged in a proper investigation and took action if necessary. That would increase the university's reputation; pretending nothing happened doesn't. 🫤

brembs,
@brembs@mastodon.social avatar

@BorisBarbour

For as long as I can remember, they've always made it quite explicit that their editors reign supreme and reviewers only advise them - and that this goes in both directions.

In the words of now infamous Declan Butler, "peer-review light": the non-peers are making the main decisions and the peers are relegated to the back-seats.

MarkHanson,
@MarkHanson@fediscience.org avatar

@brembs @BorisBarbour "For as long as I can remember, they've always made it quite explicit that their editors reign supreme and reviewers only advise them - and that this goes in both directions."

Isn't that how journals started, and how they're supposed to function? The role of reviewers is to advise the editor, not be the editor and make decisions for the journal.

If editors aren't supposed to make their own judgement calls, why have trained scientist experts be editors at all?

MarkHanson,
@MarkHanson@fediscience.org avatar

@brembs @BorisBarbour Sure, this sometimes gets you the Benveniste affairs of the world... That's what's happened here, right? But that's built into the system, which relies on good science winning out in the end. And it did so here as well. So is there really a problem?

Nature's a private company. They're allowed to screw up, and we're allowed to judge the sum of their work and decide if their error rate is unacceptably high. Doing peer review is voluntary, we vote with our feet.

BorisBarbour,
@BorisBarbour@mastodon.social avatar

@MarkHanson @brembs

Does the policy pass the honesty test: would they publish if they had to post the referee reports alongside, with only a single positive one? I'm guessing no.

I think Rochester and the funders come out of this affair far worse than Nature. But there are plenty of things Nature can improve upon:

  • do more to resolve scientific issues between referees before accepting
  • bear in mind track records for quality/integrity
  • contact all authors in an investigation

MarkHanson,
@MarkHanson@fediscience.org avatar

@BorisBarbour 100% agree.

Re: "dangerous" - to who?

What sort of error rate should journals be allowed? Shouldn't we just let Nature accept the egg on their face and we all move on?

I guess if I summed up my stance: science does not have a zero-tolerance policy on being wrong. The issue here stems from giving undue weight to being 'published' as meaning 'true'.

This isn't some failure of the scientific method. As emphasized here, the scientific method doesn't end at publication.

albertcardona,
@albertcardona@mathstodon.xyz avatar

@MarkHanson @BorisBarbour

Key issue is the system itself: publish a paper and pretend it's the ultimate truth on the matter. A system shift is needed to negate that assumption on published papers, and to instead more humbly publish results as the latest take on the matter, correct or not but hopefully constructive and insightful. A first step to that end is to stop using papers as tokens of academic currency weighted by the publication venue and for any evaluators to start reading the papers.

brembs,
@brembs@mastodon.social avatar

@albertcardona @MarkHanson @BorisBarbour

Precisely, Albert!

Some of us are old enough to remember the old tagline of Nature "the world's best science and medicine" - pretty much the opposite of what the data say (which may be one reason why they stopped using it) 🤣

I'd guess at this point, 30 years into the debate, most people with some competence probably agree that the system is FUBAR, like Albert says. That kind of consensus has been emerging over the last decade or so.

brembs,
@brembs@mastodon.social avatar

@albertcardona @MarkHanson @BorisBarbour

The consensus that we will eventually need to replace academic journals has only been emerging in the last 2-3 years, and mostly here in Europe, more slowly elsewhere.

adredish,
@adredish@neuromatch.social avatar

@brembs @albertcardona @MarkHanson @BorisBarbour
@neuralreckoning

I think a lot of this recognition that we will need to replace academic journals soon stems from seeing that bioRxiv, PsyArXiv, and medRxiv have not been the disasters many thought they would be*. I think a lot of people thought that peer review was critical to the success of the enterprise, and therefore we had to put up with the journals because we needed the peer-review gatekeeping. However, it has become clear that (within field), labs can mostly do their own peer review.

It is not clear what we can do about science outside our field. As a scientist, how can I know whether to believe something outside my immediate field? And how should we control what journalists, politicians, and clinicians trust, given that they do not have the training to do their own "in-lab" peer review?

Nevertheless, importantly, now that we have preprint servers and can compare pre- and post-peer review, it is pretty clear that peer review isn't doing much, which gives us the ability to say that the costs (excessive publisher profits, reviewer time costs, etc) are not worth the gains.

* Yes, I know, arXiv has been around for many, many years. But people somehow thought biology, psychology, and the other non-physics fields were different. ¯\_(ツ)_/¯

albertcardona,
@albertcardona@mathstodon.xyz avatar

@adredish @brembs @MarkHanson @BorisBarbour @neuralreckoning

Great point on the changing perception of preprints in the biological sciences. For me a published paper is always like a preprint – I read it with an equal amount of scrutiny – so I haven't noticed any difference between before and after the rise of preprints.

On the "outside field" point: I reckon this is an issue already and always has been. Peer review is not at all a guarantee, as shown time and again for work that many care about (room-temperature superconductivity being the latest example); and a number of still-unexamined peer-reviewed studies wouldn't pass muster either if anyone bothered to look.

Journalists, unless they are themselves trained in the field, are limited to reporting what those in the field have commented. Politicians, on the other hand, are meant to trust at face value the reports from their specialists – the impact forecast presented in executive-summary form – and evaluate them against other pressing needs in society to make what is, precisely, a political decision. Clinicians perhaps lack such counsel from specialists (and the void is filled by unscrupulous pharma companies), but in compensation have considerable training themselves.

BorisBarbour,
@BorisBarbour@mastodon.social avatar

@MarkHanson @brembs

The experts they showed the reports to for this article shared your view and don't appear to have found the decision shocking.

Still, deciding to run with one positive report seems dangerous.

And your comment raises the interesting question of the level of expertise of the professional editors.

MarkHanson,
@MarkHanson@fediscience.org avatar

@BorisBarbour was in the middle of a 2nd post that maybe responds to that point :)

https://fediscience.org/@MarkHanson/112076157010161685

I've been thinking about this a lot recently... it's kind of messed up that many journals systematize the peer-review recommendations in terms of "accept/reject". Like... reviewers are consulted for comments, not to do the editor's job. One or two whole generations of scientists have been raised with the idea that editors are just rubber stamps with little power. Is that really the way it should be?

tshirtman,
@tshirtman@mas.to avatar

@BorisBarbour weirdly the link is broken for me, while preview worked. Maybe the mastodon hug of death 💀
