You know how most papers suffer in journal clubs, yet the authors probably have answers to many of the criticisms? Well, one of the hopes in creating PubPeer ("The online journal club") was that it would be a forum where authors could defend their papers. So it's great to see Beth Stevens replying to some accumulated questions about her work on synapse pruning by microglia.
Nobody comes out of it well, but Nature are much more transparent about the editorial process than I can ever remember. (It's a little unclear whether that was spontaneous, but, if not, the frequently claimed independence of Nature News came good.)
The "research" is at times risible. Key experimental results appeared suddenly in a manuscript version upon which lab members were given a couple of hours to comment before submission to Nature.
"When the students asked Dias about the stunning new data, they say, he told them he had taken all the resistance and magnetic-susceptibility data before coming to Rochester."
Just nonchalantly sitting on proof of room-temperature superconductivity for a few years, as one does. /2
@BorisBarbour Interesting stuff! This highlights why it's so important to include the versions of all the software used for analysis. I find it so frustrating that this does not seem to be the standard across the board.
Even more egregiously, some researchers don't even version their own code. I've spent quite a bit of time trying to convince them that this is critical. It's crazy to me!
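One low-effort way to address the version-reporting point above: dump the exact versions of your analysis dependencies alongside your results. A minimal sketch using only the Python standard library (the package names and `environment_report` helper are illustrative, not from any specific project):

```python
# Minimal sketch: record the Python version and the exact versions of
# the packages used in an analysis, so the environment can be reported
# in a paper and reconstructed later.
import platform
import importlib.metadata


def environment_report(packages):
    """Return a dict mapping each name to its installed version string,
    or 'not installed' if the package is absent."""
    report = {"python": platform.python_version()}
    for name in packages:
        try:
            report[name] = importlib.metadata.version(name)
        except importlib.metadata.PackageNotFoundError:
            report[name] = "not installed"
    return report


if __name__ == "__main__":
    # Hypothetical dependency list; substitute your own.
    for pkg, ver in environment_report(["numpy", "scipy"]).items():
        print(f"{pkg}=={ver}")
```

Pairing a report like this with version control of the analysis code itself (even just a git tag per submitted manuscript) covers both complaints in the thread.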
How bad are citations as a metric? The world is already upside down. Clarivate can no longer produce a list of highly cited mathematicians. Obscure Chinese medical schools (without a maths department) are more cited than Princeton.
@BorisBarbour There's an interesting viewpoint in Jerry Muller's "The Tyranny of Metrics" (which I recommend everyone in academia read). Metrics can be useful as part of the process of diagnosing a problem, as long as they aren't used as a target and are only the start of the process, not a substitute for looking at specifics. Bibliometrics would be a useful tool for understanding what's going on if they weren't also being gamed.
Obviously they are useless now, after having been gamed. But I'd say that citations were deeply unreliable even before they were gamed. Papers with thousands of citations turned out to be untrue, so not even 1 in 1000 of those citations amounted to useful validation.