jonny, (edited)
@jonny@neuromatch.social avatar

One thing that sucks about reviews being so broken, a vector of domination rather than cooperation, is that in the best case they can be skillshares as much as anything else. In some code reviews I have given and received, I have taught and learned how to do things that I or the other person wished they knew how to do, but didnt.

That literally cant happen in the traditional model of review, where reviews are strict, terse, and noninteractive. Traditional review also happens way too late, when all the projected work is done. Collaborative, open, early review literally inverts the dreaded "damn reviewers want us to do infinity more experiments" dynamic. Instead, wouldnt it be lovely if, during or even before an experiment, you had a designated person to be like "hey, have you thought about doing it this way? If not i can show you how"

The adversarial system forces you into a position where you have to defend your approach as The Correct One, and any change in your Genius Tier experimental design must be only to validate the basic findings of the original design. Reviewers cannot be considered collaborators, and thus have little incentive to review in any other spirit than "gatekeeper of science."

If instead we adopted some lessons from open source and thought of some parts of reviews as "pull requests" - where fixing a bug is somewhat the responsibility of the person who thinks it should be done differently, but they also get credit for that work in the same way the original authors do - we could:
a) share techniques and knowledge between labs in a more systematic way,
b) have better outcomes from moving beyond the sole genius model of science,
c) avoid a ton of experimental waste from either unnecessary extra experiments or improperly done original experiments,
d) build a system of reviewing that actually rewards reviewers for being collegial and cooperative

edit: to be super clear here i know i am not saying anything new, just reflecting on it as i am doing an open review

jaybaeta, (edited)
@jaybaeta@mastodon.social avatar

@jonny I'm just spitballing based on two hours of sleep, but I think one way a designated reviewer doesn't work is that it relies on the scientist's network, and many scientists (especially introverts and non-westerners) have limited networks. I do like the idea of a roving expert in x procedure who just advises people on it, though. A lot of methodological mistakes are made by students, and they are only caught after submission by a competent reviewer.

jaybaeta,
@jaybaeta@mastodon.social avatar

@jonny Also, I conjured up a git-based journal a couple of years ago based on the idea of authors and reviewers submitting pull requests, making it a 100% transparent journal. It didn't go past experimentation, partly because git is annoying and a friend pointed out a wiki journal would be easier. (I think you've written something similar, but I'm too tired to recall right now.) But conceptually, an open GitHub Pages (or equivalent) site where every step is publicly recorded is quite elegant.

jonny,
@jonny@neuromatch.social avatar

@jaybaeta
Yes!!! There is a porous boundary between git-like and wiki-like things that are mostly differences in interface, and YES!!! they are extremely powerful

jonny,
@jonny@neuromatch.social avatar

@jaybaeta
Totally. Thats the benefit of systems of public review like @joss and @pyOpenSci - I personally had precisely zero network in open source software outside members of my grad cohort (who were lovely and we fended for ourselves!) for the first 5 or so years I was learning to program, and I would have LOVED a venue to show work in progress and get formal input from roving experts (as is the purpose of those orgs).

The first time I had code review from someone who knew what they were doing was an earth shattering event that completely changed how I work and think about software, and I want that for anyone who needs it, in any discipline, anywhere.

jaybaeta,
@jaybaeta@mastodon.social avatar

@jonny @joss @pyOpenSci It would be fantastic to see this in ecology, honestly, now that you're mentioning it.

I feel terrible for students (it's always students with inattentive professors) who go to all the effort of conducting grueling field work and then are blindsided by someone telling them they used the wrong method. And they just don't see the problem until someone who knows better points it out.

zackbatist,
@zackbatist@archaeo.social avatar

@jonny Sounds a lot like the role of a doctoral supervisory committee, in some ways

jonny,
@jonny@neuromatch.social avatar

@zackbatist
Ya but like what if peers

neuralreckoning,
@neuralreckoning@neuromatch.social avatar

@jonny yes absolutely! It's amazing how much of science as it is practiced is the exact opposite of how it should be, and would be if we didn't try to pretend it was a competitive capitalist marketplace.

jonny,
@jonny@neuromatch.social avatar

@neuralreckoning and also how obviously and immediately good it feels to do something that makes sense

modrak_m,
@modrak_m@fediscience.org avatar

@jonny To me, it seems registered reports solve a tiny part of this and are thus a modest improvement, when they are applicable. Would you agree, or do you see some extra problems caused by them (putting aside that registered reports are very much within-system, tame incremental change, not a general solution)?

jonny,
@jonny@neuromatch.social avatar

@modrak_m
I deleted a para on registered reports and preregistration here. To me they are more revealing about the need for open and early review than anything. They try to solve problems that are all better solved by making the work public and open to input sooner. They fail because they presuppose traditional journal publication as inevitable and try and work around it.

Imo they are half measures that were interesting but are now just cloying reminders of how badly we need to break the publishing oligopoly.

jonny,
@jonny@neuromatch.social avatar

@modrak_m
Like what are their basic criticisms:
Registered reports are still gatekeeping, you just get the feedback much faster. They try and solve problems of selection bias and the file drawer not by lowering the barriers to publication but by changing the selection criteria to "what seems like it would be splashy enough for a paper." Their goals are trivially solved by just being able to Post Works In Progress and Self Publish Anything.

Prereg tries to fix problems with the garden of forking paths, researcher degrees of freedom, p-hacking, whatever you want to call it. The criticism of prereg is always like "well how do we know the original strategy is any good just bc it was the first way they thought of," and also "what do you do when the analysis changes from prereg to paper" - any change can be justified, or else you exclude the natural process of learning from a work in progress, and even of review. In any case except "prereg matches paper and got a nice result!" you are effectively in the same place as you were before. All those problems are also better addressed by continual review of a public project.

Thats to say nothing of the framing that changes in the publication system should be juridical, new means of policing each other, rather than ways of changing the value calculus of communication and review from adversarial to cooperative, but I am Still Longposting like an hour later at this point instead of finishing the thing I was supposed to do today.

jonny,
@jonny@neuromatch.social avatar

Eg. right now im reviewing some of @k4tj4 and @r3rt0 's work, which I love as a rule (seriously, if you havent seen it, check out brainbox, reorient, etc. Literally all their stuff rocks. Even and especially the non-neuro old webtech heads should check it out). I am learning a lot from how they structure these reasonably complex tools that load neuroimaging formats and render, manipulate, and export data using nothing but pure vanilla JS - no frameworks, no packages, npm used only for testing and linting.

I also opened a non-critical discussion like "ok this rocks, but what about also finding some middle ground where we can get some benefit from build tooling for maintainability and to make them offline-friendly" https://github.com/neuroanatomy/thresholdmann/issues/23

Thats not intended to be a prerequisite for review, but a chance to look back on a lot of really excellent work together, colleague to colleague, and check in on whether there are any changes to make in the broader architecture of the work that typically is never part of a review. If they are curious what I mean (we havent spoken yet), I would then write a PR to demonstrate what im talking about, sharing a technique and contributing to the work as is already super normal in open source.
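
To make the offline-friendly half of that concrete, here is a minimal sketch of one way to do it without abandoning the no-build philosophy (file names are hypothetical placeholders, not thresholdmann's actual layout): a small service worker that pre-caches the app shell and serves it cache-first, so the tool keeps loading with no connection.

```js
// sw.js - minimal offline support for a no-build, vanilla JS tool.
// File names below are hypothetical, not the project's real layout.
const CACHE = "tool-v1";
const SHELL = ["./", "./index.html", "./style.css", "./main.js"];

self.addEventListener("install", (event) => {
  // Pre-cache the app shell the first time the page is visited.
  event.waitUntil(caches.open(CACHE).then((cache) => cache.addAll(SHELL)));
});

self.addEventListener("fetch", (event) => {
  // Cache-first: serve the cached copy if we have one,
  // otherwise fall back to the network.
  event.respondWith(
    caches.match(event.request).then((hit) => hit || fetch(event.request))
  );
});
```

The page opts in with a one-liner, navigator.serviceWorker.register("./sw.js"), and everything else stays plain JS on static hosting; a network-first strategy would instead preserve the "always up to date on connect" property.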

jonny,
@jonny@neuromatch.social avatar

Also - how amazing is it to like... be able to TALK ABOUT WORK thats under review instead of having it be some secret because of the absolutely LUDICROUS idea of getting "scooped" which has absolutely NO PLACE in science. Again in the best case, we could imagine an "it takes a village" kind of review where we can talk about it in public, other people can wander in and out, there are some people assigned to ensure some minimal checklist of requirements but we can benefit from anyones input.

How much richer would review be from there being robust, critical and cooperative, public discussion THROUGHOUT the lifecycle of the experiment? By the time something reaches its 1.0 status of publication, it could have been seen in detail by a dozen people in the field and have their contributions creditable alongside the work. It almost makes TOO MUCH SENSE

jonny,
@jonny@neuromatch.social avatar

Think about the difference between opening an issue to report a bug vs. submitting some review on pubpeer about a critical flaw in a published paper.

One is welcome, at worst a "friend telling you about food stuck in your teeth" correction that you cant see but other people can. (Sure, some people raise issues like assholes, but solving people being assholes isnt really in scope). The other is potentially career ending, always confrontational even if we want it to be constructive, and the number of times it has ended well is a vanishingly small fraction of all criticisms like that.

Whats different? Papers are like software in that they always have some bugs somewhere, are embedded in a moving field that can shift underneath them, and make some claims on reality that they either deliver on or dont. They are different in that papers arent expected to be amended - often thats literally impossible. Review is "done" when a paper is published. Review happens at the "end" of a paper rather than throughout its life. You cant deprecate a paper - "this was good at the time, but we have updated our understanding." Everything that makes someone suggesting a fix an act of kindness and collaboration is forbidden by traditional review and publishing. (A lot more subtlety here, but im longposting on a microblogging medium, not writing an encyclopedia rn)

A recent example im thinking of is this issue I raised with @linkml that was the underlying cause of a decent number of hard to diagnose bugs:
https://github.com/linkml/linkml/issues/1839

Did that invalidate some claims made in the docs and published lit re: importing and extensibility, much like errata in a paper? Yes! Was it a huge deal, fatal to the reputation of the project? Absolutely not!!! In fact a noisy bug tracker is a sign of HEALTH. There isnt a reply there because I discussed it in person with the devs later, and theres a relatively obvious fix to be made, but I meant it in cooperation, as a compliment - I care enough about your work to investigate why it isnt working - and it was received as such. Im working on a patch now. Colleagues working together to make something work!

If youve ever criticized a paper before, you know that simply is not the way it goes. I have a lot of people who I love dearly who do a lot of work I think is fatally flawed, but I wouldn't dare criticize them because it could put their career in danger. They arent bad or stupid people, but their lab does things in a particular way, etc. And so everyone loses out: the work is worse, the work is stuck in time and becomes irrelevant within discipline and misleading outside of it, the labs never improve, the working conditions never improve, we create protective silos around ourselves to survive.

jonny, (edited)
@jonny@neuromatch.social avatar

People always jump all over my ass when I talk about open review to say "think of the ECRs who cant raise criticism" or "think of how the powerful labs would abuse that power" and I am always like dog I know, what im talking about is an ANTIDOTE to that problem which is made much WORSE by closed review.

Why does criticism have to be something you'll get retaliation for instead of welcomed as a contribution? Why is it possible for senior researchers to end peoples careers in private 3-party reviews instead of having to show their whole entire ass in public? Open review isnt just keeping everything the same but copy pasting the reviews on the website at the end - its an invitation to change the cultural coordinates of review itself from something broken and shitty to something with a plausible chance of being healthy.

jonny,
@jonny@neuromatch.social avatar

For the sake of symmetry, see @chrisXrodgers continually reminding me how the code I wrote as a little nublin is very brittle and breaks a lot. It is an act of care!!!
https://github.com/auto-pi-lot/autopilot/discussions/208

jonny,
@jonny@neuromatch.social avatar

Check this out: another benefit of open review is that it is literally free to do. I summon the harshest form of "younger generation coming for the boomers" scorn I can when I say "traditional peer review is cringe." Senior academics will literally make a whole secretive hierarchical cabal to pass final judgement on other people instead of going to therapy*.

When people sincerely congratulate each other like "congrats on the nature paper" it gives me a full body cringe - like YUCK, yall think its actually cool that someone was forced on pain of losing their job to do shit in this laughably backwards way, and also PAID for it?

I think theres something to be said for anyone with social or professional clout just loudly and proudly doing open review and being like "thats nice, cool h-index boomer, but over here we are doing stuff that makes sense and feels good to do"

  * considering any other form of communication or mode of horizontal organization. And also probably going to therapy

jonny,
@jonny@neuromatch.social avatar

To be clear I GET IT and that is why I ALWAYS congratulate my peers on their work even IF they were forced by their enculturation or material circumstances to publish a most hated CNS paper. The point here is not to criticize anyone in particular or shame people who need to eat, but to say: anyone who can do anything else, here is why you might.

I guess also to kill dead the idea that "prestige publishing is for the children" - you look out for the trainees by making a better world for them, leading from the front, using your accumulated social capital to influence the direction of the system for the better. Managing a Nature paper factory staffed by grad students "for their benefit" is on its face cowardly behavior: both in failing to reckon with your role in guaranteeing them a future of even worse precarity, and because it betrays a much deeper appeal to authority, indicative of needing a violently extractive publication system to protect the validity of the work rather than letting it speak for itself.

B cool and burn it down, thats whats for the children.

chrisXrodgers,
@chrisXrodgers@neuromatch.social avatar

@jonny Ha, full disclosure, that issue was posted by Rowan in my lab. You are getting acts of care from multiple sources! 😂​

I remember someone posting a long time ago a vision of peer review in which, during the first round, the reviewers would just ask questions. No assertions or statements, just ask questions, to start the process of reconciliation

chrisXrodgers,
@chrisXrodgers@neuromatch.social avatar

@jonny Lately I have been starting to think that, given the existence of preprints, publishing papers might essentially be a selfish act. The info is already available in the preprint for anyone to read. The cost of publishing could probably be spent better on someone's salary. The only benefit from the pub accrues to the authors' CVs.

Peer review can provide some value, but this is unpaid labor from the reviewers, not the authors' contribution (and certainly not the journal's).

Another "externality" associated with publishing is just continually inflating the scientific literature, which is already way larger than anyone can read. We all spend so much time cranking out pubs that we have no time to actually read anyone else's work - who is even reading most of these things?

Like other self-advancing acts (eg carbon emissions), publishing could sometimes still be justifiable; it should just be done in moderation and in acknowledgement of its negative effects. Concretely... I would like to see us all collectively take a deep breath and publish fewer, more thoughtful papers.

Ok, enough said, now I really have to get back to pumping out some papers to make sure I get my grants and can pay my staff!

jonny,
@jonny@neuromatch.social avatar

@chrisXrodgers
I would like to see us resolve these negative externalities so that yes, we can write when we need to write and no more :)

jonny,
@jonny@neuromatch.social avatar

@chrisXrodgers
Thats such a good model for any kind of discussion, big fan :)

johannes_lehmann,
@johannes_lehmann@fediscience.org avatar

@jonny
Open signed peer reviews seem a good solution for the “reviewer tries to sabotage author” scenario but a problem for the “author takes revenge on reviewer” scenario. I see few downsides in publishing reviews and even editorial decisions. But I can see clear drawbacks (next to benefits) for reviews to be signed (non-anonymous). Sure, constructive criticism should be welcomed - but that is not how humans (including me) always react to it - and that reaction won’t be public.

jonny,
@jonny@neuromatch.social avatar

@johannes_lehmann totally hear you, but if the role of a reviewer isn't to approve or deny the validity of science, you change both the risk to the reviewer and the motivation for retaliation from the author. once you take the gatekeeping off the table, then even very harsh criticism just isn't the same kind of ego threat

so consider the 'worst case' scenario for this kind of problem (plz amend this scenario bc i'm surely missing some part of 'the riskiest scenario'): say I, as a vulnerable scientist, raise a clearly (externally) valid criticism in an open review context that cuts to the core of some work from a high-power scientist and renders the whole thing meaningless. they have the opportunity to respond, and they yield no ground, blustering and flexing position, refusing to even entertain my criticism and questioning my credentials. We go back and forth for several rounds and get nowhere. My review is basically "here is the conversation, I think the work is fundamentally flawed, the authors disagree, you are welcome to make your own conclusions." The work is still "published" (in the sense that it is always already published in open review), and the authors have a space equal to mine to make their point and defend their work in case I'm just really wrong with my criticism.

Say later they try and threaten me, or assassinate my character over my review even though it didn't prevent the work from being "published." That is also a valid part of a review imo, and one strategy might be to issue an amendment to my review saying "after this review process, the authors did xyz inappropriate actions." There's no solving backroom politics like senior academics sabotaging my professional network, but open review only raises the risk and the potential damage of doing that kind of retaliation. In the current system, abusive senior ppl are able to thrive precisely because they have 'country club' review processes that don't allow for serious public criticism and don't surface abusive behaviors. IMO there is no separating the science from the conditions it's produced in, especially when those conditions include intimidating and harassing reviewers, so that's very in bounds for the contents of a review.

jonny,
@jonny@neuromatch.social avatar

@johannes_lehmann my review being public and signed is also a chance for me as a vulnerable researcher to get my name in the conversation - if i'm raising valid criticisms, others in the field will notice, and also see the bad behavior of the senior person. with anonymous review, i have no incentive and no reason to give valid criticism in the first place, and especially so if there's blowback. open review also changes the reward system to value risky (valid) criticism and disincentivize abuse.

If i'm raising invalid, obnoxious, incorrect criticisms, then that's also worth knowing too, and the authors have a space to respond, but the downsides of that are roughly equivalent to writing a 'bad early paper' - i can amend my review later to say "after i thought more about this, i amend my criticism, my bad."

jonny,
@jonny@neuromatch.social avatar

@johannes_lehmann not saying open review solves all problems, but i do think that it would be net positive on the "senior researchers abuse vulnerable researchers via the review process" problem

r3rt0,
@r3rt0@mstdn.social avatar

@jonny @k4tj4 Hey!! Thank you :D
It's really nice to have you as a reviewer. Our shared aim is to make better tools for people, and your feedback is very important.
For the history: we have a bunch of tools in the lab that were made when Apple moved to Objective-C. They were super difficult to share (and to maintain). At some point, we moved to JavaScript and the web, so that people would only have to go to a URL to start using our tools
1/

r3rt0,
@r3rt0@mstdn.social avatar

@jonny @k4tj4 Plus, if everything works as it should, they get the most up-to-date version every time they connect.
It would be wonderful to have a nicely packed version that would work offline! Looking forward to that PR :)

jonny,
@jonny@neuromatch.social avatar

@r3rt0
@k4tj4
I love it, I think yall are dead right on that, and I again love the way yall work local first, no hype javascript and it just WORKS, is responsive, and I also love the stylesheets ya come up with. I think the "many small tools" approach is perfect too. This is arguably the browser at its best as a computing environment.

Im gonna see if I can help out by giving just a couple of ideas for how to make reusability between tools a touch more maintainable, but its absolutely not from a place of criticism and more from a place of "how can I help"
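
As a sketch of the kind of idea I mean (module and function names are hypothetical, just to illustrate the pattern, not their actual code): the shared loading/IO logic can live in one plain ES module that each tool imports straight in the browser, so reuse across tools still needs no bundler and no npm packages.

```js
// shared/volume-io.js - hypothetical shared module each tool imports.
// Plain ES modules give cross-tool reuse with zero build tooling.
export async function loadVolume(url) {
  // Fetch a binary volume and return the raw bytes;
  // real header parsing would live here too.
  const response = await fetch(url);
  if (!response.ok) throw new Error(`failed to load ${url}`);
  return new Uint8Array(await response.arrayBuffer());
}
```

Each tool then does import { loadVolume } from "../shared/volume-io.js"; in its own main.js, and a fix to the shared module lands in every tool at once.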

richard,
@richard@noctalgia.space avatar

@jonny Javascript is "hype." Thank you. I took a class on javascript back in the day and I hated it. I was not impressed. That being said, I hated PHP, too, but at least I could respect that.

jonny,
@jonny@neuromatch.social avatar

@richard
Ok, well what i was referring to is the profusion of brittle and heavy frameworks, but you are ofc allowed your opinion of the language and standards. I think as a matter of practical use, js as a programming environment for documents in browsers is pretty much a perfect fit for the goals of the above project, and is the lowest barrier way to put code like that in peoples hands.
