@HuMetricsHSS will hold a #workshop to envision the future of the academy. It will be a virtual event on May 9 and 10, and it seeks to include people working on transforming the academy:
@petersuber @ERC_Research Should have been dropped decades ago. Perhaps that would have saved us from algorithmic social media, which is clearly modelled after science's publication metric system...
@petersuber Happy BOAI Birthday to you too, Peter.
I think often of your remark, made in an interview with Richard Poynder, about seeing "a chance to be more useful", and wish I knew how properly to thank you for your work, and your example.
PS: We can all list some of the factors at work here, and we should. My start: Paying more attention to where a work is published, and the fact of publication itself, than to the quality of what is published.
@timelfen
Yes, that's right. Sorry if it wasn't clear.
On we/they: Universities, funders, and researchers could all do better in paying more attention to the quality of research than where it's published.
The goals of research assessment create incentives to game the system. The goals should be high-quality research, not publication in high-prestige or high-impact journals, and not high scores on simplistic metrics, esp. journal-level metrics.
@petersuber If this were a development team, I would treat the problem as the authors not being held responsible for their quality (or lack thereof). I'd make some policy changes so that the authors feel the impact more. But who knows, maybe research is different.
Another reaction is to sharpen reviews. The trouble is that reviewers don't have as much context, so they miss things that are more hidden. Also, tackling the problem in review tends to slow down progress.
"The University of Tokyo became the first Japanese university to sign the San Francisco Declaration on Research Assessment (#DORA, @DORAssessment) on 1 December 2023."
PS: They don't discount open access, open data, and open code (or open protocols, preregistration, and so on). They take a step back and ask what strategies will advance them.
@Lupus
I agree. Open-research people in the humanities don't think the term "open science" covers what they do. "Open research", "open knowledge", and "open scholarship" are more inclusive terms. On the other hand, the counterparts of "open science" in many non-English languages are more inclusive than "open science" is in English.
"Importantly, being an author is not only about credit but also about accountability. Typically, an author will be accountable for the quality and integrity of their own contribution, but also for the work as a whole by ensuring that questions arising post-publication are investigated thoroughly and that materials and data remain available."
I don't understand why some institutions are obsessed with conference acceptance rates. Is there any evidence that the acceptance rate works well as a proxy for #academicquality?
@arjensenius Yep! It just results in masses of wasted time, when work is rejected for minor flaws that could easily have been fixed before the final version. Surely the true value of peer review is that it results in better work through feedback, not judgment. Conferences should aim for as high an acceptance rate as possible!
The live coding conference's acceptance rate dropped below 50% this year. We see that as a big problem, and we plan to address it by introducing alternative means of presentation. Research should be shared!