
adredish

@adredish@neuromatch.social

Scientist studying learning, memory, and decision-making. Poet and Playwright.


NicoleCRust, to random

Summer reading recs?!

I’m looking to compile a big pile of IRL books for summer beach reading.

Big fan of scifi, specfic, murder mysteries. Strong female characters = bonus points. Can’t deal with darkness. Nothing high brow or tedious. Nonfiction lover but that list is already bursting at the seams.

I also love long series - 3 body problem, wheel of time, foundation … those waves all have good memories attached.

If that brings anything to mind, please send the rec!!

adredish,

@NicoleCRust Have you tried Ann Leckie's Ancillary Justice trilogy? Or Arkady Martine's A Memory Called Empire?

BorisBarbour, to random

This investigation of Ranga Dias' superconductivity publications is remarkable for multiple reasons.

https://www.nature.com/articles/d41586-024-00716-2

Nobody comes out of it well, but Nature are much more transparent about the editorial process than I can ever remember. (It's a little unclear if that was spontaneous, but, if not, the frequently claimed independence of Nature News came good.)

Thread. /1

adredish,

@brembs @albertcardona @MarkHanson @BorisBarbour
@neuralreckoning

I think a lot of this recognition that we will need to replace academic journals soon comes from the recognition that bioRxiv, psyRxiv, and medRxiv have not been the disasters many thought they would be*. I think a lot of people thought that peer review was critical to the success of the enterprise, and that we therefore had to put up with the journals because we needed the peer-review gatekeeping. However, it has become clear that (within field) labs can mostly do their own peer review.

It is not clear what we can do about science outside one's field. As a scientist, how can I know whether to believe something outside my immediate field? And how should we control what journalists, politicians, and clinicians trust, given that they do not have the training to do their own "in-lab" peer review?

Nevertheless, and importantly, now that we have preprint servers and can compare pre- and post-peer-review versions, it is pretty clear that peer review isn't doing much, which gives us the ability to say that the costs (excessive publisher profits, reviewer time, etc.) are not worth the gains.

  * Yes, I know, arXiv has been around for many, many years. But people somehow thought biology, psychology, and the other non-physics fields were different. ¯\_(ツ)_/¯
neuralreckoning, to random

What's missing to replace our current publishing system? What technical and social components do we need to build? My first suggestions below, but I'd like to hear feedback from others.

  • An easy (frictionless) and flexible way to edit and submit documents that can be permanently referenced and that you feel confident will stay accessible forever
  • An easy and semantically rich way to link between these documents (e.g. document A is a review of document B; a minimal data-model sketch follows below)
  • A way to view these documents that surfaces and highlights relevant related content (e.g. listing and summarising reviews, comments, related papers)
  • A way to automatically convert documents into any standard format (HTML, Word, LaTeX, PDF, ...) so that the system can co-exist with existing workflows (the legacy journal system for example)
  • A database storing all this data that isn't owned by a single institution, either commercial or public, but that is distributed or duplicated across all the universities and libraries of the world. A way for these research institutions to democratically decide which organisations can submit data into the database.

Edited to add: I'm not interested in a conversation about whether or not we need the existing publishing industry. That argument is settled for me; the question I'm interested in here is how to change things, assuming we want to.
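To make the second bullet concrete, here is a minimal data-model sketch of a typed, immutable link between permanently identified documents. It is purely illustrative; every name in it is a hypothetical placeholder, not any existing system's schema.

```python
from dataclasses import dataclass
from enum import Enum

# Hypothetical controlled vocabulary of link types; a real system
# would need the community to agree on one.
class LinkType(Enum):
    REVIEWS = "reviews"
    COMMENTS_ON = "comments_on"
    REPLICATES = "replicates"
    CITES = "cites"

@dataclass(frozen=True)
class DocumentLink:
    source_id: str     # permanent identifier of document A (e.g., DOI or content hash)
    target_id: str     # permanent identifier of document B
    link_type: LinkType
    submitted_by: str  # accredited organisation that submitted this record

# "Document A is a review of document B" becomes one immutable record
# that any viewer can surface when displaying document B:
link = DocumentLink("doc:A", "doc:B", LinkType.REVIEWS, "org:some-university")
print(link)
```

Because each record names its submitting organisation, the last bullet's governance question reduces to maintaining the list of organisations whose records the distributed database will accept.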

adredish,

@tdverstynen @neuralreckoning @knutson_brain

Well, I think that a good theory / synthesis / review paper is also a snapshot in time. It is important not to think of theory as working as a unified whole. Instead, it is an ocean of ideas, and I think the dialog is better kept separate. I want to read Newton's Principia as the Principia. I don't want there to be an edit of it based on quantum mechanics.

I really don't think any of this "living paper" stuff is necessary. The literature is what is living. What we need to do is keep the literature as a living thing.

Having a shared "current theory" wikipedia is a viable thing that could be useful for teaching. (Scholarpedia tried this.) But that's not the scientific literature. It is a synthesis from it. The scientific literature needs to be concrete and visible in its history.

In part we need this history because we need to be able to go back to old ideas and re-discover them. I don't want them to get lost in the "tracked revisions of the living document".

adredish,

@tdverstynen @knutson_brain @neuralreckoning

I really don't like the idea of a "living paper". I want a paper to be a stable isolated event in time that we can use as a solid brick in our construction of this scientific cathedral.

We need to stop seeing a paper as the whole story. A paper is one small piece of the puzzle. We need to see the literature as the whole. (This is why we also need "theory papers" and "synthesis papers"*, because they integrate those bricks into new constructions we can use.)

The reason the journal paper was invented in the first place (in the 1600s) was so that we could communicate part of the picture. Before that, people spent decades writing monographs, worrying about getting scooped, and they couldn't build on each other's work.

A paper is a moment, a small discovery that other people should know about.

It's the literature that is the living document we are building together.

  * I'm not fond of the term "review paper"; "synthesis paper" would be a better term. A good review paper provides a new synthesis perspective and isn't just a list of the relevant papers to cite.
adredish,

@neuralreckoning And whither peer review and gatekeeping?

It probably doesn't matter for you, or me, or @elduvelle or @knutson_brain or many of the folks on this mastodon community, but I wouldn't want ol' Elon to be able to publish willy-nilly.

[Edit: to be clear - I meant it probably doesn't matter for these serious scientists judging each others work. This is why bioRxiv seems to be OK for within-science papers.]

Even SFN has publication checks to make sure that big pharma isn't using its conference to claim secret drug effects without review.

How is your current suggestion different from the ..rxiv groups (arXiv, bioRxiv, medRxiv, psyRxiv, etc)?

adredish, to random

Are there better preprint servers than #bioRxiv? Every time I try to post something to #bioRxiv, I have to fight with them to accept that a paper doing new analyses on real data is a "real paper", just because we also do a good job of placing our results in the literature.

Moreover, #bioRxiv has an explicit policy that new theoretical insights are "not suitable for posting" (meaning they don't think theory is a real contribution to the literature), which is bad for science. What are the other, better options?

It's almost (almost) as bad as fighting with editorial desk-rejections at a real journal.

#neuroscience #preprints #sciencepublishing #theory #experiment

adredish, to random

Question for neurophysiologists who use the Intan recording system: how do you sync up other signals (like video recordings) with Intan's neurophys data? We have a hacked-together system that works, but I'd really like to clean it up. What do others do?

@elduvelle @hugospiers @katejjeffery
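For readers outside the thread, a minimal sketch of one common approach (not necessarily what any lab here does): record a shared sync signal, such as a TTL pulse train, on an Intan digital input while an LED driven by the same pulses is visible to the camera, then fit a linear map between the two clocks. Everything below is illustrative, including the synthetic event times.

```python
import numpy as np

# Synthetic stand-ins for the real measurements: the times (s) at which
# sync pulses were detected on an Intan digital-in channel, and the times
# (s, on the camera's clock) at which the sync LED lit up in the video.
rng = np.random.default_rng(0)
intan_pulse_times = np.sort(rng.uniform(0, 600, size=50))
video_pulse_times = 3.2 + 1.0001 * intan_pulse_times  # offset + clock drift

# A linear fit absorbs both the start-time offset and slow clock drift.
slope, intercept = np.polyfit(video_pulse_times, intan_pulse_times, deg=1)

def video_to_intan(t_video):
    """Map a video timestamp (s) onto the Intan clock (s)."""
    return slope * t_video + intercept

# Any video frame time can now be expressed in Intan time:
print(video_to_intan(video_pulse_times[0]) - intan_pulse_times[0])  # ~0.0
```

The fiddly part in practice is matching pulses across the two streams; using irregular inter-pulse intervals makes that correspondence unambiguous.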

jonny, to random

Too many ppl taking the wrong lessons from the three-balled rat "AI"-generated paper. The problem isn't "predatory publishers" or that one person didn't peer review hard enough. The problem is that we actively construct a system where companies make billions of dollars selling prestige vouchers academics have to buy to survive. If the for-profit journals didn't make the system a game, there would be nothing to play.

Frontiers is a symptom. Science, Cell, Nature, and the prestige treadmill they exploit us with is the disease.

adredish,

@jonny @tdverstynen @knutson_brain

@jonny: Agree 100%. It's wild, because the original Frontiers was a solution to the prestige-treadmill system. (@neuralreckoning: I think the two original Frontiers mechanisms identified below are both wonderful ideas we should try to get back; ideas for your new list.)

When Frontiers started, it had a host of important changes that were really good. Then it got bought up by the prestige publishers, and those changes are basically all gone now.

  1. A new peer-review process that provided gatekeeping without prevention. Peer reviewers "endorsed" a publication. You could reject a publication, which said "I refuse to put my name on this": you provided (private) feedback to the authors, and your name never appeared. (Authors never saw that reviewer's name either.) But another person could still endorse the paper.

What this meant was that you could tell the quality of a paper by who was willing to endorse it. You could use that endorsement to judge whether you wanted to trust the paper.

In practice, papers got rejected when none of the reviewers in the field were willing to endorse them. So a paper could get effectively rejected when the field didn't approve of it (no one would endorse it), but no one person could gatekeep a paper.

  2. Papers moved UP, not DOWN.

The first publication was in a field-specific journal (generally low impact) and was judged on scientific merit, not impactfulness. Then, based on actual metrics (like downloads, votes, etc.), the authors would be invited to write an article for a second-tier journal (less field-specific, more general). There were supposed to be third-tier journals, but I don't think they ever appeared. When Nature bought Frontiers, I tried to convince the editors at Nature to create one slot for a paper in Nature so that it could serve as a fourth-tier journal, but they weren't having any of it.

In those early days, we had a lot of success with those two-tier Frontiers systems.

adredish, to random

It's really unfortunate that reviews are seen as competitive and confrontational. I want my reviews to be suggestions to help the authors.

Because of that, I don't want the reviews I write made public. Reviews are for the editor to assess the author's work and for the author to make the work better. They are not commentary for the public to understand the work.

Fixing the paper is the author's job. Not mine. I want to be able to say "There was a paper a long time ago by someone like XYZ who you should go read." Or "There's a whole literature here by people like ABC to go engage with." Finding that literature is the author's job. Not mine. I want to be able to say "what if X is not linear?" or "what if Y happens?". I don't want to have to work out the math myself. That's the author's job. Not mine. None of this should be public.

Moreover, I need a process to say "I have the following concerns with my own review. I have this knowledge and not that. So you can trust me here, but not there." (Which deanonymizes the review, BTW.) I need a process where I can say to the editor, I'm concerned about A, but it's not my knowledge-base, or I think they should fix B, but it's not that important if they don't. Deciding what of that to communicate to the authors is the editor's job. Not mine.

Given the current ecosystem of publishing reviews and of preventing me from providing context to the editors separate from information sent to the authors, I am very seriously considering turning down all review requests from now on. (But it seems unfair to add publications to the ecosystem while not doing my part to review them.)

😠​ Frustrated.

brembs, to random

This otherwise sensible proposition is still mired in 17th century thinking. In any sensible, modern publishing system, "replicated" would be a stage any publication would go through, among, e.g., "peer reviewed", "open data/code verified", "cited", etc.

"Peer-replication model aims to address science’s ‘reproducibility crisis’"

https://www.nature.com/articles/d41586-024-00796-0

adredish,

@brembs

There is no such thing as replication. There is only continued exploration of the reality we live in.

Every experiment is unique.

"Replication" is just a poor word for "Testing if this still happens under slightly different conditions".

Crazy take: There is no "reproducibility crisis". Some fields just have over-interpreted some of their results.

IMO, people spend way too much time thinking of papers as "answers" rather than as one very small piece of a large scientific literature that we are putting together.

adredish, to random

This wordcloud stuff is fun.
https://shiny.rcg.sfu.ca/u/rdmorin/scholar_googler3/

Yeah, OK, I buy this one for me: 😀​

elduvelle, to random

Here are some very interesting suggestions for having a good IT system in your lab (GitHub, Wiki, website, emails, etc.). I’m sure the Mastodon crowd will love these:

https://fraserlab.com/2024/04/22/IT-suggestions-for-new-faculty/

Source: future PI slack, from the

adredish,

@elduvelle

TBH, I don't know. I think a yearly review would be a good technique to try.

What I have done is add things over the years. It's not so hard when it's an entirely new thing. For example, we added a Wiki about a decade ago. That worked really well. I recently added checks to the lab database so that new stuff, at least, is in a consistent format... for a while. In my experience, the problem is less one of adding than of deciding when to retire something or whether it is worth fixing legacy structures.

adredish,

@elduvelle

These are good ideas.

Just a warning, though. When you build your lab, you will have a great IT system. It will be elegantly designed and light-years ahead of your PI's structure. And you will wonder how they ever got along without it... until, 25 years later, you realize your IT system is now a hodgepodge of duct tape and out-of-date systems that is not nearly as good as what the new faculty are designing, and you realize that updating it would require taking the entire system offline for more than a year, and that none of your postdocs would accept the new structure...

Of course, that doesn't mean you shouldn't do your best to set things up as carefully as you can when you start. Just a view from down the road. :)

adredish, to random

An interesting consequence of the hypothesis that human social structures are built on "assurance" or coordination games rather than prisoner's dilemmas:

Coordination games have two stable states. If you are living in a world where everyone else is cooperating, it is in your best interests to cooperate as well. If you are living in a world of cheaters, cooperation is for suckers.

This means your perception of your community has a big impact on your own behavior.

While I agree that we do need things like @deevybee's defense against the dark arts and @PubPeer and the enforced "share the data as is" regulations that @BorisBarbour has been talking about, I think we also need to make sure that we CELEBRATE openness and integrity, and that we report them to the world. We do not want all of our news reports to be about fraud.

So, can I recommend a policy? For every fraud that gets reported, find a positive success to talk about. I guarantee they are out there. In fact, I bet they are so common, we don't notice them.

All the people who share their code and fix the bugs that others find. The labs that say "come on by and we'll show you how we do stuff". The people who work with others to make their data useful and not just "out there". There are lots and lots of these positive examples. I worry they get lost because they are so common. We need a hashtag for celebratory cooperation in the sciences. I'm open to suggestions.

PS. For those who don't know it, the coordination game is structured so that, for player A (writing C(D) for A's payoff when A cooperates and B defects): C(C) > D(C) > D(D) > C(D), as compared to the prisoner's dilemma, which is: D(C) > C(C) > D(D) > C(D).

In the coordination game, it is best to do what the other player is doing. In the prisoner's dilemma it is best to defect. There are n-player extensions of this as well.
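A tiny worked sketch to make those orderings concrete (the numbers are arbitrary, chosen only to satisfy the inequalities above):

```python
# Payoffs to player A, indexed as (A's choice, B's choice).
# Coordination (assurance) game: C(C) > D(C) > D(D) > C(D)
coordination = {("C", "C"): 4, ("D", "C"): 3, ("D", "D"): 2, ("C", "D"): 1}
# Prisoner's dilemma: D(C) > C(C) > D(D) > C(D)
dilemma = {("D", "C"): 4, ("C", "C"): 3, ("D", "D"): 2, ("C", "D"): 1}

def best_response(payoffs, b_choice):
    """A's payoff-maximizing choice given B's choice."""
    return max(("C", "D"), key=lambda a_choice: payoffs[(a_choice, b_choice)])

for b in ("C", "D"):
    print(f"B plays {b}: coordination -> {best_response(coordination, b)}, "
          f"dilemma -> {best_response(dilemma, b)}")
# Coordination: A's best response mirrors B (two stable states, C/C and D/D).
# Prisoner's dilemma: A's best response is D no matter what B does.
```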

adredish, to random

Interesting: eLife is following through on their "publish with reviews that say the paper has serious problems" policy. To be fair, this is exactly what eLife said they would do. So, props for that.

https://elifesciences.org/reviewed-preprints/89106#tab-content

I was interested to see how this would shake out, particularly in a field that I am not (at all) an expert in, so I went reading....

From a "I have infinite time and I like to read stuff and I'm well-trained in critical thinking", this is kinda fun. From a "I have to judge this person along with the 50 others applying for this job" or "I have to judge the biosketch along with the other dozen grants I have to review." position, I'm far less sanguine about this. It definitely means that we cannot use eLife as a simple published gate keeper anymore. I'm not sure how I will take eLife papers that I see on CVs.

On the other hand, after the last study section I was on, where some reviewers took everything listed as a "paper" (including middle author conference presentations) and some reviewers didn't count anything that wasn't a first-author product in a recognizable journal, it is clear that our metrics are really broken anyway.

(BTW, I got to this paper from science twitter, which seems to be still working well for me, which is also an interesting data point.)

adredish, to random

@brembs @knutson_brain
@neuralreckoning

A question that came up in a recent (in-person) discussion: Has anyone compared the reproducibility or the validity of bioRxiv preprints with published journal articles? Are preprints less reliable than peer reviewed journal articles? It would seem we have enough examples now to check this.

adredish, to random

I'm now also at

@adredish.bsky.social

on the 🟦☁️ app.

adredish, to random

New updated preprint available: https://arxiv.org/abs/2306.03162. Led by the amazing Ugurcan Mugan. It examines how different navigation theories interact with spatial complexity, tested on a hippocampal and dorsal striatal dataset.

adredish, to random

needs to decide if it is a journal or a conference. 😡 They ask for 2-page "abstracts", but then review the 2 pages as if they were a full paper. Two of the three reviews we got back included statements like "I don't understand why they didn't do X", where X was (a) Figure 5 of a 10-page paper (the "abstract" included figures 1-3) or (b) a 3-year, $500k project beyond the actual submission. The one good review was for a project that could be completely described in 2 pages. That's not an abstract.

adredish, to random

Announcing a new preprint (bioRxiv)!

G. W. Diehl, A. D. Redish (2024) Measuring excitation-inhibition
balance through spectral components of local field potentials.
bioRxiv unreviewed preprint.

https://www.biorxiv.org/content/10.1101/2024.01.24.577086v1

We directly test the relationship between local field potentials and synaptic efficacy in vivo from awake behaving rats.

adredish, to random

The issue for me is that there are three goals of review. In my view, they are being confounded in this new system.

  1. A communication to the editor to help them make the decision whether to put the paper in their journal or not. (Remember, this is the editor's choice, not the reviewer's. I have lots of examples of mismatches if anyone thinks otherwise - in both directions.)*

  * Certainly, one can argue that we shouldn't be using editor-limited journals for scientific discourse. That's a different discussion.

  2. Helpful guidance to an author to make the paper better.**

  ** Yes, I admit, I am an artist and thus a perfectionist when I produce public products - I do not put out partial products. This means that I am not about to say "I have a vague memory of X" without going to the extensive effort of checking that my memory of X is right and finding the proof (which could be a lot of work). But I am very happy to say to the author "Check X" and to the editor "If X is wrong, don't worry about it." My reviews are not public products.

  3. Commentary for the public so that people can see the concerns that need to be considered when appreciating the paper.

I feel very strongly that 1 and 2 should be private, but 3 should be public. 1 and 2 are the purpose of review. 3 is the purpose of commentary. They are not the same.

PS. The reason for my initial post was that it turns out a journal I agreed to review for does not have any mechanism to communicate with the editor separately. They also make the full reviews public (if the paper's author wants, which seems unfair - it should at least be at the decision of both the author of the paper and the author of the review).

adredish, to random

@tdverstynen @richardsever

A fascinating article on the history of scientific publication. Thanks for sharing!

https://journals.plos.org/plosbiology/article?id=10.1371/journal.pbio.3002234

adredish, to science

For the one-year anniversary of the audiobook version of CHANGING HOW WE CHOOSE (The New Science of Morality), Audiobooks.com is having a sale until April 30.

https://www.audiobooks.com/promotions/promotedBook/646609/changing-how-we-choose-the-new-science-of-morality?refId=128804

adredish, to random

I don't normally post political things on social media, but there is so much discussion of negative interactions and I want to show an example of a positive solution.

The University of Minnesota has reached a negotiated agreement with campus protesters through mutually respectful dialog with the protest leaders. The deal apparently includes agreements about opportunities to communicate divestment plans to the regents, discussions of ways to provide opportunities for advocacy to other students without limiting other students' individual opportunities, discussions to look into collaboration with Palestinian universities and an examination of current cross-university agreements, fully disclosing university expenditures, and amnesty and leniency. The protesters have agreed to remove the encampment and not to disrupt commencement ceremonies or finals. University regents and the university police have both agreed to abide by the agreement as well.

Now let's see how much the media report this as compared to the negative interactions at other schools.

https://z.umn.edu/05012024message

adredish, to random

New paper published: a study of decision-making and psychosis. Drift-diffusion models, attractor dynamics, and patients with psychosis doing the dot-pattern expectancy task. Fascinating work by the team.

https://academic.oup.com/schizophreniabulletin/advance-article/doi/10.1093/schbul/sbae014/7614300
