@adredish@neuromatch.social avatar

adredish

@adredish@neuromatch.social

Scientist studying learning, memory, and decision-making. Poet and Playwright.


adredish, to random

@brembs @knutson_brain
@neuralreckoning

A question that came up in a recent (in-person) discussion: Has anyone compared the reproducibility or the validity of bioRxiv preprints with published journal articles? Are preprints less reliable than peer reviewed journal articles? It would seem we have enough examples now to check this.

adredish, to random

I don't normally post political things on social media, but there is so much discussion of negative interactions and I want to show an example of a positive solution.

The University of Minnesota has reached a negotiated agreement with campus protesters through mutually respectful dialog with the protest leaders. The deal apparently includes opportunities to communicate divestment plans to the regents; discussions of ways to provide opportunities for advocacy without limiting other students' individual opportunities; discussions of collaboration with Palestinian universities and an examination of current cross-university agreements; full disclosure of university expenditures; and amnesty and leniency. The protesters have agreed to remove the encampment and not to disrupt commencement ceremonies or finals. University regents and the university police have both agreed to abide by the agreement as well.

Now let's see how much the media report this as compared to the negative interactions at other schools.

https://z.umn.edu/05012024message

elduvelle, to random

Here are some very interesting suggestions for having a good IT system in your lab (Github, Wiki, website, emails etc.). I’m sure the Mastodon crowd will love these:

https://fraserlab.com/2024/04/22/IT-suggestions-for-new-faculty/

Source: future PI slack.

adredish,

@elduvelle

These are good ideas.

Just a warning, though. When you build your lab, you will have a great IT system. It will be elegantly designed, and will be light-years ahead of your PI's structure. And you will wonder how they ever got along without it... until 25 years later, you realize your IT system is now a hodgepodge of duct tape and out-of-date systems that are not nearly as good as what the new faculty are designing, and you will find that updating it would require taking the entire system offline for more than a year, and that none of your postdocs would accept the new structure...

Of course, that doesn't mean you shouldn't do your best to set things up as carefully as you can when you start. Just a view from down the road. :)

adredish,

@elduvelle

TBH, I don't know. I think yearly review would be a good technique to try.

What I have done is added things over the years. It's not so hard when it's an entirely new thing. For example, we added a Wiki about a decade ago. That worked really well. I recently added checks to the lab database so that new stuff at least is in a consistent format... for a while. In my experience, the problem is less one of adding than of deciding when to retire something, or whether it is worth fixing legacy structures.

NicoleCRust, to random

Summer reading recs?!

I’m looking to compile a big pile of IRL books for summer beach reading.

Big fan of scifi, specfic, murder mysteries. Strong female characters = bonus points. Can’t deal with darkness. Nothing high brow or tedious. Nonfiction lover but that list is already bursting at the seams.

I also love long series - 3 body problem, wheel of time, foundation … those waves all have good memories attached.

If that brings anything to mind, please send the rec!!

adredish,

@NicoleCRust Have you tried Ann Leckie's Ancillary Justice trilogy? Or Arkady Martine's A Memory Called Empire?

adredish, to random

@tdverstynen @richardsever

A fascinating article on the history of scientific publication. Thanks for sharing!

https://journals.plos.org/plosbiology/article?id=10.1371/journal.pbio.3002234

adredish, to random

The issue for me is that there are three goals to review. In my view, they are being confounded in this new system.

  1. A communication to the editor to help them make the decision whether to put the paper in their journal or not. (Remember, this is the editor's choice, not the reviewer's. I have lots of examples of mismatches if anyone thinks otherwise - in both directions.)*

  2. Helpful guidance to an author to make the paper better.**

  3. Commentary for the public so that people can see the concerns that need to be considered when appreciating the paper.

* Certainly, one can argue that we shouldn't be using editor-limited journals for scientific discourse. That's a different discussion.

** Yes, I admit, I am an artist and thus a perfectionist when I produce public products - I do not put out partial products. This means that I am not about to say "I have a vague memory of X" without going to the extensive effort of checking that my memory of X is right and finding the proof (which could be a lot of work). But I am very happy to say to the author "Check X" and to the editor "If X is wrong, don't worry about it." My reviews are not public products.

I feel very strongly that 1 and 2 should be private, but 3 should be public. 1 and 2 are the purpose of review. 3 is the purpose of commentary. They are not the same.

PS. The reason for my initial post was that it turns out a journal I agreed to review for does not have any mechanism to communicate to the editor separately. They also make the full reviews public (if the paper's author wants, which seems unfair - it should be at the decision of both the paper's author and the review's author).

adredish, to random

It's really unfortunate that reviews are seen as competitive and confrontational. I want my reviews to be suggestions to help the authors.

Because of that, I don't want the reviews I write made public. Reviews are for the editor to assess the author's work and for the author to make the work better. They are not commentary for the public to understand the work.

Fixing the paper is the author's job. Not mine. I want to be able to say "There was a paper a long time ago by someone like XYZ who you should go read." Or "There's a whole literature here by people like ABC to go engage with." Finding that literature is the author's job. Not mine. I want to be able to say "what if X is not linear?" or "what if Y happens?". I don't want to have to work out the math myself. That's the author's job. Not mine. None of this should be public.

Moreover, I need a process to say "I have the following concerns with my own review. I have this knowledge and not that. So you can trust me here, but not there." (Which deanonymizes the review, BTW.) I need a process where I can say to the editor, I'm concerned about A, but it's not my knowledge-base, or I think they should fix B, but it's not that important if they don't. Deciding what of that to communicate to the authors is the editor's job. Not mine.

Given the current ecosystem of publishing reviews and of preventing me from providing context to the editors separate from information sent to the authors, I am very seriously considering turning down all review requests from now on. (But it seems unfair to add publications to the ecosystem while not doing my part to review them.)

😠​ Frustrated.

adredish, to science

For the one-year anniversary of the audiobook version of CHANGING HOW WE CHOOSE (The New Science of Morality), Audiobooks.com is having a sale until 30 April.

https://www.audiobooks.com/promotions/promotedBook/646609/changing-how-we-choose-the-new-science-of-morality?refId=128804

brembs, to random

This otherwise sensible proposition is still mired in 17th century thinking. In any sensible, modern publishing system, "replicated" would be a stage any publication would go through, among, e.g., "peer reviewed", "open data/code verified", "cited", etc.

"Peer-replication model aims to address science’s ‘reproducibility crisis’"

https://www.nature.com/articles/d41586-024-00796-0

adredish,

@brembs

There is no such thing as replication. There is only continued exploration of the reality we live in.

Every experiment is unique.

"Replication" is just a poor word for "Testing if this still happens under slightly different conditions".

Crazy take: There is no "reproducibility crisis". Some fields just have over-interpreted some of their results.

IMO, people spend way too much time thinking of papers as "answers" rather than as one very small piece of a large scientific literature that we are putting together.

BorisBarbour, to random

This investigation of Ranga Dias' superconductivity publications is remarkable for multiple reasons.

https://www.nature.com/articles/d41586-024-00716-2

Nobody comes out of it well, but Nature are much more transparent about the editorial process than I can ever remember. (It's a little unclear if that was spontaneous, but, if not, the frequently claimed independence of Nature News came good.)

Thread. /1

adredish,

@brembs @albertcardona @MarkHanson @BorisBarbour
@neuralreckoning

I think a lot of this recognition that we will need to replace academic journals soon has been the recognition that bioRxiv, psyRxiv, and medRxiv have not been the disasters many thought they would be*. I think a lot of people thought that peer review was critical to the success of the enterprise, and therefore we had to put up with the journals because we needed the peer review gatekeeping. However, it has become clear that (within field), labs can mostly do their own peer review.

It is not clear what we can do about science outside one's field. As a scientist, how can I know whether to believe something outside my immediate field? And how should we guide what journalists, politicians, and clinicians trust, given that they do not have the training to do their own "in-lab" peer review?

Nevertheless, importantly, now that we have preprint servers and can compare pre- and post-peer review, it is pretty clear that peer review isn't doing much, which gives us the ability to say that the costs (excessive publisher profits, reviewer time costs, etc) are not worth the gains.

* Yes, I know, arXiv has been around for many many years. But people somehow thought biology, psychology, and the other non-physics fields were different. ¯\_(ツ)_/¯

adredish, to random

New paper published: new study of decision making and psychosis. Drift diffusion models, attractor dynamics, and patients with psychosis doing the dot pattern expectancy task. Fascinating work by the team.

https://academic.oup.com/schizophreniabulletin/advance-article/doi/10.1093/schbul/sbae014/7614300

jonny, to random

Too many ppl taking the wrong lessons from the three-balled rat "AI" generated paper. The problem isn't "predatory publishers" or that one person didn't peer review hard enough. The problem is that we actively construct a system where companies make billions of dollars selling prestige vouchers academics have to buy to survive. If the for-profit journals didn't make the system a game, there would be nothing to play.

Frontiers is a symptom. Science, Cell, Nature, and the prestige treadmill they exploit us with is the disease.

adredish,

@jonny @tdverstynen @knutson_brain

@jonny: Agree 100%. It's wild because the original Frontiers was a solution to the prestige treadmill system. (@neuralreckoning: I think the two original Frontiers systems described below are both wonderful ideas we should try to get back - ideas for your new list.)

When Frontiers started, it had a host of important changes that were really good. Then it got bought up by the prestige publishers and they are basically all gone now.

  1. A new peer review process that provided gatekeeping without prevention. The peer reviewers "endorsed" a publication. A reviewer could reject a publication, which meant "I refuse to put my name on this"; that reviewer provided (private) feedback to the authors, and their name never appeared. (Authors never saw that reviewer's name either.) But another person could still endorse the paper.

What this meant was that you could tell the quality of a paper by who was willing to endorse it. You could use that endorsement to judge whether you wanted to trust the paper.

In practice, papers got rejected when none of the reviewers in the field were willing to endorse the paper. So papers could get effectively rejected when the field didn't approve of it (no one would endorse it), but no one person could gate-keep a paper.

  2. Papers moved UP not DOWN.

The first publication was in a field-specific journal (generally low impact) and was judged on scientific merit, not impactfulness. Then, based on actual metrics (like downloads, votes, etc.), the authors would be invited to write an article for a second-tier journal (less field-specific, more general). There were supposed to be third-tier journals, but I don't think they ever appeared. When Nature bought Frontiers, I tried to convince the editors at Nature to create one slot for a paper in Nature so that it could serve as a fourth-tier journal, but they weren't having any of it.

In those early days, we had a lot of success with those two-tier Frontiers systems.
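
The endorsement process described above can be sketched in a few lines. This is a toy illustration of the logic, not Frontiers' actual system; the class and names are hypothetical. A reviewer either endorses publicly (name attached) or declines privately (feedback reaches the authors, no name attached), and a single endorsement suffices to publish:

```python
# Toy sketch of an endorsement-based review model (hypothetical names).
# Endorsements are public; rejections are anonymous private feedback.
from dataclasses import dataclass, field

@dataclass
class Paper:
    title: str
    endorsers: list = field(default_factory=list)        # public names
    private_feedback: list = field(default_factory=list) # anonymous notes

    def endorse(self, reviewer_name: str):
        """A public endorsement: the reviewer's name appears on the paper."""
        self.endorsers.append(reviewer_name)

    def decline(self, feedback: str):
        """A private rejection: feedback reaches the authors, no name attached."""
        self.private_feedback.append(feedback)

    @property
    def published(self) -> bool:
        # No single reviewer can gatekeep: one endorsement suffices,
        # but a paper that no one will endorse is effectively rejected.
        return len(self.endorsers) > 0

paper = Paper("Some submitted manuscript")
paper.decline("Check the statistics in section 3.")  # stays anonymous
assert not paper.published
paper.endorse("Reviewer A")                          # name becomes public
assert paper.published
```

Readers could then judge a paper by who was willing to endorse it, which is exactly the "gatekeeping without prevention" property.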

neuralreckoning, to random

What's missing to replace our current publishing system? What technical and social components do we need to build? My first suggestions below, but I'd like to hear feedback from others.

  • An easy (frictionless) and flexible way to edit and submit documents that can be permanently referenced and that you feel confident will stay accessible forever
  • An easy and semantically rich way to link between these documents (e.g. document A is a review of document B)
  • A way to view these documents that surfaces and highlights relevant related content (e.g. listing and summarising reviews, comments, related papers)
  • A way to automatically convert documents into any standard format (HTML, Word, LaTeX, PDF, ...) so that the system can co-exist with existing workflows (the legacy journal system for example)
  • A database storing all this data that isn't owned by a single institution, either commercial or public, but that is distributed or duplicated across all the universities and libraries of the world. A way for these research institutions to democratically decide which organisations can submit data into the database.
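
As a rough sketch of the "semantically rich links" bullet above: documents with stable identifiers and typed edges between them, so that "document A is a review of document B" becomes queryable. All identifiers and link-type names here are made up for illustration:

```python
# Minimal typed-link store for documents (hypothetical IDs and link types).
from collections import defaultdict

class DocumentGraph:
    def __init__(self):
        self.docs = {}                  # doc_id -> title
        self.links = defaultdict(list)  # (target_id, link_type) -> [source_ids]

    def add_doc(self, doc_id: str, title: str):
        self.docs[doc_id] = title

    def add_link(self, source_id: str, link_type: str, target_id: str):
        """Record a typed edge, e.g. add_link('rev1', 'is_review_of', 'paper1')."""
        self.links[(target_id, link_type)].append(source_id)

    def incoming(self, target_id: str, link_type: str):
        """Surface related content: all docs linking to target with this type."""
        return self.links[(target_id, link_type)]

g = DocumentGraph()
g.add_doc("paper1", "Original study")
g.add_doc("rev1", "A critical review")
g.add_link("rev1", "is_review_of", "paper1")
assert g.incoming("paper1", "is_review_of") == ["rev1"]
```

A viewer could use `incoming()` to list and summarise reviews, comments, and related papers next to any document, per the third bullet.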

Edited to add: not interested in a conversation about whether or not we need the existing publishing industry. That argument is settled for me and the question I'm interested in in this thread is how to change things assuming we want to.

adredish,

@neuralreckoning And whither peer review and gatekeeping?

It probably doesn't matter for you, or me, or @elduvelle or @knutson_brain or many of the folks on this mastodon community, but I wouldn't want ol' Elon to be able to publish willy-nilly.

[Edit: to be clear - I meant it probably doesn't matter for these serious scientists judging each other's work. This is why bioRxiv seems to be OK for within-science papers.]

Even SFN has publication checks to make sure that big pharma wasn't using their conference to claim secret drug effects without review.

How is your current suggestion different from the ..rxiv groups (arXiv, bioRxiv, medRxiv, psyRxiv, etc)?

adredish,

@tdverstynen @knutson_brain @neuralreckoning

I really don't like the idea of a "living paper". I want a paper to be a stable isolated event in time that we can use as a solid brick in our construction of this scientific cathedral.

We need to stop seeing a paper as the whole story. A paper is one small piece of the puzzle. We need to see the literature as the whole. (This is why we also need "theory papers" and "synthesis papers"*, because they integrate those bricks into new constructions we can use.)

The reason the journal paper was invented in the first place (in the 1600s) was so that we could communicate part of the picture. Before that, people spent decades writing monographs, worrying about getting scooped, and they couldn't build on each other's work.

A paper is a moment, a small discovery that other people should know about.

It's the literature that is the living document we are building together.

* I'm not fond of the term "review papers"; "synthesis paper" would be a better term. A good review paper provides a new synthesis perspective and isn't just a list of the relevant papers to cite.

adredish,

@tdverstynen @neuralreckoning @knutson_brain

Well, I think that a good theory / synthesis / review paper is also a snapshot in time. It is important not to think of theory as working as a unified whole. Instead, it is an ocean of ideas, and I think that the dialog is better kept separate. I want to read Newton's Principia as the Principia. I don't want there to be an edit of it based on quantum mechanics.

I really don't think any of this "living paper" stuff is necessary. The literature is what is living. What we need to do is keep the literature as a living thing.

Having a shared "current theory" wikipedia is a viable thing that could be useful for teaching. (Scholarpedia tried this.) But that's not the scientific literature. It is a synthesis from it. The scientific literature needs to be concrete and visible in its history.

In part we need this history because we need to be able to go back to old ideas and re-discover them. I don't want them to get lost in the "tracked revisions of the living document".

adredish, to random

This wordcloud stuff is fun.
https://shiny.rcg.sfu.ca/u/rdmorin/scholar_googler3/

Yeah, OK, I buy this one for me: 😀​

adredish,

@elduvelle I would play that game! That would be fascinating.

dahukanna, to random

Current “AI”/statistical inference (SI) tech, represented by machine “imprinting” (it's not continuously learning/updating; it gets only a one-time update), Large Language Models (LLMs) & conversation “chat” interfaces, is decimating cultural, social & ethical assumptions (statements taken as facts, without proof) AKA “norms”.
As a society, we're not prepared for the impact of rethinking, reconsidering & updating our current human practices & conventions with a non-human machine as a participant.

adredish,

@dahukanna

I really like the point that these current production "AI" systems (Generative AI / Large Language Models) are "imprinting", not "learning", because they are trained on a single data set and then set out into the wild to use that data without correction.

Real systems are constantly learning from experience.

adredish,

@dahukanna

Yeah, I hate the way they stole the term AI. As an old AI researcher, it's really frustrating the way that they successfully took these two huge fields (AI and ML) and narrowed them down to a single model, which is a very poor model of general intelligence.

PS. I think it is a very good model of certain human behaviors - that of BS'ing. Humans also pattern-match to spin stories that sound good and are meaningless, but to claim that's a model of "intelligence" is insane. (As a teacher, it's explicitly what we are trying to get our students NOT to do.)

adredish, to random

Announcing a new preprint (bioRxiv)!

G. W. Diehl, A. D. Redish (2024) Measuring excitation-inhibition
balance through spectral components of local field potentials.
bioRxiv unreviewed preprint.

https://www.biorxiv.org/content/10.1101/2024.01.24.577086v1

We directly test the relationship between local field potentials and synaptic efficacy in vivo from awake behaving rats.

adredish, to random

needs to decide if it is a journal or a conference. 😡​ They ask for 2-page "abstracts", but then review the 2 pages as if they were a full paper. 2/3 reviews we got back included statements of "I don't understand why they didn't do X", where X was (a) figure 5 of a 10-page paper (the "abstract" included figures 1-3) or (b) a 3-year $500k project beyond the actual submission. The one good review was for a project that could be completely described in 2 pages. That's not an abstract.

koen_hufkens, to Futurology

A recent study highlights the decline in disruptive science. I think most of this is due to the intellectual poverty of a mindset of current day .

It is well documented that poor people suffer from a scarcity mindset, which erodes core cognitive functions. This mindset clouds decisions, prioritizing small short-term gains over larger long-term profits by impairing planning ability. The same applies to creativity.

@academicchatter

https://www.nature.com/articles/d41586-022-04577-5

adredish,

@koen_hufkens @academicchatter

The interpretation of the data in that "disruption" paper is deeply flawed. There is no loss of disruptive science. There is an increase in the publication of non-disruptive science. This is a GOOD thing. (Because both disruptive and non-disruptive science move our understanding forward and have important consequences for progress.)

If you look at figure 5 of this paper, you will find that the number of disruptive papers is NOT declining. Rather, the number of "non-disruptive" papers has exploded.

This explosion starts in the 1950s, which is when the post-war science budget booms (NSF founded 1950, NASA founded 1958, science budget reaches 12% of GDP in 1960s) and the publishing companies realize that they can access publication money by making it possible to publish in lots of journals. The number of journals has been increasing exponentially since then.

This is a common phenomenon when it becomes possible to produce products more cheaply. We've seen this in literature, in video production, in painting and other art work, etc etc etc. When it becomes possible to mass produce lesser-quality products, the average goes down. But not because there is less of the quality, only because the distribution has expanded to include the lesser-quality products.

neuralreckoning, to philosophy

question from someone who read stuff on this years ago but nothing from . Is it generally understood to be a memory phenomenon? It seems logical that it must be (see below), but that's not the way the philosophy stuff I have read about it discussed it.

My argument is that the only thing we could be talking about if we're talking consciousness is things that have made their way into a specific memory subsystem (the ones that are accessible to our language systems), otherwise we wouldn't be able to talk about it. Similarly, anything that has made its way into that memory subsystem would also be something we were conscious of. In other words, consciousness is just the set of things that go into that subsystem.

So is consciousness just the study of some particular memory subsystem and the way it interacts with other systems like language? And if we don't understand how memory works, can we understand anything about consciousness?

adredish,

@elduvelle @neuralreckoning

Re: memory. It is definitely possible to construct the physical substrate post-learning without going through the learning itself. (In artificial neural networks, you can copy the synaptic weights. In natural neural networks, you can manipulate the synapses chemically or optogenetically.)
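
The artificial-network case above can be shown with a toy sketch (plain NumPy, nothing from the cited work): copying a "trained" network's weight matrix into a fresh network reproduces its behavior with no learning in the recipient.

```python
# Toy illustration: behavior transfers by copying weights, not by learning.
import numpy as np

rng = np.random.default_rng(0)

def forward(w, x):
    # a one-layer "network": linear map followed by a threshold nonlinearity
    return (w @ x > 0).astype(int)

w_trained = rng.normal(size=(4, 3))   # stands in for weights shaped by learning
x = rng.normal(size=3)                # an arbitrary input pattern

w_naive = np.zeros((4, 3))            # a network that never learned anything
w_copy = w_trained.copy()             # the "post-learning substrate", copied directly

# The copy behaves identically to the trained network on any input.
assert np.array_equal(forward(w_copy, x), forward(w_trained, x))
```

The biological analogues (chemical or optogenetic manipulation of synapses) are of course far less direct, but the logical point is the same: the substrate, not the learning history, determines the behavior.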

As a fascinating side note, dreams are real things that exist moment by moment. They are observable during the dream through fMRI. Horikawa, T., Tamaki, M., Miyawaki, Y., & Kamitani, Y. (2013). Neural decoding of visual imagery during sleep. Science, 340(6132), 639-642. https://www.science.org/doi/full/10.1126/science.1234330

RichardShaw, to random

"Peer review will need to account for code as it does for all reported methods in a paper. "

Peer reviewers of code will have to be paid to do it. This wouldn't be out of line as high impact factor journals already pay some statistical reviewers.

The key limitation with sharing code as part of methodology is that nobody has the time to read somebody else's code.

It is easy to say something is mandatory, actually finding the funding to make it possible is the challenge.

adredish,

@RichardShaw

It would be nice if peer review accounted for "all reported methods in a paper", let alone looking at code. I would bet a box of donuts that most peer reviewers don't go through the methods with a fine-tooth comb. It's just not feasible, timewise.
