@adredish@neuromatch.social avatar

adredish

@adredish@neuromatch.social

Scientist studying learning, memory, and decision-making. Poet and Playwright.


adredish, to random
@adredish@neuromatch.social avatar

Looking forward to next week!

We've got four posters, scattered across the days.

  1. Saturday PM VV (double V) 20: Clarifying proactive and reactive cognitive control deficits in psychosis via drift diffusion modeling

  2. Sunday PM NN 12: Probing the role of prefrontal cortex, hippocampus, and dorsolateral striatum on decision-making in complex spatial environments

  3. Sunday PM OO6: Hippocampal Representations in rats worried about being attacked by a threatening robot in the “robogator” approach-avoidance task

  4. Tuesday PM TT9: Network physiology metrics reveal a dynamic balance between excitatory and inhibitory functional connections in rodent prefrontal cortex during decision making

Come check 'em out!

I love it. It's my favorite meeting. I'm looking forward to seeing both old and new friends on the poster floor!

knutson_brain, to random
@knutson_brain@sfba.social avatar

Wise advice relevant to the enshittification of academic social media: https://thesiswhisperer.com/2023/07/10/academicenshittification/

adredish,
@adredish@neuromatch.social avatar

@elduvelle @knutson_brain @thesiswhisperer

I really do miss the algorithm. I'm just not social enough without it.

adredish, to random
@adredish@neuromatch.social avatar

I'm at the conference in Vancouver this weekend. Anyone attending, come find me and let's talk science!

jpeelle, to random
@jpeelle@neuromatch.social avatar

not discussed 🙄

adredish,
@adredish@neuromatch.social avatar

@NicoleCRust @jpeelle

The fact that 50% of the submitted grants are "not discussed" is a testament to the problems with the grant system that are, IMHO, more serious than the problems in the journal/paper publication system. The amount of time wasted on the grant competition is terrifying and a huge inefficiency. With papers, we can at least put the work out as preprints, so the effort is not wasted.

The "not discussed" review came from the idea that grants are a simple contest, so all you need to determine is which to fund. I got into science just as the old system was being replaced. In that old system, grants were very large (25 pages of research proposal, really!), grant review was seen like paper review and every grant was provided a set of things to change/fix (think "major revision" from peer review). There was a very real sense that if you fixed the problems, you would get funded. This meant that if you started the process 2-3 years early, you could survive on one grant. Lots and lots of labs had one grant for decades. Once they were funded, they would start the renewal process in year 3, and could reliably know whether they were going to be funded in year 5 (when the cycle ran out) or whether they needed to scramble. In practice, most labs could safely live on one grant.

But it was decided that this "took too long to get a grant funded" (as if science works on that kind of fast timescale) and that people shouldn't have to wait in line to get funded. So they nixed it.

PS. Our recent T32 (months of work) and our recent U01 (months of work) were also "not discussed". The amount of work I could have done on real science instead of writing grants is ... I can't even...

PPS. Here's an interesting suggestion of a "good enough + lottery" system: https://journals.plos.org/plosbiology/article?id=10.1371/journal.pbio.3000065 @ct_bergstrom
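For concreteness, here is a minimal sketch of how a "good enough + lottery" selection could work. This is a toy illustration of the general idea, not necessarily the mechanism from the linked paper; the threshold and number of awards are made up.

    import random

    def lottery_fund(proposals, threshold, n_awards, seed=None):
        # proposals: dict of proposal id -> review score (higher = better)
        rng = random.Random(seed)
        eligible = [pid for pid, score in proposals.items() if score >= threshold]
        if len(eligible) <= n_awards:
            return eligible                    # everyone above the bar gets funded
        return rng.sample(eligible, n_awards)  # otherwise, a fair lottery among them

    # Example: 8 submissions, a "good enough" bar of 7.0, money for 3 awards
    scores = {"G1": 9.1, "G2": 8.7, "G3": 8.6, "G4": 8.5,
              "G5": 7.2, "G6": 6.9, "G7": 5.0, "G8": 4.2}
    print(lottery_fund(scores, threshold=7.0, n_awards=3, seed=1))

The point is that review only has to be good enough to set the bar; it doesn't have to pretend it can reliably rank-order the top of the distribution.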

adredish,
@adredish@neuromatch.social avatar

@toddhorowitz @albertcardona @AllenNeuroLab @NicoleCRust @jpeelle @ct_bergstrom

I suspect the real difficulty in switching is the "justification to congress people" who won't understand the (merit triage) + lottery system. It's much easier to say "we have a scientific scoring system to get the most meritorious science" than to say "we have a process that takes noise into account correctly and is appropriately efficient".

adredish,
@adredish@neuromatch.social avatar

@GunnarBlohm @ryanmaloney @ct_bergstrom @AllenNeuroLab @NicoleCRust @jpeelle

The problem isn't that 1 grant is not deemed enough. It's that 1 grant is not safe. If you have only 1 grant and lose it, then you lose the long-term lab knowledge in the technicians and other continuous positions. Moreover, you lose the ability to generate the data you will need to return to the grant game. Thus, you need 2 grants in alternating cycles.

However, of course, this doesn't work either because getting grants is unreliable, so you can't apply for one grant every 3 years. Instead, you have to apply for lots, hoping you hit the pseudo-lottery (created by reviewing noise).

One time, all the grants I had submitted (to hedge against that risk) hit at once, and I had too much money. It was also extremely difficult to manage. A system that provided reliable smaller rewards would be much better for everyone.

Note that this is a different problem from the wasted reviewing being done that we started with.
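To illustrate the pseudo-lottery point: a quick toy simulation (entirely made-up numbers, just to show the shape of the problem) of how review noise makes a strict payline behave like a lottery. Two independent panels scoring the same grants agree on only part of the funded set.

    import random

    random.seed(0)
    n_grants, payline = 100, 10                              # fund the top 10 of 100
    merit = [random.gauss(0, 1) for _ in range(n_grants)]    # "true" quality of each grant

    def funded(noise_sd):
        # each panel's score = true quality + that panel's own review noise
        scores = [m + random.gauss(0, noise_sd) for m in merit]
        ranked = sorted(range(n_grants), key=lambda i: scores[i], reverse=True)
        return set(ranked[:payline])

    panel_a, panel_b = funded(noise_sd=1.0), funded(noise_sd=1.0)
    print(f"the two panels agree on {len(panel_a & panel_b)} of {payline} funded grants")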

adredish, to random
@adredish@neuromatch.social avatar

Psychology at Minnesota has a new position available to fill an endowed chair in Clinical Psychology. Application deadline: 1 Feb 2024. Apply online via UMN (https://hr.umn.edu/Jobs/Find-Job), job posting ID 357597. Come be a part of the amazing neuroscience community here!

vicgrinberg, to random
@vicgrinberg@mastodon.social avatar

Well, that's a new one: did you know that Frontiers states this about their peer review?

"When a manuscript is accepted for publication, the names of the reviewers who endorsed its publication appear on the published article, without exceptions."
(from: https://www.frontiersin.org/about/peer-review)

So this is a hard no on principle. (I'm not worried about myself, but I am very much worried about retaliation against younger and/or more vulnerable folks reviewing for them.)

adredish,
@adredish@neuromatch.social avatar

@vicgrinberg
@albertcardona

In Frontiers, if you decide not to endorse the publication, then your name is not listed. What you do is "withdraw from the endorsement process". As such, your review is still available to the editors, authors, and other reviewers, but you are no longer part of the review process (and you remain anonymous).

In the Frontiers model, reviewers are not accepting/rejecting the paper, but rather endorsing/not the paper. By endorsing the paper, you are saying "I think this is worth publishing and I am willing to let my name be associated with it". By not endorsing it, you are saying "I'm not willing to tell people to read this, but if someone else wants to endorse it, that's their business."

It's a much better system than almost any of the others IMHO.

adredish,
@adredish@neuromatch.social avatar

@vicgrinberg @albertcardona

Everything in the world is full of power dynamics. The best we can do is create systems that help alleviate their effects on our communities.

The two big problems of peer review related to power dynamics are

(1) we need to provide the opportunity for anonymous criticism, so that one can call out errors without fear of retribution

(2) we need to provide the opportunity for criticism and for ensuring the validity of the scientific record, without gatekeeping that allows powerful people to keep data they don't like out of the record.

In my view, the Frontiers system solves both of these elegantly in ways that none of the other systems do.

(1) Allowing reviewers to "withdraw" anonymously from a review ensures that the criticism remains without identifying who the negative reviewer is.

(2) Allowing anyone to endorse the paper means that if I don't like what is being said, I cannot prevent someone else from endorsing it. On the other hand, the work that I endorse has (in a very real sense) my seal of approval and people may decide to trust it more because they trust me (or not if they don't trust me :)

(3) However, because withdrawing does not prevent others from endorsing the paper, I am not gatekeeping the work from getting published.

And, for the record, the problem of permanent contracts is a completely different one from the issues around peer review. The permanent-contract question is about whether people are being paid reliably enough to live. Believe me, people with permanent contracts can be shut out of the scientific process too. (So we need systems that ensure peer review provides room for anonymous criticism without retributive gatekeeping, independent of whether we solve the permanent-contract problem or not.)

In my observation none of the other processes really achieves the elegance of the Frontiers system.

  • the classic editor/reviewer system has anonymous reviews, but has serious gatekeeping issues. An editor (and reviewers) can gatekeep work they don't like out of their journal.

  • publishing the reviews makes it hard to do nasty gatekeeping (good), but at the same time also makes it hard to provide anonymous criticism (bad).

  • preprints have no gatekeeping, theoretically allowing garbage to infiltrate the scientific record. (to what extent that happens is an interesting experiment we seem to be participating in)

  • the new eLife system has editors gatekeeping and then providing detailed (anonymous) criticism, but, as we've seen, this doesn't preclude journalists and others from ignoring those negative comments.

adredish,
@adredish@neuromatch.social avatar

@vicgrinberg @albertcardona

Withdrawing before endorsement is not hacking the system. This is (and was) the original plan for Frontiers. It keeps you anonymous. It was always thus. Remember, reviews in this system are not published, so withdrawing is saying "I'm not willing to endorse the paper." A reviewer has to include their name when they agree to endorse for the endorsement to be valid. Otherwise, it's not an endorsement, it's a review.

It is important to remember that this is a different system than the standard review process. Instead of the reviewer accepting or rejecting the paper, this system decides to publish the paper if and only if there are two people willing to endorse the paper with their name. The decision of whether to endorse or not is 100% a choice that you have as the reviewer.
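A minimal sketch of that decision rule as I read it (my own toy illustration, not Frontiers' actual workflow or software):

    from dataclasses import dataclass

    @dataclass
    class Review:
        reviewer: str        # identity known to the editors
        endorses: bool       # True = willing to put their name on the paper
        comments: str = ""

    def decide(reviews, required_endorsements=2):
        endorsers = [r.reviewer for r in reviews if r.endorses]
        published = len(endorsers) >= required_endorsements
        return {
            "published": published,
            "named_on_paper": endorsers if published else [],
            # withdrawn reviewers' comments stay available to editors and authors,
            # but their names are never attached to the paper
            "anonymous_criticism": [r.comments for r in reviews if not r.endorses],
        }

    reviews = [Review("Dr. A", True, "solid methods"),
               Review("Dr. B", False, "I can't endorse the statistics"),
               Review("Dr. C", True, "worth publishing")]
    print(decide(reviews))

Note that the withdrawing reviewer neither blocks the paper nor appears on it, which is exactly the combination of anonymous criticism without gatekeeping discussed above.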

I do agree that Frontiers could do a much better job of explaining this because a lot of reviewers think they are in the standard review process, but they are not.

For all of eLife's "trying new things", Frontiers tried a new system many years ago that (in my opinion) actually works really well and I wish the other journals would use it.

adredish,
@adredish@neuromatch.social avatar

@vicgrinberg @albertcardona

The other positive thing Frontiers did that worked very well, but unfortunately never caught on, was that all papers start in a field-specific journal and then, if they show impact, the authors get invited to write a review/follow-up (which goes through the same endorsement review process) for a more general journal. This means that the first review can be about validity and does not confound impact with validity. Papers get published and work their way up to more and more general journals, instead of first getting sent to a GlamourMag, falling down the impact-factor tree, and languishing unpublished for years.

I don't know why it never caught on. It worked very well.

adredish, to random
@adredish@neuromatch.social avatar

UMN biomedical engineering has positions. This is an opportunity to join our amazing (cross-departmental, but extremely collaborative) neuroscience community here!

The Department of Biomedical Engineering (BME) at the University of Minnesota-Twin Cities Campus invites applications and nominations for full-time tenure-track or tenured positions at open rank, depending upon applicant qualifications. Candidates with demonstrated expertise and accomplishments in BME/bioengineering are invited to apply. Areas of interest include but are not limited to Cancer Bioengineering, Cardiovascular Engineering, and Neuroengineering. Within these areas, immune dynamics, systems and synthetic biology, and large data analytics are of particular interest.

http://apply.interfolio.com/131760

adredish, to random
@adredish@neuromatch.social avatar

I'm now also at

@adredish.bsky.social

on the 🟦☁️ app.

preslavrachev, to goodnews
@preslavrachev@mastodon.social avatar

deleted_by_moderator

  • adredish,
    @adredish@neuromatch.social avatar

    @preslavrachev

    Except that sequence learning is already well known to exist in non-human animals; it is not uniquely human. We know neurophysiologically that many animals can remember and process sequences.

    I strongly suspect that this result is due to the ethology problem. The humans and bonobos are not tested in equivalent spaces with equivalent tasks. In general, it is much easier to communicate the task to humans (we have this cool thing called language), and the tasks are designed to match human ethology, not bonobo ethology.

    adredish, to random
    @adredish@neuromatch.social avatar

    @brembs

    An interesting potential publishing model. Curious what people think.

    In the way that the journal article split up the monograph in the 1600s (before = have to solve the whole big problem; after = here's my one experimental result, one theoretical contribution, you can build on it), I wonder if this will successfully break up the journal article into component pieces.

    https://www.octopus.ac/

    adredish, to random
    @adredish@neuromatch.social avatar

    For those who like audiobooks: 60% sale until 1 Sept 2023 on Changing How We Choose: The New Science of Morality.

    https://www.audiobooks.com/promotions/promotedBook/646609/changing-how-we-choose-the-new-science-of-morality?refId=91292

    adredish, to random
    @adredish@neuromatch.social avatar

    A fascinating article on changing the incentive structure of science from individual success to group success, to incentivize indirect contributions.

    https://journals.sagepub.com/doi/10.1177/17456916231182568
    Tiokhin, L., Panchanathan, K., Smaldino, P. E., & Lakens, D. (2021). Shifting the level of selection in science.

    I think it is worth discussing, but there are dangers in using multi-level selection theory because it does incentivize tribalistic structures. However, there are moral codes that can be put in place to reduce tribalism (including hierarchies and networks of groups, for example). Nevertheless, an interesting idea.

    dlevenstein, to random
    @dlevenstein@neuromatch.social avatar

    Head direction is the primary sensory modality.

    change my mind.

    From: @AllenNeuroLab
    https://mastodon.social/@AllenNeuroLab/110803787465729591

    adredish,
    @adredish@neuromatch.social avatar

    @elduvelle @dlevenstein @AllenNeuroLab Remember that the head direction representation precedes "external" sensory signals (which provide piloting information) and derives primarily from internal (proprioceptive, dead reckoning) information. This was best proved by the classic Knierim/Kudrimoti/McNaughton 1995 disorientation experiment. So sensory cues are associated with the internal signal, not the other way 'round. (This is also true for place cells.)

    adredish,
    @adredish@neuromatch.social avatar

    @WorldImagining @elduvelle @dlevenstein @AllenNeuroLab

    The model is that the system is using those cues to provide a reset when lost and a correction when out of sync. That's the model from my thesis work (see Beyond the Cognitive Map, MIT Press, 1999), and it still seems to hold up, for example in the "you are here" part of the theta cycle.

    The current vicarious trial and error explanations hypothesize that it is the inverse process: using a sweep forward in the internal space to predict the expected outcome into the future (in both time and space).
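    A toy sketch of the reset/correction idea above (my illustration only, not the model from the book; all numbers are made up): a heading estimate driven by internal angular-velocity integration drifts over time, and gets pulled back when a familiar cue becomes available.

    import math

    def wrap(a):                        # keep angles in (-pi, pi]
        return math.atan2(math.sin(a), math.cos(a))

    def update_heading(est, ang_vel, dt, cue_bearing=None, gain=0.1):
        est = wrap(est + ang_vel * dt)              # dead reckoning from the internal signal
        if cue_bearing is not None:                 # a familiar cue corrects/resets the estimate
            est = wrap(est + gain * wrap(cue_bearing - est))
        return est

    est, true_heading = 0.0, 0.0
    for step in range(200):
        true_heading = wrap(true_heading + 0.05)    # actual turning
        biased_vel = 0.05 + 0.01                    # internal signal over-reports turning -> drift
        cue = true_heading if step > 150 else None  # cue only becomes visible late in the run
        est = update_heading(est, biased_vel, dt=1.0, cue_bearing=cue)
    print("final heading error (rad):", round(abs(wrap(true_heading - est)), 3))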

    adredish, to random
    @adredish@neuromatch.social avatar

    New updated preprint available. https://arxiv.org/abs/2306.03162. Led by the amazing Ugurcan Mugan. How different navigation theories interact with spatial complexity, and then tested on a hippocampal and dorsal striatal dataset.

    adredish, to random
    @adredish@neuromatch.social avatar

    University of Minnesota psychology is hiring. Two positions are available. We have an ABSOLUTELY FANTASTIC neuroscience community here. (And these psychology positions would very much be a part of it.)

    Starke Hathaway Endowed Tenure-Track Assistant Professor in Clinical Psychology

    The Department of Psychology at the University of Minnesota-Twin Cities announces a search for an outstanding scholar to fill an endowed chair, the Starke Hathaway Endowed Assistant Professorship in Clinical Psychology. This full-time tenure-track Assistant Professor position in the Department’s Clinical Science and Psychopathology Research (CSPR) Training Program will begin in Fall 2024. The CSPR training program is nationally ranked and accredited by the American Psychological Association and by the Psychological Clinical Science Accreditation System (PCSAS). Competitive applicants will have a Ph.D. in clinical psychology and an independent program of research that is nationally or internationally recognized. The candidate’s areas of research might include, but are not limited to, the etiology, assessment, or classification of psychopathology with the objective of elucidating major psychological, social, biological or other causal mechanisms of psychopathology. Behavioral genetic, molecular genetic, psychophysiological, and neuroimaging approaches represent current areas of strength that we would be pleased to expand. We value approaches that utilize these methods to investigate mental health impacts of stressors such as inequity and prejudice. Apply online via the University of Minnesota Employment System; reference job opening ID 356528.

    Tenure-Track Assistant Professor in Behavioral/Cognitive Neuroscience

    The Department of Psychology at the University of Minnesota-Twin Cities announces a search for an outstanding scholar to fill a tenure-track assistant professor position in behavioral/cognitive neuroscience. This full-time position in the Department’s Cognitive and Brain Sciences (CAB) Training Program will begin in Fall 2024. Applications are invited from behavioral, cognitive, or computational neuroscientists using cutting edge methods to link brain and behavior in animal models. Example research topics include but are not limited to learning, motivation, emotion, higher cognition, and perception. We expect the primary appointment to be in our Cognitive and Brain Sciences area with the potential for cross-affiliation with other departmental areas. Apply online via the University of Minnesota Employment System; reference job opening ID 356526.

    NicoleCRust, to Neuroscience
    @NicoleCRust@neuromatch.social avatar

    I’ve noticed a strong alignment between those who think that the computer metaphor for the brain makes little sense and those who’ve thought about how the brain might give rise to emotion.

    As much as I love all the progress happening in NeuroAI to push our understanding of perception, memory & intelligence forward, I very much think they are right - there’s a crucial swath that doesn’t seem to fit with that agenda.

    adredish,
    @adredish@neuromatch.social avatar

    @Neurograce
    I agree 100%. There are good computational models of emotions as representational patterns of interoception and societal structure. (For example, Kristin Lindquist's beautiful Nature Reviews Neuroscience paper. https://www.nature.com/articles/s44159-022-00105-4)

    @NicoleCRust @axoaxonic

    A lot of the problem is that people confound "computational neuroscience" with "brain as a digital computer like the one on your desk". That computer model (digital hardware + software) was a theory from the 1960s to the 1980s, but it was disproved decades ago.* (That's good science!)

    What the computational neuroscience paradigm is really saying is that there is a process in which information is stored and transformed in ways that can be described mathematically. For example, an associative network doing pattern completion through basins of attraction is a fundamentally different computational process than a von Neumann computer. A reservoir network is a different computational process than a von Neumann computer. Reinforcement learning is a different computational process than a von Neumann computer. Yet these are all computational processes.

    * Also, the fact that the "brain as a digital computer" model is wrong is why neuroscience techniques are ill-suited to understanding transistor chips (the brain doesn't work that way, and we know it), so neuroscience techniques are designed to understand the computations the brain does do.
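    As a concrete example of the pattern-completion point above, here is a minimal toy associative (Hopfield-style) network; the specifics are my illustration, not anything from the post. A partially corrupted cue falls into the basin of attraction of the stored pattern and is completed:

    import numpy as np

    patterns = np.array([[1, -1, 1, -1, 1, -1, 1, -1],
                         [1, 1, 1, 1, -1, -1, -1, -1]])
    n = patterns.shape[1]
    W = sum(np.outer(p, p) for p in patterns) / n     # Hebbian storage of the two patterns
    np.fill_diagonal(W, 0)

    state = patterns[0].copy()
    state[:2] = -state[:2]                            # corrupt part of the cue
    for _ in range(10):                               # let the attractor dynamics settle
        state = np.sign(W @ state)
        state[state == 0] = 1

    print("recovered the stored pattern:", np.array_equal(state, patterns[0]))

    The update is a relaxation into a stored state, not the fetch-execute cycle of a von Neumann machine, which is the distinction being made above.
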
    evacide, to random
    @evacide@hachyderm.io avatar

    Of course if she was being nominated to the Supreme Court and her debts were suddenly paid off by some unknown party, that would be fine: https://www.washingtonpost.com/national-security/2023/07/15/kemba-walden-nomination-cyber-director/

    adredish,
    @adredish@neuromatch.social avatar

    @evacide Only if she was a republican. (IOKIYAR)

    adredish, to random
    @adredish@neuromatch.social avatar

    Interesting: eLife is following through on their "publish with reviews that say the paper has serious problems". To be fair, this is exactly what eLife said they would do. So, props for that.

    https://elifesciences.org/reviewed-preprints/89106#tab-content

    I was interested to see how this would shake out, particularly in a field that I am not (at all) an expert in, so I went reading....

    From a "I have infinite time and I like to read stuff and I'm well-trained in critical thinking", this is kinda fun. From a "I have to judge this person along with the 50 others applying for this job" or "I have to judge the biosketch along with the other dozen grants I have to review." position, I'm far less sanguine about this. It definitely means that we cannot use eLife as a simple published gate keeper anymore. I'm not sure how I will take eLife papers that I see on CVs.

    On the other hand, after the last study section I was on, where some reviewers took everything listed as a "paper" (including middle author conference presentations) and some reviewers didn't count anything that wasn't a first-author product in a recognizable journal, it is clear that our metrics are really broken anyway.

    (BTW, I got to this paper from science twitter, which seems to be still working well for me, which is also an interesting data point.)

    adredish, to random
    @adredish@neuromatch.social avatar

    New [timely!] paper published. Looking at how the neuroscience of memory can affect temporal discounting and disaster planning, which is important for understanding how people react to disasters.

    C. F. Runge, J. A. Johnson, E. A. Nelson, A. D. Redish (2023) A neuroscience-based analysis of impacts of disaster memory on economic valuation. Journal of Neuroscience, Psychology, and Economics 16(1):24-49.

    https://psycnet.apa.org/record/2023-28159-001
