adredish
@adredish@neuromatch.social

Scientist studying learning, memory, and decision-making. Poet and Playwright.

adredish, to random
@adredish@neuromatch.social

Conference submission announcements:

What: Computational Psychiatry
When: July 16-18, 2024
Where: McNamara Alumni Center, Minneapolis MN USA

Now accepting individual abstracts and group symposium submissions. Due Feb 7, 2024.

https://www.cpconf.org/

elduvelle, (edited) to random
@elduvelle@neuromatch.social

Does anyone have a PDF of this paper:
“Insight” in Rats
By Tolman & Honzik, 1930?
🙏
If so please DM me

Update: thanks all - I got it! It’s here in case anyone is interested: https://www.dropbox.com/scl/fi/fppn45kxh6d92e1wud3n2/Tolman-Honzik-1930.pdf?rlkey=ac5hbfh1kcn3p67gews7ddtyj&dl=0

adredish,
@adredish@neuromatch.social

@elduvelle @albertcardona

Adobe Acrobat does OCR. Just go get the hard copy, take some pictures with your phone and voila. PDF.

elduvelle, (edited) to mastodon
@elduvelle@neuromatch.social

Do you think that is more “siloed” than ?

Accounts of personal experiences welcome - we might use them in an article…

Please boost around!

adredish,
@adredish@neuromatch.social

@elduvelle

IME, Mastodon is definitely more siloed. The Twitter algorithm (pre-X) was actually shockingly good at finding stuff I wanted to see but didn't know about. (Importantly, I was very strict about not doing any politics on the site. I got my political news elsewhere.) I [a] didn't officially follow anyone, [b] only "liked" science and work-related posts, and [c] only responded to science and work-related posts. But the work I discovered on Twitter was much broader than on Mastodon, where I have to follow individuals to try to recreate that algorithm.

adredish,
@adredish@neuromatch.social

@manisha @elduvelle Thanks for the reply.

How does one follow hashtags and what are guppe groups?

NicoleCRust, to Neuroscience
@NicoleCRust@neuromatch.social

My new piece for @thetransmitter. Why is treating brain dysfunction so ENORMOUSLY challenging?

Because it amounts to controlling a complex system.

Drawing from the history of weather research, I pose the question: Can it even be done? And 14 experts in complex systems chime in. Would love to hear your thoughts as well!

https://www.thetransmitter.org/systems-neuroscience/is-the-brain-uncontrollable-like-the-weather/

adredish,
@adredish@neuromatch.social

@NicoleCRust @thetransmitter

But we are making progress with the weather! Weather predictions are vastly more reliable than they were a few decades ago. Even if the brain is chaotic, that doesn't mean it's not understandable. And it doesn't mean it's not manipulable (a word I like better than "controllable").

It may mean we need to think about controlling it on different levels.

Moreover, we control brains all the time. As Bill Newsome has been saying in his public talks --- There are two ways to get him to get up and run out of the room. You can either manipulate all of the neural firing and molecular states to move his muscles in the right way. Or you can tell him "your wife's on the phone and it's an emergency." Both of these are ways of controlling brains.

PhiloNeuroScie, to random
@PhiloNeuroScie@neuromatch.social
adredish,
@adredish@neuromatch.social

@PhiloNeuroScie @marcwhoward Do they also recommend increases in science funding and NIH R01 grant sizes?

adredish,
@adredish@neuromatch.social

@marcwhoward @PhiloNeuroScie

The danger about fewer PhDs is that it hurts diversity. One factor (not the only factor) that leads to a lack of diversity is that, because of the way that the US is economically structured, many people from marginalized communities have had limited opportunities (for research experience, for quality educational experience, etc). Limiting the number of PhD slots means that it is less possible to take risks on people who have either had limited opportunities or who have stumbled sometime in their lives. I feel (very strongly) that what we need to do is make it possible for MORE people to take the risk of being a PhD student and provide better safety nets if it doesn't work out.

I'm not sure how much this translates to limited opportunities if we have fewer postdocs, but fewer postdocs definitely means less work done. I'm just saying that higher salaries with the same total budget means less work done.

I'm all for higher postdoc and graduate student (and postbac and technician and staff scientist) salaries - I just think we need to increase NIH budgets to match. There is a real problem with academia in general where we take budget cuts and say "our work is so important, we'll find a way to do it anyway".

Some data: we were spending roughly 12% of the federal budget on science in 1965; we now spend about 3%. The real moonshot (Apollo) cost about $300 billion in 2020 dollars. (Compare the BRAIN Initiative at ~$5B and the cancer "moonshot" at ~$2B.)

elduvelle, to random
@elduvelle@neuromatch.social

Can we please collectively agree to write references the informative way in our papers? 🙏
I mean [ThisPerson et al., 2010] instead of [5]

What is even the point of having references as numbers? The only reason is non-scientific: saving space. But it actually makes it more difficult to read or review properly - either you just don’t bother checking the refs (bad), or you spend additional time checking them (annoying). With the informative format, you can just read seamlessly if you recognize the ref and mentally check if it makes sense in context, or you can decide to spend the time looking it up if you don’t know it. Or, if you don’t care, you can just visually skip them. Come on!

adredish,
@adredish@neuromatch.social

@LorenAmelang @elduvelle

Yes! DUA! [Don't use Acronyms!]

I once struggled to understand a grant that was doing these very weird motor experiments trying to get at Head Direction (HD) cells and Hippocampus (HC) in humans. Until halfway into the grant, I realized these were Huntington's Disease (HD) patients being compared to Healthy Controls (HC) at which point the grant made MUCH more sense. 🤣​🤣​🤣​

danielmk, to random
@danielmk@neuromatch.social

Tips on enjoying grant writing in the link. Personally I won't take them up though. I can probably rewire my brain to enjoy sitting on a tack. At least I'm not sitting on two. And the tack is somewhat clean. And thank god I have a chair at all. But I will hold on to this little nugget of discontent. Maybe even remove the tack one way or another.

https://www.nature.com/articles/d41586-023-03871-0

adredish,
@adredish@neuromatch.social

@danielmk @elduvelle

Let's not change how we individuals "enjoy writing grants". Let's create a system that does not require writing grants - or at least minimizes the difficulty. It's a shame that @Nature is using its bully-pulpit to scold individuals instead of to argue for better systems.

Distributing money by the illusion of "best science as written in a grant" is provably an inefficient, wasteful, local minimum that hurts the scientific enterprise.

elduvelle, (edited) to Futurology
@elduvelle@neuromatch.social

So is this the deal with the new @eLife system:
1- you send your preprint
2- it is reviewed (if they so choose)
3- you eventually upload a final “version of record”
OR
4- you send the reviewed preprint (+reviews) to another journal for publication

Has anyone tried that step 4? Do the “other journals” accept to publish something that’s already been reviewed by eLife?

Of course, you could do 3 instead but does having an eLife paper under the new system “count” for your CV?

Edit 1: here’s an explanation of the new eLife process: https://elifesciences.org/peer-review-process

Edit 2: clarified the link between step 3 and 4 (either/or, not both)

adredish,
@adredish@neuromatch.social

@elduvelle @eLife

Wait. What is this step 4 (send to another journal)? Why would you do that? If you do that, is just a fancy . I thought the whole point was that the "version of record" in step 3 was the final published version and you listed eLife as the journal.

Is this wrong?

adredish,
@adredish@neuromatch.social

@elduvelle What I would do is stop at the step but include the keywords they boldface in a subline below the entry when I list the paper in the CV. (For many promotion documents, one needs to have these sublines anyway explaining your role in the paper.) I think people will accept the sublines.

adredish,
@adredish@neuromatch.social

@elduvelle YMMV but in my experience, eLife and NNsci historically have had similar impact on grant-getting and promotions. The real question is whether people are changing their opinion of with the new system. (I haven't seen it, but it is too early to tell.)

adredish, to random
@adredish@neuromatch.social

Announcing that the 2nd Computational Psychiatry conference will be in Minneapolis, MN, July 16-18, 2024!

http://cpconf.org

Including a clinical tutorial from Martin Paulus, a computational tutorial from Angela Radulescu, and confirmed keynotes from Josh Gordon, Justin Baker, Roshan Cools, Damien Fair, and Anna Konova.

Save the date. Come present your work. Call for abstracts and other details to come soon.

albertcardona, to Neuroscience
@albertcardona@mathstodon.xyz

Periodic reminder of what fMRI's BOLD signal is measuring, and its temporal dynamics:

"blood oxygenation level–dependent (BOLD) contrast. [...] increased signal in a voxel measured with an EPI [echo planar imaging] sequence indicates recent neuronal activity because of the relative increase in local blood oxygenation that accompanies such activity. The temporal profile of this BOLD response, known as the hemodynamic response function, looks like a bell curve with a long tail, peaking around 4 to 5 seconds after local neural activity and returning to baseline after 12 to 15 seconds."

From "Principles of Neural Science", Kandel et al. 6th edition, page 115.

No matter how fast the EPI imaging is (~100 ms), the BOLD dynamics make GCaMP look lightning fast. Temporally deconvolving BOLD is possible, to a point, but remember its spatial resolution is measured in millimetres, whereas neuronal somas measure ~0.025 millimetres.
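As a rough illustration (not from the quoted text): the temporal profile described above is commonly modelled as a double-gamma function. The sketch below uses the widely cited SPM-style default parameters, which are an assumption here, not something the quoted passage specifies.

```python
import math

def hrf(t, a1=6.0, a2=16.0, ratio=1 / 6.0):
    """Double-gamma hemodynamic response (assumed SPM-style defaults).

    The first gamma (shape a1) produces the main peak; the second
    (shape a2), scaled by `ratio`, produces the post-stimulus undershoot.
    """
    if t <= 0:
        return 0.0
    g = lambda a: t ** (a - 1) * math.exp(-t) / math.gamma(a)
    return g(a1) - ratio * g(a2)

# Sample at 0.1 s resolution out to 30 s and find the peak time.
ts = [i * 0.1 for i in range(1, 300)]
vals = [hrf(t) for t in ts]
peak_t = ts[vals.index(max(vals))]
print(f"peak at ~{peak_t:.1f} s")  # peaks near 5 s, matching the quoted profile
```

With these parameters the response peaks near 5 s, dips below baseline around 20 s, and is essentially flat again by 25–30 s — consistent with the "bell curve with a long tail" description in the quote.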

adredish,
@adredish@neuromatch.social

@tdverstynen @NicoleCRust @albertcardona

I wish other fields (Calcium imaging comes to mind! but also electrophysiology... and head-fixed behavior... and everything else...) were as explicit about their limitations as the fMRI imagers generally are.

This is why we need THEORY. Theory is a means of bringing together the constraints from a variety of sources to create understanding.

Honestly, IMO, the biggest problem right now is the attempt to find the "perfect experiment" instead of trying to fit one's little experiment into the literature.

Now, if only we could convince scientists to actually read that literature...

albertcardona, to Neuroscience
@albertcardona@mathstodon.xyz

Little known facts of the :

The original connectome was mapped from 5 different volumes, from 4 hermaphrodites and 1 male.

White et al. 1986 https://royalsocietypublishing.org/doi/10.1098/rstb.1986.0056

Brenner, without skipping a beat, stated this served his purposes just fine: to study variability in synaptic connectivity among otherwise isogenic animals – and found lots, enough to state that genetic determinism w.r.t. neural circuit configuration as understood at the time was largely hogwash.

https://albert.rierol.net/tell/20200419_Sydney_Brenner_and_the_broader_world.html

adredish,
@adredish@neuromatch.social

@albertcardona

Thanks for these citations! I like that we can now stop talking about "the ". A single doesn't exist. Any more than "a single " does. We need to start seeing species as populations with variability (this is how works! - by shifting those population distributions around).

As a practical note (coming back from ), I'm getting worried about all of the experiments that are taking some specific mouse strain, making some observation (specific connections from a small subset of cells in structure A project to this specific part of structure B), and generalizing from that to all mice (and all rats and all humans). I wonder how much the various very specific connections that are being traced with viral techniques are specific to that "genetic family" of mice and how much variation we should be expecting.

I think we are vastly underestimating the variability in these life forms we are studying.

adredish,
@adredish@neuromatch.social

@albertcardona

Maybe not in the C. elegans field (I wouldn't know), but the fMRI field talks of "the " ALL the time.

adredish, to random
@adredish@neuromatch.social

An interesting consequence of the hypothesis that human social structures are built on "assurance" or coordination games rather than prisoner's dilemmas:

Coordination games have two stable states. If you are living in a world where everyone else is cooperating, it is in your best interests to cooperate as well. If you are living in a world of cheaters, cooperation is for suckers.

This means your perception of your community has a big impact on your own behavior.

While I agree that we do need things like @deevybee 's defense against the dark arts, @PubPeer , and the enforced "share the data as is" regulations that @BorisBarbour has been talking about, I think we also need to make sure that we CELEBRATE openness and integrity, and that we report it to the world. We do not want all of our news reports to be about fraud.

So, can I recommend a policy? For every fraud that gets reported, find a positive success to talk about. I guarantee they are out there. In fact, I bet they are so common, we don't notice them.

All the people who share their code and fix the bugs that others find. The labs that say "come on by and we'll show you how we do stuff". The people who work with others to make their data useful and not just "out there". There are lots and lots of these positive examples. I worry they get lost because they are so common. We need a hashtag for celebratory cooperation in the sciences. I'm open to suggestions.

PS. For those who don't know it, the coordination game is structured so that for player A (given player B's choice): C(C) > D(C) > D(D) > C(D), as compared to the prisoner's dilemma, which is: D(C) > C(C) > D(D) > C(D).

In the coordination game, it is best to do what the other player is doing. In the prisoner's dilemma it is best to defect. There are n-player extensions of this as well.
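The two payoff orderings can be sketched in a few lines of code. The numeric values below are hypothetical placeholders (only their ordering matters, as in the PS above):

```python
# Each entry maps (my move, other player's move) -> my payoff.
# Coordination game ordering: C(C) > D(C) > D(D) > C(D)
coordination = {("C", "C"): 3, ("D", "C"): 2, ("D", "D"): 1, ("C", "D"): 0}
# Prisoner's dilemma ordering: D(C) > C(C) > D(D) > C(D)
dilemma = {("D", "C"): 3, ("C", "C"): 2, ("D", "D"): 1, ("C", "D"): 0}

def best_response(payoffs, other_move):
    """Player A's payoff-maximizing move, given player B's move."""
    return max("CD", key=lambda my: payoffs[(my, other_move)])

for name, game in [("coordination", coordination), ("dilemma", dilemma)]:
    print(name, {b: best_response(game, b) for b in "CD"})
# coordination: best response matches the other player (C->C, D->D)
# dilemma: best response is always D
```

This makes the point in the post concrete: the coordination game has two stable states (mutual cooperation and mutual defection — you do whatever your community is doing), while the prisoner's dilemma has only one (mutual defection).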

adredish,
@adredish@neuromatch.social

@albertcardona @BorisBarbour @deevybee @PubPeer

How does pre-registration get to this question? Or are you answering a different thread?

Please check back and re-read the post that started this thread. I'm saying something very different about social media and perception and the creation of community.

albertcardona, to academia
@albertcardona@mathstodon.xyz

"To win one of these coveted positions, then, you need to do everything exactly right from your freshman year of high school onward: get good grades, garner strong recommendations, work in the right labs, publish papers in prestigious places, never make anybody mad, and never take a detour or a break." – Adam Mastroianni https://www.experimental-history.com/p/ideas-arent-getting-harder-to-find

I had less than stellar grades throughout my undergrad until the last year; I did undergrad and grad research in a fringe lab at a low-ranked university – which afforded me huge freedoms –; I published my first glamour paper after I was tenured and only because my co-authors insisted on the venue; I made many people mad by telling them what I thought instead of what they wanted to hear to the point that I had to reapply 3 years later; and I took breaks to rear children. All of this between the years 2000 and 2015.

Despite everything you may hear, there is room for being creative and getting away with it. I am not an exception, plenty of peers had similarly awkward career paths.

adredish,
@adredish@neuromatch.social

@neuralreckoning @UlrikeHahn @johnntowse @albertcardona

Here are some real statistics, from NINDS.

https://www.ncbi.nlm.nih.gov/pmc/articles/PMC8397261/

This is a fantastic paper and well-worth the read.

One does not need a K99 to get a faculty position. In fact, one does not need prior funding to get a faculty position.

One does not need CNS papers to get a faculty position.

Lots of people go through intermediate positions (non-traditional tracks) to get to faculty positions.

BorisBarbour, to random
@BorisBarbour@mastodon.social
adredish,
@adredish@neuromatch.social

@BorisBarbour @albertcardona

I think too much is made of data sharing as a means of identifying dodgy stuff. It is very rare that one can access the full raw data (and no one is going to re-preprocess the data), and the truth is most neuroscience replicates beautifully. In my observation, neuroscience debates (at least in the worlds I live in, behavioral neuroscience) are arguments of interpretation, not fact.

If the only point of data sharing is to catch dodgy stuff, it would be much more cost effective to hire an army of people to replicate experiments. That would actually catch the dodgy stuff. Although that runs into problems of "what is replication?". (is a different species replication? is a different strain? is a different cohort with different prior experience?)

The problem is that sharing data well is a headache. It's expensive in terms of time, money, and effort. It requires a large amount of additional information that was not used in the original experiment (because now the data might be used for something else, and we had better be sure it's valid for that something else). And it's not rewarded. Thus, there is no incentive to share data. Making it mandatory is just yet another regulatory mandate that makes it harder to do science.

Which would you rather I do? Share my data? Run a larger n? Do a second follow-up experiment? You get to pick one. I'm not at all sure that "sharing data" is the most important thing for science.

Do we think that catching a little bit of dodgy stuff (even assuming data sharing would do that) is worth the cost of everyone having to share their data?

I suppose you could give me a sixth year on my R01 if I share my data. That would definitely incentivize me to do so. But that's the scale I would need to incentivize me to share data well.

PS. We do share our data. All of our recent papers have the data uploaded to data share servers, but it is just the data we used to make the paper. It is not "shared well" to enable people to do lots of stuff with it.

Edit: I saw that this was part of a longer chain where data was requested and not provided. It looks like this may be due to data restrictions. (I don't know if this applies here, but it can be very tricky to share human data because it has to be de-identified, which is not always so easy.)

adredish,
@adredish@neuromatch.social

@BorisBarbour @albertcardona

Thanks for the discussion. An interesting blog. Some responses.

  1. The most valuable and concrete products of the investment are the discoveries that come from it. Science is about discoveries that allow us to see things in new ways. I deeply disagree with the data/code worship that is in the current zeitgeist. (I realize that is an iconoclastic view, but it is one I hold.) Data and code are merely steps along that way. In my view, the products are discoveries that allow us better control of the world because they have identified a better understanding of mechanism and process.

Data and code are useful, sure. But they are not the goal, nor the "product" of science.

  2. I definitely do agree that IN THEORY data sharing improves quality, efficiency, and integrity. But in practice, I haven't seen it.

Particularly, in the realm of efficiency: More often than not, data reuse is misused - it attempts to ask questions for which the experiment was not well designed. And, yes, data sharing improves quality and integrity, but it is a very very expensive thing to do right.

Following on that point, I think it is extremely disingenuous to simply demand that data be shared without addressing the costs. The number of regulatory burdens that we now face to do science because of a few (and I do believe they are very few) bad apples is making it harder and harder to do the science.

  3. I absolutely agree that it is hard to fabricate a full real fake data set. Maybe I'm naive, but I don't think most of my colleagues are fabricating data. So the question I ask is how much we are willing to spend on checking each others' data vs getting the opportunity to explore and try new experiments.

  4. I'm totally OK with saying that everyone has to put up the data that they are using for the paper in some central place (and we do, for the same reason that we fill out IACUC and IRB forms, because regulations require it), but we shouldn't expect that data set to be useful or easily interpretable.

And I stand by my statement that this is mostly unnecessary and just makes the science more expensive and move more slowly.

There's a fascinating discussion about these issues by Adam Mastroianni arguing that science is a "strong link" problem, not a "weak link" problem. https://www.experimental-history.com/p/science-is-a-strong-link-problem

If he's right (and I think he is), we should be encouraging more opportunities for exploration.

Now, as we move from discovery to engineering to implementation, we move from a strong link to a weak-link realm. So, an RCT for some drug that is going to be brought to market and given to millions of people should absolutely be pre-registered, data-made-available, the whole nine yards. But for the initial science of it, maybe we need to reduce the burden on doing experiments.

adredish,
@adredish@neuromatch.social

@BorisBarbour @neuralreckoning @albertcardona

I do think we agree here. I am very OK with "sharing data as is".

I am also very OK with "learning how to format it better with time". (Actually, I like that a lot. It's a very different sentiment than I've usually heard. I'm going to start using that argument about data sharing.)

I am a little worried about data reuse misuse, but given the argument about "strong link", if someone misuses my data without checking with me, that's on them, not me. It should all be detectable in the wash.

Some servers (like OSF) just let you put files up there as is. Other servers (like Dryad) are pains-in-the-a** about making the data accessible. (We fought with Dryad for a while about our data formats for a recent eLife paper - providing Matlab code to load them wasn't good enough because "Matlab is not free software". So we ended up putting the data on OSF.)

So, in the end, I think we do agree. Put up the data as is, include what code you've used to generate your figures (again, as is). And over time, we'll learn how to format things better.

adredish, to random
@adredish@neuromatch.social

A really good paper on the nuances of "questionable research practices". Sometimes they're helpful and can actually increase reproducibility.

Reinagel, P. (2023). Is N-Hacking Ever OK? The consequences of collecting more data in pursuit of statistical significance. PLOS Biology, 21(11), e3002345.

https://journals.plos.org/plosbiology/article?id=10.1371/journal.pbio.3002345#sec014

adredish, to random
@adredish@neuromatch.social

All submission processes should give you a chance to check your work! I just submitted a huge LOI grant (on a super-exciting collaboration project). There were a bunch of pages, with both forward and backward arrows to go through them, until suddenly the rightward arrow came to a page that said "Thank you for submitted your proposal. We will get in touch with you after the deadline has passed." with no backward arrow to make sure I didn't type the budget in wrong. 😳​
