Posts


neuralreckoning, to random
@neuralreckoning@neuromatch.social avatar

OK, let's try something new. I'm not well connected because I'm bad at in person networking, and this is compounded by my decision to stop flying to conferences. So, can I use mastodon to find potential experimental colleagues who would like to work together?

Ideally for me, this would be people in Europe so I can visit by train, but it's not essential. I have some ideas for interesting projects and grant applications, and I'd love to develop those into concrete projects in close participation with experimental colleagues.

One of the main themes I'm interested in is how we can relate various neural mechanisms (e.g. inhibition, recurrence, nonlinear responses) to functions, using computational modelling to ask 'what if' questions that couldn't be answered by experiments alone.

I'm also interested in thinking about how we can use "information bottleneck" ideas to think more clearly about what computations networks of neurons are doing, going the next step beyond representing information to computing / discarding information.

A big question I'd like to answer is how different brain regions work together in such a flexible and scalable way.

A technique I'm very excited about at the moment is using modern ML algorithms to train spiking neural networks at cognitively challenging tasks, making them directly comparable to both psychophysical and electrophysiological data.

Part of that could involve building in new mechanisms, like dendritic structure or neuromodulators into those networks and allowing the trained networks to make use of them in the best way possible.
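To give a flavour of what training a spiking network with ML tools involves (this is a generic illustration, not my actual setup): the spike threshold is non-differentiable, so a common trick is to replace its derivative with a smooth "surrogate gradient" during backpropagation. A minimal sketch in Python, with all function names and parameter values invented for the example:

```python
import numpy as np

def lif_forward(inputs, w, tau=10.0, v_th=1.0, dt=1.0):
    """Simulate one layer of leaky integrate-and-fire neurons.

    inputs: (T, n_in) input currents over T time steps
    w:      (n_in, n_out) weight matrix
    Returns a (T, n_out) binary spike train.
    """
    T = inputs.shape[0]
    n_out = w.shape[1]
    v = np.zeros(n_out)                  # membrane potentials
    spikes = np.zeros((T, n_out))
    decay = np.exp(-dt / tau)            # leak per time step
    for t in range(T):
        v = decay * v + inputs[t] @ w    # leaky integration of input
        spikes[t] = (v >= v_th).astype(float)
        v = np.where(spikes[t] > 0, 0.0, v)  # reset after a spike
    return spikes

def surrogate_grad(v, v_th=1.0, beta=5.0):
    """Fast-sigmoid surrogate for the derivative of the spike threshold.

    Used in place of the true (zero-almost-everywhere) derivative so
    that gradient descent can flow through spiking nonlinearities.
    """
    return 1.0 / (beta * np.abs(v - v_th) + 1.0) ** 2
```

In a real training loop the forward pass would be written in an autodiff framework, with `surrogate_grad` registered as the backward function of the thresholding step.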

I'd also love to build jointly motivated experimental and theoretical/synthetic datasets to test models against.

If any of that sounds interesting to you, take a look at some of my recent papers and get in touch. I'd love to hear from you.

http://neural-reckoning.org/publications.html

jonny,
@jonny@neuromatch.social avatar

@neuralreckoning
Dan I would love to find some way to work on this with you. I have been missing being in the world of neuroscience and am away from a wet lab, but would love to be able to pitch in what I can. I have always wanted to train large spiking nets, and am particularly interested in the role of inhibition in temporal computation, and in representations of information that are not simply vectors of short-term average firing rate but dynamical embeddings that would be badly captured by atemporal ANNs (e.g. I have been loving @mschottdorf's work).

SPECIFICALLY if you're talking about building sick spiking neural net models that are modelled down to the ion channel and incorporate spatial information like 3D embedding and neuromodulation, I would LOVE to help write code for that.

Obviously I can't make it a full-time gig but would love to be in the extended universe of collaborators pitching in on a long-term project.

neuralreckoning,
@neuralreckoning@neuromatch.social avatar

@jonny @mschottdorf I'll definitely be doing some stuff like what you suggest over the next few years. Will be in touch!

neuralreckoning, to random
@neuralreckoning@neuromatch.social avatar

In Marseille, giving a talk in 45 minutes arguing that modelling is essential if we want to understand neural function, and presenting some of our recent research. You can also join by Zoom:

https://conect-int.github.io/talk/2024-05-17-int-conect-seminar-by-dan-goodman/

neuralreckoning, to random
@neuralreckoning@neuromatch.social avatar

I love markdown, particularly @mystmarkdown, but... it's not clear to me that the specification will end up much simpler than LaTeX, and it has the downside of less tooling and less standardisation. There are so many flavours of markdown now that it's hard to find correct information. Thoughts? Maybe something @rowan and @choldgraf would be interested in weighing in on.

neuralreckoning, to random
@neuralreckoning@neuromatch.social avatar

Sometimes I regret staying up until 2am, but not tonight. Saw the northern lights visible by eye in London. Never thought that would be possible. Didn't quite look as good by eye as this 6s exposure, but was still amazing. Fun atmosphere on parliament hill too.

neuralreckoning, to random
@neuralreckoning@neuromatch.social avatar

UX peeve: lamps that you have to tap repeatedly to adjust brightness, so that if you want them to get less bright you have to cycle through more bright first. Bring back clunky analogue switches. Touch interfaces are bad for everything except phones.

elduvelle,
@elduvelle@neuromatch.social avatar

@neuralreckoning Can I make the same complaint? Really poor design

BorisBarbour,
@BorisBarbour@mastodon.social avatar

@neuralreckoning @elduvelle

Totally. But how to move away from purchase price as the main differentiator?

Only vaguely related, but I was day-dreaming that in a move to improve product quality the EU imposed a standard guarantee of 5 years instead of 2.

neuralreckoning, to science
@neuralreckoning@neuromatch.social avatar

A thought about hypothesis testing as an approach to doing science. Not sure if it's new; I'd be interested to hear if it's already been discussed. Basically, hypothesis testing is inefficient because you can get at most 1 bit of information per experiment.

In practice it's much less on average. If the hypothesis is not rejected you get close to 0 bits, and if it is rejected you get less than 1 bit because there's a chance the experiment is wrong.

One way to think about this is error signals. In machine learning we do much better if we can have a gradient than just a correct/false signal. How do you design science to maximise the information content of the error signal?
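The "at most 1 bit" claim can be made concrete with a back-of-envelope calculation: the mutual information between "hypothesis is true" and the binary reject/don't-reject outcome of a significance test. A hedged Python sketch (function names and default numbers are purely illustrative):

```python
import numpy as np

def entropy(p):
    """Shannon entropy in bits of a discrete distribution."""
    p = np.asarray(p, dtype=float)
    p = p[p > 0]  # 0 * log(0) contributes nothing
    return -np.sum(p * np.log2(p))

def test_information(prior=0.5, alpha=0.05, power=0.8):
    """Mutual information (bits) between the hypothesis being true and
    the outcome of a test with false-positive rate alpha and the given
    statistical power, for a hypothesis true with probability `prior`."""
    p_reject = (1 - prior) * alpha + prior * power
    h_outcome = entropy([p_reject, 1 - p_reject])
    # conditional entropy of the outcome given the hypothesis status
    h_given = ((1 - prior) * entropy([alpha, 1 - alpha])
               + prior * entropy([power, 1 - power]))
    return h_outcome - h_given
```

With a 50% prior, 5% false-positive rate, and 80% power this comes out at roughly 0.48 bits per experiment; only a perfect test (alpha = 0, power = 1) on a 50/50 hypothesis reaches the full 1 bit.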

In modelling I think you can partly do that by conducting detailed parameter sweeps and model comparisons. More generally, I think you want to maximise the gain in "understanding" of the model behaviour, in some sense.

This is very different to using a model to fit existing data (0 bits per study) or make a prediction (at most 1 bit per model+experiment). I think it might be more compatible with thinking of modelling as conceptual play.

I feel like both experimentalists and modellers do this when given the freedom to do so, but when they impose a particular philosophy of hypothesis testing on each other (grant and publication review), this gets lost.

Incidentally this is also exactly the problem with our traditional publication system that only gives you 1 bit of information about a paper (that it was accepted), rather than giving a richer, open system of peer feedback.

lana,
@lana@mstdn.science avatar

@neuralreckoning 2 proposals:

  1. Choose hypotheses where both possibilities are exciting (I feel like a lot of null hypotheses are borderline stupid, and that makes them bad null hypotheses. If you look at things like engineering in space, when something goes wrong in a spacecraft and you only have a 2 min window of communication with 2 days to get an answer... people find ways to maximize the info they get out of their null hypotheses)
  2. Use Taguchi arrays
    https://youtu.be/5oULEuOoRd0?si=CZnpXkgwpuPphN5Y
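For readers who haven't met Taguchi arrays: they are orthogonal arrays that screen several factors in far fewer runs than a full factorial, while keeping every pair of factors balanced. A small Python sketch — the L4 array itself is the standard one; the checker function is just for illustration:

```python
import itertools
import numpy as np

# Taguchi L4 orthogonal array: 3 two-level factors screened in 4 runs
# instead of the 2**3 = 8 runs of a full factorial design.
L4 = np.array([
    [0, 0, 0],
    [0, 1, 1],
    [1, 0, 1],
    [1, 1, 0],
])

def is_orthogonal(array):
    """Check the defining balance property for two-level factors: every
    pair of columns contains each of the four level combinations
    equally often."""
    for i, j in itertools.combinations(range(array.shape[1]), 2):
        pairs = list(zip(array[:, i].tolist(), array[:, j].tolist()))
        counts = {c: pairs.count(c) for c in set(pairs)}
        if len(counts) != 4 or len(set(counts.values())) != 1:
            return False
    return True
```

This balance is what lets you estimate each factor's main effect independently from so few runs — every information-rich experiment in miniature.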
jonny,
@jonny@neuromatch.social avatar

@neuralreckoning richly discussed in psychology and philosophy of science
here's a more recent example: https://doi.org/10.1177/1745691620966795
and a classic: https://www.jstor.org/stable/186099

neuralreckoning, to random
@neuralreckoning@neuromatch.social avatar

Three grant rejections in one week. Sigh.

elduvelle,
@elduvelle@neuromatch.social avatar

@neuralreckoning 😢 I got my first fellowship application rejected! Solidarity 🤗

neuralreckoning,
@neuralreckoning@neuromatch.social avatar

@elduvelle oh that sucks, sorry. 😞

neuralreckoning, to random
@neuralreckoning@neuromatch.social avatar

Now seems hard to deny that the main risk for universities is the actions of those at the very top. Not just in the USA btw. So how do we address this and reclaim our universities?

neuralreckoning, to science
@neuralreckoning@neuromatch.social avatar

Suppose you were a funder wanting to design a system to fund science projects that were bottom up rather than top down. How would you do it?

I think you'd want to restrict it to non-faculty to start with, and have some sort of consensus-building rather than competitive approach. Like, maybe you could have an initial round where people proposed ideas, followed by a second round where people indicated who they'd be willing to work with and which aspects of their ideas they'd be willing to drop or modify in order to build consensus. Possibly you might need multiple rounds like this until you converged on a solution that worked.

Would there be problematic hidden power dynamics in an approach like that? I guess so; there always are. But maybe still better than a top-down approach?

And is there any chance of finding a funder who would be willing to experiment with such an idea? Or any existing examples of experiments like that? Or more generally, examples of funders taking a non-competitive approach?

albertcardona, (edited)
@albertcardona@mathstodon.xyz avatar

@neuralreckoning

To non-faculty for sure. My first move would be to expand funding for PhD students: attract many, with a good salary to bias the choice away from industry.

It's so cheap to support research work that may very well end up saving millions across the board — e.g., software, to name just one area close to me: https://albert.rierol.net/tell/20160601_Unintended_consequences_of_untimely_research.html

neuralreckoning, to academia
@neuralreckoning@neuromatch.social avatar

So oral exam at end of PhD. Good idea or just a tradition that doesn't make any sense any more? What are the good things about them? If we didn't do them, how else could we get those good things?

jonny,
@jonny@neuromatch.social avatar

@kofanchen @neuralreckoning @nicolaromano @steveroyle right, like really the '4 people only read PhD theses' isn't a fault of the thesis, but the dissemination media. do you know how you would find a PhD thesis to read? because i don't.

neuralreckoning,
@neuralreckoning@neuromatch.social avatar

@jonny @kofanchen @nicolaromano @steveroyle they occasionally pop up, but they're often stuck in hard-to-discover institutional repositories.

neuralreckoning, to random
@neuralreckoning@neuromatch.social avatar

Thread viewer for Mastodon. If you're looking for an easier way to navigate deep threads with many contributions, check out "mastodon thread viewer" (early beta for the moment).

https://thesamovar.github.io/masto-thread-view/

The way it works is you bookmark one of the view types on that page, and then when you're viewing a post from a mastodon thread in your browser, simply click your bookmark and it will open a new tab with the page you're currently viewing rendered as a thread (either tree or table view).

It's early days so there may be bugs, etc., but I think it's already useful. Please give feedback on bugs/feature requests either here or via issues at https://github.com/thesamovar/masto-thread-view.

neuralreckoning, to random
@neuralreckoning@neuromatch.social avatar

So what would it take to publish a paper here on mastodon and do public peer review? Just an agreement to use a few hashtags like , and in replies things like , , , ? Some automatically generated web and pdf output summarising the thread? Submission to something like Zenodo to give a DOI? Linking user accounts to orcid to verify identity? Only real problem I see is that even with markdown and LaTeX, Mastodon posts are not well suited for longer posts with multiple figures etc. Maybe fine for short results though?

maegul,
@maegul@hachyderm.io avatar

@neuralreckoning

I'd use groups or threadiverse communities (like sub-reddits).

One for peer review, and another for "publication", which could require a link to the peer-review discussion tree.

On posting to "publication", some automated process ensues for the rest. There, formatting can be richer or blog-like, with comments/discussion trees for both stages too.

albertcardona,
@albertcardona@mathstodon.xyz avatar

@neuralreckoning

Perhaps software publishing requires a new application that happens to implement ActivityPub but isn't necessarily Mastodon.

What you are describing reminds me of @joss https://github.com/openjournals/joss

neuralreckoning, to random
@neuralreckoning@neuromatch.social avatar

Early prototype of Mastodon thread viewer:

https://thesamovar.github.io/masto-thread-view/test.html

Just paste the URL of the thread into the box at the top and hit the "linear thread view" button below and it will give you a view of the thread with hierarchical replies sorted by how many engagements they got (reposts + favourites + replies).
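The ranking itself is simple to sketch. Here is my reading of it in Python, using the count fields from the Mastodon status API (`reblogs_count`, `favourites_count`, `replies_count`) — this is an illustration of the scoring described above, not the viewer's actual code:

```python
def engagement(status):
    """Engagement score used to rank replies: reposts + favourites + replies.

    `status` is a dict shaped like a Mastodon API status object;
    missing counts default to zero."""
    return (status.get("reblogs_count", 0)
            + status.get("favourites_count", 0)
            + status.get("replies_count", 0))

def sort_replies(statuses):
    """Return the replies sorted by engagement, most engaged first."""
    return sorted(statuses, key=engagement, reverse=True)
```

In the real tool the statuses would come from the instance's public thread-context endpoint before being sorted and rendered hierarchically.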

It's very early days so it doesn't yet show any images, the design is not ideal, not optimised for mobile, etc. But I already find this useful for getting a feel of big threads.

My aim here is to give people a better way to navigate overwhelmingly large threads and to allow for a sort of archive of interesting threads. If we want to make Mastodon into a viable option for having scientific debates (e.g. alternative to peer review), we need some way to make them more accessible to outsiders and to surface the most interesting and relevant content.

So I'm particularly interested in hearing suggestions for features or other ideas on how to display threads in the context of long lasting discussions with some permanence to them.

At the moment it's just a very simple idea, but I have other ideas for how to display threads that are a bit wackier and I'll add these as extra buttons as and when I work on this. I'm also going to see how feasible it is to make this into a bookmarklet so you can just hit the 'render thread' bookmark in your browser and open a tab with this. Should be straightforward.

If you're interested, please feel free to post suggestions and issues either here or on github: https://github.com/thesamovar/masto-thread-view

May be of interest to @NicoleCRust @jonny

jni,
@jni@fosstodon.org avatar

@jonny @neuralreckoning I'd prefer if most work was client-side or upstream. Per-instance features are kind of annoying, e.g. coming across mathstodon posts where LaTeX won't render unless I view them on mathstodon directly. The more the instances diverge, the harder it is for new folks to decide what to join.

Obviously, instances are a great place to experiment. But I just don’t want it to be the endgame…

jonny,
@jonny@neuromatch.social avatar

@jni
@neuralreckoning
I think the instances being different from one another is really good! One of the strengths of the fedi. If we prioritized sameness for ease of joining then we'd all just be using base Mastodon. Pulling code upstream to base Masto is notoriously difficult and slow. IMO people should explore different instances, and we should be better about signposting their differences and swapping code horizontally. We try our best over here: https://wiki.neuromatch.social/Mastodon/Hacking

In this case I don't see the need to make it either/or - we can embed something that can also be used as one would a website (going to a website == using this tool as is == viewing a post on another instance)

neuralreckoning, to random
@neuralreckoning@neuromatch.social avatar

My university will hire a couple of undergrads to help turn my neuroscience course https://neuro4ml.github.io/ into an interactive textbook along the lines of @neuromatch academy https://compneuro.neuromatch.io. I'd like to try a little more though.

I'd like to write some extensions to JupyterBook so that I don't have to maintain separate slides but have everything integrated into one structure, including video recordings, so that you can be watching like a lecture, pause and you're already at the code you can run, etc.

Anyone seen anything like this done before? Any tips? Any thoughts on how to do it? Features it should have? Be ambitious! In my head I'm calling this "textbook of the future" just to give you an idea of how grandiose you should be. 😉 Cc @choldgraf @rowancockett

gvwilson,
@gvwilson@mastodon.social avatar

@mstimberg @neuralreckoning I was briefly excited by a tool called scrimba, which would record and replay text/terminal sessions with voiceover (and which allowed you to checkpoint, experiment, and restart, which was awesome). Unfortunately, it looks like the core technology is no longer open source: scrimba.com is now a "learn to code" site using the tech, and the tool itself appears to have vanished. 1/

gvwilson,
@gvwilson@mastodon.social avatar

@mstimberg @neuralreckoning However, if your friends in the CS department have a grad student looking for a thesis project, adding replay to something like the Jupyter or marimo notebook would be a few months of coding and a few months of user studies to produce two or three papers that I think would be very widely read. Just sayin'... :-) 2/2
