
neuralreckoning

@neuralreckoning@neuromatch.social

I'm a computational neuroscientist and science reformer. I'm based at Imperial College London. I like to build things and organisations, including the Brian spiking neural network simulator, Neuromatch and the SNUFA spiking neural network community.


neuralreckoning, to random

OK, let's try something new. I'm not well connected because I'm bad at in person networking, and this is compounded by my decision to stop flying to conferences. So, can I use mastodon to find potential experimental colleagues who would like to work together?

Ideally for me, this would be people in Europe so I can visit by train, but it's not essential. I have some ideas for interesting projects and grant applications, and I'd love to develop those into concrete projects in close participation with experimental colleagues.

One of the main themes I'm interested in is how we can relate various neural mechanisms (e.g. inhibition, recurrence, nonlinear responses) to functions, using computational modelling to ask 'what if' questions that couldn't be answered by experiments alone.

I'm also interested in how we can use "information bottleneck" ideas to think more clearly about what computations networks of neurons are doing: going a step beyond representing information to computing with, and discarding, information.

A big question I'd like to answer is how different brain regions work together in such a flexible and scalable way.

A technique I'm very excited about at the moment is using modern ML algorithms to train spiking neural networks at cognitively challenging tasks, making them directly comparable to both psychophysical and electrophysiological data.
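To give a flavour of what training spiking networks with ML involves, here is a toy leaky integrate-and-fire forward pass (a sketch with made-up constants, not any of our actual models; in practice the non-differentiable threshold is replaced by a smooth surrogate gradient in the backward pass so that standard deep-learning optimisers can be used):

```python
def lif_forward(inputs, tau=0.9, v_th=1.0):
    """Toy leaky integrate-and-fire step: the membrane potential decays
    by a factor tau each step, accumulates input, and emits a spike
    (then resets) when it crosses the threshold v_th."""
    v, spikes = 0.0, []
    for x in inputs:
        v = tau * v + x
        s = 1.0 if v >= v_th else 0.0  # hard threshold; surrogate gradient used when training
        spikes.append(s)
        v *= (1.0 - s)  # reset to zero after a spike
    return spikes

print(lif_forward([0.6, 0.6, 0.6, 0.0, 0.6]))  # [0.0, 1.0, 0.0, 0.0, 1.0]
```

The hard threshold here has zero gradient almost everywhere, which is exactly why surrogate-gradient methods are needed to train such networks end to end.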

Part of that could involve building in new mechanisms, like dendritic structure or neuromodulators into those networks and allowing the trained networks to make use of them in the best way possible.

I'd also love to build jointly motivated experimental and theoretical/synthetic datasets to test models against.

If any of that sounds interesting to you, take a look at some of my recent papers and get in touch. I'd love to hear from you.

http://neural-reckoning.org/publications.html

neuralreckoning,

@jonny @mschottdorf I'll definitely be doing some stuff like what you suggest over the next few years. Will be in touch!

neuralreckoning, to random

In Marseille, giving a talk in 45m arguing that modelling is essential if we want to understand neural function, and presenting some of our recent research. You can also join by zoom:

https://conect-int.github.io/talk/2024-05-17-int-conect-seminar-by-dan-goodman/

neuralreckoning, to random

I love markdown, particularly @mystmarkdown, but... it's not clear to me that the specification will end up much simpler than LaTeX in the end, and it has the downside of less tooling and less standardisation. There are so many flavours of markdown now that it's hard to find correct information. Thoughts? Maybe something @rowan and @choldgraf would be interested in weighing in on.

susanleemburg, to random

OMG but I hate Python.

I just want to run some spike detection code I made a while ago.
Instead of spikes I get a weird error. So now I need to update package 1, which requires updating package 2, 3, 4, 7, and 28, which in turn want a newer version of python (except package 9 which refuses to work now of course), so I also need to reinstall anaconda completely (fuck knows why the upgrade button never works)...
And of course none of that actually runs, so I need to figure out how to make things go in a docker container that is in turn wrapped in whatever the hell a singularity is?

neuralreckoning,

@jonny @elduvelle @susanleemburg I had the opposite feeling: everything with Matlab was always a struggle for me, and it was a joy to move over to Python for that reason. However, it's definitely the case that the Python ecosystem has a worse dependency-hell problem than it used to. I do wonder if the easy availability of virtual environments, docker etc. has made it easier to be lazy about backwards compatibility when developing packages. We test @briansimulator on a huge array of combinations of Python versions, operating systems, etc. to guard against this, and it's not that difficult to set up continuous integration infrastructure for this using GitHub Actions. I wish more people would.
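A minimal sketch of the kind of version-matrix CI described above (the job names, versions and install commands here are illustrative, not Brian's actual workflow):

```yaml
# .github/workflows/test.yml — hypothetical matrix build
name: tests
on: [push, pull_request]
jobs:
  test:
    runs-on: ${{ matrix.os }}
    strategy:
      fail-fast: false  # run every combination even if one fails
      matrix:
        os: [ubuntu-latest, macos-latest, windows-latest]
        python-version: ["3.9", "3.10", "3.11", "3.12"]
    steps:
      - uses: actions/checkout@v4
      - uses: actions/setup-python@v5
        with:
          python-version: ${{ matrix.python-version }}
      - run: pip install -e .[test]
      - run: pytest
```

The matrix expands into one job per OS/Python combination, which is how a single short file can guard against regressions across a large grid of environments.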

neuralreckoning,

@susanleemburg @jonny @elduvelle @briansimulator I don't think it's your fault, it's the lack of backwards compatibility in packages. It should just work.

neuralreckoning,

@jonny @susanleemburg @elduvelle @briansimulator I don't agree that enforcing backwards compatibility is a bad idea. Otherwise you get the problem that package X requires a particular range of versions of Y, but package Z requires a non-overlapping range of versions of Y, meaning that X and Z can't be used together. The Python ecosystem is rife with this sort of problem. Backwards compatibility eliminates it: just upgrade everything.
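The non-overlapping-requirements trap can be shown with a toy version check (the package names and version pins here are invented purely for illustration):

```python
# Hypothetical constraints on a shared dependency Y:
# package X pins Y < 1.20, package Z pins Y >= 1.24.
x_ok = lambda v: v < (1, 20)
z_ok = lambda v: v >= (1, 24)

candidate_versions = [(1, 19), (1, 21), (1, 24), (1, 26)]
jointly_installable = [v for v in candidate_versions if x_ok(v) and z_ok(v)]
print(jointly_installable)  # [] — no version of Y satisfies both, so X and Z conflict
```

If neither X nor Z ever broke backwards compatibility, both could simply pin `Y >= some_minimum` and the intersection would never be empty.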

neuralreckoning,

@jonny I'll agree there are times when it's necessary. We broke compatibility once with Brian in its 17 years so far, and that seems like a reasonable frequency. But we also used a new package name, so in practice there were no breaks in compatibility. My worry is when breaks are frequent enough that you can't install two packages together because of non-overlapping requirements. This has happened to me more than once, so it's a real problem.

neuralreckoning, to random

Sometimes I regret staying up until 2am, but not tonight. Saw the northern lights visible by eye in London. Never thought that would be possible. Didn't quite look as good by eye as this 6s exposure, but was still amazing. Fun atmosphere on parliament hill too.

neuralreckoning, to random

UX peeve. Lamps that you have to tap repeatedly to adjust brightness so that if you want it to get less bright you have to cycle through more bright first. Bring back clunky analogue switches. Touch interface is bad for everything except a phone.

neuralreckoning,

@elduvelle how do these things get made? By people who don't actually use stuff?

neuralreckoning,

@BorisBarbour @elduvelle is there that much difference between a touch and physical switch?

neuralreckoning,

@BorisBarbour @elduvelle sigh. Enshittification not just for software.

neuralreckoning, to science

Thought about hypothesis testing as an approach to doing science. Not sure if new, would be interested if it's already been discussed. Basically, hypothesis testing is inefficient because you can only get 1 bit of information per experiment at most.

In practice, much less on average. If the hypothesis is not rejected you get close to 0 bits, and if it is rejected it's not even 1 bit because there's a chance the experiment is wrong.
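The "at most 1 bit" claim is just the Shannon entropy of a binary reject/fail-to-reject outcome; a quick sketch (pure Python, with illustrative probabilities):

```python
import math

def outcome_bits(p_reject):
    """Shannon entropy of a binary outcome with probability p_reject of rejection."""
    p = p_reject
    if p in (0.0, 1.0):
        return 0.0
    return -(p * math.log2(p) + (1 - p) * math.log2(1 - p))

print(outcome_bits(0.5))   # 1.0 — the best case: a maximally uncertain outcome
print(outcome_bits(0.05))  # ~0.29 — a test that rarely rejects tells you little
```

So even before accounting for the chance that the experiment itself is wrong, the expected information per test is well below one bit unless the outcome was genuinely a coin flip beforehand.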

One way to think about this is error signals. In machine learning we do much better if we can have a gradient than just a correct/false signal. How do you design science to maximise the information content of the error signal?

In modelling I think you can partly do that by conducting detailed parameter sweeps and model comparisons. More generally, I think you want to maximise the gain in "understanding" the model behaviour, in some sense.

This is very different to using a model to fit existing data (0 bits per study) or make a prediction (at most 1 bit per model+experiment). I think it might be more compatible with thinking of modelling as conceptual play.

I feel like both experimentalists and modellers do this when given the freedom to do so, but when they impose a particular philosophy of hypothesis testing on each other (grant and publication review), this gets lost.

Incidentally this is also exactly the problem with our traditional publication system that only gives you 1 bit of information about a paper (that it was accepted), rather than giving a richer, open system of peer feedback.

neuralreckoning,

@jonny well the first one seems to be arguing for longer spent before doing a hypothesis test so arguably that's an even lower information rate overall. 😉 The second one seems closer but I haven't read past first page yet. Does it talk about how we could have a richer error signal?

neuralreckoning,

@jonny will definitely read, looked interesting and that's a strong recommendation. I think my point is maybe something like: if the real value is not the output of the experiment but the exploratory work, shouldn't we be teaching this and valuing it more highly rather than denigrating it as fishing trips and rejecting grants on this basis?

neuralreckoning,

@jonny I don't think it's just the publishing system but it's surely an unhealthy combination!

neuralreckoning,

@jonny right sorry that's what I meant. Just so used to railing against the publishing system that I typed it without thinking. 😉

elduvelle, (edited) to random

still asking $60 for 1 year of 6 GB… 😭 @zotero I love you all but it's 2024 now, maybe you could either reduce the price or increase what we get for it?

Edit: not even 60… just SIX

neuralreckoning,

@elduvelle @zotero I think of it as a way to support development and sustainability. Happy to pay since I can, but also happy for others to use solutions like webdav etc.

tiago, to python

Good news everyone! A new version of :gt: graph-tool is just out! @graph_tool

https://graph-tool.skewed.de

:gt: @graph_tool is a comprehensive and efficient :python: Python library to work with networks, including structural, dynamical and statistical algorithms, as well as visualization.

It uses :cpp: C++ under the hood for the heavy lifting, making it quite fast.

This version includes new features, bug fixes, and improved documentation: https://graph-tool.skewed.de/static/doc/index.html

One of the new features is scalable and principled network reconstruction: https://graph-tool.skewed.de/static/doc/demos/reconstruction_indirect/reconstruction.html

Single line installation:

Anaconda ⤵️
conda create --name gt -c conda-forge graph-tool

Homebrew ⤵️
brew install graph-tool

Debian/Ubuntu ⤵️
apt-get install python3-graph-tool

Gentoo ⤵️
emerge graph-tool

Docker ⤵️
docker pull tiagopeixoto/graph-tool

You can also play with it in Colab: https://colab.research.google.com/github/count0/colab-gt/blob/master/colab-gt.ipynb

@networkscience
@datascience
@python


neuralreckoning,

@tiago @graph_tool @networkscience @datascience @python looks cool! Have you seen our stochastic gradient descent layout algorithm? Fast and high quality, with C++ code you can help yourself to.

https://github.com/jxz12/s_gd2

da5nsy, to Neuroscience

A little post with some recommendations of folks who I enjoy seeing posts from!

@jonny - with lefty politics flair.

@HeavenlyPossum - history that always seems to cut to the heart of the matter.

@liaizon - open source hardware and general interesting shit.

@neuralreckoning - wacky ideas.

@chartgerink for innovation and thoughtful social commentary.

🙌🙌🙌🙌🙌

neuralreckoning,

@da5nsy @jonny @HeavenlyPossum @liaizon @chartgerink this is the best and most accurate description of me. Thanks! 😀

neuralreckoning, to random

Three grant rejections in one week. Sigh.

neuralreckoning,

@elduvelle oh that sucks, sorry. 😞

neuralreckoning, to random

Now seems hard to deny that the main risk for universities is the actions of those at the very top. Not just in the USA btw. So how do we address this and reclaim our universities?

neuralreckoning, to science

Suppose you were a funder wanting to design a system to fund science projects that were bottom up rather than top down. How would you do it?

I think you'd want to restrict it to non-faculty to start with, and have some sort of consensus-building rather than competitive approach. Like, maybe you could have an initial round where people proposed ideas, followed by a second round where people indicated who they'd be willing to work with and which aspects of their ideas they'd be willing to drop or modify in order to build consensus. Possibly you might need multiple rounds like this until you iterated on a solution that worked.

Would there be problematic hidden power dynamics in an approach like that? I guess so, there always are. But maybe still better than a top-down approach?

And is there any chance of finding a funder who would be willing to experiment with such an idea? Or any existing examples of experiments like that? Or more generally, examples of funders taking a non-competitive approach?
