neuralreckoning
@neuralreckoning@neuromatch.social
I'm a computational neuroscientist and science reformer. I'm based at Imperial College London. I like to build things and organisations, including the Brian spiking neural network simulator, Neuromatch and the SNUFA spiking neural network community.


neuralreckoning, to random

"the challenges that science is experiencing now ... are due to a lack of emphasis on ... the hard intellectual labor of choosing, from the mass of research, those discoveries that deserve publication in a top journal"

🤔

https://www.science.org/doi/10.1126/science.ado3040

neuralreckoning, to random

Is offering to add random people as authors on a paper - in order to share and thus reduce publication costs - fraud or a valid response to excessive publication costs and broken performance metrics?

(Not planning to do this myself btw, I'm fortunate enough not to need to, but not everyone is so lucky.)

Cc @deevybee

neuralreckoning, to academia

So oral exam at end of PhD. Good idea or just a tradition that doesn't make any sense any more? What are the good things about them? If we didn't do them, how else could we get those good things?

neuralreckoning, to random

I've been thinking a lot about how we could have a non-hierarchical science, and one idea has crystallised.

The way science is done now, senior scientists have a lot of decision-making power: which papers get published, which grants get funded, who gets hired. This introduces a hierarchy and concentration of power that has both social problems (bias, well-documented potential for abuse of trainees) and scientific ones (ideas that challenge old ways of thinking have a much harder time than they should).

However, I wouldn't want to entirely eliminate the collective expertise of senior scientists. It's always amazed me just how well some of them can cut through nonsense and see to the heart of an issue. I distinctly remember enthusiastically going to one of my postdoctoral advisors to talk about my latest complicated modelling idea and getting the response "yeah you could do that but what would it tell us about X?" and realising that they were completely right. I avoided months of fruitless work thanks to that one ten minute conversation.

But do they need to have decision-making power to do that? I don't think so. We should give decision-making power to junior scientists: they should decide what ideas they work on, how to carry out their research, where to do it, who to collaborate with, and what to publish. The role of senior scientists, then, is to give junior scientists advice, which those junior scientists are entirely free to ignore. You don't need to force people to listen to advice. If the advice is good, freely given and not binding, people will seek it out. And there's no reason it has to be only senior scientists who are in this advice-giving role, and no reason that as a senior scientist you need to be in this role if you don't want to be.

This inverts the power dynamics in a really progressive way. With this approach, there's no way to impose your idea of how science should be done on anyone, instead you have to persuade them. This is exactly how it should be. By placing arbitrary authority at the heart of science we've made it unnecessary for established ideas to argue for their value, because the holders of those ideas can just deny publication, grants and jobs to those who disagree. Why bother arguing when you can do that?

An obvious follow-up question is: OK, but then how do you allocate funding? It's a good question and one I'm happy to discuss ideas about. But it's not as if we already have a good answer and need a strong argument before trying a better one. The current system is a hierarchy whose very nature is contrary to the basic values of science, and I suspect almost any alternative would be better. Personally, without a clear winner in mind, I suspect the best approach would be heterogeneous: let's try out different ideas and see what works, instead of all the countries in the world converging on variations of the same basic formula.

neuralreckoning, to random

Idea for a new type of online "conference". Instead of focussing on talks, the idea would be to collectively come up with an annual, short, readable review of new work and trends in the field over the last year. Could that work? Would you be interested in taking part if it could?

neuralreckoning, to random

What's missing to replace our current publishing system? What technical and social components do we need to build? My first suggestions below, but I'd like to hear feedback from others.

  • An easy (frictionless) and flexible way to edit and submit documents that can be permanently referenced and that you feel confident will stay accessible forever
  • An easy and semantically rich way to link between these documents (e.g. document A is a review of document B; see the sketch after this list)
  • A way to view these documents that surfaces and highlights relevant related content (e.g. listing and summarising reviews, comments, related papers)
  • A way to automatically convert documents into any standard format (HTML, Word, LaTeX, PDF, ...) so that the system can co-exist with existing workflows (the legacy journal system for example)
  • A database storing all this data that isn't owned by a single institution, either commercial or public, but that is distributed or duplicated across all the universities and libraries of the world. A way for these research institutions to democratically decide which organisations can submit data into the database.
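
To make the second bullet concrete, here is a rough sketch of what a semantically typed link between documents could look like. Everything in it is invented for illustration (the relation names, the identifier scheme, the fields); it's not any existing system's data model.

```python
from dataclasses import dataclass, field
from enum import Enum
from typing import Dict, List

class Relation(Enum):
    """Illustrative vocabulary of typed relations between documents."""
    REVIEWS = "reviews"
    COMMENTS_ON = "comments_on"
    NEW_VERSION_OF = "new_version_of"
    CITES = "cites"

@dataclass(frozen=True)
class Link:
    """A semantic edge, e.g. 'document A is a review of document B'."""
    source_id: str       # permanent identifier of the linking document
    target_id: str       # permanent identifier of the linked document
    relation: Relation

@dataclass
class Document:
    """A permanently referenceable record; content stored by hash so copies
    duplicated across institutions can be checked against each other."""
    doc_id: str                                   # e.g. a DOI-like or content-derived id
    content_hash: str                             # hash of the canonical source file
    metadata: Dict[str, str] = field(default_factory=dict)
    links: List[Link] = field(default_factory=list)

# Usage: declare that one document reviews another.
paper = Document(doc_id="doc:aaaa", content_hash="sha256:...")
review = Document(doc_id="doc:bbbb", content_hash="sha256:...")
review.links.append(Link(review.doc_id, paper.doc_id, Relation.REVIEWS))
```

A document viewer (third bullet) is then essentially a query over these typed links; the genuinely hard part is the permanence and shared governance in the last bullet, which no data model solves on its own.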

Edited to add: I'm not interested in a conversation about whether or not we need the existing publishing industry. That argument is settled for me; the question I'm interested in in this thread is how to change things, assuming we want to.

neuralreckoning, to ChatGPT

Is it well known that ChatGPT can't answer questions about matching parentheses? I just tried a couple like ([(([]))]) on a whim and it failed spectacularly. It also failed to tell me whether a Python expression like this was valid or not.
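
For anyone who hasn't seen it, the task is the textbook stack check, which is what makes the failure surprising. A minimal version for reference (mine, not anything the model produced):

```python
def brackets_match(s: str) -> bool:
    """Return True if every bracket in s is correctly nested and closed."""
    pairs = {")": "(", "]": "[", "}": "{"}
    stack = []
    for ch in s:
        if ch in "([{":
            stack.append(ch)
        elif ch in pairs:
            if not stack or stack.pop() != pairs[ch]:
                return False
    return not stack  # anything left open is a mismatch

print(brackets_match("([(([]))])"))  # True: the example above is balanced
print(brackets_match("([)]"))        # False: crossed pairs
```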

neuralreckoning, to random

I can see why GPT might seem good for coding and writing, but I don't want to write code or text that I don't understand. Anything it generated I'd have to go over really carefully to make sure I understood or agreed with it. I don't see this as likely to take less time than just writing it in the first place. The bottleneck for me is not the typing, it's the sense-making, and I don't trust GPT with that.

neuralreckoning, to philosophy

A question about consciousness from someone who read stuff on this years ago but nothing since. Is it generally understood to be a memory phenomenon? It seems to me that logically it must be (see below), but that's not the way the philosophy I have read about it discusses it.

My argument is that the only thing we could be talking about, if we're talking about consciousness, is things that have made their way into a specific memory subsystem (the one that is accessible to our language systems), otherwise we wouldn't be able to talk about it. Similarly, anything that has made its way into that memory subsystem would also be something we were conscious of. In other words, consciousness is just the set of things that go into that subsystem.

So is consciousness just the study of some particular memory subsystem and the way it interacts with other systems like language? And if we don't understand how memory works, can we understand anything about consciousness?

neuralreckoning, to random

A short argument for why the big publishers cannot be part of a publishing reform effort

Science is stuck in a vicious cycle it is hard to escape from. The decision to publish a scientific paper is made based on an evaluation of its likely importance and technical correctness. Scientists are evaluated based on these publication decisions, and resources (jobs, grants and promotions) are distributed accordingly.

The current system distorts scientific priorities. Science is incredibly competitive, resources are allocated on a short-term basis, and the primary metric used to evaluate scientists is their publication record. As a consequence, there is an unavoidable pressure to select problems and design studies that can lead to results likely to be favourably evaluated and published in the short term. This is in opposition to long-term scientific value, a fact that appears to be widely acknowledged by working scientists (https://www.vox.com/2016/7/14/12016710/science-challeges-research-funding-peer-review-process).

The current system is a vicious cycle and stable equilibrium. In principle, we could choose to evaluate scientists and their work in a better way. However, no individual or small group can do this alone. If an institution chooses to hire scientists who do work that they believe will be of enduring scientific value despite being unlikely to win short term grant funding, they will take a huge financial hit. Public research is under such severe resource constraints that this is simply not feasible for most institutions even if they wished to do so. Similarly, a public funding body that makes decisions based on long term scientific value and not short term publishability is likely to be able to count fewer high profile papers in their output, and compared to other funding bodies will appear to be underperforming when they are reviewed at the government level. Individual scientists have even less flexibility than these institutions.

Journal prestige cements this problem. It is the widespread availability of an easily calculated metric based on journal prestige that makes this cycle so hard to break. If there were no such metric, different groups could try different approaches and the effect would not be so obvious in the short term. The availability of the metric forces all institutions to follow the same strategy, and makes it hard to deviate from this strategy.

The majority of the big publishers' commercial value rests on their journal prestige. If there were no funding implications to publishing in one journal rather than another, scientists would be free to choose based on price or features. There are widely available solutions with better features at virtually no cost. Consequently, the entire business model of these publishers would collapse without the journal prestige signal.

Big publishers therefore cannot be part of the needed reforms. The success of these reforms would untie the evaluation of the quality of scientific work from the journal it is published in, and this would destroy the business model of these publishers. They will therefore do everything in their power to resist such reform.

Divorcing from the big publishers will not be enough. Journal prestige is the cement of the current negative stable equilibrium, but eliminating that will not guarantee a globally better system. We need systems for publishing and evaluating science that are diverse and under the control of researchers. This is what we intend to do with Neuromatch Open Publishing (https://nmop.io/).

neuralreckoning, to Neuroscience

Interesting article from the new neuro magazine The Transmitter:

https://www.thetransmitter.org/systems-neuroscience/why-and-how-we-need-to-professionalize-neuroscience/

I worry that anything that increases the participation costs of doing science will reduce diversity and slow progress. I wonder: if we had a research culture focussed on cooperation rather than competition, would this problem even exist?

fao @jonny

neuralreckoning, to random

My daughter doesn't like cake so tomorrow we're going to find out what happens when you try to serve her alternative idea at a birthday party of 6 and 7 year olds... Wish me luck and if you don't hear from me again, know that it was all worth it.

neuralreckoning, to science

Thought about hypothesis testing as an approach to doing science. Not sure if new, would be interested if it's already been discussed. Basically, hypothesis testing is inefficient because you can only get 1 bit of information per experiment at most.

In practice, much less on average. If the hypothesis is not rejected you get close to 0 bits, and if it is rejected it's not even 1 bit because there's a chance the experiment is wrong.
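
To put rough numbers on that, you can treat a single test as a noisy binary channel and compute the mutual information between "the hypothesis is true" and the reject / fail-to-reject outcome. The sketch below uses made-up but conventional numbers (50/50 prior, 80% power, 5% false positive rate), just to show that the average lands well under the 1-bit ceiling.

```python
import math

def entropy(ps):
    """Shannon entropy in bits of a discrete distribution."""
    return -sum(p * math.log2(p) for p in ps if p > 0)

def expected_bits(prior_true, power, alpha):
    """Mutual information (bits) between the hypothesis being true and the
    observed reject / fail-to-reject outcome of a single test."""
    joint = {
        ("true", "reject"):  prior_true * power,
        ("true", "accept"):  prior_true * (1 - power),
        ("false", "reject"): (1 - prior_true) * alpha,
        ("false", "accept"): (1 - prior_true) * (1 - alpha),
    }
    p_outcome = {o: joint[("true", o)] + joint[("false", o)] for o in ("reject", "accept")}
    h_prior = entropy([prior_true, 1 - prior_true])
    h_posterior = sum(
        p_o * entropy([joint[(h, o)] / p_o for h in ("true", "false")])
        for o, p_o in p_outcome.items()
    )
    return h_prior - h_posterior

# Made-up but conventional numbers: 50/50 prior, 80% power, 5% false positives.
print(f"{expected_bits(0.5, 0.8, 0.05):.2f} bits per experiment")  # ~0.48 bits
```

With a less generous prior (say only 10% of tested hypotheses are true) it drops to about 0.2 bits.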

One way to think about this is error signals. In machine learning we do much better if we can have a gradient than just a correct/false signal. How do you design science to maximise the information content of the error signal?

In modelling I think you can partly do that by conducting detailed parameter sweeps and model comparisons. More generally, I think you want to maximise the gain in "understanding" the model behaviour, in some sense.

This is very different to using a model to fit existing data (0 bits per study) or make a prediction (at most 1 bit per model+experiment). I think it might be more compatible with thinking of modelling as conceptual play.

I feel like both experimentalists and modellers do this when given the freedom to do so, but when they impose a particular philosophy of hypothesis testing on each other (grant and publication review), this gets lost.

Incidentally this is also exactly the problem with our traditional publication system that only gives you 1 bit of information about a paper (that it was accepted), rather than giving a richer, open system of peer feedback.

neuralreckoning, to random

If you're considering a life in academia it's worth watching this video and deciding if it's worth it to you or not. All of this is true.

https://www.youtube.com/watch?v=LKiBlGDfRU8

For me the answer is yes, despite all the problems, for two reasons.

Firstly, I'm lucky enough that I do have considerable freedom to work on the things that I'm interested in. If I were more interested in success, or if I were in a 'soft money' position and forced to chase constant grants, I don't know if that would be true. But such luck is rare.

Secondly, as a socialist I would feel very uncomfortable spending my creative energy on most of the non-academic things I'm qualified for: advertising and surveillance (i.e. tech companies), finance, or startups (making venture capitalists even richer). I could imagine academia getting bad enough that I'd make that choice, but for me it's not there yet. I completely understand that it is that bad for others and I mean no criticism of them.

In a way I suppose this is a sort of defence of academia, but it's a half hearted one at best. I think it's absolutely tragic and depressing that academia has become like this. Doing research should be one of the most joyful and creative things anyone could do with their lives.

neuralreckoning, to random

Logo for the "Neuroscience for machine learners" course - which do you prefer? (Poll in next post; can't have an image and a poll in one.) I'm concerned A & B look like "NeuroML" (which is a thing) rather than "neuro4ml".

neuralreckoning, to random

Scientists do a lot of measuring, quantifying and applying algorithms to make decisions. In their scientific work, they do this with a very critical approach to what is being measured, with high standards of evidence to justify the decisions. But when they apply this to themselves, ranking students, papers and grant applications, for example, they don't question the measures or demand any evidence at all. Indeed, many will actually dismiss what little evidence there is on the basis of intuition or anecdote. I really struggle to understand this. How can you be so skilled at applying critical analysis in one part of your job and not even try to do the same in another equally large part of the same job?

Let me give an example. One of the committees I am part of at my university is about diversity and inclusion. In order to be certified by Athena SWAN we have to write a report every few years, and part of that report is measuring and reporting the number of female students in our courses, applications versus acceptances, etc. We're required to monitor these numbers and understand why and how they are changing. I've seen a number of successful reports from our and other universities, and not one of them has done a statistical test on the numbers. They just report things like: the number of female students is lower than the number of male students, but it has increased compared to last year. Can you imagine if you wrote that in a paper and tried to make a scientific claim based on it? But major decisions about how the university is run are based on this sort of reasoning. And the extraordinary thing is that the Athena SWAN organisation that judges these reports doesn't ask for statistical analysis, provides no guidance on how to do it, and the lack of it has never been mentioned in any of the feedback I've read.
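
To make the missing step concrete, here is the kind of minimal check I mean, with made-up counts and scipy assumed as the tool: is a year-on-year rise in the proportion of female students actually distinguishable from chance?

```python
from scipy.stats import chi2_contingency

# Hypothetical enrolment counts: [female, male]
last_year = [38, 162]   # 19% female
this_year = [46, 154]   # 23% female

chi2, p, dof, expected = chi2_contingency([last_year, this_year])
print(f"chi2 = {chi2:.2f}, p = {p:.2f}")
# With counts of this size p comes out well above 0.05, so "it has increased
# compared to last year" is not yet evidence of a real change in the
# underlying proportion.
```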

This isn't an isolated example (I could list many more as can you all I'm sure), and it's not just limited to administration since it's true of student grades and peer review, which are pretty central to academic life.

I'm interested in thoughts on the psychology of why we do this and what we can do to change it, either by measuring more critically or perhaps by not measuring things that can't be meaningfully quantified and analysed.

neuralreckoning, to random

Can anyone explain something about university finances to me? The finance people say that grant overheads don't actually cover the cost of research. I would take this to mean that each grant is a net financial loss to the university. So why do they want us to get more grants?

neuralreckoning, to random

Early prototype of Mastodon thread viewer:

https://thesamovar.github.io/masto-thread-view/test.html

Just paste the URL of the thread into the box at the top and hit the "linear thread view" button below and it will give you a view of the thread with hierarchical replies sorted by how many engagements they got (reposts + favourites + replies).
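
The viewer itself is written in JavaScript, but here is a minimal Python sketch of the sorting it does, using the standard Mastodon context endpoint. It assumes the post is public, the instance allows unauthenticated reads, and the status id shown is made up.

```python
import requests

def fetch_sorted_replies(instance: str, status_id: str):
    """Fetch a public thread's replies and sort them by total engagement."""
    url = f"https://{instance}/api/v1/statuses/{status_id}/context"
    descendants = requests.get(url, timeout=10).json()["descendants"]

    def engagement(status):
        # Same score as the viewer: reposts + favourites + replies
        return (status["reblogs_count"]
                + status["favourites_count"]
                + status["replies_count"])

    return sorted(descendants, key=engagement, reverse=True)

# Usage (the status id here is made up):
# for s in fetch_sorted_replies("neuromatch.social", "109000000000000000"):
#     print(s["reblogs_count"] + s["favourites_count"] + s["replies_count"],
#           s["account"]["acct"])
```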

It's very early days so it doesn't yet show any images, the design is not ideal, not optimised for mobile, etc. But I already find this useful for getting a feel of big threads.

My aim here is to give people a better way to navigate overwhelmingly large threads and to allow for a sort of archive of interesting threads. If we want to make Mastodon into a viable option for having scientific debates (e.g. alternative to peer review), we need some way to make them more accessible to outsiders and to surface the most interesting and relevant content.

So I'm particularly interested in hearing suggestions for features or other ideas on how to display threads in the context of long lasting discussions with some permanence to them.

At the moment it's just a very simple idea, but I have other ideas for how to display threads that are a bit wackier and I'll add these as extra buttons as and when I work on this. I'm also going to see how feasible it is to make this into a bookmarklet so you can just hit the 'render thread' bookmark in your browser and open a tab with this. Should be straightforward.

If you're interested, please feel free to post suggestions and issues either here or on github: https://github.com/thesamovar/masto-thread-view

May be of interest to @NicoleCRust @jonny

neuralreckoning, to random

No gods, no masters.

neuralreckoning, to random

UX peeve: lamps that you have to tap repeatedly to adjust brightness, so that if you want them to get less bright you have to cycle through brighter first. Bring back clunky analogue switches. Touch interfaces are bad for everything except a phone.

neuralreckoning, to random

Asimov's first Foundation novel has a wonderful scene that I think prefigures the LLM arms race we're going through at the moment. A bunch of characters are using formal mathematical tools to analyse the meaning of verbose and seemingly eloquent political statements:

"That," replied Hardin, "is the interesting thing. The analysis was the most difficult of the three by all odds. When Hoik, after two days of steady work, succeeded in eliminating meaningless statements, vague gibberish, useless qualifications - in short, all the goo and dribble - he found he had nothing left. Everything canceled out."

neuralreckoning, to random

My university will hire a couple of undergrads to help turn my neuroscience course https://neuro4ml.github.io/ into an interactive textbook along the lines of @neuromatch academy https://compneuro.neuromatch.io. I'd like to try a little more though.

I'd like to write some extensions to JupyterBook so that I don't have to maintain separate slides but instead have everything integrated into one structure, including video recordings, so that you can be watching it like a lecture, pause, and already be at the code you can run, etc.

Anyone seen anything like this done before? Any tips? Any thoughts on how to do it? Features it should have? Be ambitious! In my head I'm calling this "textbook of the future" just to give you an idea of how grandiose you should be. 😉 Cc @choldgraf @rowancockett
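
One possible starting point, since JupyterBook sits on top of Sphinx: a custom directive that drops a lecture-video embed (with a start timestamp) right above the code cell it refers to, so the recording and the runnable code live in the same source file. This is only a rough sketch; the directive name, options and generated markup are invented, not an existing JupyterBook feature.

```python
from docutils import nodes
from docutils.parsers.rst import Directive, directives

class LectureVideo(Directive):
    """Hypothetical directive: embed a lecture recording, starting at a given
    offset, directly above the code it discusses."""
    required_arguments = 1                              # the video URL
    option_spec = {"start": directives.unchanged}       # e.g. start offset in seconds

    def run(self):
        url = self.arguments[0]
        start = self.options.get("start", "0")
        html = (f'<div class="lecture-video">'
                f'<iframe src="{url}?start={start}" allowfullscreen></iframe>'
                f'</div>')
        return [nodes.raw("", html, format="html")]

def setup(app):
    app.add_directive("lecturevideo", LectureVideo)
    return {"parallel_read_safe": True}
```

It would be registered under sphinx: extra_extensions: in the JupyterBook _config.yml like any other Sphinx extension.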

neuralreckoning, to Neuroscience

New (updated) preprint!

Defining modularity is hard: much history. We used toy ANNs to show that structural and functional definitions are not tightly related, that resource constraints are important, and that we need to start thinking about temporal dynamics.

🧵 with @GabrielBena

https://arxiv.org/abs/2106.02626

neuralreckoning, to random

How can we fix academic publishing? I just wrote a new article outlining my thoughts on this based on all the attempts I've seen, what has worked and what has failed, and finishing with the strategy we developed for @ScholarNexus. I'd love to hear your feedback!

https://thesamovar.github.io/zavarka/how-do-we-fix-publishing/

neuralreckoning, to random

Someone is proposing to delete the @briansimulator wikipedia article because the software is "not notable". Please do join the discussion if you are a wiki editor and feel that it is notable.

https://en.wikipedia.org/wiki/Wikipedia:Articles_for_deletion/Brian_(software)
