neuralreckoning
@neuralreckoning@neuromatch.social

I'm a computational neuroscientist and science reformer. I'm based at Imperial College London. I like to build things and organisations, including the Brian spiking neural network simulator, Neuromatch and the SNUFA spiking neural network community.


tdverstynen, to ai

Apparently, according to Blaise Aguera y Arcas, AGI is defined as the abilities that LLMs have; thus LLMs have AGI and it has arrived.

Maybe the folks working on it should study circular inference a little bit more.

neuralreckoning,

@manisha @tdverstynen I think we have to recognise that whether we like it or not, what these companies are doing is exciting to a lot of people. Yes, part of that is just hype and the excitement of being part of something that is in the news a lot, with household names that everyone knows, and the prospect of vast amounts of money. But part of it is the ideas too. The capabilities of these models - regardless of whether or not they should be described as AGI - are palpable to everyone. Of course I agree on the critiques and I'm terrified by the politics, but I don't think it serves us well to pretend there's nothing interesting going on and that everyone is just star-struck.

neuralreckoning,

@tdverstynen @manisha agreed, was more responding to Manisha's frustration (which I share) that so many neuro conferences are featuring these sorts of talks from people at big name tech companies these days.

neuralreckoning, to random

Scientists do a lot of measuring, quantifying and applying algorithms to make decisions. In their scientific work, they do this with a very critical approach to what is being measured, with high standards of evidence to justify the decisions. But when they apply this to themselves, ranking students, papers and grant applications, for example, they don't question the measures or demand any evidence at all. Indeed, many will actually dismiss what little evidence there is on the basis of intuition or anecdote. I really struggle to understand this. How can you be so skilled at applying critical analysis in one part of your job and not even try to do the same in another equally large part of the same job?

Let me give an example. One of the committees I am part of at my university is about diversity and inclusion. In order to be certified by Athena SWAN we have to write a report every few years, and part of that report is measuring and reporting the number of female students in our courses, applications versus acceptance, etc. We're required to monitor these numbers and understand why and how they are changing. I've seen a number of successful reports from our and other universities, and not one of them has done a statistical test on the numbers. They just report things like the number of female students is lower than the number of males, but it has increased compared to last year. Can you imagine if you wrote that in a paper and tried to make a scientific claim based on that? But major decisions which affect how the university is run are based on this sort of reasoning. And the extraordinary thing is that the Athena SWAN organisation that judges these reports doesn't ask for statistical analysis, provides no guidance on how to do it, and the lack of it has never been mentioned in any of the feedback I've read.
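
To make concrete what's missing, here is a minimal sketch (with invented intake numbers, not real data) of the kind of check these reports could run: a Fisher exact test from scipy, asking whether a year-on-year change in the proportion of female students is distinguishable from chance.

    # Illustrative numbers only: did the proportion of female students
    # really change between two intake years, or is this just noise?
    from scipy.stats import fisher_exact

    female_2022, male_2022 = 38, 82   # hypothetical 2022 intake
    female_2023, male_2023 = 45, 80   # hypothetical 2023 intake

    table = [[female_2022, male_2022],
             [female_2023, male_2023]]

    odds_ratio, p_value = fisher_exact(table)
    print(f"odds ratio = {odds_ratio:.2f}, p = {p_value:.3f}")
    # With cohorts this size, a rise from ~32% to 36% is well within what
    # chance alone produces (p is far above 0.05), so "the number went up
    # compared to last year" supports no conclusion either way.

Even a test this simple would change the conclusions drawn from many of the numbers in these reports.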

This isn't an isolated example (I could list many more, as I'm sure you all can), and it's not limited to administration: it's also true of student grades and peer review, which are pretty central to academic life.

I'm interested in thoughts on the psychology of why we do this and what we can do to change it, either by measuring more critically or perhaps by not measuring things that can't be meaningfully quantified and analysed.

neuralreckoning,

Incidentally I said all this at our meeting and now I'm in charge of coordinating our statistical advisory service to build a proper model. So I guess maybe I know why we don't do it. Most people would have correctly anticipated this and not said anything.

neuralreckoning,

@jonny so in this case the external group driving this was definitely created in the right way by people who genuinely wanted to change things for the better, but I do worry that over time it has lost its way somewhat and become more rigidly bureaucratic. Even now though they do avoid some of the problems you highlight. They're not bad, but the treatment of monitoring is problematic I think.

neuralreckoning,

@jonny absolutely agree. To some extent, though, I don't have the option you suggest. We have to have this certification because otherwise there's a whole raft of grant funding we're not able to apply for, and collecting these stats is one of the requirements. So my options are to leave the committee, in which case someone else has to do it, or to stay and try to make something useful come out of it. I'm not sure that improving the stats is the best use of my time, but on the other hand, what I suspect will come out of a more rigorous model is that most of the numbers we're currently interpreting as signal we have to act on are actually just noise. So improving the statistical model could help us focus our time on the things that actually matter.

neuralreckoning, to random

Times Higher Education (THE) has an article about paying peer reviewers because of high publisher profits and falling academic wages. I haven't read it because of the paywall, but this seems like a weird argument. I can't think of a less efficient way to increase wages and decrease profit than filtering more money through one of the highest-profit-margin industries in the world. Of course, if publishers paid reviewers they would put prices up, not just accept lower profits.
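
To put rough numbers on that (illustrative, and assuming the roughly 35% operating margin often cited for the largest publishers is held constant): every £1 of new reviewer payments would have to be recovered as £1 / (1 - 0.35) ≈ £1.54 of extra subscription or APC revenue. Routing money to academics through publishers is strictly more expensive than paying them directly.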

neuralreckoning,

@jonny guffaws all round and they crack open the cigars and spirits.

tdverstynen, to random

Normalize punching pharmabros at every opportunity.

neuralreckoning,

@alexh @tdverstynen the very rich will always find a way to preserve some unfair advantage for their children. Doesn't mean we have to make it easy for them.

neuralreckoning,

@alexh @tdverstynen private tutoring to gain education would be fine, but the main use is to gain advantage in exams and therefore future life chances. That's not fine. The best way to address this though is not to ban private tutors but to improve free education. Until recently relatively few people in the UK bothered with private healthcare because the free option was not only free but actually better (evident in the statistics).

neuralreckoning,

@alexh @tdverstynen but also, the idea that there shouldn't be rich people who can have a better life than everyone else, with more opportunity for cultural enrichment and physical activity, is not absurd. It's socialism. It has quite wide support.

neuralreckoning,

@elduvelle @alexh @tdverstynen fortunately it's masto so I can just edit my post and make yours look weird.

jonny, (edited) to random

One thing that sucks about reviews being so broken and a vector of domination rather than cooperation is that, in the best case, they can be skillshares as much as anything else. In some code reviews I have given and received, I have taught and learned how to do things that I or the other person wished they knew how to do, but didnt.

That literally cant happen in the traditional model of review, where reviews are strict, terse, and noninteractive. Traditional review also happens way too late, when all the projected work is done. Collaborative, open, early review literally inverts the dreaded "damn reviewers want us to do infinity more experiments" dynamic. Instead, wouldnt it be lovely if, during or even before you do an experiment, you had a designated person to be like "hey have you thought about doing it this way? If not i can show you how"

The adversarial system forces you into a position where you have to defend your approach as The Correct One and any change in your Genius Tier experimental design must be only to validate the basic findings of the original design. Reviewers cannot be considered as collaborators, and thus have little incentive to review with any other spirit than "gatekeeper of science."

If instead we adopted some lessons from open source and thought of some parts of reviews as "pull requests" - where fixing a bug is partly the responsibility of the person who thinks it should be done differently, but they then also get credit for that work in the same way the original authors do - we could
a) share techniques and knowledge between labs in a more systematic way,
b) have better outcomes from moving beyond the sole genius model of science,
c) avoid a ton of experimental waste from either unnecessary extra experiments or improperly done original experiments,
d) build a system of reviewing that actually rewards reviewers for being collegial and cooperative

edit: to be super clear here i know i am not saying anything new, just reflecting on it as i am doing an open review

neuralreckoning,

@jonny yes absolutely! It's amazing how much of science as it is practiced is the exact opposite of how it should be, and would be if we didn't try to pretend it was a competitive capitalist marketplace.

NicoleCRust, to random

If you were to recalibrate, what would you do?

I always suspected I would do something like study those amazing desert ants that navigate via the earth’s magnetic field. But when thinking through the question “How do you want to spend the next 10 years?” more seriously (pretending there are few constraints), that’s not where I actually point myself.

Acknowledging that it's a tremendously privileged (and emotional) thought experiment: what would you do with your next 10 years, assuming that thing needs to be useful enough that it's reasonably supported (and you would continue to get a paycheck)?

neuralreckoning,

@NicoleCRust if I didn't have children and so could afford to get arrested, I'd get seriously involved in climate activism. If I was just doing what I find fun, it would probably be something to do with photography or coding up experimental ideas in interactive media.

jonny, to random

Ok exercise heads I have been doing it but now I have too much energy and I am like crawling out of my skin, this has backfired

neuralreckoning,

@NicoleCRust @jonny after running I'm usually physically tired but quite often my brain is buzzing. Problem for me because I usually can only go after the kids' bedtime.

neuralreckoning, to random

What's missing to replace our current publishing system? What technical and social components do we need to build? My first suggestions below, but I'd like to hear feedback from others.

  • An easy (frictionless) and flexible way to edit and submit documents that can be permanently referenced and that you feel confident will stay accessible forever
  • An easy and semantically rich way to link between these documents (e.g. document A is a review of document B)
  • A way to view these documents that surfaces and highlights relevant related content (e.g. listing and summarising reviews, comments, related papers)
  • A way to automatically convert documents into any standard format (HTML, Word, LaTeX, PDF, ...) so that the system can co-exist with existing workflows (the legacy journal system for example)
  • A database storing all this data that isn't owned by a single institution, either commercial or public, but that is distributed or duplicated across all the universities and libraries of the world. A way for these research institutions to democratically decide which organisations can submit data into the database.
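
To make the first two components concrete, here is a minimal sketch (all names hypothetical, not an existing system) of what a permanently referenceable document with semantically typed links to other documents might look like:

    # Hypothetical data model: documents with permanent identifiers and
    # typed links, e.g. "document A is a review of document B".
    from dataclasses import dataclass, field
    from enum import Enum

    class LinkType(Enum):
        REVIEW_OF = "review_of"
        COMMENT_ON = "comment_on"
        REPLICATES = "replicates"
        CITES = "cites"

    @dataclass(frozen=True)
    class Link:
        link_type: LinkType
        target_id: str            # permanent identifier of the target document

    @dataclass
    class Document:
        doc_id: str               # permanent, e.g. content-addressed, identifier
        title: str
        body: str
        links: list[Link] = field(default_factory=list)

    paper = Document(doc_id="doc:abc123", title="Some result", body="...")
    review = Document(doc_id="doc:def456", title="Review of 'Some result'",
                      body="...", links=[Link(LinkType.REVIEW_OF, paper.doc_id)])

A viewer (the third component) could then surface everything whose links target a given paper. The hard parts are the permanence and governance guarantees, not the data model.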

Edited to add: not interested in a conversation about whether or not we need the existing publishing industry. That argument is settled for me and the question I'm interested in in this thread is how to change things assuming we want to.

neuralreckoning,

And just to elaborate on that. I think that if we had these components, many communities could build additional social and technical layers on top of this that could do everything that different academic fields would need. It would enable low cost experimentation.

neuralreckoning,

@adredish @tdverstynen @knutson_brain I'm also more fond of the idea of breaking papers down into smaller units published more frequently than of constructing a mega-paper over time, but... why not both? We really don't have to choose. There doesn't need to be a single right way. I might even use both approaches myself at times if I really had the freedom to do what I thought was most appropriate for any given bit of work.

danielbingham, to random

Two years ago, I started a journey into academic publishing. I imagined using a reputation system to crowdsource peer review and replace the journals.

Two years later, I don't think review can be crowdsourced.

https://theroadgoeson.com/crowdsourcing-peer-review-probably-wont-work

neuralreckoning,

@danielbingham thanks for writing this Daniel - it's really valuable to hear your insights after spending so much time on this.

My take on your diagnosis of the problem is that we don't need a "peer review" system; we need frictionless ways to publish and frictionless ways to highlight and surface related content. One of those ways, the closest to peer review, would be connecting papers to commentaries that highlight problems with them.

This also solves the 'bad actor' problem. If we're not trying to tie evaluation into this system but just connect scientists to work they might want to know about, the bad actor problem is not so severe.

You're absolutely right about the chicken and egg problem though. I think the only way to solve it is to build helpful systems that can temporarily live alongside the existing system but ultimately supplant it. Not an easy thing to achieve though.

neuralreckoning,

@jonny @danielbingham to be fair to Daniel, he put a lot into this and I think he should really be praised for that. But I agree the lesson is not to give up but to learn and do it even better next time.

neuralreckoning, to random

I've been thinking a lot about how we could have a non-hierarchical science, and one idea has crystallised.

The way science is done now, senior scientists have a lot of decision-making power: which papers get published, which grants get funded, who gets hired. This introduces a hierarchy and concentration of power that has both social problems (bias, well-documented potential for abuse of trainees) and scientific ones (ideas that challenge old ways of thinking have a much harder time than they should).

However, I wouldn't want to entirely eliminate the collective expertise of senior scientists. It's always amazed me just how well some of them can cut through nonsense and see to the heart of an issue. I distinctly remember enthusiastically going to one of my postdoctoral advisors to talk about my latest complicated modelling idea and getting the response "yeah you could do that but what would it tell us about X?" and realising that they were completely right. I avoided months of fruitless work thanks to that one ten minute conversation.

But do they need to have decision making power to do that? I don't think so. We should give decision making power to junior scientists: they should decide what ideas they work on, how to carry out their research, where to do it, who to collaborate with, and what to publish. The additional role of senior scientists is to give the junior scientists advice, which those junior scientists are entirely free to ignore. You don't need to force people to listen to advice. If the advice is good, freely given and not binding, people will seek it out. And there's no reason it has to only be senior scientists who are in this advice giving role, and no reason that as a senior scientist you need to be in this role if you don't want to be.

This inverts the power dynamics in a really progressive way. With this approach, there's no way to impose your idea of how science should be done on anyone, instead you have to persuade them. This is exactly how it should be. By placing arbitrary authority at the heart of science we've made it unnecessary for established ideas to argue for their value, because the holders of those ideas can just deny publication, grants and jobs to those who disagree. Why bother arguing when you can do that?

An obvious follow-up question is: OK, but then how do you allocate funding? It's a good question and one I'm happy to discuss ideas about. But it's not a case of us having a good answer already and needing a strong argument for an even better way. The current system is a hierarchy whose very nature is contrary to the basic values of science. I suspect almost any alternative would be better. Personally, without a clear winner in mind, I suspect the best approach would be heterogeneous: let's try out different ideas and see what works instead of all the countries in the world converging more or less on variations of this same basic formula.

neuralreckoning,

@albertcardona you can try to do it this way (and it's a good thing), but it has limits. Firstly not everyone does. Lots of lab heads treat the more junior people in their lab as their employees to give orders to. But even if you try to be a benevolent dictator, everyone is still aware who has the power, no? Assuming you're not able to give lifetime jobs to people, they're going to be relying on you for many things, like job renewal, or championing them in terms of publications, introductions and references for future jobs, etc.

neuralreckoning,

@GunnarBlohm yes! But also getting rid of the decision making power. That's key.

neuralreckoning,

@UlrikeHahn @GunnarBlohm well let's put it this way. Those decisions are coming from somewhere, and it's not from below.

jonny, to random

I never want to get to a point where I have to turn down random meetings with people asking for help with stuff. I take a meeting or two a week thats just like "how do I do this thing" or "how are you thinking about this" and they are often my favorite part of the (work) week. I have seen people on here host "open office hours" in a more structured way where they set aside an hour or two just for helping ppl with stuff, and I love that.

Thats one of those big hidden harms of overworked profs having to tend grants all the time, all that accumulated wisdom and no time to share it.

neuralreckoning,

@jonny for the last year at least I've been wanting to set up a regular global office hour where I'm in a zoom meeting with a public link and anyone can come to talk to me about anything. How to solve a problem with the Brian simulator. What I meant in one of my online lectures. Thoughts on ideas about spiking neural networks. Advice on applying for PhDs. Plans for the anarchist revolution. Whatever. I love the idea. I just haven't had the stability in my life to actually do it. Kids, work deadlines, ... I will try to make it happen though, when I can.
