neuralreckoning
@neuralreckoning@neuromatch.social
I'm a computational neuroscientist and science reformer. I'm based at Imperial College London. I like to build things and organisations, including the Brian spiking neural network simulator, Neuromatch and the SNUFA spiking neural network community.

neuralreckoning, to random

So what would happen if someone just started fake-signing documents they didn't actually want to sign? Presumably this wouldn't actually work in letting them later claim they didn't sign it? I'm thinking of someone slightly illegibly scribbling "signed under duress": illegible enough not to be obvious at a glance, but legible enough to be clear on close inspection.

Just asking a question btw, not suggesting that I or anyone else should do this, obviously.

jonny, to random

Reading this: https://pubs.acs.org/doi/10.1021/acs.chemmater.4c00643
The paper that @404mediaco talks about here: https://www.404media.co/google-says-it-discovered-millions-of-new-materials-with-ai-human-researchers/

And it's amazing how the one true function of "AI" across domains is to generate a bunch of plausible garbage that falls apart on closer inspection. "Sure they generated a bunch of chemical formulae... that are just regular compounds written weirdly, could only exist at 0K, or use elements that do not appear in usable quantities in the universe."

You'd think there would be some wins in there even by accident, but it turns out the thousand monkeys at a typewriter approach also needs a corresponding million monkeys to sift through the trash.

neuralreckoning,

@jonny @404mediaco it keeps happening to these DeepMind papers and the hype continues.

neuralreckoning, to random

Someone is proposing to delete the @briansimulator wikipedia article because the software is "not notable". Please do join the discussion if you are a wiki editor and feel that it is notable.

https://en.wikipedia.org/wiki/Wikipedia:Articles_for_deletion/Brian_(software)

neuralreckoning,

@elduvelle @briansimulator that's part of my argument!

adredish, to random

It's really unfortunate that reviews are seen as competitive and confrontational. I want my reviews to be suggestions to help the authors.

Because of that, I don't want the reviews I write made public. Reviews are for the editor to assess the author's work and for the author to make the work better. They are not commentary for the public to understand the work.

Fixing the paper is the author's job. Not mine. I want to be able to say "There was a paper a long time ago by someone like XYZ who you should go read." Or "There's a whole literature here by people like ABC to go engage with." Finding that literature is the author's job. Not mine. I want to be able to say "what if X is not linear?" or "what if Y happens?". I don't want to have to work out the math myself. That's the author's job. Not mine. None of this should be public.

Moreover, I need a process to say "I have the following concerns with my own review. I have this knowledge and not that. So you can trust me here, but not there." (Which deanonymizes the review, BTW.) I need a process where I can say to the editor, I'm concerned about A, but it's not my knowledge-base, or I think they should fix B, but it's not that important if they don't. Deciding what of that to communicate to the authors is the editor's job. Not mine.

Given the current ecosystem of publishing reviews and of preventing me from providing context to the editors separate from information sent to the authors, I am very seriously considering turning down all review requests from now on. (But it seems unfair to add publications to the ecosystem while not doing my part to review them.)

😠​ Frustrated.

neuralreckoning,

@adredish Disclaimer: I'm going to say something slightly extreme that I'm not 100% sure I agree with or not yet because I'm thinking out loud here.

If you thought your reviews were suggestions to the author to help them, then you were being tricked all along.

If authors want suggestions, they ask their colleagues who give them private feedback. A review cannot and never could be that. In a competitive process, you simply cannot give helpful suggestions as part of an evaluation. This is just false, and everyone on the receiving end knows it. Anything that is less than stellar and positive about the paper as it is makes it less likely the paper gets accepted. We have to stop promoting this false idea about reviews, and ideally we have to stop reviewing in this way.

Incidentally, it's the same thing that is wrong with annual appraisals. I don't know about your institution, but mine asks us to say what went well and badly in the last year and to set targets for the next year. They say it's part of a conversation designed to help our professional development. But they also use this feedback from the people involved in this "conversation" in our promotion process. So guess what? Everyone treats it as the bullshit that it is. They lie. They don't talk about the things that really didn't go well, or their worries and anxieties and the things they need help with. And if they are honest, they are more likely to be passed over for promotion in favour of those who lie, who end up running the institutions.

We have to stop engaging with these fundamentally dishonest processes.

neuralreckoning,

@knutson_brain @adredish but it is a competition nonetheless. Not every paper gets published and your reviews are a key part in determining which do and which don't. Not everyone gets to stay in science, and which papers get published is a key part in determining that. I think we all have to remember when we're writing reviews that this is what we're doing. And we don't have to. If you want to do cooperative review, announce publicly that you'll give private feedback on manuscripts if people send them to you. But don't review for competitive journals if you're not comfortable with the role of determining who gets a paper and who gets to stay in science, because that's what reviewing is about.

neuralreckoning,

@RossGayler @knutson_brain @adredish fair enough! Although not sure I see the need for a formal process in that case.

neuralreckoning, to random

If you're considering a life in academia, it's worth watching this video and deciding whether it's worth it to you or not. All of this is true.

https://www.youtube.com/watch?v=LKiBlGDfRU8

For me the answer is yes, despite all the problems, for two reasons.

Firstly, I'm lucky enough that I do have considerable freedom to work on the things that I'm interested in. If I were more interested in success, or if I were in a 'soft money' position and forced to chase constant grants, I don't know if that would be true. But such luck is rare.

Secondly, as a socialist I would feel very uncomfortable spending my creative energy on most of the non-academic things I'm qualified for: advertising and surveillance (i.e. tech companies), finance, or startups (making venture capitalists even richer). I could imagine academia getting bad enough that I'd make that choice, but for me it's not there yet. I completely understand that it is that bad for others and I mean no criticism of them.

In a way I suppose this is a sort of defence of academia, but it's a half hearted one at best. I think it's absolutely tragic and depressing that academia has become like this. Doing research should be one of the most joyful and creative things anyone could do with their lives.

neuralreckoning,

@NicoleCRust I don't claim that everything that happened to her will or has happened to everyone, of course, but this stuff does happen, and to be honest my impression, from my own experiences, from what I've been told by people I know, and from accounts I've read from people I don't know, is that it happens a lot. I don't think hers is actually an extreme case, and I think a lot worse happens too. What I don't want to do is give the impression that it's worse in academia than outside. I don't know enough to say whether that's true or not, and I've heard some pretty hair-raising stories from outside academia too, so a priori I've no reason to think it's better and wouldn't be at all surprised to find out it's worse.

neuralreckoning,

@elduvelle @NicoleCRust agreed that it's really better to talk about this stuff, both to potential researchers and members of the public.

neuralreckoning,

@NicoleCRust @elduvelle the tax one is unfortunate because it's a freebie for the right, who are quite happy to make academia even worse or destroy it entirely. It's true that most research is wrong and even wrongly conceived, but I'd argue that's a necessary and unavoidable part of doing science. The other two points are kind of reasonable, I think. Science isn't just those things, but it is partly those things. A significant part.

neuralreckoning, to random

Fellowship opportunity for ECRs <4y post-PhD. Engineering. 5 year funding. Significant advantage given to people from "underrepresented groups" (see below). Internal deadline of May 6. Comp neuro has done well recently in our dept. Email me if interested.

https://raeng.org.uk/research-fellowships

"The Academy has identified the following groups that are currently clearly significantly underrepresented in UK engineering research:
• Women
• Black people, including those with any mixed ethnicity with Black ethnic background(s)
• Disabled people"

The application process is that candidates need to apply initially to our department, who will select up to 2 applicants to submit to the university as a whole, who select up to 4 applicants (at least 2 from underrepresented groups) for the national competition.

This is an engineering post so it can't be pure neuroscience, but we've had good success recently getting engineering fellowships for @marcusghosh (multimodal processing in the brain with possible applications) and @danakarca (spatially distributed spiking neural networks).

neuralreckoning, to random

Is anyone else feeling uncomfortable about the increasing punching-down aspect of science sleuthing? Someone posted a list of all suspected undeclared uses of ChatGPT referenced on PubPeer, and it felt like the majority were just researchers at less wealthy universities who clearly didn't speak English very well using it to translate.

tdverstynen, to random

Today I discovered that there are certain "high profile" professors who are regularly courted for (and strategically accept) coauthorship on papers, with minimal effort or work, solely to increase the likelihood that the paper will be accepted at a top tier journal.

This is basically the Trump business model applied to science.

neuralreckoning,

@jonny @tdverstynen yeah but it's the logic of the system really. Inevitable that this would happen the way we've set things up.

neuralreckoning, to random

Advice to designers of bureaucratic forms: if you don't want to waste your and your colleagues' time reading LLM-generated bullshit, don't have a minimum word count. (And apologies to the people whose job is to read these things but who didn't design them.)

jonny, to random

Watching this "on the Turing completeness of mov in x86 assembly"
https://youtu.be/R7EEoWg6Ekk

And returning to the thought that neuroscience really needs to be much more deeply in conversation with computer scientists, and especially reverse engineers, because all the questions we have about the representations of computation, given incomplete, external information, are exactly the same form of problem they deal with every day.

We are still at the level of correlating binned spike counts to external variables, and don't really consider the instruction set or the nature of computation in neural circuits. Traditional "decoding" analysis makes strong assumptions about the instruction set (i.e. that time-binned activity is the unit of computation) that are rarely if ever interrogated.

Manifold modeling is much closer but still doesn't have a way to parse fundamental units of computation. State transitions are almost too general to capture it, similar to how turning everything into mov makes the underlying program logic much more obfuscated.

There is extremely fruitful collaboration and cross-pollination to be had between computer types and neuroscience types that isn't the argument about isomorphisms between ANNs and biological neural nets that takes up all the oxygen. The real juice in transdisciplinary computational neuroscience is in collaborations with reverse engineers, cryptographers, and computability and type theorists.
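
As a concrete illustration of the "traditional decoding" being questioned here, a minimal sketch on synthetic data (all numbers and the linear-tuning setup are hypothetical, chosen only to show the method): spike counts are binned per trial and a linear readout of a stimulus variable is fit by least squares. The choice of the time bin as the unit of computation is exactly the unexamined assumption.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical synthetic data: 20 neurons whose firing rates depend
# linearly on a 1-D stimulus variable, observed over 200 trials.
n_neurons, n_trials = 20, 200
stimulus = rng.uniform(-1, 1, n_trials)
weights = rng.normal(0, 1, n_neurons)
rates = 10 + 5 * np.outer(stimulus, weights)      # Hz, shape (trials, neurons)
counts = rng.poisson(np.clip(rates, 0, None))     # binned spike counts

# "Traditional decoding": a linear readout of the stimulus from
# time-binned counts, fit by ordinary least squares.
X = np.column_stack([counts, np.ones(n_trials)])  # add intercept column
coef, *_ = np.linalg.lstsq(X, stimulus, rcond=None)
decoded = X @ coef

# How well the binned counts "decode" the stimulus, as a correlation.
r = np.corrcoef(decoded, stimulus)[0, 1]
print(f"decoding correlation: {r:.2f}")
```

Note that this analysis is agnostic about everything happening within a bin; spike timing, neuron-to-neuron interactions, and any actual "instruction set" are invisible to it by construction.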

neuralreckoning,

@jonny the world needs this. Make it happen somehow.

jonny, to random

Is there like a standard process for like "hey I noticed you made exactly the same argument as me using the same primary refs without citing me after we talked about this a bunch of times, would you mind adding a citation"

I don't like to think I "own" any ideas, and obviously it could be coincidence from working in the same space, but it does get tiring because it happens to me all the time, I think partially because I self-publish and institutionally brainwashed people don't think that "counts," and it would be nice to feel respected by people I respect.

neuralreckoning,

@jonny ugh I'm sorry, I hear about this happening so often. 😞 I suspect it's a function of publishing pressure: people don't want to give reviewers and editors any reason to reject for not being "novel". Depending on the career status of the person doing it, it could be grimy careerism or simply fear and precarity. Stupid publishing system turning us all against each other.

NicoleCRust, to random

Nope. Delete. Sticking to the policy. If Nicole can’t say something nice, Nicole just doesn’t …

(Of course, you do you!)

neuralreckoning,

@NicoleCRust no idea what it's about but it's a good policy! 🙂

neuralreckoning, to random

What if the reason that publishers are unwilling to make review processes public is that if we knew what the average peer review process looked like, we might not think it was such a sacred and essential part of science?

neuralreckoning,

@jonny or possibly better if you pay nothing...

neuralreckoning, to random

Asimov's first Foundation novel has a wonderful scene that I think prefigures the LLM arms race we're going through at the moment. A bunch of characters are using formal mathematical tools to analyse the meaning of verbose and seemingly eloquent political statements:

"That," replied Hardin, "is the interesting thing. The analysis was the most difficult of the three by all odds. When Hoik, after two days of steady work, succeeded in eliminating meaningless statements, vague gibberish, useless qualifications - in short, all the goo and dribble - he found he had nothing left. Everything canceled out."

neuralreckoning,

@albertcardona it's funny, it stuck in my memory from the first time I read it when I was 14 or 15.

neuralreckoning,

@albertcardona oh yeah, many times. Although I re-read them quite recently and I have to say I didn't enjoy them as much. The complete absence of female characters, the terrible dialogue. But the ideas are great.

RossGayler, to machinelearning

Most of the Artificial Neural Net simulation research I have seen (say, at venues like NeurIPS) seems to take a very simple conceptual approach to the analysis of simulation results: just treat everything as independent observations with fixed-effects conditions, when it might be better conceptualised as random effects and repeated measures. Do other people think this? Does anyone have views on whether it would be worthwhile doing more complex analyses, and whether the typical publication venues would accept them? Are there any guides to appropriate analyses for simulation results, e.g. what to do with results from multi-fold cross-validation? (I presume the results are not independent across folds because they share cases.)

@cogsci #CogSci #CognitiveScience #MathPsych #MathematicalPsychology #NeuralNetworks #MachineLearning
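
The non-independence Ross describes can be sketched in a few lines. In this hypothetical setup (all numbers invented for illustration), two models are scored on the same folds, so fold difficulty is a shared random effect: a test that treats the scores as independent samples wastes that structure, while a paired (repeated-measures) test removes it.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)

# Hypothetical: two models evaluated on the same 10 cross-validation folds.
# Fold difficulty varies (a shared random effect), so the two score columns
# are correlated rather than independent samples.
n_folds = 10
fold_difficulty = rng.normal(0.80, 0.05, n_folds)            # shared across models
model_a = fold_difficulty + rng.normal(0.00, 0.01, n_folds)
model_b = fold_difficulty + rng.normal(0.02, 0.01, n_folds)  # slightly better

# Treating folds as independent observations leaves the shared fold
# variance in the denominator...
t_ind, p_ind = stats.ttest_ind(model_a, model_b)

# ...while pairing by fold cancels it out, gaining power.
t_rel, p_rel = stats.ttest_rel(model_a, model_b)

print(f"independent t-test p = {p_ind:.3f}")
print(f"paired t-test      p = {p_rel:.3f}")
```

With these made-up numbers the paired test gives a far smaller p-value for the same data, because the fold-to-fold variance dominates the within-fold difference. (This sketch also ignores the further issue Ross raises, that folds share training cases and so aren't fully independent of each other either.)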

neuralreckoning,

@jonny @RossGayler @cogsci I'm very ignorant of statistics, but yeah I agree ML publications are usually pretty poor on this.
