
marcwhoward

@marcwhoward@neuromatch.social

Theoretical cognitive neuroscience
Boston University
Time, space, cognition


marcwhoward, to random

I really like Tony Lindeberg's work. If I had read the old stuff when it came out (he was doing logarithmic Laplace spaces in the mid-1990s), we could have saved quite a lot of time. A year at the chalkboard can save you a day in the library, I guess.

[2405.00318] Covariant spatio-temporal receptive fields for neuromorphic computing
https://arxiv.org/abs/2405.00318

marcwhoward, to random

It's interesting to me how photos of water look completely different from water.
In real life, the visual appearance of the water in this picture was inseparable from its motion.

marcwhoward,

@jonny it's the same ice actually---maybe ten feet away---just at different times.

marcwhoward, to random

Pretty delighted about this being up on bioRxiv. Rui Cao and Ian Bright (neither is on here as far as I know) did a great job...

The brain coded for both the time of past events and the time of future events with exponential ramps spanning a continuum of time scales. That is, a Laplace transform of the past and a Laplace transform of the future.
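To make the representation concrete, here's a minimal numpy sketch (my own illustration, not the paper's analysis code; the decay rates and time step are arbitrary toy values): a bank of exponentially decaying units, each holding the real Laplace transform of the stimulus history at its own rate s.

```python
# Minimal sketch (illustration only): a bank of leaky integrators whose decay
# rates s tile a log-spaced set of time scales. Each unit holds the real
# Laplace transform of the input's past,
#   F(s, t) = integral of f(t') * exp(-s * (t - t')) dt'.
import numpy as np

dt = 0.01                          # time step (s); toy value
s = np.geomspace(0.1, 10.0, 50)    # decay rates: time constants 1/s span two decades
F = np.zeros_like(s)               # Laplace-domain memory of the past

def step(F, f_t):
    """One Euler step of dF/dt = -s * F + f(t)."""
    return F + dt * (-s * F + f_t)

# Drive with a brief event at t = 0 and watch the exponential ramps afterwards.
T = int(5.0 / dt)
trace = np.empty((T, len(s)))
for i in range(T):
    F = step(F, 1.0 if i == 0 else 0.0)
    trace[i] = F

# Each column of `trace` decays as exp(-s * t): exponentially ramping activity
# spanning a continuum of time scales, as described for the mPFC cells.
```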

Ramping cells in rodent mPFC encode time to past and future events via real Laplace transform | bioRxiv
https://www.biorxiv.org/content/10.1101/2024.02.13.580170v1.abstract?%3Fcollection=

marcwhoward, to random

Treating evaluations---grants, papers, apps---as if they are output from a (sometimes malevolent) random number generator is one of the adaptations that enables humans to function in academic science. Even when things are going well there is a lot of rejection. Important not to take any of that personally.

marcwhoward, to random

Autocorrect on my phone just changed "Dayan" to "Satan". How's your day going?

marcwhoward, to random

Patterns in warming ice often look like neurons.

marcwhoward,

@jonny no treatment of any kind. I take a lot of pictures then throw out the bad ones.

DrYohanJohn, to Neuroscience

"However, when similar flight paths are compared across conditions, the stability of the hippocampal code persists."

Doesn't this mean that "place cell" is a misnomer (for such firing patterns)? They're really "place+trajectory" cells. Shouldn't a "true" place cell be path-invariant?

Credit-assigning on such invariant reps is presumably needed somewhere in the system in order to drive flexible navigation, no?

https://www.nature.com/articles/s41586-022-04560-0

marcwhoward,

@DrYohanJohn @WorldImagining
Eichenbaum's Law* states, "If you do an experiment to determine whether neurons in the hippocampus code for task-relevant variable X or Y, the answer is always yes."
There are neurons tiling distance and neurons tiling time and neurons coding distance x time. (Same for many other variables.) If you want to think of the whole data structure, you're forced to think of a pretty complex object. We talk about "place cells" or "time cells" (and so on) because it's difficult to think about that object (and neuro people hate math). I don't think anyone seriously thinks "place cells" are a real thing separate from other variables (Howard certainly didn't); it's just a name that makes it easier to talk about experiments.

* As far as I recall, Howard never stated Eichenbaum's Law, but he also did not object when I stated it and attributed it to him, so I think it's OK.
marcwhoward,

@DrYohanJohn @WorldImagining

There are path-invariant place cells for positional credit assignment, time cells for temporal credit assignment, etc. But I agree that this approach seems like it would be fraught with challenges and, in any event, doesn't really draw on the power of this kind of representation.

Maybe feature-based isn't the way to go. Maybe we shouldn't think about the individual cells and instead think about the relationships that are coded across the whole population by this rich, continuous, multidimensional data structure.

This paper is an attempt to approach this problem thinking about temporal credit assignment. In principle one could generalize to other continuous dimensions.

https://arxiv.org/abs/2302.10163

marcwhoward, to random

I did a literal spit take when I read this:

"There's no way to get there without a breakthrough ... It motivates us to go invest more in fusion."
-Sam Altman, Jan 2024

OpenAI CEO Altman says at Davos future AI depends on energy breakthrough | Reuters
https://www.reuters.com/technology/openai-ceo-altman-says-davos-future-ai-depends-energy-breakthrough-2024-01-16/

marcwhoward,

@jonny
The good news is that fusion is ten years away (and has been since the 1970s)!

marcwhoward, to random

Via Scott Brown, invitation to submit to a special issue of CB&B that Sarah Marzen is putting together (I'm helping co-edit).

Computational Brain and Behavior is holding a Special Issue on Sensory Prediction: Engineered and Evolved, with a submission deadline of August 1, 2024. Papers will be published as they come; simply navigate to https://link.springer.com/journal/42113, click "Submit your manuscript", and note that you'd like to be considered for this Special Issue during the submission process. Please consider submitting! Details below. The Guest Editors for this special issue are Sarah Marzen, James Crutchfield, Jared Salisbury, and Marc Howard.

Engineers try to predict future input from past input; this can take the form of prediction of natural video, natural audio, or text, which has famously led to such products as Generative Pre-trained Transformer 3 (GPT-3) and proprietary algorithms for stock market prediction. Organisms and parts of organisms may have evolved to efficiently predict their input as well, and the hypothesis that they do is a cornerstone of theoretical neuroscience, theoretical biology, and cognitive science. How one can design systems to predict input is still a matter of debate, especially when one has continuous-time input—input that has a state at every point in time, not just at specially sampled points. We aim to bring together research that approaches the question of how to design systems to predict input through the lens of biology with machine learning, information theory, and dynamical systems. This knowledge will help establish a foundation of theoretical neuroscience and theoretical biology to enable the scientific community to better calibrate and understand prediction products.

We hope that this Special Issue will cover a wide range of topics. Some papers can examine evolved neural systems, including the study of neurons in the retina, hippocampus, visual cortex, and striatum, as well as longstanding learning theory from mathematical psychology. Others can study how to engineer systems to better predict input, through reservoir computing and through training recurrent neural networks in which the reservoir weights are trained as well. These are just examples of what we might hope to solicit.

marcwhoward, to random

Really happy to see this again (looks like it's been revised and hopefully going to get published).

https://doi.org/10.1101/2022.12.01.518703

For the last 20 years there have been two more or less independent streams of computational cognitive models for episodic memory (EM): Temporal Context Models (TCM) and Complementary Learning Systems (CLS). They've both been influential in cog neuro and in animal neurophys, but they are actually kind of contradictory. The main feature of TCMs---gradually drifting context---breaks the main feature of CLS---autoassociative, hippocampus-dependent recall. Most of us (Anna Schapiro being a notable exception!) have just kind of avoided this issue.
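To make the tension concrete, here's a toy numpy sketch of the TCM side (my own simplification, not any specific published implementation; dimensions and drift rate are arbitrary): context is a slowly drifting running average of recent item features, so the cue available at retrieval is never an exact copy of a stored pattern, which is exactly what a purely autoassociative, pattern-completing store wants.

```python
# Toy sketch of gradually drifting temporal context:
#   c_t = rho * c_{t-1} + beta * f_t
import numpy as np

rng = np.random.default_rng(0)
d, rho = 64, 0.95                       # toy dimensionality and drift rate
beta = np.sqrt(1 - rho ** 2)            # keeps the context roughly unit length

c = rng.standard_normal(d)
c /= np.linalg.norm(c)
contexts = []
for _ in range(20):                     # present 20 items
    f = rng.standard_normal(d)
    f /= np.linalg.norm(f)
    c = rho * c + beta * f              # gradual drift toward the current item
    c /= np.linalg.norm(c)
    contexts.append(c.copy())

# Similarity to the first study context falls off smoothly with lag: the cue
# available later is a drifted pattern, not an exact copy of any stored one,
# which is what sits awkwardly with purely autoassociative (CLS-style) recall.
print(np.round([contexts[0] @ ci for ci in contexts], 2))
```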

This will almost certainly not be the last word on computational models of EM merging the best features of TCM and CLS (full disclosure, my group is working along these lines), but it's definitely a step in the right direction.

marcwhoward, to random

If I'm reading this correctly (?), they've got the tagging window down to a couple of minutes.
Which would enable a whole bunch of interesting experiments. We really need a good animal model of episodic memory retrieval...

https://doi.org/10.1016/j.celrep.2023.113592

marcwhoward, to random

Pretty conflicted about this paper:

  1. Delighted serious neuroAI people are paying attention to the time cell thing...
  2. Doing comp neuro with LSTMs is... not my favorite.
  3. We've done a bunch of theory on time cells and built deep networks that we've shown crush LSTMs... would have saved a bunch of effort here.

Anyway, worth reading:
https://www.nature.com/articles/s41598-023-49847-y

marcwhoward, to random

Things comp neuro people spend a lot of time on that I'd wager do not literally describe anything in the brain:

  1. Transformer attention
  2. Temporal difference learning (Bellman equation)
  3. RNNs that require a matrix to describe them (continuous attractor networks do not count; you can describe them just with the kernel)
marcwhoward,

@NicoleCRust
So for #2 the Bellman equation solves a problem---estimate expected future reward without directly estimating the future---that is not faced by the brain. Insofar as the brain has an explicit temporal memory of the continuous past (and it certainly does), it is straightforward to construct a direct estimate of the distant future via simple Hebbian associations. We didn't know that in the 80's when Sutton and Barto were working on this problem or the mid-90's when the dopamine mapping was made. This paper makes these arguments in much more depth:
https://arxiv.org/abs/2302.10163
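For intuition, here's a drastically simplified numpy sketch (mine, not the paper's model; the decay rates, trial structure, and delay are toy assumptions): with a bank of exponential traces of a cue, i.e. a real Laplace transform of its history, a plain Hebbian association between that memory state and reward directly encodes when reward follows the cue, and a crude inversion reads the delay back out with no Bellman recursion or TD error in sight.

```python
# Illustrative sketch: temporal memory of a cue + Hebbian association to reward
# yields a direct estimate of the time of future reward.
import numpy as np

s = np.geomspace(0.02, 1.0, 40)          # decay rates of the memory traces (toy)
delay = 10                               # reward arrives 10 steps after the cue
W = np.zeros_like(s)                     # Hebbian memory-state -> reward weights

for trial in range(5):                   # a handful of learning trials
    F = np.zeros_like(s)                 # Laplace-domain memory of the cue
    for t in range(30):
        cue = 1.0 if t == 0 else 0.0
        F = F * np.exp(-s) + cue         # exact one-step exponential decay
        reward = 1.0 if t == delay else 0.0
        W += F * reward                  # Hebbian: co-activity of memory and reward

# W(s) is proportional to exp(-s * delay): the Laplace transform of the
# cue-to-reward delay. A crude inversion (template matching against candidate
# delays) reads the future timing straight out of the learned weights.
taus = np.arange(1, 25)
templates = np.exp(-np.outer(taus, s))
scores = templates @ W / np.linalg.norm(templates, axis=1)
print("estimated cue-to-reward delay:", taus[np.argmax(scores)])   # -> 10
```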

(will try to write answer to #3 later, time for dinner)

marcwhoward,

@NicoleCRust
Here's 3.

My claim is that the brain contains basically no RNNs where you need a matrix of O(N^2) numbers to describe the recurrent dynamics. A random-matrix RNN definitely has this property. Even if you sparsify the random matrix by zeroing out some random proportion, you still need a matrix.

Probably easiest to illustrate my point with counterexamples. Let's take a ring attractor. N neurons tile 2π. The connections from each neuron to its neighbors are the same, so you're already down to O(N) numbers. But it's better than that: because the connections are local, the count doesn't even depend on N any more. And if they have some simple shape, e.g. Gaussian, you're down to however many parameters you need to describe that function. You can make similar arguments for any continuous attractor network.

OK, how about an RNN to make sequences? To take a (too) simple example, you get a sequence from a recurrent matrix with nonzero connections only at the +1 off-diagonal entries, for some sorting of the neurons. So way fewer than N^2 numbers. (There's a more precise argument to be made; see here if interested:
https://doi.org/10.1162/neco_a_01288 )

The motivation for this wager about the brain is that if you really needed O(N^2) numbers to describe the dynamics of N neurons, those dynamics would be too complicated to be useful. If you have a sequence or a CAN, any state you find is easily understandable.
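If it helps, here's a small numpy sketch of the counting argument (illustrative only; the kernel shape and parameter values are my assumptions): the full N x N weights of a ring attractor are generated from one translation-invariant kernel with a handful of parameters, and a sequence generator needs only a single off-diagonal band, whereas a generic random RNN genuinely needs all N^2 entries.

```python
# Parameter-counting sketch: few numbers generate the full recurrent matrix.
import numpy as np

N = 200
theta = 2 * np.pi * np.arange(N) / N           # N neurons tiling the ring

def ring_weights(excite_width=0.3, inhibit=0.05, gain=1.0):
    """Translation-invariant kernel: local excitation, broad uniform inhibition."""
    d = np.abs(theta[:, None] - theta[None, :])
    d = np.minimum(d, 2 * np.pi - d)           # circular distance between neurons
    return gain * np.exp(-(d / excite_width) ** 2) - inhibit

W_ring = ring_weights()                        # 3 parameters, not N^2 = 40,000

# A sequence generator: nonzero weights on one off-diagonal band only
# (neuron i drives neuron i+1) under some ordering of the neurons.
W_seq = np.diag(np.ones(N - 1), k=-1)
x = np.zeros(N)
x[0] = 1.0
for _ in range(5):
    x = W_seq @ x                              # activity marches along the chain
print(np.argmax(x))                            # -> 5
```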

marcwhoward,

that would be great!

PhiloNeuroScie, to random
marcwhoward,

@adredish @PhiloNeuroScie
Been thinking a lot about the university business, and the academic science business in particular.
I'm all for the US spending more money on research.
I would also consider it progress if they recommended fewer PhDs in the first place. And fewer postdocs, too.

marcwhoward, to random

You can get a broad range of timescales from RNNs and the math is pretty cool.

It isn't sufficient to make sense of how there's a continuum of time constants in C. elegans (hard to imagine 300 neurons using recurrence for this), but it seems reasonable for cortex. The other part of the problem is how the time constants are distributed equivalently for different stimulus dimensions...

https://doi.org/10.7554/eLife.86552
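Roughly the flavor, in a toy linear sketch (mine, not the paper's model; the spectrum and time constant are arbitrary assumptions): in tau dx/dt = -x + Wx, each eigenmode of W with eigenvalue lambda relaxes with time constant tau / (1 - lambda), so pushing part of the spectrum up toward 1 gives a single network a whole continuum of timescales.

```python
# Sketch: a broad range of effective time constants from one recurrent matrix.
import numpy as np

rng = np.random.default_rng(1)
N, tau = 200, 0.1                               # neurons; single-cell time constant (s)

lams = 1.0 - np.geomspace(1e-3, 1.0, N)         # eigenvalues approaching 1 from below
Q, _ = np.linalg.qr(rng.standard_normal((N, N)))
W = Q @ np.diag(lams) @ Q.T                     # recurrent weights with that spectrum

eff = tau / (1.0 - np.linalg.eigvalsh(W))       # effective time constant per mode
print(eff.min(), eff.max())                     # ~0.1 s up to ~100 s
```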
