tyrell_turing
@tyrell_turing@fediscience.org
Researcher in Montréal combining artificial intelligence and neuroscience. I post mostly science, occasional politics and house music.


cian, to random
@cian@mstdn.science avatar

Made the mistake of flicking through a few Wikipedia articles on topics I am familiar with. Apart from some out-of-date statements, the main issue is just how random a lot of the points are. E.g., this page on synaptic plasticity is really unrepresentative of the state of the art in the field, imo: https://en.m.wikipedia.org/wiki/Synaptic_plasticity

Is this the kind of garbage students are reading and ChatGPT is training on?

tyrell_turing,
@tyrell_turing@fediscience.org avatar

@cian

Yeah, agreed. I rarely see outright misinformation on Wikipedia articles where I know a lot.

But, I see a lot of weird, random stuff that I would never include in a succinct summary on the topic.

jgpausas, to biodiversity
@jgpausas@fediscience.org avatar
tyrell_turing,
@tyrell_turing@fediscience.org avatar

@jgpausas Man, this really looks like Ramón y Cajal's drawings!

tyrell_turing, to random
@tyrell_turing@fediscience.org avatar

Our collaboration with the Allen Institute is now up in J Neurosci!

tl;dr - the responses to violations of a pattern in sequences of images evolve differently in apical dendrites versus somata of V1 pyramidal neurons:

https://www.jneurosci.org/content/44/5/e1009232023

PS - Both the paper and the data are fully open!

NicoleCRust, to random
@NicoleCRust@neuromatch.social avatar

Simple RNN models that capture neural network criticality / edge of chaos?

I'd like to play around with simple (tutorial-like) recurrent neural network models that capture the phenomenon of criticality. Something like the smallest possible number of recurrently connected model neurons that can recapitulate the phenomenon (like information-processing peaks at intermediate coupling weights).

Any leads?

tyrell_turing,
@tyrell_turing@fediscience.org avatar

@derstrudel @albertcardona @NicoleCRust

To your Q, @NicoleCRust, the key to criticality is just getting the right spectral properties in your recurrent weight matrices. Often, this will emerge simply from training with BPTT, if the task demands it.

But, there are also various Python tools for reservoir computing that make this easier for you:

https://www.sciencedirect.com/science/article/pii/S0952197622001713

https://www.biorxiv.org/content/10.1101/2023.05.31.543092v1.abstract
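As a minimal sketch of what I mean by "getting the right spectral properties" (this is my own illustrative code, not from either of the papers above): in reservoir computing, a standard recipe is to rescale a random recurrent weight matrix so its spectral radius sits near 1, putting the linearized dynamics at the edge of stability.

```python
import numpy as np

def scale_to_spectral_radius(W, rho=1.0):
    """Rescale a recurrent weight matrix so its spectral radius equals rho.

    Near rho = 1, the linearized recurrent dynamics sit at the boundary
    between decaying and exploding activity - the regime associated
    with criticality / the edge of chaos.
    """
    current = np.max(np.abs(np.linalg.eigvals(W)))
    return W * (rho / current)

rng = np.random.default_rng(0)
N = 100
# Random Gaussian recurrent weights, variance 1/N
W = rng.normal(0.0, 1.0 / np.sqrt(N), size=(N, N))
W_crit = scale_to_spectral_radius(W, rho=1.0)

print(round(np.max(np.abs(np.linalg.eigvals(W_crit))), 6))  # → 1.0
```

From there you can sweep rho below and above 1 to move the same network between the ordered and chaotic regimes.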

tyrell_turing,
@tyrell_turing@fediscience.org avatar

@axoaxonic @NicoleCRust @derstrudel @albertcardona

I haven't read the paper, but intuitively, doesn't being at the edge of chaos imply being in between stability and instability? I would imagine it's all the same range of Lyapunov exponents...
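To make the stability/instability intuition concrete, here's a quick numerical sketch (my own toy code, with illustrative parameter choices): estimate the largest Lyapunov exponent of a tanh rate network x_{t+1} = tanh(g·W·x_t) by tracking a tiny perturbation with renormalization. Below gain g = 1 the exponent is negative (perturbations decay); well above it, positive (chaos); the edge of chaos is where it crosses zero.

```python
import numpy as np

def largest_lyapunov(g, N=200, T=2000, eps=1e-8, seed=1):
    """Estimate the largest Lyapunov exponent of x_{t+1} = tanh(g * W @ x)
    by following a nearby trajectory and renormalizing the separation."""
    rng = np.random.default_rng(seed)
    W = rng.normal(0.0, 1.0 / np.sqrt(N), size=(N, N))
    x = rng.normal(size=N)
    d0 = rng.normal(size=N)
    y = x + eps * d0 / np.linalg.norm(d0)  # perturbed copy at distance eps
    log_growth = 0.0
    for _ in range(T):
        x = np.tanh(g * W @ x)
        y = np.tanh(g * W @ y)
        d = np.linalg.norm(y - x)
        log_growth += np.log(d / eps)        # per-step expansion rate
        y = x + eps * (y - x) / d            # renormalize separation to eps
    return log_growth / T

print(largest_lyapunov(0.5) < 0.0)  # → True (stable side)
print(largest_lyapunov(2.0) > 0.0)  # → True (chaotic side)
```

So "edge of chaos" and "largest Lyapunov exponent near zero" really are two descriptions of the same regime.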

tyrell_turing, to random
@tyrell_turing@fediscience.org avatar

This paper from @dyamins and Kalanit Grill-Spector looks pretty cool. Need to read, but looks like a demonstration of how topography mixed with a principle @ShahabBakht discovered a couple of years ago about self-supervised learning could explain a lot about visual cortex organisation:

https://www.biorxiv.org/content/10.1101/2023.12.19.572460v1

DrYohanJohn, to random
@DrYohanJohn@fediscience.org avatar

It seems very likely that similarities between real brains and artificial neural networks in performance and even feature-extraction have more to do with the shared data set and the presence of many free parameters than any structural or procedural commonality.

tyrell_turing,
@tyrell_turing@fediscience.org avatar

@DrYohanJohn

The data is a huge driving force, but many papers compare different losses and architectures, and show that some match the brain better than others.

tyrell_turing,
@tyrell_turing@fediscience.org avatar

@DrYohanJohn @neuralreckoning

I'm not sure I follow either of you...

  1. The number of parameters is immaterial, because you're not fitting the data, you're fitting a different task, then asking whether the model arrives at a solution that fits the data. So the number of parameters is not helping you with that fit for trivial reasons.
tyrell_turing,
@tyrell_turing@fediscience.org avatar

@DrYohanJohn @neuralreckoning

  2. The hard work you describe is precisely what people are trying to do, no? Like, concretely, what is it you think is not being done that should be?
tyrell_turing, to Neuroscience
@tyrell_turing@fediscience.org avatar

1/ What is the organization of mouse visual cortex across regions?

In our latest work led by Rudi Tong and Stuart Trenholm, now out on bioRxiv (https://biorxiv.org/content/10.1101/2023.11.03.565500v1) we mapped the "feature landscape" of mouse visual cortex.

Here is a thread about what we found.

tyrell_turing, to Neuroscience
@tyrell_turing@fediscience.org avatar

PSA:

The NSF AI Institute for Artificial and Natural Intelligence (ARNI) Is looking for postdocs!

Fellows will be based at @Columbia, NYC, but can work with ARNI researchers at multiple institutions.

More info: https://arni-institute.org

axoaxonic, to random
@axoaxonic@synapse.cafe avatar

I have no idea how to reconcile neural network models with the observed phenomena of representational drift and degeneracy. New models from scratch might be better, even tho the former type have been worked on for around like 80 years, but maybe there's a way

tyrell_turing,
@tyrell_turing@fediscience.org avatar

@axoaxonic

I don't think anything so drastic is required. The evidence is that representational drift:

(1) Occurs in the null space and does not impact downstream readout (https://journals.plos.org/ploscompbiol/article?id=10.1371/journal.pcbi.1010716)

(2) Relates specifically to new experience, so is likely driven by additional learning (https://www.cell.com/neuron/fulltext/S0896-6273(23)00378-1).

Thus, I don't think new models from scratch are required, just a shift away from models trained only once and the addition of more contextual conditioning.
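Point (1) is easy to see in a toy numpy example (my own illustration, not from the paper): if the change in the population representation is orthogonal to the readout weights, the downstream output doesn't move at all, no matter how large the drift.

```python
import numpy as np

rng = np.random.default_rng(0)
n_neurons = 50

# Fixed linear readout: the downstream area sees w @ r
w = rng.normal(size=n_neurons)

# Original population representation
r = rng.normal(size=n_neurons)

# "Drift" the representation within the null space of the readout:
# project a random change so it is orthogonal to w.
delta = rng.normal(size=n_neurons)
delta_null = delta - (w @ delta) / (w @ w) * w
r_drifted = r + 5.0 * delta_null  # even a large drift

print(np.isclose(w @ r, w @ r_drifted))  # → True: readout is unchanged
```

The representation itself changes substantially, but the readout is blind to it, which is the sense in which null-space drift doesn't impact downstream processing.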

NicoleCRust, to random
@NicoleCRust@neuromatch.social avatar

6 thought provoking questions posed to @awaisaftab (psychiatrist) and myself (brain researcher) and we hit on so much:

The challenge of escaping reductionism. Theories of consciousness. Are mental disorders brain disorders? Why should anyone care about philosophy? Is epistemic iteration failing? And what bits of brain research are awaiting their Copernican moment?

With nods to @summerfieldlab, @knutson_brain, @tyrell_turing, @Neurograce, @eikofried and so many more.

Read it all here (and let's discuss)!

https://awaisaftab.substack.com/p/advancing-neuroscientific-understanding

tyrell_turing,
@tyrell_turing@fediscience.org avatar

@NicoleCRust @knutson_brain @neuralreckoning @awaisaftab @summerfieldlab @Neurograce @eikofried

Thanks for the clarification! To clarify a bit on my own, my reply was more of a call-back joke to the OG days on Twitter (per @neuralreckoning).

But, I take your point well, and agree, it is important for people to realise that brains don't do sequential, discrete, passive processing of a stream of inputs using a step-by-step program. And so, in that sense, it's a metaphor (and a poor one).

tyrell_turing,
@tyrell_turing@fediscience.org avatar

@NicoleCRust @knutson_brain @neuralreckoning @awaisaftab @summerfieldlab @Neurograce @eikofried

Also, I really like your point that it helps get people out of thinking that things like "attention" and "decision making" are discrete pieces of a program in our heads. 🙂

tyrell_turing,
@tyrell_turing@fediscience.org avatar

@WorldImagining @NicoleCRust @awaisaftab @summerfieldlab @knutson_brain @Neurograce @eikofried

Strict identity - if we're using the broad definitions of computing and info processing from computer science, anything else implies magic/spirit.

tyrell_turing,
@tyrell_turing@fediscience.org avatar

@yoginho

Respectfully, I disagree. I don't think you're accurately summarising computability theory, nor the implications for neuroscience.

As Yoshua Bengio once said to someone at a workshop I was at (paraphrasing):

"Computation just means physics. Saying the brain is a computer is just a way of saying that it is a physical device. I don't even know what a non-computational theory of the brain means, unless you're talking about magic."

tyrell_turing,
@tyrell_turing@fediscience.org avatar

@yoginho

Remember, as the very article from the Philosophy Encyclopedia describes, the original goal of both Turing and Church was simply to formalise the idea of an "effective method", i.e. a mechanical way of solving a problem.

If the Church-Turing thesis holds, then any problem that a mechanical (i.e. purely physical) system can solve represents a computable function, and the object implementing it is engaged in a computation.

tyrell_turing,
@tyrell_turing@fediscience.org avatar

@WorldImagining @NicoleCRust @awaisaftab @summerfieldlab @knutson_brain @Neurograce @eikofried

But, tl;dr:

According to the use-based definition from computer science, a computer is any object that can implement many different computable functions.

The brain fits that definition, ergo, it is literally a computer.

Moreover, if every function the brain implements is computable, then it is strict identity. As far as I know, no one has ever demonstrated that the brain can solve non-computable functions.

tyrell_turing,
@tyrell_turing@fediscience.org avatar

@yoginho

I don't really want to engage in an extended debate here, but I will just note this:

Turing and others very much so connected humans being able to do some calculation by rote to the ability to create a mechanical device to do that function.

So, there was always a direct connection to physics/mechanics, because the implication was always that if an effective method existed, then a physical/mechanical system could in principle implement the function for us.

tyrell_turing,
@tyrell_turing@fediscience.org avatar

@pinecone

It all depends on your definitions, per my article. According to the definitions I tend to operate with, algorithms and computing can occur in ways that look nothing like Turing machines (i.e. can be continuous, online, stochastic, etc.), so no contradiction.

tyrell_turing,
@tyrell_turing@fediscience.org avatar

@FlowVoid @pinecone

Very good point! But, we actually cover this in the article. The way that computer scientists use the word "computer", it is clear that they only mean objects that can compute a huge number of different computable functions, which is why most physical objects don't qualify.

tyrell_turing,
@tyrell_turing@fediscience.org avatar

@FlowVoid @pinecone

Yes, 100% agree. If the brain can actually implement functions that no Turing machine could, then we would have a concrete reason, according to the definitions from computer science, to say that the brain is not a computer.

Personally, I've never seen a good demonstration of that. Penrose made some questionable claims long ago, but those never convinced me. If someone else has credible evidence I'd be very interested to see it!

tyrell_turing,
@tyrell_turing@fediscience.org avatar

@FlowVoid @pinecone @beneuroscience

There is an empirical question, but it's one I consider largely solved.

The vast majority of what brains do is trivially computable (moving your body, recognizing objects, planning travel, etc.). And, I have never seen any evidence to suggest that brains can implement non-computable functions.
