NicoleCRust,
@NicoleCRust@neuromatch.social avatar

Thoughts on these provocative ideas (about how research in psychology should proceed)?

The last author tipped me off to this one. Curious to hear impressions.

Beyond Playing 20 Questions with Nature: Integrative Experiment Design in the Social and Behavioral Sciences

https://papers.ssrn.com/sol3/papers.cfm?abstract_id=4284943

(also here, behind the BBS paywall: https://www.cambridge.org/core/journals/behavioral-and-brain-sciences/article/abs/beyond-playing-20-questions-with-nature-integrative-experiment-design-in-the-social-and-behavioral-sciences/7E0D34D5AE2EFB9C0902414C23E0C292)

The dominant paradigm of experiments in the social and behavioral sciences views an experiment as a test of a theory, where the theory is assumed to generalize beyond the experiment’s specific conditions. According to this view, which Alan Newell once characterized as “playing twenty questions with nature,” theory is advanced one experiment at a time, and the integration of disparate findings is assumed to happen via the scientific publishing process. In this article, we argue that the process of integration is at best inefficient, and at worst it does not, in fact, occur. We further show that the challenge of integration cannot be adequately addressed by recently proposed reforms that focus on the reliability and replicability of individual findings, nor simply by conducting more or larger experiments. Rather, the problem arises from the imprecise nature of social and behavioral theories and, consequently, a lack of commensurability across experiments conducted under different conditions. Therefore, researchers must fundamentally rethink how they design experiments and how the experiments relate to theory. We specifically describe an alternative framework, integrative experiment design, which intrinsically promotes commensurability and continuous integration of knowledge. In this paradigm, researchers explicitly map the design space of possible experiments associated with a given research question, embracing many potentially relevant theories rather than focusing on just one. The researchers then iteratively generate theories and test them with experiments explicitly sampled from the design space, allowing results to be integrated across experiments. Given recent methodological and technological developments, we conclude that this approach is feasible and would generate more-reliable, more-cumulative empirical and theoretical knowledge than the current paradigm—and with far greater efficiency.

UlrikeHahn,
@UlrikeHahn@fediscience.org avatar

@NicoleCRust @jonny my personal take is that this paper has an important point but not a solution, because the full scale of the problem is (to my mind) still not fully grasped. To explain: imagine you are trying to understand a little agent-based model you have. It’s a complex dynamical system, so you can’t just pick out a few random parameter combinations and form local theories and hope to come up with deep understanding. You need to systematically explore the parameter space 1/2
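[The systematic parameter-space exploration described above can be sketched concretely. The toy model below is a purely hypothetical stand-in for an agent-based model; the point is the exhaustive grid sweep, not the model itself.]

```python
# A minimal sketch of a systematic parameter sweep. `toy_model` is a
# purely hypothetical stand-in for an agent-based simulation; the point
# is evaluating the whole grid rather than a few handpicked points.
import itertools

def toy_model(a, b):
    # illustrative outcome measure, NOT a real ABM
    return a * (1 - a) * b

grid = [i / 10 for i in range(11)]          # 0.0, 0.1, ..., 1.0
outcomes = {(a, b): toy_model(a, b)
            for a, b in itertools.product(grid, grid)}

# With the whole map in hand you can look for regimes and transitions,
# instead of forming local theories from a few isolated points.
peak = max(outcomes, key=outcomes.get)
```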

UlrikeHahn,
@UlrikeHahn@fediscience.org avatar

@NicoleCRust @jonny 2/2 that’s basically what the paper is saying though not in those words, and there is a sense in which that seems obviously right. The limitation I see is that I think the relevant “parameter space” for human behav. exp. is fundamentally not like the parameter space of my stylised ABM. I mostly do scenario based exp. - I can change experimental materials in infinite ways which are not ordered. We can’t chart this space in the way they envision - it’s all much much harder

UlrikeHahn,
@UlrikeHahn@fediscience.org avatar

@NicoleCRust @jonny A better way to put this: I was a bit surprised by this paper given the authors, because it slightly reads like ‘piecemeal dustbowl empiricism didn’t work, let’s do dustbowl empiricism harder’. I take the fundamental problem of psych. to be the inherent flexibility of human responding. Change the context slightly, get a different response. So there are gazillions of little paradigms that devolved into ‘it depends’. What we haven’t learned is what a meaningful theory is given that.

NicoleCRust,
@NicoleCRust@neuromatch.social avatar

@UlrikeHahn @jonny
Fascinating! I’m working to flesh out a good analogy for this line of thought. Are you thinking of something maybe chaotic, like the weather, where small changes to initial conditions have unpredictable long-term effects?

The exceedingly simple logistic equation behaves in this way.
https://en.m.wikipedia.org/wiki/Logistic_map
In its chaotic regime, start it at 0.2 and it will do one thing; start it at 0.20000001 and it will do the same thing for a while before diverging. If this simple equation does that, why not the brain?
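[The divergence described above can be seen in a few lines of code; the 0.2 vs. 0.20000001 starting points are from the post, while r = 4.0 and the iteration counts are arbitrary choices that land the map in its chaotic regime.]

```python
# Sensitivity to initial conditions in the logistic map
# x_{n+1} = r * x_n * (1 - x_n), with r = 4.0 (chaotic regime).

def trajectory(x0, r=4.0, steps=60):
    xs = [x0]
    for _ in range(steps):
        xs.append(r * xs[-1] * (1.0 - xs[-1]))
    return xs

a = trajectory(0.2)
b = trajectory(0.20000001)   # perturbed by 1e-8

early = abs(a[5] - b[5])                                  # still tiny
late = max(abs(x - y) for x, y in zip(a[40:], b[40:]))    # order 1
```

The perturbation roughly doubles each step, so by iteration ~30 the two trajectories are unrelated despite starting 1e-8 apart.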

But the weather is chaotic and we’ve figured it out insofar as we have equations that can predict it in the near term and we understand why it’s chaotic. I think your point is along the lines of: the equivalent of the 7 equations for weather prediction will be harder to find for the brain. I’m trying to pinpoint: why might we think that, exactly? Because there are likely hundreds? Or they are of a different type?

(No doubt we all agree that a good first step that needs to be made is acknowledging the brain is a dynamical system upfront. We haven’t tried much of that - how far will it take us?)

albertcardona,
@albertcardona@mathstodon.xyz avatar

@NicoleCRust @UlrikeHahn @jonny

We find in the brain strong feedback loops that sort of funnel inputs: a diversity of similar yet different input patterns ends up inducing the same behavioral outcome, already observable at intermediate circuit steps, far before getting to any motor commands. I'd call this a win: it's generalization, a form of resistance to noise. Similar effects must necessarily act at the single-neuron level, to cope with small changes in temperature, nutrient supply, hydration, osmolarity. There is a range of parameter values within which metabolic processes work anyway, and the same may be true at a higher level for circuits.

MolemanPeter,
@MolemanPeter@neuromatch.social avatar

@NicoleCRust @UlrikeHahn @jonny
Beggs started out thinking the cortex works at the edge of chaos, but that appeared not to be tenable: not for the cortex, though more so for processes at the subcortical level.
Beggs JM (2022): The cortex and the critical point: understanding the power of emergence. Cambridge, Massachusetts, The MIT Press.

NicoleCRust,
@NicoleCRust@neuromatch.social avatar

@MolemanPeter @UlrikeHahn @jonny
My read is that the emerging evidence supports the brain at the edge of chaos hypothesis. For instance, it's the only regime where recurrent neural networks work. I summarize some of that work here:
https://www.thetransmitter.org/systems-neuroscience/is-the-brain-uncontrollable-like-the-weather/

NicoleCRust,
@NicoleCRust@neuromatch.social avatar

@MolemanPeter @UlrikeHahn @jonny
As a follow-up, here's a friendly tutorial that dives into some of it:
https://www.youtube.com/watch?v=dJHn1mPsq3A

MolemanPeter,
@MolemanPeter@neuromatch.social avatar

@NicoleCRust @UlrikeHahn @jonny
The difference with the weather being a (near) chaotic (complex) system appears to be that living organisms (including brains) keep themselves far from equilibrium, while the processes in the atmosphere return to equilibrium, i.e., energy differences are cancelled out in due time. That is why a hurricane stops, while your brain never stops, until.....

NicoleCRust,
@NicoleCRust@neuromatch.social avatar

@MolemanPeter @UlrikeHahn @jonny
But this isn't true! The brain has all sorts of mechanisms to restore stability. For instance, while seizures can last hours, they typically last only a few minutes.

FWIW: I've asked the recurrent neural network aficionados about this whole near versus far-from-equilibrium (and open versus closed system) business; the response that I got was, "That's just not a useful way to think about how a neural network works; nor is it a useful way to think about the brain."

beneuroscience,

@NicoleCRust @MolemanPeter @UlrikeHahn @jonny I don't have an expert take on this, but it seems that neural network aficionados are the exact people who would ignore this issue. They instead assume a fixed/closed system that they can get a handle on to formalize.

axoaxonic,
@axoaxonic@synapse.cafe avatar

@beneuroscience @NicoleCRust @MolemanPeter @UlrikeHahn @jonny

It might be related to spatiotemporal scale, since nonequilibrium systems exhibit local thermodynamic equilibrium (sec 3.2 of this https://courses.physics.ucsd.edu/2020/Fall/physics210b/Non-Eqbrm%20Thermo_Demirel%20and%20Gerbaud.pdf says it's common in simpler nonequilibrium systems, though it is also found in more complicated active matter systems such as bird flocks https://doi.org/10.1038/nphys3846 ). Even though the whole nervous system must be a dissipative nonequilibrium system, at the microscopic scales of smaller networks this might be okay to ignore without too much consequence, but idk if that's a good idea if the aim is biological realism.

This paper shows that some biological dynamics only appear in ANNs if they are in nonequilibrium states due to asymmetric connections, which are common in bioNNs. https://www.pnas.org/doi/10.1073/pnas.1310692110 These dynamics include oscillations (traveling waves?) and free energy minimization, which aren't present in the symmetric Hopfield networks they tested
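[A toy illustration of that last point (my own two-neuron sketch, not the networks from the PNAS paper): with symmetric Hopfield weights and asynchronous updates a binary network settles into a fixed point, while making the same weights asymmetric turns the network into a perpetual oscillator.]

```python
# Two binary (+1/-1) neurons. With these weights, `sgn` never sees 0.
def sgn(x):
    return 1 if x > 0 else -1

def sync_step(W, s):
    # synchronous update: all neurons read the old state
    return tuple(sgn(sum(w * x for w, x in zip(row, s))) for row in W)

# Asymmetric weights: neuron 0 excites neuron 1, which inhibits neuron 0.
W_asym = [[0, 1],
          [-1, 0]]
s = (1, 1)
cycle = [s]
for _ in range(8):
    s = sync_step(W_asym, s)
    cycle.append(s)
# The state rotates through 4 distinct states forever: an oscillation.

# Symmetric (Hopfield) weights with asynchronous updates converge.
W_sym = [[0, 1],
         [1, 0]]
t = [1, -1]
for i in (0, 1, 0, 1):               # update one neuron at a time
    t[i] = sgn(sum(w * x for w, x in zip(W_sym[i], t)))
# t is now a fixed point; further updates leave it unchanged.
```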

beneuroscience,

@axoaxonic @NicoleCRust @MolemanPeter @UlrikeHahn @jonny Thanks for the refs! It seems problematic to assume what they refer to as microscopic reversibility holds in neural networks. That is, I don't think it's biologically realistic to assert that any state transition in the forward direction is equally likely as the same transition in the opposite direction (if biological realism is the goal, which it typically is not).

I agree with your second point too about asymmetric connections, can confirm from my own work!

jonny,
@jonny@neuromatch.social avatar

@beneuroscience @axoaxonic @NicoleCRust @MolemanPeter @UlrikeHahn I actually came to neuroscience through a fascination with nonlinear dynamics, was mostly dismayed at how neuroscience at large mostly ignores (IMO) the most important feature of the brain as a system, and was then sad I had already mostly drifted away from it bc i had no-one doing that work around me once dynamical manifolds finally caught some foothold among cortex ppl (which is who i was among in grad school).

i have found that questions of equilibrium, stability, and scale in nonlinear dynamics have a deceptively tricky time mapping to the way those concepts are used in greater neuroscience. I also am finding the combination of ideas in this thread between methodology/epistemology and properties of dynamical systems that @NicoleCRust and @UlrikeHahn introduced to be really interesting because that's often how I think of things but haven't heard other people express it this way too. I wanna come back around to that after returning to the paper in the OP bc i think it's a good backdrop for this convo.

It's sort of sad to me that the paper in the OP makes the same mistake that almost all papers of its kind do and treats the background of "the rest of research infrastructure" as fixed while proposing some radical change to a circumscribed part of it. That mirrors the kind of questions of fixity and locality y'all bring up downthread. I think when you are proposing something that is well-trod ground for hundreds of prior authors, and that something starts with making some structural mapping between your work and the existing work, but you don't really engage with that work and instead coin a new field, 'research cartography', then something has gone really wrong somewhere. They describe some examples of their proposed model, but not other people proposing similar models, which sort of neatly illustrates the problem with the idea: why didn't they do integrative research?

The biggest problem here is that they treat what are ultimately social dynamics of publishing and communication as a physical system, and with that assume several demonstrably untrue things: perfect information, no misaligned incentives, good and shared intentions, unambiguous interpretation of experimental results, etc. Their diagnosis of the problem effectively reduces to "people aren't formulating their experiments in terms of standardized ontologies/theories, so we can't do systematic reviews on them." They describe the reasons why this is the case as "people don't think cumulative knowledge is important/think things are good enough" and "nobody has proposed a workable alternative," both of which are, again, demonstrably false. Nobody actually thinks scientific results work their way into cumulative understanding, so that point is effectively a strawman, and oh boy do i have a folder of hundreds of proposals just like this (and also decades of work spent trying to realize them) to show them.

But at the bottom of it I of course share the idea that we need to make our work more mutually intelligible to one another, to put it in conversation with prior work, and to have many layers of consensus-building across many different modalities. What does that "metric space of conceivable experiments" actually look like? If, in your paper for radically transforming science, it still looks like Journal PDFs and Review Papers (but with more closely aligned measurements this time), then you have left everything interesting on the table. I think one of the cognitive stumbling blocks here is how difficult it is to imagine different dynamical regimes, and so most scientists revert to a kind of single-pointed teleology - not out of any sort of strong philosophical commitment, but because it's easy to imagine science working like "sample all the points in an ontological space, find the maximal point, that's the truth!" I'll leave aside my own beliefs about what is possible here tho.

This has an interesting resonance with the problems in neuroscience re: appreciating the dynamical reality of the brain. One reason the NHST-style one-off papers connected to very little theory work so well is that they are acceptable units of work for the machine of academia: record a bunch of neurons, average them together, a t-test or two, that's a paper, baby! There is/was effectively no room for the kind of exploratory experiments that just characterize a system without a story, and so we still get mostly those kinds of papers.

The proposed idea in this paper is like a grid search to construct a vector map in an ontological space, one that assumes the space is neutral/true, and so I don't find it all that compelling. The observation that very simple functions like the logistic map generate incredibly complex spaces doesn't necessarily imply that the goal of research should always be to derive those functions. The same kind of irreversibility with uncertainty in complex systems that gives a cone of predictability beyond which you can't see might make it so that, for one reason or another, those functions are simply unobservable to us (the brain is not a computer capable of representing all possible information equally, either). There are other ways of characterizing systems than identifying their generating functions (ironically, the authors of the OP paper should read Vapnik, who is in some ways the godfather of the gradient-ascent underpinning of their strategy, and who writes all about the relationship between identifying vs. imitating the generator, core to ML). Even some chaotic systems on attractors will have some kind of bounding space to them, and for larger/less closed systems the state can even be itinerant between multiple regimes, so it is possible to have satisfying characterizations of systems that are still "it depends," where the "it depends" part is itself meaningful!

What matters most about any information is, imo, who it comes from, what perspectives are brought to bear on it, how it describes the problem, and so on: rather than the derivation of a solution, a fluid, evolving flow that nonetheless has some structuring principles.

The hollow core is always that we don't have a means of constructing these kinds of contextual maps, and it is vitally, critically important to realize that that's on purpose, not because we don't care or haven't thought of anything better. Until we stop treating this as a strictly academic exercise that we can think our way out of, we'll get nowhere! The point these ideas always reach is "but the incentives," and that should indicate to us that maybe the billion-dollar industries that profit from a perpetual churn of disconnected papers have some structuring influence on our ability to make some kind of cumulative, fluid consensus. It's like trying to go to the moon while constantly ignoring how gravity holds you to earth.

idk that's some loose and altogether too long thoughts on the interrelatedness of experimental methodology, epistemology, dynamics as understood in neuroscience, and this paper. it would be nice to live in a world where we can pull and knit these threads together in a way that others can follow, and that's all i rly care about in science anymore lol.

MolemanPeter,
@MolemanPeter@neuromatch.social avatar

@jonny I like your thoughts on this!

NicoleCRust,
@NicoleCRust@neuromatch.social avatar

@jonny @beneuroscience

Such great thoughts here, @jonny. You're capable of linking things together in ways that are striking. I'm bookmarking this so I can come back to it!

@axoaxonic @MolemanPeter @UlrikeHahn

UlrikeHahn,
@UlrikeHahn@fediscience.org avatar

@jonny @beneuroscience @axoaxonic @NicoleCRust @MolemanPeter

Dear All, I've tried to make a drawing to help explicate what I was trying to get at and how it's different from Nicole's logis. eq.

Imagine trying to understand levels of group performance as A. Almaatouq does in his great experiments. We're concerned with two variables: group size and amount of communication. These produce an outcome space, as shown in this figure.

1/2

UlrikeHahn,
@UlrikeHahn@fediscience.org avatar

@jonny @beneuroscience @axoaxonic @NicoleCRust @MolemanPeter

2/n our lives would be easy if things were like a) more communication makes things better (group size does nothing), but they're much more likely to be like b) the two variables interact - but we're still ok; there is a simple functional relationship. Often, however, we have something like c), where the outcome has a really messy relationship with the two variables, as you get when simulating a complex system [my ABM, N's log. eq.]
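[The three cases above can be made concrete with hypothetical outcome functions of group size n and communication c, both scaled to [0, 1]; these stand-ins are purely illustrative, not Almaatouq's actual results.]

```python
# Purely illustrative outcome functions for the three cases above.
# n = group size, c = amount of communication, both scaled to [0, 1].
import math

def case_a(n, c):
    # a) communication helps; group size does nothing
    return c

def case_b(n, c):
    # b) the variables interact, but via a simple functional relationship
    return c * (1 - n)          # communication helps small groups most

def case_c(n, c):
    # c) a messy, simulation-like landscape with no simple structure
    return math.sin(20 * n * c) * math.cos(13 * n - 7 * c)
```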

UlrikeHahn,
@UlrikeHahn@fediscience.org avatar

3/n @jonny @beneuroscience @axoaxonic @NicoleCRust @MolemanPeter

Their approach might still handle c) through sheer brute force (at least theoretically), but my claim is that often things are even worse: they are like d). It turns out that we also have to consider the nature of the task (math problem vs. word problem, specific content...). For their approach to be a solution, "task" would need to somehow be a dimension, but it's not!

UlrikeHahn,
@UlrikeHahn@fediscience.org avatar

4/n @jonny @beneuroscience @axoaxonic @NicoleCRust @MolemanPeter

many different aspects of the task (how information is distributed, dependence between task components, relevance) could be having an impact. The hope for the programme has to be that I could eventually understand task effects as the result of a complex set of interacting variables (so c) recurs, but now in determining the relevant dimensional value of "task"; see turtles all the way down)

UlrikeHahn,
@UlrikeHahn@fediscience.org avatar

5/n but I see no reason to believe even that is possible. Ultimately what flexible responding means is an ability to choose strategies. If I'm cold, I might turn up the heating, or close the window, or put on a sweater, or make a hot water bottle, or combinations of those.

Crucially, what strategy is 'best' depends on the context- what's available.

UlrikeHahn,
@UlrikeHahn@fediscience.org avatar

6/n @jonny @beneuroscience @axoaxonic @NicoleCRust @MolemanPeter

that includes things not internal to me. Which is why relations matter, both relations between me and the world, and relations between things in the world. and the world is such that these relations give rise to clumpy, discontinuous interactions. Think of simple violations of independence in preferences:

the choice between banana and chocolate might change if I add an apple - now I can make a fruit salad

UlrikeHahn,
@UlrikeHahn@fediscience.org avatar

7/n @jonny @beneuroscience @axoaxonic @NicoleCRust @MolemanPeter
how useful a hammer is, depends on whether I also have nails etc etc...

that's why I think, given sufficient cogn. flexibility and a sufficiently rich environment, there is no 'dimensionalisation'; we live in something like world d) and that's where the programme ends...

NicoleCRust,
@NicoleCRust@neuromatch.social avatar

@UlrikeHahn

Thanks for this! Diving a bit deeper so I understand it: in your example d), isn't the path to insight discovering that what you thought was 1 "process" is actually 2 (using all the tricks of cognitive psychology, like double dissociation and such)? Or are different processes not a good way to think about those two (hyper)planes?

UlrikeHahn,
@UlrikeHahn@fediscience.org avatar

@NicoleCRust in principle yes, but I’m assuming many such hyperplanes, not just two…;-)

my intuition is that this is fatal in practice, not only because we don’t have the resources (or interest) to explore all this, but because the fact that it’s not ordered means there’s no systematic way to explore it

in effect everything sort of dissolves into particulars, this is what psychological research in heavily investigated areas tends to feel like and people get frustrated and move on

NicoleCRust,
@NicoleCRust@neuromatch.social avatar

@UlrikeHahn
Got it; thank you. The notion that the mind is not well described by some limited number of processes is a very big shift - fascinating to think about it (and what the alternatives might be).

MolemanPeter,
@MolemanPeter@neuromatch.social avatar

@NicoleCRust @UlrikeHahn @jonny Well, they seem to be the experts.
