I sometimes forget that the far right is an anti-enlightenment movement.
Perhaps it's my decades of being submerged in the ivory tower of the university, or maybe it's just a love of logic and enlightenment principles. I don't know.
But I'm realizing that what I find persuasive, evidence grounded in reason, not only fails to persuade the anti-enlightenment crowd, but actually incenses them. They hate reason the way vampires hate sunlight.
Am I the only one who unequivocally thinks that philosophy has made progress? Perhaps I have a different definition of what progress means, but surely I can't be the only one?
The number of theories and arguments that philosophers have uncovered, as well as the clarity and breadth of their analyses, is certainly progress.
I’ve noticed a strong alignment between those who think that the computer metaphor for the brain makes little sense and those who’ve thought about how the brain might give rise to emotion.
As much as I love all the progress happening in NeuroAI to push our understanding of perception, memory & intelligence forward, I very much think they are right - there’s a crucial swath that doesn’t seem to fit with that agenda.
That word "theory" gets thrown around a lot. Some of my colleagues hold it to a really high bar, whereas others use it pretty interchangeably with hypothesis testing.
There’s an early phase of research that I’m not sure how to label. It’s not so much about levels, but something else. Here’s an example: what would you call the contribution of Copernicus to planetary motion? Ptolemy had these elaborate descriptions of everything revolving around the earth as cycles and epicycles to make up for wonky trajectories, and Copernicus came along and demonstrated that it all becomes a lot simpler if everything revolves around the sun. “Theories” of why the planets revolve as they do (Newton’s gravity and Einstein’s curved spacetime) came later.
Was Copernicus’s contribution a theory, replotting the data in a more sensible way, or something in between? Whatever it was, it was important, and it led to all that followed. But what do we call it (aka how do we regard it)?
"Understanding #Hamas and #Hezbollah as social movements that are #progressive, that are on the left, that are part of a global left, is extremely important. That does not stop us from being critical of certain dimensions of both movements."
@bookstodon I'm currently #reading Beyond Good and Evil by Nietzsche. I must admit I don't find his idea of a universal drive for power compelling.
It relies first and foremost upon an invisible subconscious inside all of us, one we cannot reach or touch, that tells us to always seek more power, and it holds that anything else we do is just justification for our power grabs.
My problem with this is that once you start out looking for it, it is easy to say that anything anyone does is a power grab. Getting out of bed is an exercise of our dominion over gravity, etc. etc.
Does a child draw letters in the sand merely to signal that they have power over sand? Is wearing cool and shiny boots a way of signalling social power over others? Maybe. But in my mind it is unlikely to be the most compelling reason behind everything, if only because most people do not behave like cartoon villains. Or even like cartoon villains trying to camouflage themselves.
I admit I am only 70 pages in and a very novice philosopher at best. But as it stands, I don't think a drive to power as envisioned by Nietzsche is a universal truth.
Robert Nozick, a #libertarian #philosopher, wanted you to imagine that you are offered a sophisticated machine which can perfectly simulate all the things you deem most pleasurable.
He asks you now: would you prefer this to real life?
Nozick assumes you wouldn't & uses that as an argument against (#ethical) hedonism.
What's your position regarding consciousness? Broad physicalism (we'll have a scientific explanation one day) or broad anti-physicalism (science can't give a complete account)? Or something else?
Please repost after voting, I'm genuinely curious! 🤔
Does anyone know a good book or paper about the history of the concept of "nature"?
Is it even true that native communities lived in harmony with nature? Because that would require a distinction between a human and a natural realm. Is "nature" an invention of modernism? How did the concept change over time? Does "nature" require the abolition of the gods? Etc.
Why is the claim: ”Everything is fair and equal because the same rules apply to everyone” a #fallacy?
This cornerstone of modernity has been shown to be false, because it rests on the assumption that humanity experiences ”the world as it is”.
Cognitive & computational neuroscience have strongly suggested that Kant’s description of human experience is the correct one: 1/3
When I was a philosophy grad student, longtermism hadn't been invented yet. Even now, long after I left the field, it is apparently a fringe area of research. But what I am now reading about it is frankly alarming.
This is an article from a recovered longtermist philosopher. I'll add some quotes and comments below.
Despite its sensationalist pulpy title and #ColdWar premise, Jack Arnold's adaptation of the #RichardMatheson novel is an existentialist treatise.
The Incredible Shrinking Man (1957) plays with the understanding of what it means to be acknowledged as a human, and one's place in the world. The story is told through the eyes of the titular Shrinking Man – Scott Carey – who, after being exposed to a strange fog, finds himself increasingly lost in this world.
#Consciousness question from someone who read #philosophy stuff on this years ago but nothing from #neuroscience. Is it generally understood to be a memory phenomenon? It seems logical that it must be (see below), but that's not the way the philosophy stuff I have read discussed it.
My argument is that the only thing we could be talking about if we're talking consciousness is things that have made their way into a specific memory subsystem (the ones that are accessible to our language systems), otherwise we wouldn't be able to talk about it. Similarly, anything that has made its way into that memory subsystem would also be something we were conscious of. In other words, consciousness is just the set of things that go into that subsystem.
So is consciousness just the study of some particular memory subsystem and the way it interacts with other systems like language? And if we don't understand how memory works, can we understand anything about consciousness?
I just finished an interleaved reading of two books, and wow! did they enhance one another.
The first was an approachable philosophical treatise about how science works given that scientists are human with all their faults. The answer: “evidence”. (Thanks to Jim DiCarlo for this rec + confirmation by @markdhumphries).
The second was a book describing the unfolding of ideas about evolution from Darwin’s tree (mutations + survival of the fittest) through more modern ideas about horizontal gene transfer between species - a perfect illustration of the ideas in the philosophical book but not included in it. (Thanks to @cyrilpedia for this rec).
Books can be complementary in all sorts of ways. Do you know of pairings that are enhanced when thought about together?
I’m just starting a book about the nature of time and ideas about time travel by @JamesGleick. Any thoughts on a good complement for it? (Maybe @JamesGleick even has suggestions?)
In Kandel's 2005 book, he lays out five tenets to "outline an intellectual framework designed to align current psychiatric thinking and training of future practitioners with modern biology". I regard this as a good snapshot of the "reductionism" that many quibble with, as it is practiced by the neuroscientists who adopt it. Would anyone disagree (either that this is reductionism or that this is a good snapshot of it)? Here are the principles.
Principle 1. All mental processes derive from operations of the brain.
Principle 2. Genes and their proteins are important determinants of the pattern of interconnections between neurons and how neurons function.
Principle 3. Altered genes do not, by themselves, explain all of the variance of a given major mental illness. Social or developmental factors also contribute very importantly. Just as combinations of genes contribute to behavior, including social behavior, so can behavior and social factors exert actions on the brain by feeding back upon it to modify the expression of genes and thus the function of nerve cells. Learning, including learning that results in dysfunctional behavior, produces alterations in gene expression. Thus all of "nurture" is ultimately expressed as "nature".
Principle 4. Alterations in gene expression induced by learning give rise to changes in patterns of neuronal connections. These changes ... are responsible for initiating and maintaining abnormalities of behavior that are induced by social contingencies.
Principle 5. Insofar as psychotherapy or counseling is effective and produces long-term changes in behavior, it presumably does so through learning, by producing changes in gene expression ...
Obviously this is a vast oversimplification, but sometimes I feel like there are two kinds of people who get into ethics as a field of research -
Those who are really alarmed by the state of the world and want to see what assumptions we made that got us here, how to turn back, how to do our best to move forward.
Those who are filled with righteous indignation and really want to know who to target their blame and (justified) anger at.
Both of these are good motivations, but I feel like I've also seen people be almost pathological about it (myself included).
The pathology of the former is to have a blindspot to all of the positive and meaningful aspects of common sense morality.
The pathology of the latter is being unreasonably critical of normal people just trying to do their best.
A thought about hypothesis testing as an approach to doing science. Not sure if it's new; I'd be interested if it's already been discussed. Basically, hypothesis testing is inefficient because you can get at most 1 bit of information per experiment.
In practice, much less on average. If the hypothesis is not rejected you get close to 0 bits, and if it is rejected it's not even 1 bit because there's a chance the experiment is wrong.
One way to think about this is error signals. In machine learning we do much better if we can have a gradient than just a correct/false signal. How do you design science to maximise the information content of the error signal?
In modelling I think you can partly do that by conducting detailed parameter sweeps and model comparisons. More generally, I think you want to maximise the gain in "understanding" the model behaviour, in some sense.
This is very different to using a model to fit existing data (0 bits per study) or make a prediction (at most 1 bit per model+experiment). I think it might be more compatible with thinking of modelling as conceptual play.
I feel like both experimentalists and modellers do this when given the freedom to do so, but when they impose a particular philosophy of hypothesis testing on each other (grant and publication review), this gets lost.
Incidentally this is also exactly the problem with our traditional publication system that only gives you 1 bit of information about a paper (that it was accepted), rather than giving a richer, open system of peer feedback.