thomasfuchs,
@thomasfuchs@hachyderm.io avatar

The weirdest thing about the AI hype is the claims that computers will become sentient and human-level intelligent.

There’s zero evidence that the human brain (the only thing that we know of that is capable of human-level sentience and intelligence) works like a computer or that it could even be simulated by one.

Some neuroscience points to quantum mechanical effects being at the core of how neurons work, which basically means that we don’t have a clue how the brain really works.

thomasfuchs,
@thomasfuchs@hachyderm.io avatar

The “some tech bros with a pile of graphics cards left over from crypto mining will build a brain in their garage” narrative is laughable.

thomasfuchs,
@thomasfuchs@hachyderm.io avatar

But this is exactly what the big tech companies are trying to sell—and they’re succeeding in befuddling people who don’t have the domain knowledge to see through this utter bullshit.

maralorn,
@maralorn@chaos.social avatar

@thomasfuchs

  1. There is a huge difference between sentience and intelligence. If something kills me, I don’t care whether it was sentient if it was smart enough.

  2. Saying that we don’t understand something just because it involves "quantum" makes no sense. Modern computers actually rely on quantum effects.

  3. I am not aware of a scientific argument that anything physical (like a brain) cannot be simulated by a computer.

That being said, it might still be super hard and take a long time, so I am also sceptical of the hype.

thomasfuchs,
@thomasfuchs@hachyderm.io avatar

@maralorn I'm not technobabbling this, google for "computational neuroscience" and be in awe of all the articles about quantum mechanics and brains.

thomasfuchs,
@thomasfuchs@hachyderm.io avatar

@maralorn If brains use quantum mechanics in their core operation (and research points to this being a possibility), it will be literally impossible to simulate their behavior with a classical computer (you will need a quantum computer).

maralorn,
@maralorn@chaos.social avatar

@thomasfuchs Wondering if this is a hype by itself.^^

But if there is really no adequate classical approximation for how our brain works and quantum effects are fundamental to its operation that would truly be interesting.

(Just hope that no one thinks that it solves their free will problem for them.)

That would still leave the question of whether some level of "human-like intelligence" can only be implemented by exactly simulating a human brain.

katp32,
@katp32@mastodon.social avatar

@thomasfuchs come on. you know that's not how this works. "quantum effects" is handwavy pseudoscience but also irrelevant as proved by my dear friend Alan Turing.

is an LLM capable of human intelligence? no. are humans intrinsically special and impossible to simulate without handwavy unscientific bullshit like "souls"? no. that's absurd.

a sufficiently large computer can simulate any physical process. humans are, in fact, based in physics, not magic.

katp32,
@katp32@mastodon.social avatar

@thomasfuchs and no, you cannot handwave away physics with "<something something quantum physics>". quantum physics is physics, not magic. even if human brains do work using "quantum physics", that doesn't mean it can't be simulated by a Turing-complete system because, again, it's physics, not magic.

quantum computers, too, can be emulated with classical computing. actual quantum computers are just faster, they aren't magic, they aren't more powerful.

https://xkcd.com/1240/
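A toy illustration of the point above: a few lines of ordinary Python can exactly track a small quantum state and apply gates to it. This is a minimal sketch (a hand-rolled 2-qubit statevector, not a real simulator library); the obstacle to scaling is only that the state vector doubles with every qubit, not that classical machines are forbidden from doing the math.

```python
import math

def apply_gate(state, gate, target, n_qubits):
    """Apply a 1-qubit gate (2x2 matrix) to `target` qubit of an n-qubit statevector."""
    new = [0j] * len(state)
    for i, amp in enumerate(state):
        bit = (i >> target) & 1              # current value of the target qubit
        base = i & ~(1 << target)            # index with target bit cleared
        for out_bit in (0, 1):
            new[base | (out_bit << target)] += gate[out_bit][bit] * amp
    return new

def apply_cnot(state, control, target):
    """Flip `target` qubit wherever `control` qubit is 1 (swap amplitude pairs)."""
    new = list(state)
    for i in range(len(state)):
        if (i >> control) & 1:
            j = i ^ (1 << target)
            if i < j:
                new[i], new[j] = state[j], state[i]
    return new

# Hadamard gate: puts a qubit into an equal superposition.
H = [[1 / math.sqrt(2),  1 / math.sqrt(2)],
     [1 / math.sqrt(2), -1 / math.sqrt(2)]]

state = [1 + 0j, 0j, 0j, 0j]            # start in |00>
state = apply_gate(state, H, 0, 2)      # Hadamard on qubit 0
state = apply_cnot(state, 0, 1)         # entangle: Bell state (|00> + |11>)/sqrt(2)

probs = [abs(a) ** 2 for a in state]
print(probs)  # ~[0.5, 0.0, 0.0, 0.5]: measuring gives |00> or |11>, each 50%
```

Entanglement included, nothing here exceeds classical computation; the cost is that an n-qubit state needs 2^n complex amplitudes, which is why this approach chokes around a few dozen qubits.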

katp32,
@katp32@mastodon.social avatar

@thomasfuchs and yes, we don't fully understand the human brain, but that doesn't mean physics doesn't apply. that's the same logic people use wrt perpetual motion machines and such nonsense; "oh, we don't fully understand it, that means anything's possible".

no. it does not mean anything is possible. nor is there any reason whatsoever to believe human brains are in any way special. they're bags of chemicals. really complex bags of chemicals, but still bags of chemicals, not magic.

prema,
@prema@hachyderm.io avatar

@katp32 @thomasfuchs I generally agree, but simulating complex quantum physics (many-body systems) is damn computationally expensive, and typically not done with machine learning. This is the dissonance.

thomasfuchs,
@thomasfuchs@hachyderm.io avatar

@prema @katp32 My point here is that there are promises being made by the industry that can't possibly hold true.

Currently no one understands how brains work--yet techbros are promising to simulate them "in a year or two".

There's no evidence that you can do this with software; because you can't prove something when you don't know what it is you're proving.

maegul,
@maegul@hachyderm.io avatar

@thomasfuchs

As with economics/finance and crypto, AI is driven by the childish hubris of the tech sector to think they're above all of the other fields of expertise.

Otherwise, what strikes me is the urge to rush into an obvious ethics problem. Once the machine is near human sentient/AGI, then ethics dictates you have to be humane to it, which is not what capitalism wants from its machines (see Human History™).

Thad,
@Thad@brontosin.space avatar

@thomasfuchs It's utterly bizarre to me that a person who is actually in possession of a human brain could misunderstand it so badly as to think Spicy Autocomplete is anything like it.

thomasfuchs,
@thomasfuchs@hachyderm.io avatar

@Thad I think this has to do with evolutionary traits of humans—most people are fine with the first plausible explanation or solution for a given problem; this saves energy (and food) because the brain uses a lot of energy when really thinking about something.

On average this is likely a successful long-term strategy for the species.

Thad,
@Thad@brontosin.space avatar

@thomasfuchs That's part of it but I've heard it from people who really should know better, people who at least have some technical understanding of how LLMs work.

I've had people tell me with a straight face that all our brains do is match patterns, and it kinda baffles me how somebody could fail so badly to understand their own thought processes. (Let alone, y'know, various autonomic processes that are entirely separate from conscious thought.)

thomasfuchs,
@thomasfuchs@hachyderm.io avatar

@Thad Anyone who tells you how our brains work is automatically disqualified from serious discussion, because literally no one knows including the world's most accomplished neuroscientists.

thomasfuchs,
@thomasfuchs@hachyderm.io avatar

@Thad It's a great example of hubris in computer science and tech-broism.

Thad,
@Thad@brontosin.space avatar

@thomasfuchs That and these discussions start to feel more like religious debates at a certain point. A lot of folks seem to be taking the inevitability of general AI on faith and working backwards to try to justify it.

thomasfuchs,
@thomasfuchs@hachyderm.io avatar

@Thad yup.

I'm open to discussing on actual science merits, but rn people are just trumpeting their opinions as facts.

TheJen,
@TheJen@beige.party avatar

@thomasfuchs @Thad We also tend to anthropomorphise EVERYTHING, so why would this be any different? Especially now when a chat bot can pass the Turing test for most people.
