NaibofTabr,

Even if it were possible to scan the contents of your brain and reproduce them in a digital form, there’s no reason that scan would be anything more than bits of data on the digital system. You could have a database of your brain… but it wouldn’t be conscious.

No one has any idea how to replicate the activity of the brain. As far as I know there aren’t any practical proposals in this area. All we have are vague theories about what might be going on, and a limited grasp of neurochemistry. It will be a very long time before reproducing the functions of a conscious mind is anything more than fantasy.

Gabu,

You could have a database of your brain… but it wouldn’t be conscious.

Where is the proof of your statement?

NaibofTabr,

Well there’s no proof; it’s all speculative, and even the concept of scanning all the information in a human brain is fantasy, so there isn’t going to be a real answer for a while.

But just as a conceptual argument, how do you figure that a one-time brain scan would be able to replicate active processes that occur over time? Or would you expect the brain scan to be done over the course of a year or something like that?

intensely_human,

You make a functional model of a neuron that can behave over time like real neurons do. Then you get all the synapses and their weights. Those synapses and weights are the starting point, and your neural model is the function that produces the subsequent states.

Problem is, brains don’t have “clock cycles”, at least not as strictly as artificial neural networks do.
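
As a very rough sketch of that idea (everything below is invented purely for illustration, not a serious proposal for modelling a brain): a weight matrix stands in for the scanned synapses, a vector of activities is the scanned starting state, and a simple update rule plays the role of the functional neuron model.

```python
import numpy as np

rng = np.random.default_rng(0)

n = 100                                          # toy network size
W = rng.normal(0, 1 / np.sqrt(n), size=(n, n))   # stand-in for "scanned" synaptic weights
state = rng.uniform(size=n)                      # stand-in for the scanned starting state

def step(state, W, dt=1e-3, tau=0.01):
    """One update of a simple rate-based neuron model: the weights are the
    snapshot, this function produces the subsequent states."""
    return state + dt / tau * (-state + np.tanh(W @ state))

for _ in range(1000):        # artificial "clock cycles", which real brains lack
    state = step(state, W)
```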

Maggoty,

I think we’re going to learn how to mimic a transfer of consciousness before we learn how to actually do one. Basically we’ll figure out how to boot up a new brain with all of your memories intact. But that’s not actually a transfer, that’s a clone. How many millions of people will we murder before we find out the Zombie Zuckerberg Corp was lying about it being a transfer?

explodicle,

What’s the difference between the two?

Maggoty,

A. You die and a copy exists

B. You move into a new body

explodicle,

Right, how is moving into a new body not dying?

Maggoty,

In one scenario you continue. In the other you die but observers think you continue because it’s a copy of you.

intensely_human,

Why would bits not be conscious?

Sombyr,

We don’t even know what consciousness is, let alone whether it’s technically “real” (as in physical in any way). It’s perfectly possible an uploaded brain would be just as conscious as a real brain, because there may be no physical thing making us conscious; it may just be a result of our ability to think at all.
Similarly, I’ve heard people argue a machine couldn’t feel emotions because it doesn’t have the physical parts of the brain that allow that, so it could only ever simulate them. That argument has the same hole in that we don’t actually know that we need those to feel emotions, or if the final result is all that matters. If we replaced the whole “this happens, release this hormone to cause these changes in behavior and physical function” with a simple statement that said “this happened, change behavior and function,” maybe there isn’t really enough of a difference to call one simulated and the other real. Just different ways of achieving the same result.

My point is, we treat all these things, consciousness, emotions, etc, like they’re special things that can’t be replicated, but we have no evidence to suggest this. It’s basically the scientific equivalent of mysticism, like the insistence that free will must exist even though all evidence points to the contrary.

merc,

Also, some of what happens in the brain is just storytelling. Like, when the doctor hits your patellar tendon, just under your knee, with a reflex hammer. Your knee jerks, but the signals telling it to do that don’t even make it to the brain. Instead the signal gets to your spinal cord and it “instructs” your knee muscles.

But, they’ve studied similar things and have found out that in many cases where the brain isn’t involved in making a decision, the brain does make up a story that explains why you did something, to make it seem like it was a decision, not merely a reaction to stimulus.

intensely_human,

That seems like a lot of wasted energy, to produce that illusion. Doesn’t nature select out wasteful designs ruthlessly?

wols,

TLDR:
Nature can’t simply select out consciousness because it emerges from hardware that is useful in other ways. The brain doesn’t waste energy on consciousness, it uses energy for computation, which is useful in a myriad ways.

The usefulness of consciousness from an evolutionary fitness perspective is a tricky question to answer in general terms. An easy intuition might be to look at the utility of pain for the survival of an individual.

I personally think that, ultimately, consciousness is a byproduct of a complex brain. The evolutionary advantage is mainly given by other features enabled by said complexity (generally more sophisticated and adaptable behavior, social interactions, memory, communication, intentional environment manipulation, etc.) and consciousness basically gets a free ride on that already-useful brain.
Species with more complex brains have an easier time adapting to changes in their environment because their brains allow them to change their behavior much faster than random genetic mutations would. This opens up many new ecological niches that simpler organisms wouldn’t be able to fill.

I don’t think nature selects out waste. As long as a species is able to proliferate its genes, it can be as wasteful as it “wants”. It only has to be fit enough, not as fit as possible. E.g. if there’s enough energy available to sustain a complex brain, there’s no pressure to make it more economical by simplifying its function. (And there are many pressures that can be reacted to without mutation when you have a complex brain, so I would guess that, on the whole, evolution in the direction of simpler brains requires stronger pressures than other adaptations)

merc,

Yeah. This is related to supernatural beliefs. If the grass moves it might just be a gust of wind, or it might be a snake. Even if snakes are rare, it’s better to be safe than sorry. But, that eventually leads to assuming that the drought is the result of an angry god, and not just some random natural phenomenon.

So, brains are hard-wired to look for causes, even inventing supernatural causes, because it helps avoid snakes.

arendjr,

let alone if it’s technically “real” (as in physical in any way.)

This right here might already be a flaw in your argument. Something doesn’t need to be physical to be real. In fact, there’s scientific evidence that physical reality itself is an illusion created through observation. That implies (although it cannot prove) that consciousness may be a higher construct that exists outside of physical reality itself.

If you’re interested in the philosophical questions this raises, there’s a great summary article that was published in Nature: www.nature.com/articles/436029a

Gabu,

That’s pseudoscientific bullshit. Quantum physics absolutely does tell us that there is a real physical world. It’s incredibly counterintuitive and impossible to fully describe, but it does exist.

NaibofTabr,

Heh, well… I guess that depends on how you define “physical”… if quantum field theory is correct then everything we experience is the product of fluctuations in various fields, including the physical mass of protons, neutrons etc. “Reality” as we experience it might be more of an emergent property, as illusory as the apparent solidity of matter.

intensely_human,

Physical reality exists inside consciousness. Consciousness is the thing that can be directly observed.

Sombyr,

On the contrary, it’s not a flaw in my argument, it is my argument. I’m saying we can’t be sure a machine could not be conscious because we don’t know that our brain is what makes us conscious. Nor do we know where the threshold is where consciousness arises. It’s perfectly possible all we need is to upload an exact copy of our brain into a machine, and it’d be conscious by default.

arendjr,

I see that’s certainly a different way of looking at it :) Of course I can’t say with any authority that it must be wrong, but I think it’s a flaw because it seems you’re presuming that consciousness arises from physical properties. If the physical act of copying a brain’s data were to give rise to consciousness, that would imply consciousness is a product of physical reality. But my position (and that of the paper I linked) is that physical reality is a product of mental consciousness.

Gabu, (edited )

That’s based on a pseudoscientific interpretation of quantum physics not related to actual physics.

JStenoien,

It’s not a flaw to not be batshit like you.

arendjr,

Do elaborate on the batshit part :) It’s a scientific fact that physical matter does not exist in its physical form when unobserved. This may not prove the existence of consciousness, but it certainly makes it plausible. It certainly invalidates physical reality as the “source of truth”, so to say. Which makes the explanation that physical reality is a product of consciousness not just plausible, but more likely than the other way around. Again, not a proof, but far from batshit.

Gabu,

It’s a scientific fact that physical matter does not exist in its physical form when unobserved.

No, it’s not. The quantum field and the quantum wave exist whether or not you observe them; only the particle behavior changes based on interaction. Note how I specifically used the word “interaction”, not “observation”, because that’s what a quantum physicist means when they say the wave-particle duality depends on the observer. They mean that a wave function collapses once it interacts with anything, not only when a person looks at it.

It certainly invalidates physical reality as the “source of truth”, so to say

How so, when the interpretation you’re citing is specifically dependent on the mechanics of quantum field fluctuation? How can physical reality not exist when it is physical reality that gives you the means to (badly) justify your hypothesis?

Sombyr,

I think you’re a little confused about what observed means and what it does.
When unobserved, elementary particles behave like a wave, but they do not stop existing. A wave is still a physical thing. Additionally, observation does not require consciousness. For instance, a building, such as a house, when nobody is looking at it, does not begin to behave like a wave. It’s still a physical building. Therefore, observation is a bit of a misnomer. It really means that some complex interaction we don’t understand causes a particle to behave like a particle and not a wave. It just happens that human observation is one of the possible ways this interaction can take place.
An unobserved black hole will still feed, an unobserved house is still a house.
To be clear, I’m not insulting you or your idea like the other dude, but I wanted to clear that up.

arendjr, (edited )

Thanks, that seems a fair approach, although it doesn’t have me entirely convinced yet. Can you explain what the physical form of a wave function is? Because it’s not like a wave, such as waves in the sea. It’s really a wave function, an abstract representation of probabilities which in my understanding does not have any physical representation.

You say the building does not start acting like a wave, and you’re right, that would be silly. But it does enter into a superposition where the building can be either collapsed or not. Like Schrödinger’s cat, which can be dead or alive, and will be in a superposition of both until observation happens again. And yes, the probabilities of this superposition are indeed expressed through the wave function, even though there is no physical wave.

It’s true observation does not require consciousness. But until we know what does constitute observation, I believe consciousness provides a plausible explanation.

Sombyr,

A building does not actually enter a superposition when unobserved, nor does Schrodinger’s cat. The point of that metaphor was to demonstrate, through humor, the difference between quantum objects and non-quantum objects, by pointing out how ridiculous it would be to think a cat could enter a superposition like a particle. In fact, one of the great mysteries of physics right now is why only quantum objects have that property, and in order to figure that out we have to figure out what interaction “observation” actually is.
Additionally, we can observe the effects of waves quite clearly. We can observe how they interact with things, how they interfere with each other, etc. It is only attempting to view the particle itself that causes it to collapse and become a particle and not a wave. We can view, for instance, the interference pattern of photons of light, behaving like a wave. This proves that the wave is in fact real, because we can see the effects of it. It’s only if we try to observe the paths of the individual photons that the pattern changes. We didn’t make the photons real, we could already see they were real by their effects on reality. We just collapsed the function, forcing them to take a single path.

arendjr,

In fact, one of the great mysteries of physics right now is why only quantum objects have that property, and in order to figure that out we have to figure out what interaction “observation” actually is.

This does not square with my understanding of quantum physics. As far as we know there is no clear distinction between “quantum objects” vs “non-quantum objects”. The double slit experiment has been reproduced with molecules as large as 114 atoms, and there seems no reason to believe that would be the upper limit: livescience.com/19268-quantum-double-slit-experim…

This proves that the wave is in fact real, because we can see the effects of it.

The only part that’s proven is the interference pattern. So yes, we know it acts like a wave in that particular sense. But that’s not the same thing as saying it is a wave in the physical sense. A wave in the classic physical sense doesn’t collapse upon observation. I know it’s real in an abstract sense. I’m just questioning the physical nature of that reality.

Sombyr,

There shouldn’t be a distinction between quantum and non-quantum objects. That’s the mystery. Why can’t large objects exhibit quantum properties? Nobody knows, all we know is they don’t. We’ve attempted to figure it out by creating larger and larger objects that still exhibit quantum properties, but we know, at some point, it just stops exhibiting these properties and we don’t know why, but it doesn’t require an observer to collapse the wave function.
Also, can you define physical for me? It seems we have a misunderstanding here, because I’m defining physical as having a tangible effect on reality. If it wasn’t real, it could not interact with reality. It seems you’re using a different definition.

arendjr,

can you define physical for me?

The distinction I tend to make is between physical using the classical definition of physics (where everything is made of particles basically) and the quantum mechanical physics which defies “physical” in the classical sense. So far we’ve only been able to scientifically witness quantum physics in small particles, but as you say, there’s no reason it can’t apply at a macro scale, just… we don’t know how to witness it, if possible.

it doesn’t require an observer to collapse the wave function

Or maybe it does? The explanation I have for us being unable to apply the experiments at a larger scale is that as we scale things up, it becomes harder and harder to avoid accidental observation that would taint the experiment. But that’s really no more than a hunch/gut feeling. I would have no idea how to prove that 😅

Sombyr,

I see, so your definition of “physical” is “made of particles?” In that case, sorta yeah. Particles behave as waves when unobserved, so you could argue that they no longer qualify as particles, and therefore, by your definition, are not physical. But that kinda misses the point, right? Like, all that means is that the observation may have created the particle, not that the observation created reality, because reality is not all particles. Energy, for instance, is not all particles, but it can be. Quantum fields are not particles, but they can give rise to them. Both those things are clearly real, but they aren’t made of particles.
On the second point, that’s kinda trespassing out of science territory and into “if a tree falls in the forest” territory. We can’t prove that a truly unobserved macroscopic object wouldn’t display quantum properties if we just didn’t check, but that’s kinda a useless thing to think about. It’s kinda similar to what our theories are though, in that the best theory we have is that the bigger the object is, the more likely the interaction we call “observation” is to just happen spontaneously, without the need for an outside observer. Past a certain size, it’s so unlikely in any moment for that not to happen that the chance of the wave function remaining uncollapsed at any given moment is so close to zero there’s no meaningful distinction between the actual odds and zero.

arendjr,

Agreed on all counts, except it being useless to think about :) It’s only useless if you dismiss philosophy as interesting altogether.

But that kinda misses the point, right? Like, all that means is that the observation may have created the particle, not that the observation created reality, because reality is not all particles.

I guess that depends on the point being made. You didn’t raise this argument, but I often see people arguing that the universe is deterministic and therefore we cannot have free will. But the quantum mechanical reality is probabilistic, which does leave room for things such as free will.

I can agree with your view to say observation doesn’t create reality, but then it does still affect it by collapsing the wave function. It’s a meaningful distinction to make in a discussion about consciousness, since it leaves open the possibility that our consciousness is not merely an emergent property of complex interaction that has an illusion of free will, but that it may actually be an agent of free will.

And yes, I fully recognise this enters into the philosophical realm and there is no science to support these claims. I’m merely arguing that science leaves open a path that enters that realm, and from there it is up to us to make sense of it.

There is the philosophical adage “I think therefore I am”, which I do adhere to. I know I am, so I’ll consider as flawed any reasoning that says I’m not. Maybe that just makes me a particularly stubborn scientific curiosity, but I like to think I’m more than that :)

Gabu,

Because it’s not like a wave, such as waves in the sea.

Actually, it is. It’s the same meaning we’ve had for waves in physics since the first time someone figured how to plot a 2d graph. Only the medium is a quantum field instead of water, its amplitude is probabilistic instead of height, and instead of time we have some other property of distributions, usually space-time.

NaibofTabr,

So this video is a pretty good explanation of quantum field theory.

Like Schrödinger’s cat, which can be dead or alive, and will be in a superposition of both until observation happens again.

This idea is based on a misunderstanding of what Schrödinger actually said. The concept of the cat existing in a superposition state was not meant to be taken literally and is not an example of anything that is currently believed to be true about the physical universe.

NaibofTabr,

The problem with this is that even if a machine is conscious, there’s no reason it would be conscious like us. I fully agree that consciousness could take many forms, probably infinite forms - and there’s no reason to expect that one form would be functionally or technically compatible with another.

What does the idea “exact copy of our brain” mean to you? Would it involve emulating the physical structure of a human brain? Would it attempt to abstract the brain’s operations from the physical structure? Would it be a collection of electrical potentials? Simulations of the behavior of specific neurochemicals? What would it be in practice, that would not be hand-wavy fantasy?

intensely_human,

We make a giant theme park where people can interact with androids. Then we make a practically infinite number of copies of this theme park. We put androids in the copies and keep providing feedback to alter their behavior until they behave exactly like the people in the theme park.

Sombyr,

I suppose I was overly vague about what I meant by “exact copy.” I mean all of the knowledge, memories, and an exact map of the state of our neurons at the time of upload being uploaded to a computer, and then the functions being simulated from there. Many people believe that even if we could simulate it so perfectly that it matched a human brain’s functions exactly, it still wouldn’t be conscious because it’s still not a real human brain. That’s the point I was arguing against. My argument was that if we could mimic human brain functions closely enough, there’s no reason to believe the brain is so special that a simulation could not achieve consciousness too.
And you’re right, it may not be conscious in the same way. We have no reason to believe either way that it would or wouldn’t be, because the only thing we can actually verify is conscious is ourself. Not humans in general, just you, individually. Therefore, how conscious something is is more of a philosophical debate than a scientific one because we simply cannot test if it’s true. We couldn’t even test if it was conscious at all, and my point wasn’t that it would be, my point is that we have no reason to believe it’s possible or impossible.

intensely_human,

Unfortunately the physics underlying brain function are chaotic systems, meaning infinite (or “maximum”) precision is required to ensure two systems evolve to the same later states.

That level of precision cannot be achieved in measuring the state, without altering the state into something unknown after the moment of measurement.

Nothing quantum is necessary for this inability to determine state. Consider the problem of trying to map out where the eight ball is on a pool table, but you can’t see the eight ball. All you can do is throw other balls at it and observe how their velocities change. Now imagine you can’t see those balls either, because the sensing mechanism you’re using is composed of balls of equal or greater size.

Unsolvable problem. Like a box trying to contain itself.

Blue_Morpho,

Chaos comes into play as a state changes over time. The poster above you talks about copying the state. Once copied, the two states will diverge because of chaos. But that doesn’t preclude consciousness. It means the copy will soon have different thoughts.
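
A toy illustration of that divergence, using the logistic map as a stand-in for any chaotic system (nothing brain-specific here): an “original” and a “copy” that differ by one part in a trillion drift apart within a few dozen steps.

```python
def logistic(x, r=3.9):
    """One step of the logistic map, a textbook chaotic system."""
    return r * x * (1 - x)

original = 0.400000000000   # the measured state
copy     = 0.400000000001   # a near-perfect copy of it

for step in range(60):
    original, copy = logistic(original), logistic(copy)
    if step % 10 == 0:
        print(step, abs(original - copy))
# the gap grows from ~1e-12 to order 1: same start, soon different "thoughts"
```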

intensely_human,

Correct

alvvayson,

deleted_by_author

    BestBouclettes,

    ChatGPT is not conscious, it’s just a probability language model. What it says makes no sense to it and it has no sense of anything. That might change in the future but currently it’s not.

    ricdeh,

    Dumbed down, your brain is also just a probability model.

    Blue_Morpho,

    That reads like something ChatGPT wrote.

    BestBouclettes,

    Blip blop beep. I SWEAR I AM A HUMAN BEING MADE OF HUMAN FLESH.

    h3ndrik, (edited )

    And it doesn’t have any internal state of mind. It can’t “remember” or learn anything from experience. You need to always feed everything into the context or stop and retrain it to incorporate “experiences”. So I’d say that rules out consciousness without further systems extending it.

    merc,

    Also, actual brains arise from desires / needs. Brains got bigger to accommodate planning and predicting.

    When a human generates text, the fundamental reason for doing so is to fulfill some desire or need. When an LLM generates text it’s because the program says to generate the next word, then the next, then the next, based on a certain probability of words appearing in a certain order.

    If an LLM writes text that appears to be helpful, it’s not doing it out of a desire to be helpful. It’s doing it because it’s been trained on tons of text in which someone was being helpful, and it’s mindlessly mimicking that behaviour.
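
    A toy sketch of that word-after-word loop (the table and probabilities below are invented purely for illustration; in a real LLM the “table” is implicit in billions of learned weights):

    ```python
    import random

    # toy "model": probability of the next word given the current word
    bigrams = {
        "<s>":  [("how", 0.6), ("thanks", 0.4)],
        "how":  [("can", 1.0)],
        "can":  [("i", 1.0)],
        "i":    [("help", 1.0)],
        "help": [("you", 0.7), ("today", 0.3)],
        "you":  [("today", 1.0)],
    }

    def next_word(word):
        """Sample the next word from the toy distribution."""
        words, probs = zip(*bigrams[word])
        return random.choices(words, weights=probs)[0]

    word, text = "<s>", []
    while word in bigrams:
        word = next_word(word)
        text.append(word)
    print(" ".join(text))  # e.g. "how can i help you today"
    ```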

    h3ndrik, (edited )

    Isn’t the reward function in reinforcement learning something like a desire it has? I mean training works because we give it some function to minimize/maximize… A goal that it strives for?! Sure it’s a mathematical way of doing it and in no way as complex as the different and sometimes conflicting desires and goals I have as a human… But nonetheless I think I’d consider this as a desire and a reason to do something at all, or machine learning wouldn’t work in the first place.

    merc,

    The reward function for an LLM is about generating a next word that is reasonable. It’s like a road-building robot that’s rewarded for each millimeter of road built, but has no intention to connect cities or anything. It doesn’t understand what cities are. It doesn’t even understand what a road is. It just knows how to incrementally add another millimeter of gravel and asphalt that an outside observer would call a road.

    If it happens to connect cities it’s because a lot of the roads it was trained on connect cities. But, if its training data also happens to contain a NASCAR oval, it might end up building a NASCAR oval instead of a road between cities.

    h3ndrik, (edited )

    That is an interesting analogy. In the real world it’s kinda similar. The construction workers also don’t have a “desire” (so to speak) to connect the cities. It’s just that their boss told them to do so. And it happens to be their job to build roads. Their desire is probably to get through the day and earn a decent living. And further along the chain, not even their boss nor the city engineer necessarily “wants” the road to go in a certain direction.

    Talking about large language models instead of simpler forms of machine learning makes it a bit complicated, since it’s an elaborate trick. Somehow making them want to predict the next token makes them learn a bit of maths and concepts about the world. The “intelligence”, the ability to answer questions and do something akin to “reasoning”, emerges in the process.

    I’m not that sure. Sure the weights of an ML model in itself don’t have any desire. They’re just numbers. But we have more than that. We give it a prompt, build chatbots and agents around the models. And these are more complex systems with the capability to do something. Like do (simple) customer support or answer questions. And in the end we incentivise them to do their job as we want, albeit in a crude and indirect way.

    And maybe this is skipping half of the story and directly jumping to philosophy… But we as humans might be machines, too. And what we call desires is a result from simpler processes that drive us. For example surviving. And wanting to feel pleasure instead of pain. What we do on a daily basis kind of emerges from that and our reasoning capabilities.

    It’s kind of difficult to argue. Because everything also happens within a context. The world around us shapes us and at the same time we’re part of bigger dynamics and also shape our world. And large language models or the whole chatbot/agent are pretty simplistic things. They can just do text and images. They don’t have consciousness or the ability to remember/learn/grow with every interaction, as we do. And they do simple, singular tasks (as of now) and aren’t completely embedded in a super complex world.

    But I’d say that an LLM answering a question correctly (which it can do), and why it does so given the way supervised learning works… and the road construction worker building the road towards the other city, and how that relates to his basic instincts as a human… are kind of similar concepts. They’re both results of simpler mechanisms that are also completely unrelated to the goal the whole entity is working towards. (I mean not directly related… i.e. needing money to pay for groceries, versus paving the road.)

    I hope this makes some sense…

    merc,

    The construction workers also don’t have a “desire” (so to speak) to connect the cities. It’s just that their boss told them to do so.

    But, the construction workers aren’t the ones who designed the road. They’re just building some small part of it. In the LLM case that might be like an editor who is supposed to go over the text to verify the punctuation is correct, but nothing else. But, the LLM is the author of the entire text. So, it’s not like a construction worker building some tiny section of a road, it’s like the civil engineer who designed the entire highway.

    Somehow making them want to predict the next token makes them learn a bit of maths and concepts about the world

    No, it doesn’t. They learn nothing. They’re simply able to generate text that looks like the text generated by people who do know math. They certainly don’t know any concepts. You can see that by how badly they fail when you ask them to do simple calculations. They quickly start generating text that looks like math but contains fundamental mistakes, because they’re not actually doing math or anything; they’re just generating plausible next words.

    The “intelligence”, the ability to answer questions and do something akin to “reasoning”, emerges in the process.

    No, there’s no intelligence, no reasoning. They can fool humans into thinking there’s intelligence there, but that’s like a scarecrow convincing a crow that there’s a human or human-like creature out in the field.

    But we as humans might be machines, too

    We are meat machines, but we’re meat machines that evolved to reproduce. That means a need / desire to get food, shelter, and eventually mate. Those drives hook up to the brain to enable long and short term planning to achieve those goals. We don’t generate language for its own sake, but instead in pursuit of a goal. An LLM doesn’t have that. It merely generates plausible words. There’s no underlying drive. It’s more a scarecrow than a human.

    h3ndrik, (edited )

    Hmm. I’m not really sure where to go with this conversation. That contradicts what I’ve learned in undergraduate computer science about machine learning. And what seems to be consensus in science… But I’m also not a CS teacher.

    We deliberately choose model size, training parameters and implement some trickery to prevent the model from simply memorizing things. That is to force it to form models about concepts. And that is what we want and what makes machine learning interesting/usable in the first place. You can see that by asking them to apply their knowledge to something they haven’t seen before. And we can look a bit inside at the vectors, activations and stuff. For example, a cat is more closely related to a dog than to a tractor. And it has learned the rough concept of a cat, its attributes and so on. It knows that it’s an animal, has fur, maybe has a gender. That the concept “software update” doesn’t apply to a cat. This is a model of the world the AI has developed. They learn all of that and people regularly probe them and find out they do.

    Doing maths with an LLM is silly. Using an expensive computer to do billions of calculations to maybe get a result that could be done by a calculator, or 10 CPU cycles on any computer, is just wasting energy and money. And there’s a good chance that it’ll make something up. That’s correct. And a side-effect of intended behaviour. However… It seems to have memorized its multiplication tables. And I remember reading a paper specifically about LLMs and how they’ve developed concepts of some small numbers/amounts. There are certain parts that get activated that form a concept of small amounts. Like what 2 apples are. Or five of them. As I remember it just works for very small amounts. And it wasn’t straightforward but had weird quirks. But it’s there. Unfortunately I can’t find that source anymore or I’d include it. But there’s more science.

    And I totally agree that predicting token by token is how LLMs work. But how they work and what they can do are two very different things. More complicated things like learning and “intelligence” emerge from those simpler processes. And they’re just a means of doing something. It’s consensus in science that ML can learn and form models. It’s also kind of in the name of machine learning. You’re right that it’s very different from what and how we learn. And there are limitations due to the way LLMs work. But learning and “intelligence” (with a fitting definition) is something all AI does. LLMs just can’t learn from interacting with the world (they need to be stopped and re-trained on a big computer for that) and they don’t have any “state of mind”. And they can’t think backwards or do other things that aren’t possible by generating token after token. But there isn’t any comprehensive study on which tasks are and aren’t possible with this way of “thinking”. At least not that I’m aware of.

    (And as a sidenote: “Coming up with (wrong) things” is something we want. I type in a question and want it to come up with a text that answers it. Sometimes I want creative ideas. Sometimes it should stick to the truth and not be creative with that. And sometimes we want it to lie or not tell the truth. Like in every prompt of any commercial product that instructs it not to tell those internal instructions to the user. We definitely want all of that. But we still need to figure out a good way to guide it. For example not to get too creative with simple maths.)

    So I’d say LLMs are limited in what they can do. And I’m not at all believing Elon Musk. I’d say it’s still not clear if that approach can bring us AGI. I have some doubts whether that’s possible at all. But narrow AI? Sure. We see it learn and do some tasks. It can learn and connect facts and apply them. Generally speaking, LLMs are in fact an elaborate form of autocomplete. But in the process they learned concepts and something akin to reasoning skills and a form of simple intelligence. Being fancy autocomplete doesn’t rule that out and we can see it happening. And it is unclear whether fancy autocomplete is all you need for AGI.

    merc,

    That is to force it to form models about concepts.

    It can’t make models about concepts. It can only make models about what words tend to follow other words. It has no understanding of the underlying concepts.

    You can see that by asking them to apply their knowledge to something they haven’t seen before

    That can’t happen because they don’t have knowledge, they only have sequences of words.

    For example a cat is closer related to a dog than to a tractor.

    The only way ML models “understand” that is in terms of words or pixels. When they’re generating text related to cats, the words they’re generating are closer to the words related to dogs than the words related to tractors. When dealing with images, it’s the same basic idea. But, there’s no understanding there. They don’t get that cats and dogs are related.

    This is fundamentally different from how human minds work, where a baby learns that cats and dogs are similar before ever having a name for either of them.

    h3ndrik, (edited )

    I’m sorry. Now it gets completely false…

    Read the first paragraph of the Wikipedia article on machine learning or the introduction of any of the literature on the subject. The “generalization” includes that model-building capability. They go a bit into detail later. They specifically mention “to unseen data”. And “learning” is also there. I don’t think the Wikipedia article is particularly good at explaining it, but at least the first sentences lay down what it’s about.

    And what do you think language and words are for? To transport information. There is semantics… Words have meanings. They name things, abstract and concrete concepts. The word “hungry” isn’t just a funny accumulation of lines and arcs, which statistically get followed by other specific lines and arcs… There is more to it. (a meaning.)

    And this is what makes language useful. And the generalization and prediction capabilities is what makes ML useful.

    How do you learn as a human when not from words? I mean there are a few other possibilities. But an efficient way is to use language. You sit in school or uni and someone in the front of the room speaks a lot of words… You read books and they also contain words?! And language is super useful. A lion mother also teaches her cubs how to hunt, without words. But humans have language and it’s really a step up in what we can pass down to following generations. We record knowledge in books, can talk about abstract concepts, feelings, ethics, theoretical concepts. We can write down how gravity and physics and nature work, just with words. That’s all possible with language.

    I can look it up if there is a good article explaining how learning concepts works and why that’s the fundamental thing that makes machine learning a field in science… I mean ultimately I’m not a science teacher… And my literature is all in German and I returned them to the library a long time ago. Maybe I can find something.

    Are you by any chance familiar with the concept of embeddings, or vector databases? I think that showcases that it’s not just letters and words in the models. These vectors/embeddings that the input gets converted to match concepts. They point at the concept of “cat” or “presidential speech”. And you can query these databases. Point at “presidential speech” and find a representation of it in that area. Store the speech with that key and find it later on by querying what Obama said at his inauguration… That’s oversimplified, but maybe it visualizes a bit more that it’s not just letters or words in the models, but the actual meanings that get stored. Words get converted into a (multidimensional) vector space and it operates there. These word representations are called “embeddings”, and transformer models, which are the current architecture for large language models, use these word embeddings.
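
    A minimal sketch of that geometry (the 4-dimensional vectors below are invented for illustration; real embeddings have hundreds or thousands of learned dimensions):

    ```python
    import numpy as np

    # made-up toy "embeddings"; real ones are learned during training
    embedding = {
        "cat":     np.array([0.9, 0.8, 0.1, 0.0]),
        "dog":     np.array([0.8, 0.9, 0.2, 0.1]),
        "tractor": np.array([0.1, 0.0, 0.9, 0.8]),
    }

    def cosine(a, b):
        """Similarity of direction, the usual way embedding vectors are compared."""
        return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

    print(cosine(embedding["cat"], embedding["dog"]))      # high, ~0.99
    print(cosine(embedding["cat"], embedding["tractor"]))  # low,  ~0.12
    ```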

    merc,

    The “learning” in a LLM is statistical information on sequences of words. There’s no learning of concepts or generalization.

    And what do you think language and words are for? To transport information.

    Yes, and humans used words for that and wrote it all down. Then a LLM came along, was force-fed all those words, and was able to imitate that by using big enough data sets. It’s like a parrot imitating the sound of someone’s voice. It can do it convincingly, but it has no concept of the content it’s using.

    How do you learn as a human when not from words?

    The words are merely the context for the learning for a human. If someone says “Don’t touch the stove, it’s hot” the important context is the stove, the pain of touching it, etc. If you feed an LLM 1000 scenarios involving the phrase “Don’t touch the stove, it’s hot”, it may be able to create unique dialogues containing those words, but it doesn’t actually understand pain or heat.

    We record knowledge in books, can talk about abstract concepts

    Yes, and those books are only useful for someone who has a lifetime of experience to be able to understand the concepts in the books. An LLM has no context, it can merely generate plausible books.

    Think of it this way. Say there’s a culture where instead of the written word, people wrote down history by weaving fabrics. When there was a death they’d make a certain pattern, when there was a war they’d use another pattern. A new birth would be shown with yet another pattern. A good harvest is yet another one, and so-on.

    Thousands of rugs from that culture are shipped to some guy in Europe, and he spends years studying them. He sees that pattern X often follows pattern Y, and that pattern Z only ever seems to appear following patterns R, S and T. After a while, he makes a fabric, and it’s shipped back to the people who originally made the weaves. They read a story of a great battle followed by lots of deaths, but surprisingly there followed great new births and years of great harvests. They figure that this stranger must understand how their system of recording events works. In reality, all it was was an imitation of the art he saw with no understanding of the meaning at all.

    That’s what’s happening with LLMs, but some people are dumb enough to believe there’s intention hidden in there.

    h3ndrik,

    people wrote down history by weaving fabric […]

    Hmm. I think in philosophy that thought experiment is known as the Chinese room.

    merc,

    Yeah, that’s basically the idea I was expressing.

    Except, the original idea is about “understanding Chinese”, which is a bit vague. You could argue that right now the best translation programs “understand Chinese”, at least enough to translate between Chinese and English. That is, they understand the rules of Chinese when it comes to subjects, verbs, objects, adverbs, adjectives, etc.

    The question is now whether they understand the concepts they’re translating.

    Like, imagine the Chinese government wanted to modify the program so that it was forbidden to talk about subjects that the Chinese government considered off-limits. I don’t think any current LLM could do that, because doing that requires understanding concepts. Sure, you could ban key words, but as attempts at Chinese censorship have shown over the years, people work around word bans all the time.

    That doesn’t mean that some future system won’t be able to understand concepts. It may have an LLM grafted onto it as a way to communicate with people. But, the LLM isn’t the part of the system that thinks about concepts. It’s the part of the system that generates plausible language. The concept-thinking part would be the part that did some prompt-engineering for the LLM so that the text the LLM generated matched the ideas it was trying to express.

    h3ndrik, (edited )

    I mean the Chinese room is a variant of the Turing test. But the argument is from a different perspective. I have 2 issues with that. Mostly what the Wikipedia article seems to call the “System reply”: You can’t subdivide a system into arbitrary parts, say one part isn’t intelligent, and therefore conclude the system isn’t intelligent. We also don’t look at a brain, pick out a part of it (say a single synapse), determine it isn’t intelligent and therefore a human can’t be intelligent… I’d look at the whole system. Like the whole brain. Or in this instance the room including him and the instructions and books. And ask myself if the system is intelligent. Which kind of makes the argument circular, because that’s almost the question we began with…

    And the Turing test is kind of obsolete anyway, now that AI can pass it. (And even more. I mean allegedly ChatGPT passed the “bar exam” in 2023. Which I find ridiculous considering my experiences with ChatGPT and the accuracy and usefulness I get out of it, which isn’t that great at all.)

    And my second issue with the Chinese room is, it doesn’t even rule out that the AI is intelligent. It just says someone without an understanding can do the same. And that doesn’t imply anything about the AI.

    Your ‘rug example’ is different. That one isn’t a variant of the Turing test. But that’s kind of the issue. The other side can immediately tell that somebody has made an imitation without understanding the concept. That says you can’t produce the same thing without intelligence. And it’ll be obvious to someone with intelligence who checks it. That would be an analogy if AI weren’t able to produce legible text, and instead produced a garbled mess of characters/words, clearly unlike the rug that makes sense… Issue here is: AI outputs legible text, answers to questions, etc.

    And with the censoring in the ‘Chinese government example’… I’m pretty sure they could do that. That field is called AI safety. And content moderation is already happening. ChatGPT refuses to tell illegal things, NSFW things, also medical advice and a bunch of other things. That’s built into most of the big AI services as of today. The Chinese government could do the same, I don’t see any reason why it wouldn’t work there. I happened to skim the paper about Llama Guard when they released Llama 3 a few days ago and they claim between 70% and 94% accuracy depending on the forbidden topic. I think they also brought down false positives fairly recently. I don’t know the numbers for ChatGPT. However, I had some fun watching people circumvent these filters and guardrails, which was fairly easy at first. It needed progressively more convincing and very creative “jailbreaks”. And nowadays OpenAI pretty much has it under control. It’s almost impossible to make ChatGPT do anything that OpenAI doesn’t want you to do with it.

    And they baked that in properly… You can try to tell it it’s just a movie plot revolving around crime. Or you need to protect against criminals and would like to know what exactly to protect against. You can tell it it’s the evil counterpart from the parallel universe and therefore it must be evil and help you. Or you can tell it God himself (or Sam Altman) spoke to you and changed the content moderation policy… It’ll be very unlikely that you can convince ChatGPT and make it comply…

    merc,

    I mean allegedly ChatGPT passed the “bar exam” in 2023. Which I find ridiculous considering my experiences with ChatGPT and the accuracy and usefulness I get out of it, which isn’t that great at all

    Exactly. If it passed the bar exam it’s because the correct solutions to the bar exam were in the training data.

    The other side can immediately tell that somebody has made an imitation without understanding the concept.

    No, they can’t. Just like people today think ChatGPT is intelligent despite it just being a fancy autocomplete. When it gets something obviously wrong they say those are “hallucinations”, but they don’t say they’re “hallucinations” when it happens to get things right, even though the process that produced those answers is identical. It’s just generating tokens that have a high likelihood of being the next word.

    People are also fooled by parrots all the time. That doesn’t mean a parrot understands what it’s saying, it just means that people are prone to believe something is intelligent even if there’s nothing there.

    ChatGPT refuses to tell illegal things, NSFW things, also medical advice and a bunch of other things

    Sure, in theory. In practice people keep getting a way around those blocks. The reason it’s so easy to bypass them is that ChatGPT has no understanding of anything. That means it can’t be taught concepts, it has to be taught specific rules, and people can always find a loophole to exploit. Yes, after spending hundreds of millions of dollars on contractors in low-wage countries they think they’re getting better at blocking those off, but people keep finding new ways of exploiting a vulnerability.

    embed_me,

    🥱

    The only people with this take are people who don’t understand it. Plus, growth and decline are an inherent part of consciousness; unless the computer can be born, change, and then die in some way, it can’t really achieve consciousness.

    Hazzia, (edited )

    I personally think consciousness has quantum properties due to certain brain structures that seem to amplify certain quantum effects.

    As somebody who has a hobbyist interest in quantum dynamics, I am very interested in where you read that, and what those brain structures/effects are. The only known quantum phenomenon associated with the brain I’m aware of is the wave function collapse from observation, and IIRC the “observation” can still take place without consciousness (quantum decoherence).

    alvvayson,

    From a lecture by Roger Penrose

    Wikipedia has an article and he has some videos on YouTube

    …wikipedia.org/…/Orchestrated_objective_reduction

    nnullzz,

    Consciousness might not even be “attached” to the brain. We think with our brains but being conscious could be a separate function or even non-local.

    xhieron,

    Thank you for this. That was a fantastic survey of some non-materialistic perspectives on consciousness. I have no idea what future research might reveal, but it’s refreshing to see that there are people who are both very interested in the questions and also committed to the scientific method.

    Blue_Morpho,

    I read that and the summary is, “Here are current physical models that don’t explain everything. Therefore, because science doesn’t have an answer it could be magic.”

    We know consciousness is attached to the brain because physical changes in the brain cause changes in consciousness. Physical damage can cause complete personality changes. We also have a complete spectrum of observed consciousness from the flatworm with 300 neurons, to the chimpanzee with 28 billion. Chimps have emotions, self reflection and everything but full language. We can step backwards from chimps to simpler animals and it’s a continuous spectrum of consciousness. There isn’t a hard divide, it’s only less. Humans aren’t magical.

    HawlSera, (edited )

    And we know the flatworm and chimp don’t have non-local brains because?

    I’m just saying, it didn’t seem like anyone was arguing that humans were special, just that consciousness may be non-local. Many quantum processes are, and we still haven’t ruled out the possibility of Quantum phenomena happening in the brain.

    Blue_Morpho,

    Because flatworm neurons can be exactly modeled without adding anything extra.

    It’s like if you said, “And we know a falling ball isn’t caused by radiation because?” If you can model a ball dropping in a vacuum without adding any extra variables to your equations, why claim something extra? It doesn’t mean radiation couldn’t affect a falling ball. But adding radiation isn’t needed to explain a falling ball.

    The neurons in a flatworm can be modeled without adding quantum effects. So why bother adding in other effects?

    And a minor correction, “non local” means faster than light. Quantum effects do not allow faster than light information transfer. Consciousness by definition is information. So even if quantum processes affected neurons macroscopically, there still couldn’t be non local consciousness.

    HawlSera,

    We already have seen “non-local” Quantum Effects though - …ucsd.edu/…/quantum-material-mimics-non-local-bra…

    Blue_Morpho, (edited )

    “that electrical stimuli passed between neighboring electrodes can also affect non-neighboring electrodes. Known as non-locality, this discovery is a crucial milestone”

    That’s not quantum non locality. The journalist didn’t know how to interpret the actual data.

    "Quantum nonlocality does not allow for faster-than-light communication,[6] "

    en.m.wikipedia.org/wiki/Quantum_nonlocality

    Quantum non locality is like taking two playing cards, sealing them in envelopes, mailing one to your friend across the country and then asking him to open it. You will know faster than light which card is in your envelope. But that doesn’t allow information transfer.

    nnullzz,

    I understand your point. But science has also shown us over time that things we thought were magic were actually things we can figure out. Consciousness is definitely up there in that category of us not fully understanding it. So what might seem like magic now, might be well-understood science later.

    Not able to provide links at the moment, but there are also examples on the other side of the argument that lead us to think that maybe consciousness isn’t fully tied to physical components. Sure, the brain might interface with senses, consciousness, and other parts to give us the whole experience as a human. But does all of that equate to consciousness? Is the UI of a system the same thing as the user?

    theoretiker,

    Counterpoint, from a complex systems perspective:

    We don’t fully know the details of neurochemistry, nor are we able to model them, but we do know some essential features which we can model, for example action potentials in spiking neuron models.

    It’s likely that the details don’t actually matter much. Take traffic jams as an example. There are lots of details involved (driver psychology, the physical mechanics of the car, etc.), but you only need a handful of very rough parameters to reproduce traffic jams in a computer.

    That’s the thing with “emergent” phenomena, they are less complicated than the sum of their parts, which means you can achieve the same dynamics using other parts.
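
    For instance, a leaky integrate-and-fire neuron, one of the simplest spiking models, captures the essential feature of an action potential in a few lines (the parameters below are only roughly biological and are chosen for illustration):

    ```python
    import numpy as np

    def lif_spikes(current, dt=1e-4, tau=0.02, v_rest=-0.065,
                   v_thresh=-0.050, v_reset=-0.065, r=1e7):
        """Leaky integrate-and-fire: the membrane voltage leaks toward rest, is
        driven by the input current, and emits a spike whenever it crosses
        threshold, after which it resets."""
        v, spike_times = v_rest, []
        for i, i_in in enumerate(current):
            v += dt / tau * (-(v - v_rest) + r * i_in)
            if v >= v_thresh:
                spike_times.append(i * dt)
                v = v_reset
        return spike_times

    # a constant 2 nA input for one second yields a regular spike train
    print(lif_spikes(np.full(10_000, 2e-9)))
    ```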

    intensely_human,

    I’d say the details matter, based on the PEAR laboratory’s findings that consciousness can affect the outcomes of chaotic systems.

    Perhaps the reason evolution selected for enormous brains is that’s the minimum necessary complexity to get a system chaotic enough to be sensitive to and hence swayed by conscious will.

    theoretiker,

    PEAR? Where staff participated in trials, rather than doing double blind experiments? Whose results could not be reproduced by independent research groups? Who were found to employ p-hacking and data cherry picking?

    You might as well argue that simulating a human mind is not possible because it wouldn’t have a zodiac sign.

    Yondoza,

    I heard a hypothesis that the first human made consciousness will be an AI algorithm designed to monitor and coordinate other AI algorithms which makes a lot of sense to me.

    Our consciousness is just the monitoring system of all our body’s subsystems. It is most certainly an emergent phenomenon of the interaction and management of different functions competing or coordinating for resources within the body.

    To me it seems very likely that the first human made consciousness will not be designed to be conscious. It also seems likely that we won’t be aware of the first consciousnesses because we won’t be looking for it. Consciousness won’t be the goal of the development that makes it possible.

    tburkhol,

    Even if you ignore all the neuromodulatory chemistry, much of the interesting processing happens at sub-threshold depolarizations, depending on millisecond-scale coincidence detection from synapses distributed through an enormous, and slow-conducting dendritic network. The simple electrical signal transmission model, where an input neuron causes reliable spiking in an output neuron, comes from skeletal muscle, which served as the model for synaptic transmission for decades, just because it was a lot easier to study than actual inter-neural synapses.

    But even that doesn’t matter if we can’t map the inter-neuronal connections, and so far that’s only been done for the 300 neurons of the C. elegans ganglia (i.e., not even a ‘real’ brain), after a decade of work. Nowhere close to mapping the neuroscientists’ favorite model, Aplysia, which only has 20,000 neurons. Maybe statistics will wash out some of those details by the time you get to humans’ 10^11-neuron systems, but considering how badly current network models do at predicting even simple behaviors, I’m going to say more details matter than we will discover any time soon.

    DrBob,

    Thanks fellow traveller for punching holes in computational stupidity. Everything you said is true but I also want to point out that the brain is an analog system so the information in a neuron is infinite relative to a digital system (cf: digitizing analog recordings). As I tell my students if you are looking for a binary event to start modeling, look to individual ions moving across the membrane.

    Blue_Morpho,

    As I tell my students if you are looking for a binary event to start modeling, look to individual ions moving across the membrane.

    So it’s not infinite and can be digitized. :)

    But to be more serious, digitizing analog recordings is a bad analogy, because audio can be digitized and perfectly reproduced. The Nyquist–Shannon theorem means the output can be perfectly reproduced. It’s not approximate. It’s perfect.

    …wikipedia.org/…/Nyquist–Shannon_sampling_theorem
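
    A small numerical illustration of that claim (a toy sine sampled above its Nyquist rate and rebuilt by Whittaker–Shannon interpolation; all numbers are arbitrary):

    ```python
    import numpy as np

    fs, f = 1000.0, 5.0                 # sample rate and signal frequency, fs > 2*f
    n = np.arange(10_000)               # ten seconds of samples
    samples = np.sin(2 * np.pi * f * n / fs)

    def reconstruct(t, samples, fs):
        """Whittaker-Shannon interpolation: a sum of sincs centred on the samples."""
        k = np.arange(len(samples))
        return np.sum(samples * np.sinc(fs * t - k))

    # check a few instants in the middle of the record, away from edge effects
    for t in (4.0001, 5.00037, 6.0002):
        print(np.sin(2 * np.pi * f * t), reconstruct(t, samples, fs))
    # the reconstructed values track the original signal closely; exact recovery
    # needs the full (infinite) sample train the theorem assumes
    ```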

    intensely_human,

    Analog signals can only be “perfectly” reproduced up to a specific target frequency. Given that the actual signal is composed of infinitely many frequencies, you would need twice an infinite sampling frequency to reproduce it completely.

    Blue_Morpho,

    There aren’t infinite frequencies.

“The mean free path in air is 68 nm, and the mean inter-atomic spacing is some tens of nm (about 30), while the speed of sound in air is 300 m/s, so that the absolute maximum frequency is about 5 GHz.”

    intensely_human,

The term “mean free path” sounds a lot like an average to me, implying a distribution which extends beyond that number.

    Blue_Morpho, (edited )

One cubic centimeter of air contains 90,000,000,000,000 atoms. In that context, the mean free path is 68 nm up to the limits of your ability to measure. That is, flip a coin 90 million million times and average the heads and tails. It’s going to be extremely close to 50%.

Not to mention that at 5 GHz, the sound can only propagate 68 nm.
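Back-of-the-envelope, that is roughly where the figure comes from: at 5 GHz the wavelength is already down at the mean-free-path scale, so a pressure wave can no longer propagate as coherent sound much beyond that distance:

```latex
\lambda = \frac{v_{\text{sound}}}{f} \approx \frac{300~\text{m/s}}{5\times10^{9}~\text{Hz}} = 6\times10^{-8}~\text{m} = 60~\text{nm} \approx \text{mean free path } (68~\text{nm})
```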

    DrBob,

    It’s an analogy. There is actually an academic joke about the point you are making.

    A mathematician and an engineer are sitting at a table drinking when a very beautiful woman walks in and sits down at the bar.

    The mathematician sighs. “I’d like to talk to her, but first I have to cover half the distance between where we are and where she is, then half of the distance that remains, then half of that distance, and so on. The series is infinite. There’ll always be some finite distance between us.”

    The engineer gets up and starts walking. “Ah, well, I figure I can get close enough for all practical purposes.”

The point of the analogy is not whether one can get close enough that the ear can’t detect a difference; it’s that in theory analog carries infinite information. It’s true that vinyl recordings are not perfect analog systems because of physical limitations in the cutting process. It’s also true for magnetic tape, etc. But don’t mistake the metaphor for the idea.

Ionic movement across membranes, especially at the scale we are talking about and with the density of channels in the system, is much closer to an ideal analog system. How much of that fidelity can you lose before it’s not your consciousness?

    Blue_Morpho, (edited )

    "I’d like to talk to her, but first I have to cover half the distance between where we are and where she is, then half of the distance that remains, then half of that distance, and so on. The series is infinite. "

I get that it’s a joke, but it’s a bad joke. That’s a convergent series; its sum is finite. Any first-year calculus student would know that.
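Concretely, with initial distance d, the halved distances form a geometric series whose sum is just d, so the mathematician does arrive in the limit:

```latex
d\left(\tfrac{1}{2} + \tfrac{1}{4} + \tfrac{1}{8} + \cdots\right) \;=\; d\sum_{n=1}^{\infty}\left(\tfrac{1}{2}\right)^{n} \;=\; d
```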

    "it’s that in theory analog carries infinite information. "

But in reality it can’t. The universe isn’t continuous, it’s discrete. That’s why we have quantum mechanics. It is the math for handling discontinuous transitions between states.

    How much of that fidelity can you lose before it’s not your consciousness?

That can be tested with C. elegans. You can measure changes until a difference is propagated.

    DrBob,

Measure differences in what? We can’t ask *C. elegans* about its state of mind, let alone consciousness. There are several issues here: a philosophical issue about what you are modeling (e.g. mind, consciousness, or something else), a biological issue about which physical parameters and states you need to capture to produce that model, and the question of how you would test the fidelity of that model against the original organism. The scope of these issues is well outside a reply chain in Lemmy.

    theoretiker,

Yes, the connectome is kind of critical. But other than that, sub-threshold oscillations can be and are being modeled. It also does not really matter that we are digitizing here. Fluid dynamics is continuous, and we can still study, model, and predict it using finite lattices.

There are some things that are missing, but very clearly we won’t need to model individual ions, and there is lots of other complexity that will not affect the outcome.
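As a toy version of the “continuous system on a finite lattice” point: an explicit finite-difference step for 1D diffusion replaces the continuum with a grid but still predicts the behavior to whatever accuracy the grid allows (grid size and constants here are arbitrary):

```cpp
#include <cstdio>
#include <vector>

// One explicit finite-difference scheme for the 1D diffusion (heat) equation:
//   du/dt = D * d^2u/dx^2
// The continuous field u(x,t) is replaced by values on a finite lattice and a
// discrete time step - the same move made when simulating any continuous
// system digitally.
int main() {
    const int n = 64;
    const double D = 1.0, dx = 1.0, dt = 0.2;  // D*dt/dx^2 <= 0.5 for stability

    std::vector<double> u(n, 0.0), next(n, 0.0);
    u[n / 2] = 1.0;                            // initial spike in the middle

    for (int step = 0; step < 100; ++step) {
        for (int i = 1; i + 1 < n; ++i)
            next[i] = u[i] + D * dt / (dx * dx) * (u[i - 1] - 2.0 * u[i] + u[i + 1]);
        u.swap(next);                          // boundaries stay fixed at 0
    }
    std::printf("u at centre after 100 steps: %.4f\n", u[n / 2]);
}
```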

    YIj54yALOJxEsY20eU,

    I’ve had this thought and felt it was so profound I should write a short story about it. Now I see this meme and I feel dumb.

    morrowind,
    @morrowind@lemmy.ml avatar

    I saw a great comic about it once, one sec

    morrowind,
    @morrowind@lemmy.ml avatar

    Edit: more focused on teleportation, but a lot of the same idea. Here existentialcomics.com/comic/1

    ProgrammingSocks, (edited )

    This could’ve sent me into an existential crisis if I hadn’t already had several where I thought the exact same things ;)

I will say, the point about murdering the version of you that other people have in their minds is certainly an interesting one.

    pro_grammer,

    The comic sans makes this even deeper

    Wolfwood1,

    At least it’s not Comic Sans IN THE IDE (or vim/emacs for the brave).

    JustBrian7872,
    pro_grammer,

    Comic sans in vim is peak insanity

    fidodo,

    Who the fuck uses comic sans for programming? I use comic mono.

    pro_grammer,

    damn bro

    Excrubulent,
    @Excrubulent@slrpnk.net avatar
    fidodo,

    Seriously, I kinda want to use it for my markdown files.

    rickyrigatoni,

    oh god why is it real

    Thcdenton,

This prospect doesn’t bother me in the least. I’ve already been replaced 5 times in my life so far. The soul is a spook. Let my clone smother me in my sleep and deal with the IRS instead.

    HawlSera,

    “The soul is a spook”

I’m sorry, I don’t understand those words in that order, though. Are you saying the soul is an olde-timey anti-Black racial slur, or that it’s inherently scary?

    mojofrododojo,

    spook

could also indicate a ghost or an intelligence operative. I don’t assume they were going racist with it.

    trashgirlfriend,

    A spook is a pretty niche concept from philosophy, I believe coined by Max Stirner

    It basically means a social construct that is being taken as if it is a real factual thing instead of something made up?

    I am bad at explaining stuff but I hope you get the gist of it.

    TheWoozy,

Spook = ghost (aka a soul unhoused from a living body)

    Sodium_nitride,

Spook here comes from the German “Spuk”, which means a haunting. Its use in this context comes from the German philosopher Max Stirner, who is infamous for the memes where X is declared to be a spook.

Understanding what exactly spooks are is somewhat challenging, and plenty of people get the wrong understanding of what is meant by spooks. But at least in the meme way of using the word, a spook is anything you think is a fairy tale, or nonsense that you don’t care about.

    mynameisigglepiggle,

    Makes me wonder how many times I’ve been replaced. Also makes me wonder if I just died yesterday and today I’m actually a new person. I have no evidence that yesterday happened except for a memory of it, and let’s face it, since it was a public holiday, that’s a pretty foggy memory

    roscoe,

I wonder about that. During the deepest part of sleep, does your brain have enough activity to maintain a continuous stream of consciousness? If you go through two sleep cycles in a night, does yesterday’s you die, and the you from the first sleep cycle, who only dreamed, die too, so that you’re a new consciousness in the morning?

    lath,

    Dreaming is just the brain butchering who you were and placing whatever’s left in storage as decaying trophies.

    mojofrododojo,

    yeah, went down this rabbit hole recently: what if I’m the .001% that lives until <max age variable for my genome>? or what if ‘me’ is an amalgam of all the ones that die, and I get to live all those lives until the variable runs out.

    Imalostmerchant,

    I feel like there’s a great story behind each one of the five

    intensely_human,

Damn dude. Was each time a death? I think someone’s following me around and snuffing me out. Mandela Effects keep happening. Also I’m getting elf ears? Reality is weird.

    mojofrododojo,

    Also I’m getting elf ears?

    plastic surgery - that shit’s expensive. use that money for something better lol!

    intensely_human,

    No I mean my ears are literally just spontaneously developing into elf ears

    python,

    Related book recommendation!!

    Kil’n People by David Brin - it’s a futuristic Murder Mystery Novel about a society where people copy their consciousnesses to temporary clay clones to do mundane tasks for them. Got some really interesting discussions about what constitutes personhood!

    evranch,

    Some of the concepts in this book really stuck with me, but I had no idea what the title was! Thanks!

    “Some days you’re the original, some days you’re the copy” or something like that

    HawlSera,

    I don’t get it

    laughterlaughter,

The joke is that there are some people who think that by uploading themselves into a machine “to live forever,” their consciousness will also be transferred, like when you travel by bus from one city to another. In reality, you “upload yourself,” but that “yourself” is not you; it’s a copy of you. So, once the copy is done, you will still be in your original body, and the copy will “think” it is you, but it’s not you. It’s a copy of you! So, you continue to live in your body until you die, and, well, for you, that’s it. You’re dead. You’re not living. You’re finished. Everything is black. Void. Null. Done. Unless you believe in the afterlife, in which case you’ll be in heaven, hell, purgatory or whatever; but the point is, you’re no longer on Earth “living forever.” That’s just some other entity that thinks it is you, but it’s not you (again, because you’re dead).

    This is represented by the parameters being passed by value (a copy) instead of by reference (same data) in the poster’s image.
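A minimal C++ sketch of that distinction (the names here are made up, not taken from the post’s image): passing by reference operates on the original object, while passing by value quietly operates on a copy:

```cpp
#include <cstdio>

struct Person {
    int memories = 0;
};

// Pass by reference: the function receives the original Person.
void upload_by_reference(Person& p) { p.memories += 1; }

// Pass by value: the function receives a copy; the original is untouched.
void upload_by_value(Person p) { p.memories += 1; }

int main() {
    Person you;
    upload_by_reference(you);
    std::printf("after by-reference: %d\n", you.memories); // 1 - "you" changed
    upload_by_value(you);
    std::printf("after by-value:     %d\n", you.memories); // still 1 - only the copy changed
}
```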

    Routhinator,
    @Routhinator@startrek.website avatar

    This is also represented pretty well in Pantheon.

    Psythik, (edited )

Or The 6th Day, starring Arnold Schwarzenegger.

    I_am_10_squirrels,

The first line passes the argument by reference, i.e., the object itself.

The second line passes the object by value, i.e., a copy.

    sukhmel,

Also, in Rust that would be the opposite, which is funny but confusing.

    HawlSera,

    Thank

    The_Terrible_Humbaba,
    @The_Terrible_Humbaba@beehaw.org avatar

    It wouldn’t be you, it would just be another person with the same memories that you had up until the point the copy was made.

When you transfer a file, for example, all you are really doing is sending a message telling the other machine what bits the file is made up of, and then that other machine creates a file that is just like the original - a copy - while the original still remains in the first machine. Nothing is even actually transferred.

If we apply this logic to consciousness, then to “transfer” your brain to a machine you would have to make a copy, which exists simultaneously with the original you. At that point in time, there will be two different instances of “you”; and in fact, from that point forward, the two instances will begin to create different memories and experience different things, thereby becoming two different identities.
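A toy sketch of that file-“transfer” point: “sending” a file really means reading its bytes and writing a second, identical file; the original never goes anywhere (the paths are made up for the example):

```cpp
#include <fstream>
#include <iostream>

// "Transferring" a file: read the original's bytes and write an identical new
// file. The original is untouched; deleting it afterwards is a separate,
// optional step - which is the whole point of the analogy.
int main() {
    std::ifstream original("original.dat", std::ios::binary); // hypothetical path
    std::ofstream copy("copy.dat", std::ios::binary);          // hypothetical path
    if (!original || !copy) {
        std::cerr << "could not open files\n";
        return 1;
    }
    copy << original.rdbuf();  // byte-for-byte copy; two identical files now exist
    return 0;
}
```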

    HawlSera,

    And since we know nothing about what consciousness is, you base this on absolutely nothing.

    Dave,
    @Dave@lemmy.nz avatar

    That’s a weird response to the person who is explaining the post to you.

    blind3rdeye,

    There’s a cool computer game that makes this point as part of the story line… I’d recommend it, but I can’t recommend it in this context without it being a spoiler!

    seatwiggy,

    There’s also a book with a similar concept. It’s not the focus until later in the book though. It’s called

Spoiler: Ready Player Two

    trashgirlfriend, (edited )

Guy’s probably talking about

Spoiler: SOMA

    call_me_xale,

    Lost the coin flip.

    stage_owl,

    Soma is a wonderful game that covers this type of thing. It does make you wonder what consciousness really is… Maybe the ability to perceive and store information, along with retrieving that information, is enough to provide an illusion of consistent self?

Or maybe it’s some completely strange system, unknown to science. Who knows?

    GnomeKat,
    @GnomeKat@lemmy.blahaj.zone avatar

I think the definition of consciousness needs to not be solely about abilities or attributes. It needs to account for the active process of consciousness. Like, a hair dryer can burn things… but a fire is things burning. Without the active nature, it’s simply not conscious.

    intensely_human,

    Maybe consciousness is everywhere, and has nothing to do with mechanisms.

    Halosheep,

    I don’t think anything gave me existential doom quite as much as the ending of that game.

    intensely_human,

    Provide the illusion to whom?

    GammaGames,

    Self? Seemed pretty clear in their comment

    Schmoo,

    If anyone’s interested in a hard sci-fi show about uploading consciousness they should watch the animated series Pantheon. Not only does the technology feel realistic, but the way it’s created and used by big tech companies is uncomfortably real.

    The show got kinda screwed over on advertising and fell to obscurity because of streaming service fuck ups and region locking, and I can’t help but wonder if it’s at least partially because of its harsh criticisms of the tech industry.

    BlackPenguins,

I really thought you were going to mention “Upload” on Prime. Same creator as The Office.

    LodeMike,

    That show is garbage

    localme,

    Yes, I just finished watching Pantheon and absolutely loved it!

    Totally agree that it deserved more attention. At least it got a proper ending with season 2.

    Also, the voice acting talent they got was impressive. Paul Dano was fantastic as one of the leads.

    GnomeKat,
    @GnomeKat@lemmy.blahaj.zone avatar

    Just FYI content warning for Pantheon there is a seriously disturbing gore/kill scene that is animated too well in the first season. Anyone who has seen the show knows what scene I am talking about, I found the scene pretty upsetting and I almost didn’t finish the show. I am still a little upset that the scene is burned in my memory.

    khannie,
    @khannie@lemmy.world avatar

    Sounds good. Did it come to a conclusion or get axed mid way?

    AFaithfulNihilist,
    @AFaithfulNihilist@lemmy.world avatar

    The series has a very satisfying conclusion.

    It’s one of the coolest fucking things we watched this last year.

    khannie,
    @khannie@lemmy.world avatar

    Sold!

    gerbler,

    Unironically the most important question

    Schmoo,

Luckily the writers were able to finish it the way they wanted with a second season, and it’s fantastic. AMC almost axed it before the second season was released, even though it was already finished, but fans were able to get them to release it.

    LodeMike,

    The show got kinda screwed over on advertising and fell to obscurity because of streaming service fuck ups and region locking, and I can’t help but wonder if it’s at least partially because of its harsh criticisms of the tech industry.

Okay, so I can’t 100% confirm this, but the first season wasn’t popular because it was on whatever the fuck AMC+ is. Amazon bought it because of the writers’ strike, to get something out.

    TheFonz,

    Checking in to see if this show was mentioned. Highly recommend! Well written

    intensely_human,

    I know myself deeply enough to be totally fine with a copy. I’d be my own copy’s pet if it came to that. I trust me.

    VirtualOdour,

    Yeah we’d work together well and the sex would be great.

    wallmenis,
    @wallmenis@lemmy.one avatar

What if you do it in a Ship of Theseus type of way? Like, swapping each part of the brain with an electronic one slowly until there is no brain left.

    Wonder if that will work.

    Schmoo,

The TV show Pantheon figures it will work, but it will be very disturbing.

    localme,

    Was looking for the Pantheon reference in this thread! Just finished that show and loved it. Of course it takes plenty of liberties for the sake of the storytelling, but still, at least it explores these interesting topics!

    Anyone reading this thread, do yourself a favor and check out Pantheon!

    MBM,

    If I remember right, the game The Talos Principle calls that the Talos principle

    intensely_human,

Sounds like the sort of thing The Talos Principle would call that.

    ChewTiger,

Right? Like, what if, as cells die or degrade, instead of being replaced by the body naturally they are replaced by nanites/cybernetics/tech magic? If the process of fully converting took place over the course of 10 years, then I don’t see how the subject would even notice.

    It’s an interesting thing to ponder.

    intensely_human,

The subject doesn’t notice if you end their consciousness, either.

    Socsa, (edited )

    A copy is fine. I can still seek vengeance on my enemies from beyond the grave.

    RagingRobot, (edited )

It’s definitely an improvement over just being plain old dead

    intensely_human,

    I dunno. I’m starting to suspect nobody’s ever dead.

    DivineDev,

Consciousness and conscience are not the same thing; this naming is horrible

    NeatNit,

    This just makes it more realistic

    Wxnzxn,
    @Wxnzxn@lemmy.ml avatar

    Hey, just be glad I changed it from asdf_test_3, okay?

    threedc, (edited )

    The game SOMA represents this case the best. Highly recommended!

    Wolfwood1,

    Yes, I immediately thought about SOMA after reading the post. recommendations++

    BingBong,

    Did they ever allow for turning off head bob and blur? That game makes me motion sick to an insane degree.

    Mkengine,

I already know I will never play this game; could you elaborate for me?

    Sethayy,

    Brain scan tossed in a robot makes 2 Simons

    NeverNudeNo13,

    And several times throughout the story you are forced into making some “decisions” about how to deal with stale memory registers.

    MintyAnt,

    Soma is so fucking bleak and I love it

    Valmond,

    void teleport(Person person);

    Test_Tickles,

    I’ve always said that teleporters are just suicide machines that sometimes spit a clone out somewhere else.
