Jolteon,

So, basically like going to sleep and waking up?

exocortex,

Glad that isn’t Rust code or the pass by value function wouldn’t be very nice.
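A minimal Rust sketch of the joke, with a made-up `Consciousness` type: passing by value *moves* ownership into the function, so the original binding can't be used afterwards.

```rust
// Hypothetical type standing in for a mind.
struct Consciousness {
    memories: Vec<String>,
}

// Pass by value: ownership of `c` moves into the function.
fn upload(c: Consciousness) -> usize {
    c.memories.len()
}

fn main() {
    let original = Consciousness {
        memories: vec!["first day of school".to_string()],
    };
    let count = upload(original);
    println!("{count}");
    // println!("{}", original.memories.len());
    // ^ error[E0382]: borrow of moved value: `original`
}
```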

Cornelius,

Borrow checker intensifies

RedditWanderer,

I get this reference

Matriks404,

What if every part of my body is replaced by a computer part, continuously? At what point do I lose my consciousness?

I think this question is hard to answer because not everyone agrees what consciousness even is.

hperrin,

It wouldn’t really matter until you get to the brain. Very little of your body’s “processing” happens outside of your brain. Basically all of your consciousness is in there. There are some quick nerve paths that loop through your spine for things like moving your hand away when you touch a hot object, but that’s not really consciousness.

essteeyou,

Conscience?

xantoxis,

So, I’m curious.

What do you think happens in the infinite loop that “runs you” moment to moment? Passing the same instance of consciousness to itself, over and over?

Consciousness isn’t an instance. It isn’t static, it’s a constantly self-modifying waveform that remembers bits about its former self from moment to moment.

You can upload it without destroying the original if you can find a way for it to meaningfully interact with processing architecture and media that are digital in nature; and if you can do that without shutting you off. Here’s the kinky part: We can already do this. You can make a device that takes a brain signal and stimulates a remote device; and you can stimulate a brain with a digital signal. Set it up for feedback in a manner similar to the ongoing continuous feedback of our neural structures and you have now extended yourself into a digital device in a meaningful way.

Then you just keep adding to that architecture gradually, and gradually peeling away redundant bits of the original brain hardware, until most or all of you is being kept alive in the digital device instead of the meat body. To you, it’s continuous and it’s still you on the other end. Tada, consciousness uploaded.
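A toy sketch of that gradual-migration idea, in Rust. Everything here (the `Mind` type, the two substrates, the invariant) is illustrative, not anything from a real system: the point is just that each step moves one piece rather than copying the whole, and the "whole self" stays intact throughout.

```rust
use std::collections::VecDeque;

// Toy model: the "self" is an ordered collection of faculties,
// split across a meat substrate and a digital one.
struct Mind {
    meat: VecDeque<String>,
    digital: Vec<String>,
}

impl Mind {
    // One gradual step: peel one redundant piece off the meat
    // substrate and keep it alive in the digital one instead.
    fn migrate_one(&mut self) {
        if let Some(piece) = self.meat.pop_front() {
            self.digital.push(piece);
        }
    }

    // From the inside, continuity holds: the whole self is
    // always present, regardless of which substrate hosts it.
    fn whole_self(&self) -> usize {
        self.meat.len() + self.digital.len()
    }
}

fn main() {
    let mut me = Mind {
        meat: ["vision", "language", "memory"]
            .iter()
            .map(|s| s.to_string())
            .collect(),
        digital: Vec::new(),
    };
    let before = me.whole_self();
    while !me.meat.is_empty() {
        me.migrate_one();
        assert_eq!(me.whole_self(), before); // invariant at every step
    }
    // All of "me" now lives in the digital substrate; nothing was duplicated.
}
```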

Daxtron2,

Well yeah, if you passed a reference then once the original is destroyed it would be null. The real trick is to make a copy and destroy the original reference at the same time, that way it never knows it wasn’t the original.
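In Rust terms, that "copy and destroy the original at the same time" trick can be made explicit with `clone` followed by `drop`. A toy sketch (the `Consciousness` type is made up):

```rust
#[derive(Clone)]
struct Consciousness {
    memories: Vec<String>,
}

fn main() {
    let original = Consciousness {
        memories: vec!["childhood".to_string()],
    };

    // Make a copy and destroy the original in the same breath.
    let copy = original.clone();
    drop(original);

    // Only `copy` remains, and it has no way to know it wasn't
    // the original. Using `original` here would be a compile
    // error (E0382), not a null reference.
    assert_eq!(copy.memories[0], "childhood");
}
```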

nickwitha_k,

I dunno. I could be quite happy having brain children, or living as a copy of a consciousness at a given point in time.

dwemthy,

I want Transmetropolitan-style burning of my body to create the energy to boot up the nanobot swarm that my consciousness was just uploaded to

nialv7,

I think you mean std::move

Daxtron2,

get your std away from me sir

Clent,

It would be easier to record than upload, since uploading requires at least a decoding step. Given the fleeting nature of existence, how does one confirm the decoding? Uploading also requires we create a simulated brain, which seems more difficult and resource-intensive than forming a new biological brain remotely connected to your nervous system inputs.

Recording all inputs in real time and playing them back across a blank nervous system will create an active copy. The inputs can be saved so they can be replayed later in case of clone failure. As long as the inputs are recorded until the moment of death, the copy will be you minus the death, so you wouldn't be aware you're a copy. Attach it to a fresh body and off you go.
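The record-and-replay scheme implicitly assumes the brain is deterministic: same input log, same resulting state. Under that assumption it can be sketched as a fold over the input history (the `Brain` type and its update rule are invented for illustration):

```rust
// Toy model: a "brain" whose state is purely a function of its
// lifetime of inputs. Deterministic by construction.
#[derive(PartialEq, Debug)]
struct Brain {
    state: u64,
}

impl Brain {
    fn blank() -> Self {
        Brain { state: 0 }
    }

    // Deterministic update: any fixed rule works for the sketch.
    fn feed(&mut self, input: u64) {
        self.state = self.state.wrapping_mul(31).wrapping_add(input);
    }
}

fn main() {
    let inputs: Vec<u64> = vec![3, 1, 4, 1, 5, 9]; // the recorded life
    let mut you = Brain::blank();
    let mut log = Vec::new();
    for &i in &inputs {
        you.feed(i);
        log.push(i); // record every input until the moment of death
    }

    // Clone failure? Replay the log into a fresh blank nervous system.
    let mut copy = Brain::blank();
    for &i in &log {
        copy.feed(i);
    }
    assert_eq!(you, copy); // the copy is you, minus the death
}
```

As the comment notes, the catch is that replay takes as long as the original life did, unless playback can run faster than real time.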

The failure mode would take your literal lifetime to re-form your consciousness, but what's a couple of decades to an immortal?

We already have the program to create new brains: it's in our DNA. A true senior developer knows better than to try to replicate black-box code that's been executing fine. We don't even understand consciousness well enough to pretend we're going to add new features, so why waste the effort creating a parallel implementation of a black box?

Scheduled reboots of a black-box system are common practice. Why pretend we're capable of skipping steps?

mihor,
@mihor@lemmy.ml avatar

More like

sudo mv consciousness.md /dev/null

Schmoo,

If anyone’s interested in a hard sci-fi show about uploading consciousness they should watch the animated series Pantheon. Not only does the technology feel realistic, but the way it’s created and used by big tech companies is uncomfortably real.

The show got kinda screwed over on advertising and fell to obscurity because of streaming service fuck ups and region locking, and I can’t help but wonder if it’s at least partially because of its harsh criticisms of the tech industry.

i_ben_fine,

Upload is also good.

Nobody, (edited )

You see, with Effective Altruism, we’ll destroy the world around us to serve a small cadre of ubermensch tech bros, who will then somehow in the next few centuries go into space and put supercomputers on other planets that run simulations of people. You might actually be in one of those simulations right now, so be grateful.

We are very smart and not just reckless, over-indulged douchebags who jerk off to the smell of our own farts.

blackstampede,

Do you really think that’s what effective altruists want?

Nobody,

Yes

SturgiesYrFase,
@SturgiesYrFase@lemmy.ml avatar

You really gotta look at their actions. What they say they want and what they show us they want are clearly two different things.

blackstampede,

Do you think that your assertion, that they want to destroy the world around us in order to provide “value” to a small group of tech bros, is at odds with the underlying philosophy of effective altruism? It seems like anyone who wanted to create the most good for the most people would be opposed to a future like that.

F04118F,

Sure, some probably do. And you can be sceptical and discuss why that’s a dangerous and undemocratic direction. Effective Altruism is a question, not an answer. In the community, asking for and being open to critical feedback is encouraged as a main tenet of good culture.

But if you look at the amounts, most EAs donate mostly to helping the poorest people alive today, because it is so obviously good and proven to work with high certainty.

If you are interested in learning more about Effective Altruism, check out effectivealtruism.org/…/introduction-to-effective…

Source for distribution of donations: 80000hours.org/…/effective-altruism-allocation-re…

EmoDuck,

The Closest-Continuer schema is a theory of identity according to which identity through time is a function of appropriate weighted dimensions. A at time 1 and B at time 2 are the same just in case B is the closest continuer of A, according to a metric determined by continuity of the appropriate weighted dimensions.

Lonk

I don’t think that I fully agree with it but it’s interesting to think about

electricprism,

There are many languages I would rather die than be written in

xilliah,

public static Consciousness Instance;
alphapuggle,

A value is trying to be set on a copy of a slice from a DataFrame. Try using .loc[row_indexer,col_indexer] = value instead. See the caveats in the documentation: http://pandas.pydata.org/pandas-docs/stable/indexing.html#indexing-view-versus-copy
