Could You Upload Your Mind Into A Computer? | Philosophy Tube

Biology meets computer science meets philosophy! Following a discussion between Antonio Damasio and Aubrey de Grey I was inspired to talk about minds and brains, computers, artificial intelligence, and technology!

Twitter: @PhilosophyTube

Recommended Reading:
Kristinn Thórisson, Reductio ad Absurdum: On Oversimplification in Computer Science and its Pernicious Effect on Artificial Intelligence Research

If you or your organisation would like to financially support Philosophy Tube in distributing philosophical knowledge to those who might not otherwise have access to it in exchange for credits on the show, please get in touch!

Any copyrighted material should fall under fair use for educational purposes or commentary, but if you are a copyright holder and believe your material has been used unfairly please get in touch with us and we will be happy to discuss it.
Comments

As someone with a CS background I'd like to comment on your argument that computers, being binary machines, are unable to replicate the processes inside a brain. As you say, the electrical and chemical processes in the brain operate more on a spectrum.
While it is true that at a very fundamental level computers work in binary, there is nothing that prevents them from simulating continuous states. In fact, a lot of chemical problems are routinely calculated on computers. I don't see why a computer should be unable to simulate the chemical processes inside a human neuron while being perfectly capable of simulating the chemical processes inside, for example, a fuel refinery.
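To make that concrete, here is a minimal Python sketch of a binary machine tracking a continuous quantity: a made-up first-order decay, standing in for any chemical rate equation, stepped forward in 64-bit floating point.

```python
import math

def simulate_decay(c0=1.0, rate=0.5, dt=0.001, t_end=5.0):
    """Euler-integrate dc/dt = -rate * c from t = 0 to t_end."""
    c, t = c0, 0.0
    while t < t_end:
        c += -rate * c * dt   # a continuous quantity, stepped in float arithmetic
        t += dt
    return c

# The "binary" machine tracks the continuous value as closely as we like;
# shrinking dt (or widening the float format) tightens the match.
print(f"simulated c(5.0) = {simulate_decay():.6f}")
print(f"exact     c(5.0) = {math.exp(-0.5 * 5.0):.6f}")
```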

Gnoccy

Computer science PhD with some background in neuroscience here. I'll try not to be as "you're missing some basic points here" as you self-reportedly were after that conversation but I can't promise anything because I have to admit, I found this video somewhat aggravating, so wall of text incoming.

First of all, the notion that computers only have two states... where does this even come from? Of course most current computers use bits as the basic unit, but that doesn't mean you can't have a basically arbitrary number of states and arbitrarily approximate any real number. Sure, in an analog system you don't _approximate_ numbers, but you're dealing with noise and imprecision that come down to the same result.

So if you're describing a system, pick the precision and range of values you want, or the number of states it should be able to take, and you'll be able to do that in a computer. Use one bit more and you have double the number of states, double the precision. Heck, why not throw in a few more bits, just to be safe. Bottom line: uploading a human mind into a computer will create 99 million problems, but bits ain't one.
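A quick sketch of that "more bits, more precision" point, quantising a number in [0, 1] (the value is arbitrary):

```python
def quantise(x, n_bits):
    """Snap x in [0, 1] to the nearest of 2**n_bits evenly spaced levels."""
    levels = 2 ** n_bits              # each extra bit doubles the number of states
    return round(x * (levels - 1)) / (levels - 1)

x = 0.7372810093
for n in (4, 8, 16, 32):
    print(f"{n:2d} bits -> {2**n:>10} states, error = {abs(x - quantise(x, n)):.1e}")
```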

On the "algorithms" side it appears to me – and again, sorry if I'm sounding harsh – that your understanding of how computers work and are being used on a higher level lags behind the state of the art by several decades. "Programming languages", as you mentioned, is not the level of abstraction that's relevant here and it's actually actively unhelpful to think about these things on that level. If we simulated a mind, nobody would sit down to write "if neuron 8926319 fires and neuron 193719 has activation >193.82903, then move hand" or whatever. I think everybody understood decades ago that if we ever want to create an artificial mind, symbolic thinking isn't going to get us there (although hybrid symbolic-subsymbolic systems are promising in certain regards).

The whole time thing... how does this even come into play here? If we can simulate a mind in the sense that it shows the same (simulated) neural activity as the real thing in response to the same stimuli – or, more weakly, if it reacts to the same inputs with the same outputs – then this includes anything someone would say about time. If, however, this simulation cannot succeed, then I don't think the result would be something that chats happily about the weather but breaks down in sparks the moment you ask it how many watts it had for lunch the other day.

Now all this being said, I don't know if you can upload your mind into a computer. Nobody does. Nobody knows how consciousness comes about, nobody has yet solved age-old problems like the Chinese Room and we just don't know if accurately simulating the entirety of neural processes in a human brain would simply fail for some reason, if it would lead to a philosophical zombie or if it would just result in Betty from across the street.

Personally, I already can't fathom that some neurons opening and closing ion channels leads to consciousness. So I also cannot fathom how transistors switching on and off could do the same. If someone described the whole concept to me and I didn't already know that consciousness was a thing, I'd a) not understand what they were even trying to tell me and b) probably think they'd gone mad or fallen prey to some esoteric guru. But presented with the undeniable fact that consciousness is a thing, I in turn can't imagine why simulating a brain _couldn't_ work and why it _shouldn't_ lead to a conscious mind. So at the end of the day, I _feel_ pretty certain that a simulation of brain processes that is faithful enough (whatever that means) leads not only to the same behavior but also to the same internal states and emergent phenomena (i.e. consciousness, qualia... all that). OTOH, I find it much easier to think that transistors could do that than a couple blokes sorting through Chinese characters, even though I'm well aware that these two notions are incompatible.

So there, so many words that come down to an elaborate ¯\_(ツ)_/¯. Point being, though: please talk to a computer scientist before you make such a video again! I didn't even go into the theory of computation here, or the details of artificial neural networks or analog computers. Computer scientists, contrary to popular belief, do deserve the "scientist" part of their name. Don't ignore their work, please!

unvergebeneid

9:30 I'm crying a bit bc I'm a visitor from the future and the lines "The issue isn't whether I remain identical through every transition, but whether I survive, " and "In a lot of ways, I'm not the same person as I was [...] and yet, I have in a meaningful way *survived*" just hit different

MG-xpfr

I've been crying over the Mars rover for two days and ranting about how we should go fix it, because it did so much more for us than it had to. It had a 90-day mission and it worked for 15 years! I think I'm already treating machines as moral agents.

yearsago

I have quite a bit to say about a few points in the video, but honestly most of it is based on "Gödel, Escher, Bach" by Douglas Hofstadter, which tackles among other things the idea of thinking machines and thinking in general.

The basic premise of the book regarding intelligence is that we can probably simulate a brain given enough processing power, because we can simulate a neuron – the argument that the brain doesn't work like a computer's hardware is irrelevant if we can simulate it in software, and we can. The problem is the huge number of neurons we'd have to simulate, so the question becomes: "is there a simpler representation of the mind?", or "can it be simulated more efficiently?"
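For a flavour of what "simulating a neuron" can mean in practice, here is a toy leaky integrate-and-fire model in Python, one of the standard simplifications used in computational neuroscience (a sketch, not a biophysically faithful model; all constants are made up):

```python
def lif_neuron(input_currents, leak=0.9, threshold=1.0):
    """Yield 1 whenever the membrane potential crosses threshold, else 0."""
    v = 0.0
    for current in input_currents:
        v = leak * v + current   # integrate the input while charge leaks away
        if v >= threshold:
            v = 0.0              # reset after firing
            yield 1
        else:
            yield 0

print(list(lif_neuron([0.3, 0.3, 0.3, 0.3, 0.0, 0.9, 0.9])))
# -> [0, 0, 0, 1, 0, 0, 1]
```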

Hofstadter believes that the answer is yes, and the model he suggests is that of 'symbols'. Basically, symbols are attached to everything we perceive (physical items, people, abstract concepts, imagined constructs, every distinct "thing") and interact in certain ways to produce "intelligence" (that's the comment version of it; the book explains it much more thoroughly).
It's important to note that in Hofstadter's model of a thinking machine, the machine won't necessarily be good at split-second decisions, maths, consistency or any other trait usually attributed to computers. The complex interplay of symbols will produce chaotic results, which we may recognise as improvisation, uncertainty, forgetfulness and even emotional responses or inspiration. We may be able to widen the computerised brain's capabilities, but on its own it won't necessarily be any different from a human's. It will simply be a non-organically made version of one.


That's on the subject of mechanised intelligence, at least. The question of transferring consciousness is a whole other beast.

aviasegel

Olly, you have created such a beautiful, warm space on this channel. More than just providing education, you give us a little slice of the internet that feels like home. Thank you and much love. <3

Theo_Caro

If a human brain follows algorithms, the type of algorithm that Dreyfus is thinking of is far too simple. He actually needs to demonstrate that there is no algorithm whatsoever that governs human behaviour, not even one that takes into account the state of every single neuron (and, according to embodied consciousness, perhaps every cell in the body).

What Dreyfus seems to be saying is "humans don't follow these extremely simplified algorithms, therefore they don't follow any algorithms". That seems to be a non-sequitur.

Also, Thórisson has not shown that computers cannot possibly understand limited time and energy, just that they typically do not at the moment. Likewise, Morozov has only shown that computers currently recall rather than remember, not that they never can. Again, these are non-sequiturs. They may successfully argue that the computers of today lack intelligence or consciousness, but this does not mean that computers never will.

mathymathymathy

i'm always really judgy with people who hit or yell at their computers. like shhhh, you'll hurt her feelings

malavery

7:57 Such a good point. One of my earliest memories is actually a straight-up false memory. I was around 4 years old, having a casual footrace with another boy in his backyard. While running, I tripped and fell down; I remember the visual of watching him continue to run while I was on the ground looking up, and the feelings I had at that exact moment.

But the reality is that _I_ was the one who continued to run and _he_ was the boy who tripped and fell down. My older sister was babysitting us and by some weird coincidence had snapped a picture when this happened. When I found that photo while going through a stack of old photos at my parents' house a few years ago, it blew my mind. At some point I must have swapped the real memory for the false one – what was actually just some empathic projection I had of the event – and continued on with my life none the wiser. And while I remember the suffering and embarrassment of falling down and watching my friend beat me in that footrace, our roles in the picture were reversed, with the most ironic part of it all being a look of total glee on my face.

SoSoMikaela

A group of scientists mapped the neural network of a worm – essentially "took a picture of it" – ran it in a computer simulation, and gave that simulation control of a LEGO robot body. With no instruction or alteration, it immediately began doing worm things, proving that neural networks can indeed operate within computers. As someone else in the comments has already noted, your picture analogy is something of a strawman – unintentionally, I'm sure – because the information stored in an image is quite different from the form and function of the information recorded in such a detailed brain map. My computer can, within itself, run another computer on a different operating system, and it does this by a process similar to what's being discussed. Just thought I'd throw my two cents in there.
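To illustrate the difference in kind: a connectome map stores wiring you can run, not pixels you can look at. A toy Python sketch, with a made-up 3-neuron weight matrix (the real C. elegans map has 302 neurons):

```python
W = [                 # W[i][j] = synaptic weight from neuron j into neuron i
    [0.0, 0.8, 0.0],
    [0.5, 0.0, 0.7],
    [0.0, 0.9, 0.0],
]
THRESHOLD = 0.5

def step(state):
    """One update: a neuron fires if its weighted input reaches the threshold."""
    return [
        1 if sum(w * s for w, s in zip(row, state)) >= THRESHOLD else 0
        for row in W
    ]

state = [1, 0, 0]     # stimulate neuron 0 and watch activity propagate
for t in range(4):
    state = step(state)
    print(t, state)
```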

wilezekiel

The picture metaphor regarding this topic is a strawman. Taking a picture is an example of a copy process, which is actually the thing under attack here – in this case a very lossy, superficial and partial copy process, obviously insufficient to capture a human mind. But let us explore the idea a bit. What about taking a picture of a picture? Or, since taking a picture is really just a copy process, what about making a digital copy of a digital image/file? The two are now equivalent: you cannot say that the digital copy is somehow worth less or not the same thing, since it is a perfect copy. I therefore argue that if you could provide an appropriate copy process, you could create a copy of a brain, "run" it, and it would be the same in every regard.
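That "a perfect digital copy is the same thing" claim is easy to demonstrate for files: after a lossless copy, the two are bit-for-bit indistinguishable, which a cryptographic hash confirms. A sketch (the file names are hypothetical):

```python
import hashlib
import shutil

def sha256(path):
    with open(path, "rb") as f:
        return hashlib.sha256(f.read()).hexdigest()

shutil.copyfile("brain_scan.img", "brain_scan_copy.img")  # any file would do
assert sha256("brain_scan.img") == sha256("brain_scan_copy.img")
print("copy is bit-for-bit identical to the original")
```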

Is the neural dust idea an appropriate copy mechanism? We don't know at this point. But the argument from embodiment is also flawed, in my opinion, since you are supposed to use the copy in the same way you use the original. A digital copy of an image obviously won't work if you insist on opening it in Excel instead of an image viewer. So if you consider your mind as processing inputs plus an internal state into outputs, then you have to provide the same inputs to the copy. If this means it requires bodily sensory inputs, then you must provide such sensory input to the copy as well. Personally, I think that's only required for the development of the mind, not once it is fully developed. There are many arguments for that, but for one, just think of, e.g., paralysed people...

aBigBadWolf

Had to brace myself before watching this video. As a thanatophobic, techno-optimistic, transhuman, "luxury gay space communism" rosy-goggles dreamer, these are exactly the sorts of ideas and discussions my brain desperately wants to avoid, and with which I try hard to engage anyway. I think the big thing that gets missed in a lot of these conversations is that even IF we could make a sufficiently accurate clone of any given brain, that doesn't mean an "upload" would actually keep our consciousness intact. It's the teleporter problem: how could we be sure that the new entity is actually me, instead of an identical person having its first real experiences at that moment? As you noted, something being "identical" is not necessarily the same as something being "actually the same person". The only way I would feel comfy with anything like this is through some sort of Ship of Theseus process, very SLOWLY trading out organic brain bits for artificial constructs that integrate into the whole.

Cuix

I feel that this discussion is taking place with little to no knowledge of how a computer genuinely works.

Snaog

1. In the neural network algorithms currently used in computer science, a neuron's firing is not just binary. It takes its inputs and applies an activation function which determines whether, and how strongly, it fires. The most popular activation function is the sigmoid, which maps its input to a number between 0 and 1. The sigmoid is continuous when plotted (every different input gives a different output), so it is like a spectrum. The output can't be infinitely precise, of course (see the sketch after this list).
2. As for the improvising bus driver thought experiment, you've got to ask whether he's even improvising, or just following a set of instructions for when certain inputs are given. The improvisation may not be 100% correct (no human, and no neural-net algorithm, is 100% correct).
3. An algorithm's model of the brain doesn't have to be 100% like a human brain, because animals have completely different brains and every human brain is different as well.
4. The algorithm doesn't keep track of time, but it does try to optimise for human values, e.g. survival, safety and belonging, and if it misses a deadline it doesn't get the reward, so it learns to act more quickly.
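A minimal sketch of point 1: a sigmoid unit outputs a continuous value strictly between 0 and 1, not a binary on/off (the weights and inputs below are arbitrary):

```python
import math

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def neuron(inputs, weights, bias):
    """Weighted sum of the inputs, squashed through the sigmoid."""
    activation = sum(w * i for w, i in zip(weights, inputs)) + bias
    return sigmoid(activation)

print(neuron([0.2, 0.9], [0.4, -1.3], 0.5))  # some value between 0 and 1
```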

This line of thought led me to believe in determinism.

michaelmccoubrey

Game designer with a philosophy degree who went from video games to tabletop games here. This is right up my alley so I'm going to reply to these points with my own insights as I watch.

1. Picture of Brain is not Brain
Yeah, basically that. I agree completely. I'd even go further, though. The image of the brain is not only NOT a brain, it might not even be the ONLY possible image of the brain. This makes me think of the pictures you see from NASA of nebulae or galaxies. In them, a huge stretch of the electromagnetic spectrum is condensed down into the visible range: X-rays might be one colour, gamma rays another, and infrared and ultraviolet tweaked just a little so we can see them with our eyes. These images come to the public with the declaration, "This is how the universe ACTUALLY looks!" That is 100% bullshit. It's a really pretty way to look at the universe, but it's not "the way it looks". You could do the same thing with any image of Earth and get something totally alien out of it, but NASA isn't interested in presenting the "real" Earth that way. It's the same with the brain. You can get an image of it, a way of looking at it, but that isn't going to be the TRUE VISION of it. Just one perspective. A very, very far cry from the real thing indeed.

2. Embodied Consciousness
I agree with this point as well. There's an NPC in Nier: Automata who highlights this really well. He's an android who sells spare parts to other androids, but he has a lame leg. When you ask him why he doesn't repair his own leg with the parts he sells, he brushes you off with some reason about the war effort. When you push him further, he admits that his left leg is the last original part of his body. Everything else has been replaced with other parts over time, and he's afraid that if he replaces his leg there will be nothing left of "him". There's a similar metaphor about a ship taught in philosophy classes, but I like this version better because it highlights the idea of personhood even more directly. We aren't just brains in a jar or a disconnected consciousness. Our physicality is linked to our "self", and if you change (or annihilate) that, then what happens to the self?

3. Pre-Determined (or Coded) Consciousness
Here I don't agree, but I can understand why you would come to this conclusion, Olly. No functional version of a digital person is going to be structured the way you described, because that's nothing like how we are structured. Instead we need to look towards the computer science that better emulates our psychology, and I think machine learning is the right framework for that approach. The problem with programs that alter their own behaviour, which is the essence of machine learning, is that the result is incomprehensible to humans. I think we should all take a moment to let that sink in: machines that program themselves can produce better solutions than humans, but those solutions are completely incomprehensible. A more physical illustration is Amazon's heavily automated distribution centres. Inside these warehouses, drones store goods in seemingly random positions with no apparent rhyme or reason, yet the arrangement is more efficient than anything humans come up with. So either our own machinations are incomprehensible (which I think is perfectly credible) or machines are wholly alien to us (which I think is equally likely). I don't think we can say that automated software is necessarily predictable, which is where you get the (justified) fears about artificial intelligence.

4. Flies vs. Computers
This is really interesting, and I can't help but wonder if it's more a result of Enlightenment thought (which mechanised – or made dead – the world) than of how we treat entities in some more fundamental way. That is, I wonder if we're just taught by culture to see the world as dead, and then end up with anomalies (like flies and spiders) within that framework that we don't exactly know where to place. Perhaps a more animist (or similar) view of the world, as imbued with a sort of spirit or purpose, would encourage people to treat objects (like computers) more like beings. That seems to be how objects were treated in very old writings – at least more often than they are today.

5. Final Thoughts and Language Games
Maybe we should just upload some people and see what we can learn from that? In chapter 2 of "Philosophy and the Mirror of Nature", Richard Rorty performs a thought experiment about consciousness. He presents two species, humans and aliens (whom he calls Antipodeans), and offers some suggestions as to how these beings might set out to prove that the other is conscious. The full thought experiment is fairly long, so I'll summarise: Rorty believes there is simply no solution, because the question isn't one that can be answered through observation and testing. The only thing you can do is step back and look at it linguistically to see what is actually going on. Through language games we created a situation for which there is no solution. There might be a comparison here with the undecidable problems in mathematics that led Turing to devise his machines. This might be a case where we cannot move forward unless we first take a step backwards.

That became long! Thanks for reading, I hope you find something in there useful to consider.

ShaedeReshka

This made me think of the game Soma, which touches on the metaphysical and ethical questions surrounding making copies of oneself. For me, it's less important to speculate about whether we can reach this level of technology and more important to discuss what we'd do with it, since *it's about copying* and not transferring consciousness. Meaning: do we kill the original after the copy is made? If not, how do we deal with two identical minds/people laying claim to one life and one identity? Do "originals" get that privilege, and do we demote "copies" to second-class humans? Do we give "copies" a self-destruct option? Do "copies" get citizenship and votes, etc.?

augustaseptemberova

Regarding the computer thing: the binary analogy is still basically apt. Binary switches, just like neurons, only turn on after a threshold has been reached, so in that sense they are like neurons. Unless his point was that there are more firing states, in which case that's the first I've ever heard of it.
Also, the programs-and-algorithms argument is the oldest trick in the book, and honestly outdated. AI can easily be made to take time and energy into account. We don't apply that BECAUSE it'd make them less efficient, but adding those two things could easily increase the "realistic intelligence" mentioned earlier, though I don't understand why that's necessary. And nowadays programs aren't fixed: machine learning has become a huge thing; you only need to give a system a few objectives to strive for and it can flexibly change from one position to another.
Similarly, a computer could easily be made to re-encode a memory upon recall, replacing the stored memory with the newly recalled version. If you consider Google's DeepDream AI, something along those lines could be applied, and the memories would eventually become like ours: less and less accurate each time they're recalled, and coloured by current understanding.
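A toy sketch of that recall-rewrites-the-trace idea (the numbers and blending rule are invented for illustration): each recall blends the stored trace with the current context and writes the blend back, so the memory drifts over time.

```python
import random

memory = [0.0, 1.0, 0.5]            # some stored trace (made-up values)

def recall(trace, context, blend=0.1):
    """Return the memory, but re-store a context-tinted version of it."""
    recalled = [
        (1 - blend) * m + blend * (c + random.gauss(0, 0.05))
        for m, c in zip(trace, context)
    ]
    trace[:] = recalled             # the act of recalling overwrites the trace
    return recalled

for _ in range(5):
    print(recall(memory, context=[0.3, 0.3, 0.3]))
```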
This is a topic still stuck in the past, and most people only make claims about its impossibility from intuition. There have been many more advances in computer science and neurology than just the ones described. Self-driving cars are made to do one thing: drive from one place to another the best way they can. But how many people have built learning machines whose primary objective is to survive, providing themselves with fuel while using sensors to gather information about dangers and the exterior world?
Finally, COULD we upload our minds into a computer? Obviously it'd be extremely difficult, and I doubt it would function the same way unless we were instead simulating each and every neuron and their firing patterns. And since there'd be no body, no sensors, none of the usually necessary input, it might degrade over time into something weird and otherworldly. It would first need inputs similar to what a body perceives.

RPGgrenade

Great video! As a computer scientist, I have some thoughts on the matter:

You pointed out that computers are binary and the brain might not be, but this is merely a resolution problem. For a processor to simulate the output of a neuron with more than two states, it is just a matter of using more bits to describe it (2 bits can describe 4 different states, 3 bits 8, and so on). Today's computers use 64-bit words; that's a lot of numbers. The problem of the uploaded brain is also a problem of the Turing equivalence of the brain: are the brain's inner workings computational or not?

For Turing, computers are machines that operate on sets of symbols, following a number of defined rules (algorithms), over a memory space. So if you can describe how to solve a problem in those terms, it can be run on a computer.
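That definition fits in a few lines of Python. Here is a toy machine whose rule table flips every bit on its tape and halts at the first blank (a sketch of the formalism, not of anything brain-like):

```python
RULES = {  # (state, symbol) -> (new state, symbol to write, head movement)
    ("scan", "0"): ("scan", "1", +1),
    ("scan", "1"): ("scan", "0", +1),
    ("scan", " "): ("halt", " ", 0),
}

def run(tape, state="scan", pos=0):
    tape = list(tape) + [" "]                 # memory space with a blank at the end
    while state != "halt":
        state, write, move = RULES[(state, tape[pos])]
        tape[pos] = write                     # apply the rule
        pos += move                           # move the head
    return "".join(tape).rstrip()

print(run("10110"))  # -> 01001
```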

Artificial intelligence and a lot of natural-world simulations operate in a fringe area, where we don't know if the problems are computationally solvable, whether their answers are deterministic, whether they would take thousands of years to solve, whether there is enough information to solve them, or whether they are just poorly specified. We are not entirely sure these programs can be written as understandable, testable and deterministic algorithms, and sometimes they can be unpredictable and produce unforeseen innovations that the programmer did not think of beforehand. Big Data uses algorithms to find new or hidden patterns in huge collections of data, so it is possible to say that computers can be creative.

If the brain can be simulated using only computational techniques – for example, if we could describe it as a neural network with billions of neurons plus a long-term memory mechanism, and all these mechanisms were computational – then we should be able to make digital copies of our brains. But these copies might be fairly different from organic brains. For example, our computers are usually von Neumann machines, an architecture that may not be well suited to running the highly parallel brain. And resolution may be a problem: if we end up rounding off some digits in the process, how different will this digital personality be?

But, as you pointed out, intelligence probably extends beyond the brain, and these copies may deviate from the original because of their bodies (whatever those may be), or simply because their functioning is dictated by hardware susceptible to a myriad of electronic faults that the organic brain is not.

mzklucas

I recently read Gödel, Escher, Bach by Douglas Hofstadter, which is a hefty tome and not necessarily the most current (published 1979), but it presents some really interesting ideas about artificial intelligence and mapping brains onto a sufficiently complex computer, and relates them to number theory and the fundamental incompleteness of any formal system, programming language, and likely any thought process. It was a really interesting read, and this video reminded me of it.

belgaer

All models of computation are interchangeable: Turing machines, lambda calculus, neural networks and modern computers can each be arranged so as to simulate the others. So the base physical nature of computers isn't, in theory, a problem here. "Ones and zeros" can represent anything computational... theoretically. Of course, there will be an issue of "resolution".
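One direction of that interchangeability is easy to show: a single threshold "neuron" with hand-picked weights computes NAND, and NAND gates suffice to build any digital circuit, hence any modern computer. A minimal sketch:

```python
def nand_neuron(a, b):
    """A threshold unit that fires unless both inputs are on."""
    return 1 if (-2 * a) + (-2 * b) + 3 > 0 else 0

for a in (0, 1):
    for b in (0, 1):
        print(a, b, "->", nand_neuron(a, b))  # prints the NAND truth table
```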

upchuckles