Consciousness, Computers and the Chinese Room

Comments

A new video from my favorite Elliot Rodger YouTuber. What a treat

mastercontrol

Luke Smith DESTROYS Chinese Room misunderstanders with FACTS and LOGIC

qmillomadeit

Hope you are doing fine after that earthquake, Luke.

Alekov_

Oh, I guess I misunderstood. I thought the Chinese Room experiment was a creepypasta title

cowoverthere

You mean we should be properly informed before fighting to the death for _our_ cause? That's not the modern way.

buggs

When I hear you say "gooeyness" I think "GUIness".

iLiokardo

IN ONE SENTENCE (paraphrasing Luke): If you have a machine that replicates the human brain, then the machine MIGHT be conscious, but it does not necessarily have to be, i.e. it's not a proof.

chrikrah

Luke, seriously, thank you for the channel.
Aside from the handy tech tips and entertainment value, I've started to think more and read more.

altEFG

This is an interesting topic. Just in the last few days I was following the storyline of the game SOMA (I don't intend to buy it, or any other game at this point, but it seems interesting). The story revolves around computers, AI, awareness, consciousness and sentience. Without ruining most of the plot, it goes something like this: AI develops enough "think patterns" that it could be considered aware and conscious, and people "upload" their brains into a digital program by scanning their brain patterns, basically becoming humans inside programs that run on robots.
An interesting paradox happens when humans upload themselves: the Ship of Theseus. Who is the real individual and who is the clone? To solve the paradox, some smartass reasoned that at one point both the flesh and the AI are in sync, so in order for the AI to be the continuation of the flesh, the flesh must be killed, and so the AI becomes the original. Otherwise, the AI would only be a clone of the individual, and the two would start developing differently because of their different experiences. The idea of uploading the brain was supposed to let people live forever in a "nirvana-like" VR where all their needs are met. The plot follows some "unforeseen" developments, however.
My take on it is that, while robots and AI can develop rationality, which is what we call awareness and consciousness (the ability to sense or experience external factors and make logical decisions about things), AI can pretty much never develop _sentience._ Sentience, as I use the term, means the ability to _feel_ the experiences you live. It is a fact that humans aren't the most logical monkeys at times, so this aspect is really important when discussing AI, because AI will be a very logical "being" that just takes input as declared by external factors, transforms the data, then outputs new data through pure logical thinking, but without the "bias" of the human experience. Now, I'm not saying that I'm absolutely certain, 100%, that humans are the only sentient beings, possessing both consciousness and sentience; I'm saying that humans are the only ones _we know of_ that possess said characteristics. There may be other life forms "out there," we may invent or genetically develop other life forms with the same characteristics, or nature may even mutate other life forms with the characteristics that humans possess. But until then, humans are the only ones (we know of), and no piece of metal can replace it or even come close to it.

MrBiky

Can you start filming in the back seats? No particular reason

johndoe-fuqr

The "Chinese" Room
2:36 "what happens is that there is a little SLIT" (slits for eyes)
I thought you went the wrong way there for a second

FeoRache

I was really gripped by this argument when I first became interested in philosophy of mind. Now I find more agreement with the responses to the Chinese Room which say that while the program itself doesn't "understand Chinese," as Searle says, the whole room would, at least in a functional sense. Obviously the hardware that implements this room's consciousness bears no resemblance to the brain's mechanisms, but that's sort of beside the point, since that was precisely why Searle set up the thought experiment in the first place.
Whether or not a conscious computer has intentionality or qualia in the same sense that we do is in some sense irrelevant to the computational view, since such a computer would just have different mechanisms performing the functional role played by those features of human consciousness. The whole point behind the computational theory is that it doesn't really matter whether the mechanisms are the same, because we're defining consciousness in terms of WHAT IT DOES as opposed to HOW IT'S BUILT. So sure, it doesn't presently answer questions like how colors enter the theater of mind, but that's because that's a question about the mechanical operations of OUR consciousness. Instead, the computational approach is trying to ask what the theater of mind is DOING when it populates a color.

Logomachus

Let's define consciousness first (kek). Also, I'd highly recommend the book Surfing Uncertainty, which seems to summarize the latest (as of 2017 or so) knowledge on how real nervous systems and neural networks work. It is kind of a /suffocating/, hard-to-read book, with lots of digressions and repetition, but it does get its point across. The predictive processing machinery it describes is shown to explain many features of cognition, and I found it to be a pleasant model to have.

provod_

The funny thing about consciousness is that I can think "words" in my head that my brain must in some way "egress" and then at the same time "ingress" and parse syntactically. It seems like consciousness is the "in-between," whatever it is that "floats" between these two processes. At the same time, completely wordless ideas can form in your mind and you can act on them before you can express in words what they are; I think this is generally how we go about most of our day.

MatthewSchultzSeriously

The problem is that we do not know what Mind or Consciousness really is. For example, just because it appears that Mind is the result of brain functions doesn't necessarily mean that's true. On the contrary, it could be that the brain (or some other bodily function) is just an instrument through which Mind manifests itself. I think the closest we come to defining something as conscious is when that something is self-aware, and it's very unlikely that is the case with any form of computer.

gwojcieszczuk

I've thought a lot about this, and I've come to the conclusion that the thing that experiences is different from the thing that thinks (the brain). Experience, which is the term I use for the thing that experiences in order to avoid confusion, has no output. If experience has no output, then there are two possibilities: either experience is something fundamental, composed of no parts, or it is composed of parts and something is blocking us from seeing those parts. If we could see and observe those parts, they could be read as outputs.
This argument can be applied generally to something called a purely internal function. There are also impure internal functions, like particles in quantum physics.

michaelmam

Feels good to hear someone say Philosophical Zombie in the modern age rather than "NPC," even if they mean the same thing.

_dser

I had to do an essay in university about the responses to the Chinese Room experiment and their merits/demerits.
When I started writing the essay I was an adamant believer that John Searle's argument was stupid (as I too had misunderstood it).
But one day it finally clicked: I understood what he was getting at and fully agreed with his point. I got so excited by the real argument that I wrote the essay on how everyone had misunderstood it and how it was actually a really good argument.


I got a terrible mark because in my excitement I had forgotten what the essay was actually supposed to be about, and I'd just gone on a tangent instead.
I don't regret it though, as I probably wouldn't have ever understood the Chinese Room properly without it.

boredinlecture

You aren't conscious, Luke.
Prove me wrong

alurma

Thanks for the book recommendation. If you have more, keep them coming.
Are you planning to explore more anthropology-related things in a future video? Like the one you did on the shift to agriculture in prehistory and its effect on humans.

一郎-ei