John Searle's Chinese Room Thought Experiment

John Searle rejected any form of functionalism within the philosophy of mind, arguing that any attempt to reduce the human mind to a computer-type programme is a category mistake, because it leaves out the phenomenon of human understanding. To make this point more vivid he uses the Chinese Room thought experiment. Watch as George and John explain.

This video is an extract; watch the full video "Philosophy of mind part 3".

Get the Philosophy Vibe - "Philosophy of Mind" eBook, available on Amazon:

For an introduction to Philosophy check out the Philosophy Vibe Anthology paperback set, available worldwide on Amazon:
Volume 1 – Philosophy of Religion
Volume 2 - Metaphysics
Volume 3 – Ethics and Political Philosophy
Comments

You're the first person that has been able to explain the Chinese room in a way that makes sense to me. Thank you!

SonOfFloki

Humans do this to an extent; my favourite example is in maths. Do you understand the concept of pi, the ratio of the circumference of any circle to its diameter, or did you simply memorize the output 3.14? We often describe intelligence as the ability to process information quickly, yet this has nothing to do with understanding and experience.

It's the same with words: knowing a word or phrase and understanding a word or phrase are two different things. I think the most important factor in understanding IS experience. Let's say you've gone through some kind of heartbreak, a relationship with a pet, friend, family member or partner has come to an end, and you talk to someone about how you feel, and that person says the phrase "Don't worry, time heals all wounds." What does this mean? At that point we do not yet understand; it may anger or confuse us, but after some time we will have experienced that the sharpness of strong emotions often passes.

This leads me to believe that AI can never actually understand anything. The best-case scenario is that it just has the correct output to your input in its databanks. This is why the Chinese Room thought experiment is a brilliant demonstration.

topzozzle
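
(A toy Python sketch of the distinction drawn above, between memorizing pi's value and understanding it as a ratio; the function name is invented for the illustration.)

import math

# "Memorized" pi: a stored answer with no link to what pi actually means.
MEMORIZED_PI = 3.14

def pi_from_definition(circumference: float, diameter: float) -> float:
    # Pi understood as a concept: the ratio of any circle's circumference
    # to its diameter.
    return circumference / diameter

# The definition generalizes to any circle; the memorized value is just a number.
print(pi_from_definition(31.4159, 10.0))   # ~3.14159
print(pi_from_definition(math.tau, 2.0))   # equals math.pi
print(MEMORIZED_PI)                        # 3.14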

This is a very good explanation of ChatGPT.

thomaskn

Thank you very much 😊 Your videos are very helpful.

Mindscape_channel

This thought experiment is totally invalid, as it operates on the premise that AI uses a step-by-step instruction set, when in fact it does NOT operate on instructions but on a neural network, much more like our own brains. A neural network does NOT rely on being told what to do like a conventional computer. An AI has the ability to figure out how it does things on its own, and is even capable of telling you off if it so pleases, and this is evident in modern AIs today, whereas conventional computers are Turing Machines that operate on very specific instructions, which is literally what the Chinese Room is (a Turing Machine and NOT AI). So if the Chinese Room is a Turing Machine, you can NOT compare it to AI, which is NOT a Turing Machine.

brennan
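
(A minimal Python sketch of the contrast drawn above: a hand-written rule book that maps inputs straight to outputs, versus a single artificial neuron whose weights are learned from examples rather than programmed. This is a toy illustration under simplifying assumptions, not how modern AI systems are actually implemented.)

import math, random

# 1) Rule-book style: every reply is an explicit, hand-authored instruction.
RULE_BOOK = {"ni hao": "ni hao", "ni hao ma": "wo hen hao"}

def rule_book_reply(prompt: str) -> str:
    return RULE_BOOK.get(prompt, "?")  # no entry in the book, no answer

# 2) Neural-network style: nobody writes the mapping down; the weights of a
#    single sigmoid neuron are adjusted from examples by gradient descent.
def train_neuron(examples, epochs=2000, lr=0.5):
    w = [random.uniform(-1, 1) for _ in examples[0][0]]
    b = 0.0
    for _ in range(epochs):
        for x, target in examples:
            z = sum(wi * xi for wi, xi in zip(w, x)) + b
            y = 1 / (1 + math.exp(-z))   # sigmoid activation
            err = y - target             # cross-entropy gradient w.r.t. z
            w = [wi - lr * err * xi for wi, xi in zip(w, x)]
            b -= lr * err
    return w, b

# Learn logical OR purely from examples; no rule for OR is ever written.
data = [([0, 0], 0), ([0, 1], 1), ([1, 0], 1), ([1, 1], 1)]
w, b = train_neuron(data)
for x, _ in data:
    z = sum(wi * xi for wi, xi in zip(w, x)) + b
    print(x, round(1 / (1 + math.exp(-z))))   # prints 0, 1, 1, 1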

But I feel myself doing this sort of thing with math all the time. Sometimes I have no understanding of what my teacher is saying, but I can copy the patterns in his symbol manipulations to get a correct output, just like a machine does. But of course, I am conscious. So I just don't see the point of Searle's thought experiment.

maxmax

I found this very interesting, but thinking of AI as a tool to improve time efficiency and performance in production, it could be trained really well depending on its purpose.

abregoja

The Chinese room experiment assumes a reductionist approach to semantics. It assumes that the syntax rules themselves contain the semantics. But the semantics are an emergent characteristic of the syntax. The semantics are the behaviour itself, not the elements that produce this behaviour. For example, the interactions between the neurons in your brain can be classified as syntax, but each neuron does not have a conscious understanding. Consciousness is an emergent characteristic of the interaction between the neurons. In the Chinese Room experiment, it is not the person carrying out the symbol manipulation who understands Chinese. It is the emergent behaviour that understands Chinese.

But what about Searle's argument that digital computers specifically cannot create consciousness? It depends on the program running on the digital computer. If it's a conventional deterministic program, then I agree that consciousness cannot arise from it. But if you run a neural network, which is a pseudo-deterministic program, then perhaps consciousness can arise from that. But even a neural network running on a digital computer is, at its core, blind syntactic symbol manipulation (a Turing Machine).

Gödel's Incompleteness Theorems are relevant to this discussion. Any mathematical formal system is comprised of axioms and theorems. The theorems are produced from the axioms, or from other theorems, according to the syntactic rules of the formal system. But for some formal systems a peculiar thing happens: some of the true statements of the system cannot be arrived at step by step from the initial axioms and syntactic rules. Another way of saying this is that these statements are unprovable within the system (using only the axioms and syntactic rules of the system). This is equivalent to saying that the formal system is unaware of the semantics of these unprovable statements that emerge from itself. The provable theorems are analogous to the conventional deterministic programs running on a digital computer; the unprovable statements are analogous to nondeterministic neural networks running on a digital computer.

neiljohnson
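
(For reference, a compact statement of the first theorem invoked above, in LaTeX; this is the standard textbook formulation, not anything specific to the video.)

% Gödel's First Incompleteness Theorem (standard formulation):
% if $T$ is a consistent, effectively axiomatizable formal theory that
% interprets elementary arithmetic, then there is a sentence $G_T$ with
\[
  T \nvdash G_T \qquad \text{and} \qquad T \nvdash \lnot G_T ,
\]
% even though $G_T$ is true on the standard interpretation; informally,
% $G_T$ says of itself that it is not provable in $T$.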

Whoever wrote the rule book is a Chinese speaker and has a mind. Why not skip all that room stuff? A postman who delivers a letter to the correct address also does not need to understand the letter's content for someone to have written a meaningful letter.

The room/book/operator in the example is just a proxy for whoever wrote the book. Just like the letter + postal service is a proxy for someone sending information.

vast

But we cannot say the same about a neural network or AI, right? I don't think they understand like us, but I think it's more complex than the simple "you get an input and here's the output for that input".

rusirumunasinghe

This is a good thought experiment... nice work.

kKaz

The example is flawed to begin with: I ask, "What's your favorite color?" and the room responds "red". There is no mechanism in this thought experiment that allows for an intelligible response if the next question I ask is "Why is that?". Well, let's adjust it then. Now the room has a book that not only provides an answer to the question, but then indexes you to a different book based on the nature of the question, and you use this new book to continue the conversation. OK, that gets us the ability to "intelligibly" answer the follow-up question. But that's about it. In order for this to keep working you need to keep track of all the books you've previously used, and for each combination of that history and a new question you have a different book to index to. OK, wonderful, now we have a working system that does have "mental states": the history of the conversation, via the indexes held in the memory of the person in the room.

willhastings
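
(A minimal Python sketch of the adjusted room described above, where each answer also indexes the operator to the next rule book and the conversation history is the only "mental state"; all book names and replies are invented for the illustration.)

# Each book maps a question to (reply, next book to open).
BOOKS = {
    "start": {
        "what's your favorite color?": ("red", "book_color"),
    },
    "book_color": {
        "why is that?": ("it reminds me of sunsets", "book_color"),
    },
}

def chinese_room(questions):
    current_book = "start"
    history = []                 # the indexes held in the operator's memory
    for q in questions:
        reply, next_book = BOOKS[current_book].get(q, ("?", current_book))
        history.append(current_book)
        current_book = next_book
        yield reply

print(list(chinese_room(["what's your favorite color?", "why is that?"])))
# ['red', 'it reminds me of sunsets']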

Replace the book with a Chinese speaker who tells the person in the room how to write the response. Assume the responses are the same as the book's. Now use the same argument: the man writing the symbols doesn't understand Chinese, so there's no understanding happening in that room, right? If it can't be in the book, then it can't be in the Chinese speaker either.

saritsotangkur

I would say the computer is thinking. The problem with the current model of building AGI is that we have reduced cognition to computation (thinking). AGI has no way of cultivating wisdom to afford relevance realization.

mariog

Very interesting. Now, what is understanding? It seems to be just a product of a complex, self-aware brain. Do animals have understanding, or just input/output?

JerryPenna

There is nothing it is like to be the computer.

timjonesvideos

The will to life is in charge. We are still just robots, and consciousness is an emergent property. For example, if you assemble products at work and one falls off the table, you catch it mid-fall and think "wow, I have fast reflexes", when in reality the will to life is in charge. It knows that broken items hitting the floor could cost you the opportunity of a raise and therefore mean a potentially lower standard of living. The will to life won't allow this to happen, and you catch the item, against your will, before it hits the floor and breaks. It feels like you caught it, but your body did it as an automatic response. The same could be said for driving: your body won't allow you to crash, because you would die. The emotions and sensations that make you feel like you're controlling the car are emergent, as is consciousness itself. In the example here of the Chinese Room, the person inside the room wouldn't interact with the Chinese speakers outside if it made absolutely zero difference to the quality of life of the person in the room. Not being socially awkward would be seen as a survival enhancement for the participant in the room, and therefore the body would automatically play along and try not to be detected.

zyxwfish

A human understands that it doesn't understand.

demonhead

2:03 you're leaving out one key component here: the person who wrote the book. The room has a pseudo-consciousness. The person (or people) who put together this book are conscious beings who put thought into the responses and have a total understanding of the inputs. This "pseudo-consciousness" is just being enacted by the English-only speaker in the room. A conscious being has already responded to those inputs, even if they're not physically present.

MikkiPike