What Does The Chinese Room Argument Actually Prove? w/ Dr. Jim Madden

The Chinese Room argument is often taken to refute the theory that human minds are computer-like. Dr. Jim briefly explains the argument while outlining what he takes its limitations to be.

Please like, comment, share, and subscribe.

Comments

Forgive me if I've forgotten something, but the context of the argument, at least in the 1980 paper, was that the manipulation of formal symbols in computers wasn't, and can't be, the same as semantic appreciation: syntax alone doesn't give you semantics. Therefore the Turing Test is insufficient to identify intelligence.

I think the argument (insofar as a thought experiment is an argument) works. The systems reply is inadequate because it offers no account of what it would mean for the system to actually grasp and appreciate the real meaning of words and propositions. Granted, even if no part of my brain understands language, my whole person somehow still understands language. But the Chinese Room is an analog of a computer, not a person. THAT is the point. Those who want to reduce intelligence to computation and the manipulation of formal symbols can't get to real intelligence. Again, I think Searle's point works as intended here.

GulfsideMinistries

That's not a good clarification. The man was unintelligible and barely uttered a single complete sentence. It's ironic, because the Chinese Room example was meant to show that mere rule-following, such as what a digital computer does and what the man in the room does (a purely syntactical operation), cannot achieve understanding, precisely because it lacks semantic content. Dr. Madden didn't demonstrate much semantic content either.
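A minimal sketch of that purely syntactic operation, with an invented rule book (the entries below are illustrative, not from the video): the "room" maps input symbol strings to output symbol strings by table lookup, producing fluent-looking replies without any grasp of what the symbols mean.

```python
# A toy "Chinese Room": pure table lookup, no semantics.
# The rule book pairs input symbol strings with output symbol strings;
# the operator never needs to know what any of the strings mean.
RULE_BOOK = {
    "你好吗": "我很好",           # "How are you?" -> "I'm fine" (unknown to the operator)
    "你叫什么名字": "我没有名字",  # "What's your name?" -> "I have no name"
}

def room_operator(symbols: str) -> str:
    """Follow the rule book mechanically; fall back to a fixed shape
    for anything the book does not cover."""
    return RULE_BOOK.get(symbols, "我不明白")  # "I don't understand"

print(room_operator("你好吗"))  # fluent-looking output, zero understanding
```

The lookup table stands in for Searle's rule book; nothing in the program ever touches meaning, which is exactly the gap the comment points to.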

geoffwhite

I think Searle responded that if you argue that "the system" or "the whole" understands, then you can adjust the scenario, for example by having the person memorize the rules and draw the answering signs from memory. Being told what to do is different from understanding what you do.
It's simply an argument to establish that a performance which traditionally required intelligence does not show that the thing behind it is intelligent in the same way.

HoTTDooDleZ

The Chinese Room experiment assumes a reductionist approach to semantics. It assumes that the syntax rules themselves contain the semantics. But the semantics are an emergent characteristic of the syntax. The semantics is the behaviour itself, not the elements that produce this behaviour. For example, the interactions between the neurons in your brain can be classified as syntax, but each neuron does not have a conscious understanding. Consciousness is an emergent characteristic of the interaction between the neurons. In the Chinese Room experiment, it is not the person carrying out the symbol manipulation who understands Chinese. It is the emergent behaviour that understands Chinese.
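A toy sketch of that emergence claim (my illustration, with hand-picked weights, not anything from the comment): no single unit below computes or "knows" XOR; each merely thresholds a weighted sum, yet XOR appears as the behaviour of the network as a whole.

```python
# A hand-wired two-layer network that computes XOR.
# Each unit only thresholds a weighted sum ("syntax");
# the XOR behaviour exists only at the level of the whole network.

def unit(inputs, weights, bias):
    """A single neuron: weighted sum plus threshold. It has no notion of XOR."""
    return 1 if sum(i * w for i, w in zip(inputs, weights)) + bias > 0 else 0

def xor_net(x1, x2):
    h1 = unit([x1, x2], [1, 1], -0.5)     # fires if at least one input is 1 (OR)
    h2 = unit([x1, x2], [1, 1], -1.5)     # fires only if both inputs are 1 (AND)
    return unit([h1, h2], [1, -1], -0.5)  # OR minus AND = XOR

for a in (0, 1):
    for b in (0, 1):
        print(a, b, "->", xor_net(a, b))  # 0 0 -> 0, 0 1 -> 1, 1 0 -> 1, 1 1 -> 0
```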

But what about Searle's argument that digital computers specifically cannot create consciousness? It depends on the program running on the digital computer. If it's a conventional deterministic program, then I agree that consciousness cannot arise from it. But if you run a neural network, which is a pseudo-deterministic program, then perhaps consciousness can arise from that. Even so, a neural network running on a digital computer is, at its core, blind syntactic symbol manipulation (a Turing machine).
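On that last point, a small sketch (illustrative numbers, my own): even a network with injected randomness is, on a digital computer, a deterministic function of its input plus its PRNG seed, i.e. still rule-governed symbol manipulation.

```python
import random

def noisy_unit(x, seed):
    # A "stochastic" unit: weighted sum plus pseudo-random noise.
    # With the seed fixed, every run performs the same bit-level steps.
    rng = random.Random(seed)
    return 1 if 0.8 * x + rng.gauss(0, 0.1) > 0.5 else 0

# Same input and same seed give the same output every time: the
# "randomness" is itself more deterministic symbol manipulation.
print(noisy_unit(1.0, seed=42) == noisy_unit(1.0, seed=42))  # True
```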

Gödel's Incompleteness Theorems are relevant to this discussion. A mathematical formal system consists of axioms and theorems. The theorems are produced from the axioms, or from other theorems, according to the syntactic rules of the formal system. But for some formal systems, a peculiar thing happens: some true statements of the system cannot be arrived at step by step from the initial axioms and syntactic rules. Another way of saying this is that these statements are unprovable within the system (using only its axioms and syntactic rules). This is equivalent to saying that the formal system is unaware of the semantics of these unprovable truths that emerge from itself. The provable theorems are analogous to the conventional deterministic programs running on a digital computer; the unprovable truths are analogous to the nondeterministic neural networks running on a digital computer.
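A loose but concrete analogue (my example, borrowing Hofstadter's MIU system, not part of the comment): starting from "MI", the system's four rewrite rules never produce "MU", yet "MU is underivable" is a provably true statement *about* the system, since the number of I's starts at 1 and the rules only leave it unchanged, double it, or subtract 3, so it is never divisible by 3. A bounded-search sketch:

```python
# Hofstadter's MIU system: derive strings from "MI" with four rewrite rules.

def successors(s):
    if s.endswith("I"):                  # rule 1: xI -> xIU
        yield s + "U"
    if s.startswith("M"):                # rule 2: Mx -> Mxx
        yield s + s[1:]
    for i in range(len(s) - 2):          # rule 3: replace any III with U
        if s[i:i + 3] == "III":
            yield s[:i] + "U" + s[i + 3:]
    for i in range(len(s) - 1):          # rule 4: delete any UU
        if s[i:i + 2] == "UU":
            yield s[:i] + s[i + 2:]

def derivable(max_len=12):
    """All strings of length <= max_len reachable from 'MI'."""
    seen, frontier = {"MI"}, ["MI"]
    while frontier:
        nxt = []
        for s in frontier:
            for t in successors(s):
                if len(t) <= max_len and t not in seen:
                    seen.add(t)
                    nxt.append(t)
        frontier = nxt
    return seen

reachable = derivable()
print("MU" in reachable)                              # False, at any search bound
print(all(s.count("I") % 3 != 0 for s in reachable))  # True: the I-count invariant
```

The search can never succeed at any bound, and the reason (the mod-3 invariant) is visible only from outside the rules, which is the flavour of "true but unprovable within the system" the comment gestures at.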

EinSofQuester

Eh? Your brain doesn't understand English? Well, I don't think it understands the point of Searle's argument, for sure.

johnward