Chinese Room Replies

This is the second part of a two-part series on John Searle's famous Chinese Room argument against "Strong AI".
Comments

The system doesn't understand Chinese either.
Machine learning is just statistics: which word (token) is most likely to follow another.
The system can calculate the probability of one symbol appearing next to another symbol, yet it doesn't know what the symbols mean.
That's how text prediction works.
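The statistics this comment describes can be sketched in a few lines (an illustrative toy, not anything from the video): count which word follows which, then predict the most frequent successor. No meaning is involved anywhere.

```python
from collections import Counter, defaultdict

corpus = "the cat sat on the mat the cat ate the fish".split()

# Count, for each token, which tokens follow it and how often.
successors = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    successors[prev][nxt] += 1

def predict_next(token):
    """Return the most likely next token and its estimated probability."""
    counts = successors[token]
    total = sum(counts.values())
    word, n = counts.most_common(1)[0]
    return word, n / total

print(predict_next("the"))  # ('cat', 0.5): "cat" follows "the" 2 of 4 times
```

The predictor manipulates symbols purely by frequency, which is exactly the point being made: it never needs to know what "cat" refers to.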

ChristianIce

Eventually, over time, the person in the room would learn the language.

wonder

How do we know the man in the room didn't know the language? He can say whatever he would like.

wonder

I still find it difficult to convert a digital, binary existence into a non-binary, analog one, given the limits of interpreting digital "on" or "off" states as the varying degrees of quantitative information derived from analog states. There are other, more philosophical objections concerning the capacity of AI to experience a corresponding emotional effect from any genuine aesthetic appreciation of art, literature, theater, films, or beauty in general. Emotional content is important for human consciousness, not only to establish trust and bonds with others, but also to assess others' mental states and to modify behavior and responses appropriately.

So as long as the AI is dependent on digital, binary hardware, its only means of implementation is through software. This digital, binary method is always going to be calculated using Boolean algebraic equations resulting in a "true" or "false" statement, represented by a "1" or "0" stored in ROM chips as the BIOS, in active RAM, and on a hard drive or solid-state drive. And the data the AI uses is fundamentally a sequence of positive and negative charges in memory or on a physical medium.

I am aware that the binary content of data is converted into machine language using hexadecimal, which higher-level processing like the BIOS uses; then, further up the scale, programming languages build on machine language, and software applications themselves make higher-level use of those programming languages. It is at the software and programming level that AI is possible to any satisfactory degree, but it is also largely a predetermined environment of algorithmic pathways leading to logic gates with "true" or "false" conclusions that the AI must use to determine subsequent behavior or thought. These "on" and "off" states merely determine the next algorithmic pathway, relevant to the conditions resulting from the previous decisions, by which the AI compiles new data for possible answers to the next "true" or "false" decision it must make.
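The "everything bottoms out in Boolean algebra" point can be made concrete (an illustrative sketch of my own, not from the comment): even binary addition is just combinations of AND, OR, and NOT gates over 1s and 0s.

```python
# Primitive gates over single bits (0 or 1).
def AND(a, b): return a & b
def OR(a, b):  return a | b
def NOT(a):    return 1 - a

def XOR(a, b):
    # Built entirely from the gates above.
    return OR(AND(a, NOT(b)), AND(NOT(a), b))

def half_adder(a, b):
    """Add two bits: returns (sum_bit, carry_bit)."""
    return XOR(a, b), AND(a, b)

print(half_adder(1, 1))  # (0, 1): 1 + 1 = binary 10
```

Every higher-level operation a CPU performs is, at bottom, a cascade of gates like these, which is the layer the comment is pointing at.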

Even if, within the software, you expanded the AI's choices beyond true-or-false Boolean equations, such as a choice of restaurants to take its AI date to (conceivably any restaurant nearby), the AI will still make a series of binary, true-or-false decisions to arrive at that choice. One might be part of its social programming, which suggests asking your date what her choice might be. So the first binary response is "Should I ask my date for her opinion?" T or F. The social programming algorithm answers "TRUE", and so the active algorithm switches to *ASK DATE HER CHOICE OF RESTAURANT* mode, with a variety of predetermined solutions depending on her response. If she comes back with "I don't care, you decide," then the AI switches to a restaurant selection process using predetermined variables to judge the quality and feasibility of each option, like proximity, expense, atmosphere, anticipated reaction of the date, etc. Each of these decisions will be based on a "true" or "false" response to a set of predetermined variables such as "Is the distance no more than 20 miles?" or "Has my date shown interest in this food before?" or "Has my date expressed a liking for the reported atmosphere?" And all of these factors must be considered, as a human would consider them, to be anywhere near true sentience and awareness. On the fly. In real time.
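The decision cascade this paragraph walks through can be sketched directly (the function and field names here are my own hypothetical choices, not anything specified in the comment): each step is a Boolean test that merely selects the next predetermined branch.

```python
def choose_restaurant(date_preference, restaurants):
    # First binary decision: "Should I ask my date for her opinion?"
    should_ask_date = True  # answered TRUE by the "social programming" rule
    if should_ask_date and date_preference is not None:
        return date_preference  # *ASK DATE* mode: she chose, so defer to her

    # She doesn't care: fall through to predetermined true/false predicates.
    for r in restaurants:
        close_enough = r["distance_miles"] <= 20          # T or F
        liked_atmosphere = r["atmosphere_rating"] >= 4    # T or F
        if close_enough and liked_atmosphere:
            return r["name"]
    return None  # no option passed every Boolean test

options = [
    {"name": "Diner",  "distance_miles": 30, "atmosphere_rating": 5},
    {"name": "Bistro", "distance_miles": 5,  "atmosphere_rating": 4},
]
print(choose_restaurant(None, options))  # Bistro
```

However rich the menu of options gets, every fork in this code is still a single true-or-false evaluation, which is the comment's point.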

I don't think we are going to get the 1's and 0's to handle it...

michaelhoward

It doesn't matter if the man understands Chinese. Imagine replacing the code book of instructions with a Chinese woman. She gives the man all the same instructions he got from the code book before, & he puts out replies. Obviously, it is the woman who understands Chinese. It's the instructions themselves (the code of the computer) that can be said to understand Chinese & to be intelligent.

Tysto

Both of the responses laid out here seem to amount to "A computational system can't genuinely understand Chinese because computation doesn't count as understanding". The second one just states this outright. The first one seems to say that if the man speaks Chinese by internalizing everything from the Chinese room, he may, in all cases, do exactly what someone who genuinely understands Chinese would do, but he doesn't genuinely understand Chinese because of the way he's doing it. Searle is just taking all the information that constitutes "understanding Chinese" and imagining a way to structure it that doesn't seem like "genuine" understanding to him.

plateoshrimp

I have a novel objection to the Chinese room, or at least, I haven't seen this objection before. The analogy itself smuggles in a lot of assumptions that amount to question begging. One easy way to see this is to notice that the description of the Chinese room could fairly be mapped onto the human brain itself, which is composed of unthinking, non-conscious neurons. It is only the whole, the neurons working together, that produces consciousness as an emergent property. By the argument's own logic, its proponent should be willing to admit that they don't possess conscious understanding either, because they are essentially composed of something very much like a biological database of information. It would seem to prove that humans cannot be conscious, as well.

It also seems to smuggle in the concept of a self, which is often a subject of debate between the camps who discuss these ideas. I do not believe the self is an entity that exists. It is a type of illusion that is generated during conscious awareness. The Chinese room thought experiment very much seems to take the concept of a self as a given, because it is implicitly intended to be contrasted against the automaton in the room.

And my final objection is something of a technical nature. Artificial neural networks are not databases. I realize the Chinese room was authored before the neural network explosion, but to compare an ANN to a database is ridiculous. Deep networks develop "mental models" of the things they "think" about, much like biological networks do. Most people don't really understand what an artificial neural network is, and the Chinese room at BEST is an argument that a database cannot be aware, no matter how sophisticated it is. It doesn't even begin to support the argument that ANNs cannot be aware/conscious, because they are not based on rote, rule-based lookup like a database is. They incorporate sophisticated ad-hoc logical reasoning abilities.

The technical equivalent of the neural net Chinese room would be if you gave a billion people a billion years to work out the problem of Chinese in a logical way, based on lots of examples, and then you locked away the best of those billion translators in the room. That person would not only understand Chinese, they would understand Chinese better than any person who ever lived. ANNs are not simply databases.
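The database-versus-learning distinction this comment draws can be shown in miniature (an illustrative sketch of my own, not a claim about how any real ANN works): a lookup table can only replay stored pairs, while even a trivially trained model generalizes to inputs it has never seen.

```python
train = {1.0: 3.0, 2.0: 5.0, 3.0: 7.0}  # samples of the rule y = 2x + 1

# "Database": exact lookup only, nothing learned.
def database_answer(x):
    return train.get(x)  # None for anything not stored verbatim

# Tiny learned model: fit w and b to the same pairs by gradient descent.
w, b = 0.0, 0.0
for _ in range(2000):
    for x, y in train.items():
        err = (w * x + b) - y  # prediction error on this sample
        w -= 0.01 * err * x
        b -= 0.01 * err

def model_answer(x):
    return w * x + b

print(database_answer(10.0))  # None: the pair was never stored
print(model_answer(10.0))     # close to 21: extrapolates the rule it learned
```

The database fails on any unseen input; the fitted model recovers something like the underlying rule, which is the sense in which a trained network is not rote lookup.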

ConsciousExpression