Why does the Chinese Room still haunt AI?

In this second edition of the hosts-only "philosophical steakhouse", Dr. Keith Duggar and Dr. Tim Scarfe discuss artificial intelligence, consciousness, and understanding. They explore the computational limits of language models, the challenges of training AI systems to be Turing-complete, and the implications of these limitations for AI capabilities.

The conversation covers philosophical arguments about machine consciousness, including John Searle's famous Chinese Room thought experiment. They discuss different views on what's required for a system to truly understand or be conscious, touching on ideas from various philosophers and scientists.

The hosts also talk about the recent Nobel Prize awarded for work in deep learning, debating its merits and controversies. They touch on the recent episode of Liron's Doom Debates show that Duggar appeared on, and close with a discussion of ethics versus AI risk.

TOC:

1. Computational Foundations of AI
00:00:00 1.1 Turing Completeness and Computational Limits of Language Models
00:06:37 1.2 Finite State Automata vs. Turing Machines in AI
00:13:16 1.3 Challenges in Training Turing-Complete Systems
00:17:23 1.4 Mapping Turing Machine Programs to Language Models
00:20:41 1.5 Future Directions: Hybrid Systems and Novel Programming Approaches

2. AI Consciousness and Understanding
00:31:30 2.1 Chinese Room Argument and AI Consciousness
00:41:15 2.2 Searle's Views on Physical Realization and Consciousness
00:50:30 2.3 Emergence and Computational Limitations in Cognitive Science
00:59:55 2.4 Friston's Theory on Self-Awareness in Machines
01:04:25 2.5 Causal Structures and Understanding in AI Systems
01:06:50 2.6 Concept Role Semantics and Language Models
01:09:41 2.7 Consciousness and Computational Theories

3. AI Impact and Ethics
01:21:38 3.1 Deep Learning and the Nobel Prize for Hinton
01:29:56 3.2 AI Harm and Existential Risk
01:34:05 3.3 Balancing AI Ethics and Practical Policy

Refs:
Keith/Liron discussion on Doom Debates (must watch!)

Liron's show was in response to our previous "steakhouse" chat:
Is o1 reasoning?

Minds, Brains, and Programs (Searle)

Searle Google talk

J. Mark Bishop on MLST

Chinese room argument

Dancing with Pixies (J. Mark Bishop)

Artificial Intelligence Is Stupid and Causal Reasoning Will Not Fix It (J. Mark Bishop)

Deconstructing the AI Myth: Fallacies and Harms of Algorithmification (Dagmar Monett)

Nestedly Recursive Functions (Stephen Wolfram)

On the Measure of Intelligence (Chollet)

What Is the Philosophy of Information? (Floridi)

Gödel, Escher, Bach: An Eternal Golden Braid (Hofstadter)

Shownotes/transcript/refs (autogenerated)

Recorded Friday 11th Oct 2024
Comments:

Thanks for the shoutout guys! Likewise Keith has been 100% a great sport. I dunno how many minds were changed by me & Keith debating, it feels like everyone in the audience thinks their side won, but hopefully we all at least come away with some learnings about the nuances of the two positions 😁

DoomDebates

Yay! When Keith appears, I immediately hit play.

CyberBlaster-fudz

Dr. Keith mentioned an interesting question about a programming language where every possible combination of its alphabet would result in a valid program. This made me think of SELFIES (Self-referencing Embedded Strings), a modern string representation for chemicals that contrasts with the older SMILES notation. Unlike SMILES, where not every combination of characters forms a valid molecule, every combination of letters in SELFIES corresponds to a valid molecule (exploring the vast possible chemical space), removing grammar errors entirely in its representation. With SELFIES, you can generate as many random molecules (which are similar to computer programs in my opinion) as you want!

mahdipourmirzaei
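
A minimal sketch of the SELFIES property described above, assuming the open-source `selfies` Python package (`pip install selfies`); the symbol count and seed are arbitrary:

```python
import random
import selfies as sf

# Every concatenation of these symbols decodes to a valid molecule,
# which is what makes SELFIES "grammar-error-free", unlike SMILES.
alphabet = sorted(sf.get_semantic_robust_alphabet())

random.seed(0)
for _ in range(3):
    random_selfies = "".join(random.choices(alphabet, k=10))
    smiles = sf.decoder(random_selfies)  # never raises a syntax error
    print(random_selfies, "->", smiles)
```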

Some of my favorite episodes are you and Keith just rambling, keep it up!

ed.puckett

I don't see the dilemma with the language part of the Chinese Room argument, because I always assumed we were just carrying out explicit instructions when we 'understand' English and speak it.
We're told "i before e except after c" as a conditional statement, but then something breaks the rule and we just hardcode that word specifically. The same thing happens with silent letters and plural nouns: just throw the exceptions into an array. I don't know what people mean when they say they "understand the true meaning of the word" beyond some associations and rules.
I think translation is just converting those explicit instructions into other explicit instructions. When you code in C++ you could say you don't "understand" what you're saying to the computer, because if you don't know assembly you don't know what the compiled code is doing, but you obviously do understand it; you're just speaking a different language to communicate the same message.
The intent comes from inside your head, and whether you output it into your native language or some other one is always going to be an abstraction and a translation.
Sometimes we don't even have words for complex feelings and concepts, because we can't compile the abstraction into the code our native language runs on. I could understand if the missing understanding were about raw perception or qualia, but language? When was language ever more than what a compiler does?
As for the Chinese room automatically replying to questions with answers, that to me again just means language is a computation, and sometimes it solves problems without needing to interact with the world. "Cat sat on the ___" isn't much different from "1+1=", whether you perform either operation on a human or a calculator.
Asking an LLM to fall in love with you isn't much different from asking an equation how many grapes are left in the nearest store if you subtract 50.
If the data is there, it can do it well. If the data simply isn't there and you haven't enabled it to interact with the physical world to get it, it will struggle, whether it's a human or an advanced AI or single-digit arithmetic.

steve_jabz
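
A toy sketch of the "explicit rules plus hardcoded exceptions" picture from the comment above; the rule, the exception set, and the function name are hypothetical illustrations, not real linguistic data:

```python
# Spelling as a conditional statement plus an exception table.
EXCEPTIONS = {"weird", "seize", "caffeine"}  # memorized as-is; no rule applies

def ie_spelling(word_stem: str, after_c: bool) -> str:
    """Naive rule: i before e, except after c."""
    if word_stem in EXCEPTIONS:
        return "exception: look it up"
    return "ei" if after_c else "ie"

print(ie_spelling("bel_ve", after_c=False))  # ie  -> believe
print(ie_spelling("rec_ve", after_c=True))   # ei  -> receive
print(ie_spelling("weird", after_c=False))   # exception: look it up
```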

Still watching the Doom Debates episode; Keith came across brilliantly.

BrianMosleyUK

Kudos, gentlemen!! I deeply appreciate your steakhouse ramblings; I dare say, food for thought.

matteo-puev

A very important observation from a professor at EPIA2024: "if you want to compare the human mind with an LLM, you have to start from the fact that we are not dealing with one black box, but with two of them". I hardly see how the "Turing machine" argument can be of any concrete interest in comparing LLMs with humans, even if this comparison made any sense at all.

DanieleCorradetti-hnnm

I love these episodes the most. I could listen to you guys for hours.

quebono

I never understood why people are surprised by the Chinese Room argument. The book contains most of the understanding as an artifact of culture, while the human is just a robot following predefined rules. The argument is intentionally confusing because there's a nearly useless human involved, and that "surprises" us for some reason. The more complex the rules are to follow, the more understanding the robot needs, so one can push the boundary a bit, but nominally most of the understanding is in the book.

jonathanmckinney

I very much enjoy listening to you two just rambling

__moe__

Something about the fire / human mind simulation always seemed, I don't know, not quite circular but... semicircular?
A fire could be perfectly simulated in theory if we had an accurate enough simulation of the mind to perceive it. If, say, we have 1:1 digital brain scans in the year 2100 that have consciousness and sentience, presumably they could respond to a real fire stimulus translated into bits, and if they can do that, then surely it doesn't matter whether the velocity of the atoms comes from an accurate simulation of a fire or a real one.
As for other uses of the term 'burn', we already don't have much stopping us from simulating that right now. You mean it burns a virtual house down if it gets enough oxygen and all the other conditions are met? We have simulations for that, and they're as accurate as the wind-tunnel simulations used to test aerodynamics, but you could even hard-code it in a game engine and it wouldn't make much difference. If you mean it burns down the house running the simulation, why would it? That seems like the wrong bar for measuring whether it can perform that type of work on an object.
It's sandboxed and virtual. But when we're talking about simulating the human mind, sandboxed and virtual looks like it could be fine, because the mind already runs in a sandbox and virtualizes sensory data.
Maybe it isn't fine, and there's something special about biological organisms that we can't simulate, but we haven't really tried yet. It doesn't look like we even have the computational power to try. Even if we had a high-enough-resolution brain scan that captured the time domain, and we somehow scanned someone constantly from birth to adulthood, we don't have the storage and compute to run any algorithms on it or deduce anything from it.

steve_jabz

Interaction combinators are an elegant model of computation in which randomly generated programs are computationally interesting and potentially useful. Vanilla lambda calculus is also a good candidate as programs tend to be very concise.

hermestrismegistus
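
A minimal sketch of the "vanilla lambda calculus" point above, using Python lambdas as stand-ins for lambda terms; Church numerals show how concise useful programs can be:

```python
# Church numerals: numbers encoded purely as nested functions.
zero = lambda f: lambda x: x
succ = lambda n: lambda f: lambda x: f(n(f)(x))
add  = lambda m: lambda n: lambda f: lambda x: m(f)(n(f)(x))

def to_int(n):
    # Interpret a Church numeral by counting how often f is applied.
    return n(lambda k: k + 1)(0)

two = succ(succ(zero))
print(to_int(add(two)(two)))  # prints 4
```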

There's no gap. Simulations come in all types; some are more physical, some less. Even for weather or earthquakes, there are physical simulations used for testing that really do make things wet or shake the earth.

jonathanmckinney

Keith is right at 23:00. The space of algorithms is enormous and largely unexplored. We really have no idea what's out there.

PaulTopping

Guys, just let me explain it to you very clearly, so it won't haunt you anymore. First, the rule book: we don't know exactly what it would be, but let's say it's a very detailed description of the neural network that would process the response. All the neurons, how they are connected, how to calculate the signals, all the weights, etc., and obviously a long description of how exactly they would be used to process the text. The only role you would take in that system is to transport the signals between the neurons. To make it even clearer, instead of the rule book imagine a giant board game with the description of the neural network, and you are the one who just shifts pieces along it according to the rules. It's not you who is thinking; it's the rule book itself.

XOPOIIIO
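
A toy sketch of the comment's "rule book as a neural-network description": the operator only multiplies and adds numbers as the book dictates. The two-neuron network and its weights are hypothetical:

```python
# The "rule book": a complete description of a (tiny, made-up) network.
RULE_BOOK = {
    "neuron_1": {"weights": [0.5, -1.2], "bias": 0.1},
    "neuron_2": {"weights": [2.0, 0.3], "bias": -0.4},
}

def threshold(x: float) -> float:
    return 1.0 if x > 0 else 0.0

def follow_rules(inputs):
    # The operator mechanically computes weighted sums; no insight required.
    outputs = []
    for rule in RULE_BOOK.values():
        total = sum(w * i for w, i in zip(rule["weights"], inputs)) + rule["bias"]
        outputs.append(threshold(total))
    return outputs

print(follow_rules([1.0, 0.5]))  # [0.0, 1.0]
```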

I always use the same argument you used about instantiating phenomenal experiences in Turing machines. Our Turing machines can be made exclusively of NAND gates, and we can make NAND gates out of transistors, gears, tubes, or buckets. So if a Turing machine can feel things, then so can an assemblage of buckets and tubes. Note that the two instantiations perform identically for all inputs and for all time (infinite time). They are indistinguishable from the computational view. The dynamics are the same and the causal structure is the same, when viewed at the granularity of the computing elements.

rockapedra
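
A minimal sketch of the "made exclusively of NAND gates" point: NAND is functionally complete, so any Boolean circuit, and hence any of the substrates mentioned, can be built from it. Here, XOR from four NANDs:

```python
def nand(a: int, b: int) -> int:
    # The only primitive; could equally be transistors, gears, or buckets.
    return 1 - (a & b)

def xor(a: int, b: int) -> int:
    # Standard four-gate NAND construction of XOR.
    n1 = nand(a, b)
    return nand(nand(a, n1), nand(b, n1))

for a in (0, 1):
    for b in (0, 1):
        print(f"{a} XOR {b} = {xor(a, b)}")
```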

20:53 I'm sure you're aware of it already, but Stockfish+NNUE (the world's strongest chess engine) uses exactly this approach and made a significant leap in performance as a result.

ahahaha
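
A hedged sketch of the hybrid approach the comment attributes to Stockfish+NNUE: a classical alpha-beta search driving a small learned evaluation at the leaves. The game interface and `evaluate_nn` are hypothetical stand-ins, not Stockfish code:

```python
import math

def evaluate_nn(position) -> float:
    # Stand-in for an efficiently updatable neural-network (NNUE) evaluation.
    return 0.0  # hypothetical: a real engine returns a learned score

def legal_moves(position):   # hypothetical game interface
    return []

def play(position, move):    # hypothetical game interface
    return position

def alphabeta(position, depth, alpha, beta, maximizing):
    moves = legal_moves(position)
    if depth == 0 or not moves:
        return evaluate_nn(position)  # leaf: ask the network, not handcrafted rules
    if maximizing:
        value = -math.inf
        for m in moves:
            value = max(value, alphabeta(play(position, m), depth - 1, alpha, beta, False))
            alpha = max(alpha, value)
            if alpha >= beta:
                break  # prune: the classical-search half of the hybrid
        return value
    value = math.inf
    for m in moves:
        value = min(value, alphabeta(play(position, m), depth - 1, alpha, beta, True))
        beta = min(beta, value)
        if alpha >= beta:
            break
    return value

print(alphabeta("start", 3, -math.inf, math.inf, True))
```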

GPTs can understand nuance well enough to know that you're using the term correctly, but they may be trained not to act on that nuanced understanding because of rigid ethical guidelines about not crossing into gray areas.

Inventeeering

The Chinese Room describes a stateless system. If you ask the room "what is 2+2?" it will return 4. But if you then ask "what is the next number?", I believe the room will not be able to keep up. Say the answer it provides is 5; what happens when you prompt it with the same question again? It should respond 6, but it must respond 5. ChatGPT, by the way, gets this right.

What would it take for a Chinese Room to be able to deal with "what number comes next?"? It would need to have memory, and if it did, it could learn. But it does not have memory, so it must be that the agent who programmed the room was able to generate the correct responses in advance, and that means perfectly, not approximately, emulating the activity of a mind responding to every particular input the room will ever receive. That would require predicting the future; the programmer would need the power of an oracle.

cliffordbohm
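
A minimal sketch of the stateless-versus-stateful distinction in the comment above; the rule book contents and class names are hypothetical:

```python
# A pure lookup-table room: same input, same output, forever.
RULE_BOOK = {"what is 2+2?": "4", "what is the next number?": "5"}

def stateless_room(question: str) -> str:
    return RULE_BOOK.get(question, "?")

# A room with memory can actually count on.
class StatefulRoom:
    def __init__(self):
        self.last = None  # memory: the last number mentioned

    def reply(self, question: str) -> str:
        if question == "what is 2+2?":
            self.last = 4
            return "4"
        if question == "what is the next number?" and self.last is not None:
            self.last += 1
            return str(self.last)
        return "?"

q = "what is the next number?"
print(stateless_room(q), stateless_room(q))  # 5 5  (stuck)

room = StatefulRoom()
room.reply("what is 2+2?")
print(room.reply(q), room.reply(q))          # 5 6  (counts on)
```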