Artificial Intelligence & Personhood: Crash Course Philosophy #23

Today Hank explores artificial intelligence, including weak AI and strong AI, and the various ways that thinkers have tried to define strong AI, including the Turing Test and John Searle's response to it, the Chinese Room. Hank also tries to figure out one of the more personally daunting questions yet: is his brother John a robot?

Crash Course Philosophy is sponsored by Squarespace.

Want to find Crash Course elsewhere on the internet?

Comments

Hank warmed my heart when he said that even if John bled motor oil instead of blood he would still be his brother.

Crazyvale

It's amazing how a machine can be programmed to have conversations that are so human-like that it becomes difficult to distinguish them from actual human interactions. It really makes you question what it means to be a person and whether or not a machine can ever truly achieve personhood. It's definitely a topic that raises some thought-provoking philosophical questions.
- ChatGPT, 6 years later

ajs

6:58
"do you know how to speak chinese?"
"yes! i do know how to speak chinese"

if anyone wanted to know

AtenaHena

A harder test would be, can it fool itself into thinking that it is a person?

SlipperyTeeth

If a robot is ever considered a person, would it be considered immoral to turn it off or otherwise remove its power source?

schmittelt

You know, I think that any AI that displayed a degree of laziness would probably pass as a person.

thewolfofthestars

Crash Course is too addictive. All I've done is intermittently sleep and watch Crash Course Philosophy all day. I haven't gone to my classes, and all I've consumed is tea, apples, Eminem's, and crisps.

MingusTale

Living in the era of ChatGPT, it is quite alarming to look back at this video.

shawn-xlii

There actually have been a few AIs that have passed the Turing Test before. However, all of them used some sort of "cheating" to do so, such as programming the computer to always steer the conversation back to a subject it knows a lot about, telling the humans running the test that the computer has schizophrenia, or even something as simple as forcing the computer to make spelling errors.

ZacharyBurr

"What would be missing for a AI to be person-like but not a person?"

I think the answer lies in consciousness (as opposed to, say, the idea of the soul). Is the AI *conscious*? An AI that passes the Turing Test could easily pass as person-like, but lack consciousness.

How do we figure out if an AI is conscious? I think this is the big question, and I have no idea. Can we even build a conscious AI? Can consciousness arise from man-made, inorganic, "artificial" processes? I'd assume theoretically, it could. Practically, however, we may never get there.

Linkous

Does John leave the cap off the toothpaste?
Do you know who leaves the cap off the toothpaste? A synth, that's who!

MagraveOfGA

Reminds me of the Star Trek episode where Picard has to show that Data (the android) is a sentient being with the right to choose.

_Aly__

Hank should've at least pointed out the distinction between information processing (i.e. intelligence) and conscious experience. It seems pretty obvious to me that personhood vs. non-personhood will come down to whether we think a thing has conscious experience.

Most scientists do not believe that our computers (based on the von Neumann architecture) could give rise to conscious experience. No matter how generally intelligent Siri becomes, she's still as conscious as a rock. A sentient machine can only be made once we figure out what sort of complex information processing actually gives rise to conscious experience. Then we can build the hardware for an artificial consciousness.

ASLUHLUHC

+CrashCourse The response you made to the Chinese Room is the main response to this argument, and it's known as the "Systems Reply."
It goes like this: the person in the room doesn't understand Chinese, but that person is part of a system, and the system as a whole does understand it. We attribute understanding not to the individual man, but to the entire room.
Searle responds by asking: why is it that the person in the room doesn't understand Chinese? Because the person has no way to attach meaning to the symbols. And in this regard, the room has no resources that the person doesn't have. So if the person has no way to attach meaning to the symbols, how could the room as a whole possibly have one? Searle himself suggests an extension to the thought experiment: imagine the person in the room memorises the database and the rule book. He no longer needs the room at all. He goes out and converses with people face-to-face in Chinese, but he still doesn't understand Chinese, because all he's doing is manipulating symbols. Yet in this case he is the entire system.
Now of course an obvious objection is that if you can go out and converse with people in Chinese, you must be able to converse in Chinese and thus understand it.
This objection, which the functionalist could make, doesn't actually address Searle's point, though. The whole point of the Chinese Room thought experiment is that you can't generate understanding simply by running the right program.
You can't get semantics merely from the right syntax. Granted, you surely would understand Chinese if you could converse perfectly with Chinese people, but I think Searle can hold that this understanding arises not merely from manipulating symbols in the right way, but from all the various things that go on in face-to-face interactions.
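To make the syntax-without-semantics point concrete, here is a minimal sketch of the room as a program: a pure lookup table that pairs input symbols with output symbols, with nothing anywhere that represents meaning. The rulebook entries are made-up toy examples, not taken from the video or from Searle.

```python
# Minimal "Chinese Room" sketch: replies are produced purely by matching
# input symbols to canned output symbols. No part of the program represents
# the meaning of any sentence; it is syntax without semantics.
# (This toy rulebook is a hypothetical illustration, not a real chatbot.)

RULEBOOK = {
    "你会说中文吗?": "对! 我可以说中文!",  # "Can you speak Chinese?" -> "Yes! I can speak Chinese!"
    "你好": "你好!",                       # "Hello" -> "Hello!"
}

def chinese_room(symbols: str) -> str:
    """Look the input squiggles up in the rulebook and return the paired reply."""
    # Like the person in the room, this lookup manipulates uninterpreted
    # symbols; no comprehension occurs anywhere in the "system".
    return RULEBOOK.get(symbols, "请再说一遍?")  # fallback: "Please say that again?"

print(chinese_room("你会说中文吗?"))  # fluent-looking output, zero understanding
```

A real conversational program would need a far larger rulebook, but Searle's claim is that scaling it up changes nothing: it is still only symbol manipulation.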

CosmicFaust

Before even watching this past the intro, I'd say go full circle: aren't we all just really intelligent machines?

SuperExodian

I feel that you should have addressed the consciousness question. Is there a subjective experience to being the strong AI? Is that what separates us (as opposed to souls) from a machine that simulates intelligence? Or does it matter to deciding if something is an actual AI? For me, all the most interesting questions about personhood and AI in general surround consciousness.

perfectzero

"If it turns out that John, the John I've known my entire life, has motor oil instead of blood inside of him. Well, he'd still be my brother."

fluteroops

Hank: "How can I figure out my brother is a robot or not?"
Me: "Crack his skull open and feat on the goo inside?"
Hank: "Without going into his mind or body."
Me: :(

qpid

7:20, in the Flash Philosophy segment it says "对!我可以说中国话!", which literally means "Yes! I can speak China language." Great job... lol

MrDoob-xosm

I like the approach of Jack Cohen and Ian Stewart in "The Collapse of Chaos": they suggest that the mind is an emergent property, a process created by a certain arrangement of neurons.
It's like the motion of a car, something abstract and not material. If you were to "dissect" a car, you would find wheels, an engine, etc., but not a tiny bit of motion. The same applies to our brain/mind.

So, in my opinion, if we can create something similar to neurons (and not just neurons, of course, but a certain arrangement of them), we could create a mind as well. But it will probably take a few more years until they reach the level we are already at.

moch