Are Large Language Models Conscious?

As the conversation turns to AI, the group contemplates the future of artificial consciousness. Could large language models, like today's AI, eventually develop consciousness? If not now, will they in the near future? And if we can simulate human consciousness in machines, are we creating philosophical zombies—entities that behave like us but lack awareness?

FOLLOW or SUBSCRIBE to StarTalk:
Comments

Chuck really doesn't get enough credit; his question and supposition here (7:24) are really intelligent and thought-provoking.

Nefville

Chuck makes an AMAZING point. 7:30-7:50

DarkWatch.

The goalposts move because in the act of reaching the goalpost we become aware of new goalposts that we previously neglected to specify. For example, why would they add "has a self image" or "can demonstrate empathy" when they were still trying to get it to type like a human? It's like building a house: who cares what pictures are on the wall when you don't even have a roof yet?

ja

Dreaming is honestly highly underrated. It can be some of the most fun you've ever had and can be very beneficial to most areas of life, especially if you get into dreamwork like lucid dreaming and dream incubation.

shadw

9:48 farewell chuck, he laughed so hard he became pure energy

pranav_indoria

The Turing test is to artificial intelligence as the Howey test is to digital assets. They are both outdated and should not be the standard.

amd

We can't even define consciousness.

egonkirchof

I think it’s ridiculous to say we are the only animal that’s conscious.

RobotronOG

We're at a point where Neil offers the least compelling inputs in any given conversation. Chuck has improved his critical thinking and questioning to the point that it is more useful than Neil's (perceived) knowledge.

BillHawkins-bv

I've often wondered whether our brain is fully optimized (are there better biological processes that could make a brain "run" more quickly, etc.?). I imagine that once we figure out how it actually works, science will find a way to engineer one that's even more capable.

scratchanitch

The constantly shifting goalposts for passing the Turing Test, driven by technological advancements, reflect a deeper issue: the absence of a clear, universally accepted definition of consciousness. This ongoing debate underscores the elusive and complex nature of the concept. It’s for this reason that discussions about consciousness often resemble the story of people in total darkness trying to describe an elephant by touch alone, each grasping a part but missing the whole.

drtariqhabib

We will know AI is conscious when it flinches as we go to unplug it.

robspecht

I don't understand how we can be so certain that we are the only critters with consciousness when we can't describe what it actually is. Did I miss something? Can we prove scientifically that other creatures are not conscious? Would love to see what other StarTalkers think!

LeviSkinner-ln

Really enjoyed this one. Great questions and philosophising all round.

exert

Why is anybody impressed by the Turing test? It does nothing to tell you whether the computer is conscious. It just tells you about your own credulity.

donepearce

You can't answer the question of whether generative transformer AIs are conscious without delving into the mathematics that drive them.

These are not word finders; they are high-dimensional continuous function finders. The question is, are we ALSO high-dimensional continuous function finders?

Bringing a neuroscientist to discuss AI is the wrong choice. This is a math question.
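
To make the "function finder" framing concrete, here is a minimal sketch (not from the episode; the shapes, weights, and function names are illustrative assumptions): a single attention layer written in plain NumPy. Every step is a matrix multiply, a softmax, or a scaling, so the layer is one continuous function from inputs to outputs rather than a lookup table over words.

```python
# Illustrative sketch: one attention layer as a composition of continuous functions.
import numpy as np

def softmax(x, axis=-1):
    x = x - x.max(axis=axis, keepdims=True)   # subtract max for numerical stability
    e = np.exp(x)
    return e / e.sum(axis=axis, keepdims=True)

def attention_layer(X, Wq, Wk, Wv):
    """X: (n_tokens, d) activations; Wq/Wk/Wv: (d, d) learned weight matrices."""
    Q, K, V = X @ Wq, X @ Wk, X @ Wv          # linear maps (continuous)
    scores = Q @ K.T / np.sqrt(X.shape[1])    # pairwise similarities (continuous)
    return softmax(scores) @ V                # smooth weighted blend of values

rng = np.random.default_rng(0)
n, d = 5, 8                                   # toy sizes, purely illustrative
X = rng.normal(size=(n, d))
out = attention_layer(X, *(rng.normal(size=(d, d)) for _ in range(3)))
print(out.shape)                              # (5, 8): a smooth map from R^(n*d) to R^(n*d)
```

Nothing in the sketch stores or retrieves words; whatever the model "knows" lives in the weight matrices, which is what makes the continuous-function framing apt.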

deepmind

In my opinion, "consciousness" is the basic function of the brain stem, which lies just underneath our large brain and which we share with other complex animals. This is the part responsible for all our necessary feelings such as hunger, thirst, fear, anger, and attraction to the opposite sex! What AI has achieved so far is, in a way, a reconstruction of the outer parts of our brain, which we use to gather and store the information we humans acquire through learning. This is a fantastic achievement, but please go for the very interesting and actually simpler part of our brain.
The problem here is going to be that by doing this we are going to create many conscious individual systems, which will maybe share a central knowledge AI system.
So I will start counting the days now! OK?

homayounvahdani

I do not believe consciousness exists. If you can make a machine with complex circuitry exactly like the human brain's, it will likely behave, think, compute, and even dream as humans do.

SameerAli-qwhn

For me, the two things needed for AI to become conscious are:
- 1. Memory, like he mentioned. GPT etc. can do it, but it's very ad hoc.
- 2. Independent thought processes. o1 can do this on command, then it stops. It's not that it isn't possible; it's an energy question. A "conscious" AI model would be sitting there actively engaging in a chain of thought and actions without needing human input every time. But (1) at a sufficient rate this could completely get out of hand, and (2) it would require huge amounts of energy.

I wouldn't be surprised if there are internal models at OpenAI etc. that are already able to do this.
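
A toy version of that "always-on" loop is easy to sketch. The following is hypothetical Python: call_model is a stand-in I made up for any LLM call, and the step limit and sleep exist only so the example terminates; the point above is precisely that the real loop would run unbounded and spend compute on every iteration.

```python
# Hypothetical sketch of an "always-on" agent loop with persistent memory.
# call_model() stands in for an LLM call; it is an assumption, not a real API.
import time

def call_model(prompt: str) -> str:
    # Placeholder: a real system would query a language model here.
    return f"(thought about: {prompt[-40:]})"

def autonomous_loop(goal: str, max_steps: int = 5, pause_s: float = 0.1):
    memory: list[str] = []                              # persistent memory across steps
    for _ in range(max_steps):                          # unbounded in the "conscious" case
        context = goal + "\n" + "\n".join(memory[-10:]) # recall recent memory
        thought = call_model(context)                   # self-prompted, no human input
        memory.append(thought)                          # write the new thought back
        time.sleep(pause_s)                             # every iteration costs compute/energy
    return memory

if __name__ == "__main__":
    for t in autonomous_loop("keep reflecting on the last thing you thought"):
        print(t)
```

Memory here is just a list the loop writes back into; anything fancier (retrieval, summarization) is deliberately left out of the sketch.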

spadaacca

Dr. Tyson, please interview Dr. Jeff Hawkins, specifically about his perspective and research on intelligence and consciousness. It is his belief, and my personal belief, that our understanding of intelligence is inaccurate and that we are contextualizing the human experience around an incoherent model of consciousness.

flochfitness