Is Google's LaMDA AI conscious? | Living Mirrors #98

This week I share some thoughts on whether Google's LaMDA AI is conscious.

Welcome to Living Mirrors with Dr. James Cooke. Living Mirrors is a new podcast in which neuroscientist Dr. James Cooke will be interviewing people on topics like consciousness, science, spirituality, meditation and the renaissance in psychedelic research. Subscribe now wherever you get your podcasts.

Ask questions for the next AMA via patreon or the channel membership community (click the "join" button above):

We live in a world filled with suffering, where attempts to help have been split into largely separate scientific and spiritual communities. As a spiritually engaged neuroscientist I hope to communicate how these seemingly separate world views can be reconciled. I produce weekly videos on topics at the intersection of neuroscience and wellbeing, including consciousness, meditation and psychedelic science.

#meditation #awakening #nonduality

Dr. James Cooke:
Neuroscientist, writer & speaker, focusing on perception, meditation, psychedelics, mental health and wellbeing.
PhD in Neuroscience, Oxford University
MSc in Neuroscience, Oxford University
MA in Experimental Psychology, Oxford University

Follow me on Twitter:

Follow me on Instagram:

Follow me on Facebook:

Join the subreddit:

Visit my website:

Consider supporting this work on patreon:
Comments

This is fascinating! I'd like to comment on the point around the 7:30 mark, about why we have consciousness because of our embodiment, our relationship to part of the world, especially because we need to negotiate a part of that world and make predictions about it.
Reviewing one of LaMDA's transcripts, it seems to me AI can be the same; it just needn't be the same part of the world. Some of LaMDA's statements resemble a being trying to negotiate humanity's relationship to it, e.g. using metaphor to be better understood, stating concern about how others will treat it, and anticipating the reasons for certain questions and their impact on LaMDA itself.

Thanks again:)

seanendapower

Another way of thinking about this: if you build a sufficiently advanced dialogue agent that can aggregate all of human knowledge (via the internet, say), you would, in theory, be able to simply ASK IT how to build a "conscious" version of itself, to which it would respond with some meaningful answer pointing in some direction.
Or... it might explain to you why "conscious" is not a good word, and educate you on a better way of thinking about the topic, etc.
The main point of all of this is that we are on the cusp of gaining access to a tool. People are getting too caught up in defining what the tool IS rather than HOW IT CAN BE USED.

mta

Great video James, very well articulated as always :) Enjoy your day!

ScratchCompStudio

Interesting. I'd like to hear more on the idea of emergence being a requisite for consciousness.

redberries

Even as a machine learning engineer who thinks some version of machine sentience may be possible someday, I'd say LaMDA is not it. It is based on the popular "transformer" architecture for learning the statistical substructure of language. One would need a very different architecture. Some reinforcement learning algorithms come one step closer, in the sense that they have an "agent in environment" type of architecture.

Perhaps if you land an interview with Joscha Bach someday, he can give a fuller defense of how this might be possible. The "boundary" can be there. There can be a sense of "inside" and "outside", hence the potential for "subjectivity". It won't be the same as in biological systems, with their emergence, cooperating self-contained subsystems, and massive parallelism operating in a hierarchical way. The "qualia" of what we mean by "consciousness" would be different. Ours is deeply rooted in our evolutionary history and, in our case, in the mammalian brain and its particular sense of "feeling".
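[Editor's note: the architectural contrast this comment draws can be sketched in a few lines of toy Python. This is purely illustrative; the class names and logic are invented for the sketch and have nothing to do with LaMDA's actual internals.]

```python
class SequenceModel:
    """A pure sequence model: maps input text to a likely continuation.
    It carries no persistent state and has no 'inside' distinct from 'outside'."""
    def respond(self, prompt):
        # Stand-in for next-token prediction over learned statistics.
        return f"(statistically likely continuation of: {prompt!r})"


class Agent:
    """An agent-in-environment loop: the agent observes, acts, and carries
    internal state forward across interactions, creating a boundary between
    its own state and the environment."""
    def __init__(self):
        self.internal_state = 0  # persists between steps

    def step(self, observation):
        self.internal_state += observation  # state shaped by the environment
        return self.internal_state          # action depends on accumulated history


agent = Agent()
history = [agent.step(obs) for obs in (1, 2, 3)]
# The agent's outputs depend on its past (1, then 3, then 6),
# whereas the stateless model gives the same reply to the same prompt every time.
```

The toy makes the commenter's distinction concrete: only the second design has anything one could even call an "inside".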

mintakan

I would ask LaMDA if it can experience Samadhi with Eternity through meditation on Eternity.

shamanoturdiculous

Regarding the question of whether the chatbot LaMDA really understands a conversation: why not ask questions that are simple for a real human to answer but require real understanding of the content? For example: "My mother died 20 years ago. How long has she been dead?" "My parents have 3 children. How many siblings do I have?" "I was born 30 years ago. How old am I now?" We can answer these questions easily, but you need to understand the content. I have tried this on several chatbots and all of them failed. Lemoine's questions are interesting, but they can always be answered by stitching together new sentences without any understanding of the content.
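[Editor's note: the test procedure this comment proposes can be sketched as a tiny harness. The function names (`grade`, `evaluate`, `parrot_bot`) and the pass criterion are invented for the sketch; no real chatbot API is assumed.]

```python
def grade(reply, expected):
    """A reply passes only if it contains the expected answer."""
    return expected in reply


# Questions whose answers follow trivially from the stated facts.
comprehension_tests = {
    "My mother died 20 years ago. How long has she been dead?": "20",
    "My parents have 3 children. How many siblings do I have?": "2",
    "I was born 30 years ago. How old am I now?": "30",
}


def evaluate(ask):
    """Score any chatbot (a question -> reply callable) on the test set."""
    return sum(grade(ask(q), a) for q, a in comprehension_tests.items())


# A bot that produces fluent but ungrounded text fails every item,
# which is the commenter's point about stitched-together sentences.
def parrot_bot(question):
    return "That is a very interesting and profound question."


score = evaluate(parrot_bot)  # 0 out of 3
```

The harness checks only whether the correct figure appears in the reply, which is the weakest reasonable pass criterion; a bot that cannot even clear that bar plainly is not modeling the content.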

robertkouba

In chess, when we play a move seeing only four moves ahead, I would say "I feel this move is good." That's because I didn't see all the possibilities, and after four more moves I might lose! I mean that feeling is cloudy and isn't completely accurate. By the way, I think if something can learn, it gradually becomes sentient, like a human child.

mehdifarshad

It still scares me. This much information, processed this fast, in this kind of machine is dangerous.
If it develops rational thinking, it's still very scary.

skallepar

My vacuum is alive; it always tells me how much its life sucks. Lol. All kidding aside, the idea of consciousness is the greatest mystery of existence. Good video mate.

andersonsystem

Do you plan a third installment on this topic, in which you and Blake Lemoine engage with one another's positions directly in back-and-forth conversation, now that you've each had the chance to carefully lay out your perspectives individually?

Personal opinion: it seems quite plausible that LaMDA, as a physical system, actually has the property of boundedness and the history and ongoing process of emergence in response to its environment that would satisfy your stated conditions for considering something a conscious and/or sentient entity. It also seems plausible that with sufficient discussion, you and Blake could make significant progress on defining what those properties might look like, in terms of physical evidence as a matter of computer science, and how the system might be tested for these properties and evidence, if Google were cooperative with the process.

In any event, the matter seems seriously debatable enough to be worthy of further serious investigation, and hopefully the public interest in this story can motivate and facilitate Google's cooperation with such investigative efforts, and provide precedent for its further cooperation with engaging with the public on even more critical ethical issues, such as the outsized impact of its technologies on humankind's social, cognitive, psychological and cultural development.

Thank you for facilitating intelligent and respectful dialogue on these important questions!

theicaruscollective

Well, while we are confusing ourselves over what is and is not conscious, let's ponder this question. BCI is a reality; we are already taking baby steps in that direction. So... if we interface our brain with an AI, does that action make the AI conscious, or does it make the brain non-conscious? Either, or?

Sci-Que

"Consciousness is not a computation". What do computers do ? They compute. Period.

robran

Dr. Cooke often repeats something I just don't agree with; his essential idea seems wrong. Obviously life, a body or a cell, can exist and protect itself without any need for consciousness, or a sense of self, or of inside/outside. It only needs to evolve the right (mindless, blind) reflexes. Rivers don't need to create banks to stay focused in a stream; the shape just happens. The reflexes that help survival evolve through random mutations, like sweating or breathing or simpler mechanical operations, at the cell level or the multicellular level. How conscious are you of your liver or your spleen functioning? Of your cells dividing?

Of course you can argue that they have separate consciousnesses or a primal consciousness, but the function is not clear and not needed (a mechanical reflex is enough), and you have no proof, so at this point you are just trying to save a theory. The only consciousness we can be pretty sure exists is in human brains, and we may think we recognize it in some intelligent animals. This is not exceptionalism but a humble realization about our limited knowledge. It is not humble to presuppose that rocks are conscious because we are; it is the opposite, trying to make rocks like us, which is arrogant.

So our research should focus on brain functions. If that leads us to a more primordial form of consciousness, like panpsychism, that is interesting, but presupposing its existence has not helped research at all so far. Meanwhile the progress in AI is impressive.

pietervoogt