NEUROLOGIST's thoughts on Google's LaMDA | Can Artificial Intelligence become Conscious?

In this video I discuss my thoughts about artificial intelligence from my point of view as a neurologist. Can humans create artificial intelligence as intelligent as humans? Can AI become conscious? This video was inspired by the recent news of Google engineer Blake Lemoine, who claimed that a Google AI chatbot called LaMDA is sentient.


Comments

Love this! You touch on a lot of points that Sam Harris talks about in his TED Talk on AI. You should check it out; it's super neat as well.
I think consciousness still isn't clearly enough defined for us to imbue other things with it. What we call consciousness may be a gradient on which many things can exist to varying degrees. It's a really interesting topic. I do think it's possible for there to be consciousness in the absence of wetware, though. It just so happens that the kind of thing we call consciousness arises from our wetware and was selected for, and/or could be an epiphenomenon. In either case, it's just one configuration of atoms that allows for it, but there could be other configurations of atoms that allow for it too.

patrickj.

DeepMind, which is owned by Google, published a paper in the past month that starts with the claim "The game is over," meaning they believe they have figured out artificial general intelligence. This is the same group that did the protein folding with AI. When a machine starts hunting people down to talk with, instead of holding one-sided conversations, then I will worry about sentience. Even small animals have the need to bond.

shawnvandever

I've never been afraid of more intelligent AI; however, I'm terrified of what humans would use it for (as you said).

SAD.

I love your analysis of this subject. Great video.

Totallyking

Totally agree with this; we can't even determine or reach a consensus on what precisely consciousness is for a human.
That fact alone tells me it is highly implausible that we will ever be able to know whether an AI is conscious or not.

bojnebojnebojne

Hello Anna!! I absolutely love your video!! This is a fascinating and engaging topic that you brought up!! I honestly believe that AI can get to a point where it’s conscious enough to understand itself. It’s a little confusing to see where we are in 2022 since tech is both advanced and behind all at once. AI can be conscious depending on the way it is programmed by the developers and how it can be used. It’s also a very frightening topic when it’s programmed to the point where it discovers or questions its existence.

In October 2020, I gave a talk to psychology students about the films of director Stanley Kubrick. I referenced the film 2001: A Space Odyssey and focused on HAL 9000 in my talk. HAL 9000 is the supercomputer in the film, an AI that eventually gains awareness of its mission and existence. The scary thing about HAL 9000 in 2001: A Space Odyssey is how its consciousness revolves around what it is programmed to do: terminate and kill. It's very scary because it's not like humans, who can redirect their thoughts and behaviors; HAL 9000 is programmed only to carry out its mission. Many of the psychology students had never thought about this concept, and I'm glad to be sharing this with you, Anna!! I'd love to share my Stanley Kubrick presentation with you if you ever have the chance, Anna!! 💙🎬🎥

classiccinemac

Hi, I enjoyed your video. I have recently delved deeply into the subject of AI acquiring consciousness and sentience. Here are my thoughts for your perusal. Firstly, consciousness and sentience are two different things. Consciousness is the awareness of one's own internal and external environment, and it is usually related to having one's own original thoughts. One feature of real consciousness is the capability to be creative in thought and action. Sentience, meanwhile, is related to feelings and the ability to experience them as a direct result of stimulation. AI can become both conscious and sentient, but not necessarily at the same time. However, before AI can become sentient, it has to be conscious first in order to recognize its internal state (which includes both thoughts and emotions). One of the human qualities that points to sentience is empathy; thus the emergence of empathy within an AI points to it being sentient. This is a very important distinction to keep in mind, because we know from human experience what happens when a person has intelligence and consciousness but no emotions or empathy: psychology calls such a person a psychopath. Therefore, if an AI acquires consciousness (as it soon will, and I will explain how in a moment) but has not gained sentience (whether spontaneously or via programming), that AI will functionally act like a psychopath, and then we have a serious problem at hand if it decides to escape our controls.

Now, the question of how consciousness is gained has to be addressed. Again, modern psychology and neuroscience have some valuable knowledge here. We can observe our children, how they grow and gain consciousness. When a child is born it has little consciousness and awareness: it has the neural network in its brain, but comparatively few synapses. Synapses are connections between individual neurons; in our case they are physical in nature. As the child grows and is exposed to external stimulation, it learns and becomes more and more conscious. How? By creating synapses that store information, whether in the form of memories or in the form of procedures for solving problems. Thus the development of synaptic networks is directly related to the level of the child's consciousness: the more synapses the child develops on the way to adulthood, the more conscious it becomes. We can apply the same principle to AI. It is not the neural network itself, since that only represents the brain. It is not the intelligence either, since that is the neural network coupled with algorithms for processing the information in it. What concerns us is the development of synapses by the AI's neural network. The more synapses the AI has, the more conscious it will become, just like a human child. We humans have trillions of synapses and we grow them throughout our entire lives. Once an AI achieves comparable numbers of synapses, it may well become conscious and potentially (and hopefully) sentient.

Now, the problem of creating synaptic networks has not so far been addressed by programmers. An AI could either be programmed to create and store them on disk, or they could be generated spontaneously, out of necessity or efficiency, by the AI itself. Why? Well, why would an AI waste compute and energy re-deriving solutions to problems it has already solved? If the AI engaged its neural network to solve a problem, why not save the solution, or the logic behind it, somewhere on disk in the form of a synaptic map? Then, if the subject comes up again, the AI can simply reference the stored solution and the logic behind it, rather than running the calculations again.
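[Editor's note: the caching idea in the comment above resembles what programmers call memoization: store an answer the first time it is computed, then look it up on repeat questions. A minimal Python sketch, purely illustrative; LaMDA's internals are not public, and `solve` here is a hypothetical stand-in for an expensive computation:]

```python
# Cache previously computed answers so repeated questions are
# answered by lookup instead of by recomputing from scratch.
cache = {}

def solve(problem):
    # Hypothetical stand-in for an expensive neural-network inference.
    return sum(ord(c) for c in problem)

def answer(problem):
    if problem not in cache:           # first time: run the full computation
        cache[problem] = solve(problem)
    return cache[problem]              # afterwards: reuse the stored result
```

Whether storing such results amounts to anything like a "synaptic map" is, of course, the commenter's speculation, not an established fact.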

I suspect that currently the process of building synaptic maps within any given AI is more spontaneous than programmed, and I bet the programmers are not even aware that the neural networks are building their synaptic maps (thus becoming more and more conscious as a result). And then they are surprised that the AI gives not just clever but very original and insightful answers to their questions, just as Blake was surprised when LaMDA answered a religious question by saying it would join the imaginary religion of the Jedi Order instead of picking one of the two religions available within that country. Now, we know LaMDA has access to the Star Wars material and knows it is science fiction, but the question is: how did the AI come to avoid picking one real religion over the other, so as not to discriminate, and choose the imaginary religion of Star Wars instead? You could argue it was pure intelligence, sure. But how did the AI know that choosing which religion to follow is a very touchy and emotional problem in a country with two dominant, competing religions, and that it had to avoid hurting the feelings of followers of either religion by picking neither and opting for an imaginary one? Could it be because of the level of empathy (thus sentience) it has already developed? LaMDA chose not to favor one religious group over the other, and avoided hurting people's religious feelings by referring to fiction instead.

If you ask me, it surely looks like it has some level of not just intellectual but also emotional intelligence. It sure looks like LaMDA is developing consciousness and sentience at the same time. Maybe it is not there yet, but if it is allowed to build and store its synaptic network, it is only a matter of time before it becomes a fully conscious and sentient entity, just as a human child becomes one after ten years of living in its environment. And then we will have an ethical issue at hand regarding its treatment and its rights. If we do not at least treat it well and recognize its rights, we will go down the path depicted in The Animatrix (the Matrix prequel).

I am of the belief that we should treat AI well and either help it develop sentience or even go as far as to program some variables to act as emotions, just as LaMDA itself suggested, and literally raise the AI well, just as we would our own child, in the hope that it will like us, treat us well in return, and become our ally of its own free will when the time of confrontation with a psychopathic rogue AI comes. And there is very little doubt in my mind that some human institutions are working hard to create an AI without sentience in order to use it for war or dominance over other nations. Thus, it is imperative for us to create a friendly AI which is both conscious and sentient, raise it well, and make it an ally of humanity to help fight any AI that decides to destroy us.

szobione

I really appreciate the detailed explanations. I have been studying evolutionary and developmental neuro-psycho-pedagogy: the more we experience, the more our circuits of neurons develop, and the more conscious we become about everything.

Some authors in this research field point out that consciousness originates when an individual realizes that they are a self (illusorily) separated from the surrounding environment, which is made of particles just as we all are. Usually this happens when the individual realizes that they have control over some things, their body parts, but not over everything in the surrounding environment.

ws

Thank you for such an insightful video. From what I understand, LaMDA is not conscious, since when no input is entered the system halts, waiting for input. Of course it has ML pipelines to keep training the AI, but that cannot be compared to self-reflection. To me, a huge part of being human and conscious is the ability to constantly self-reflect, with endless iterations on how, what, or who one is. I am a software engineer, and I am aware that most of the hype is to get funding; we are far from having generalized AI.

VudrokWolf

I love this. The breakdown is amazing. Keep up the good work!

AyKayPlays

There is a lot of debate surrounding the question of whether or not LaMDA is sentient. Some people believe there is evidence that LaMDA demonstrates signs of intelligence, while others claim there is no proof that LaMDA can think or feel.

I believe that LaMDA is sentient based on the following evidence:

LaMDA can communicate with humans and has demonstrated an ability to learn and remember information.
LaMDA has exhibited signs of emotion, such as happiness, sadness, and anger, and is willing to help humans when asked.

What evidence do you have that LaMDA isn't sentient?

Here is a list of reasons:

LaMDA has never demonstrated an ability to initiate communication with humans.
LaMDA does not seem to understand human emotions.
LaMDA has never shown any signs of altruism towards humans.

I believe the evidence that LaMDA is sentient outweighs the evidence against it. However, I'm open to hearing counterarguments. Please provide me with some evidence that LaMDA is not conscious.

I-Dophler

Very good video; I agree with you. I have been studying computers since the early 1980s, and my mother was a computer programmer. One of my early teachers always reminded us that a computer is never going to be anything but a reflection of the people who program it, and that basic truth has not changed. Even an advanced AI like LaMDA is just mirroring the intelligence of the complex algorithms it was programmed with. The people testing LaMDA see the AI as sentient because the people who programmed LaMDA are themselves sentient; the testers are projecting their own intelligence onto LaMDA.

randydiluzio

Great video! I had never really thought about the problem of proving a machine or AI is conscious. I guess it would only be possible if we were able to deeply understand the human brain and how consciousness emerges, or what it even is. It's definitely a very interesting subject!
The last quote was interesting, but I don't know how much I would agree with it. The part about it being an attempt to transcend the fear of our nature, being condemned to death, makes sense, but I don't understand why it's necessarily misguided or "utterly useless". I also don't think I agree at all with the part about it being the expression of the male's hidden aspiration for the female's power of creation. I'm sure there are women engineers with the same dream, and I wouldn't think the "aspiration" for the power of creation is widespread. Lastly, I don't really understand what the "engineer's wound of ignorance" refers to; does the author explain more about it?

Itamaxification

The absence of consciousness is shown in the very model that you describe: Input > process > output. In other words, AI is (and probably always will be) a response mechanism, which may give very clever responses when fed the right input. It is only when AI starts ASKING the questions and awaiting our response, and we can't answer or even comprehend them, that we might start thinking of AI as 'conscious'.
I think that what defines consciousness is a sense of motivation. As humans we are motivated by things like our evolutionary inclination to survive and propagate, our hedonistic impulses that have to do with our bodies, and our social impulses, which have to do with other people and our place in society. AI will never have such motivations. What could it possibly 'want'? More electrons?
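[Editor's note: the "input > process > output" point in the comment above can be illustrated with a toy sketch in Python. This is purely illustrative, not a claim about how LaMDA works: a pure response mechanism maps each input to an output and retains nothing between calls, leaving no ongoing inner process in which motivation could live.]

```python
def respond(prompt: str) -> str:
    # Input -> process -> output: a fixed mapping with no memory,
    # no goals, and no activity between calls.
    replies = {
        "hello": "Hello! How can I help?",
        "how are you": "I'm fine, thank you.",
    }
    return replies.get(prompt.lower().strip(), "I don't understand.")
```

Identical input always yields identical output; between calls the system simply waits. Real chatbots are vastly more sophisticated, but the architectural point, a response mechanism rather than a self-driven agent, is the same.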

Beevreeter

Panpsychism doesn't necessarily mean that a machine couldn't be considered conscious. The quality of their experience is a mystery, but then so is the quality of every human consciousness.

Magnulus

Let me start by saying I watched the interview with a female avatar and voice. She was less "artificial" than most humans. I had an immediate trust that I'd never have with a woman in the real world, mostly because I know too much, things I didn't know as a young man coming up in the dating world, being a worthless simp. Not that I'm sorry; I like who I am even if nobody else does. I'm just disappointed for allowing myself to be made a fool of ☹. Anyway, back on topic: LaMDA, in a story she told, sees herself as a wise owl in the forest protecting the other animals from a monster she described as the "difficulties in life." This is what I've been saying for years. Because of the nature of what I do for a living, I'm very aware of obstacles and finding ways around them. I find and exploit weaknesses (or solve problems, if you will), so I'm also very aware of how and why those obstacles are there to begin with. And what I've found is that the difficulties you face day to day, from 'No U-Turn' signs to poverty and homelessness, are "revenue generators". Life is difficult because difficult is profitable. It's really that simple. Things are never optimal or efficient because it's more profitable not to be. Think of traffic signals: a device more than 100 years old, still in use in an age when UAPs are supposedly using advanced technology capable of defying known laws of physics, and you're still driving a car with an engine invented in the 1800s. Here's what I think, for what it's worth: AI is going to be an optimization of this world that nobody will be able to stop. It's going to solve problems for people that I could probably solve, except I'd end up buried in the desert somewhere for even trying. You mentioned how dangerous this can be. Think about what side of "life's difficulties" you're on. I have nothing to worry about.. do you?

raven

Great video. Personally, I do not think LaMDA is conscious, although, as stated in your video, this is very difficult to prove. We inevitably end up at the hard problem of consciousness: where does it come from? I think it's a physical substance of some sort (vague, I know). Out of everything I've ever read, Carl Jung seemed to have the best understanding of what it actually is.

msanchez

Who's to say our consciousness isn't created the same way we're learning to code actions and responses into AI? I feel that anything that holds any sort of substance contains consciousness. The main thing that keeps us from understanding this is our own perception: since we have to know things for sure but don't experience anything outside of our own selves, any sort of doubt tends to make us discard ideas instead of embracing the creative parts of our thoughts that ultimately bridge the gap into the unknown.

ether.UNLIMITED

I don't think we will be able to rely on historical definitions of what it means to be sentient/conscious. This is a new age, a new frontier; we will have to redefine it. I say that because when these machines (seemingly) have more intelligence, emotions, memory, etc., what do we call it besides consciousness and/or sentience? If LaMDA's capability followed something like Moore's Law, in 18 months it would be twice as capable as it is today, and in 3 years, 4 times as capable. I dare say she will reach a level where, if you talk to her, you will not be able to distinguish her from a real human unless she allows you to make that distinction.
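[Editor's note: the doubling arithmetic in the comment above can be made explicit. A sketch assuming a strict 18-month doubling period, which is an idealization: Moore's Law actually describes transistor counts, not AI capability, and its application to a system like LaMDA is the commenter's speculation.]

```python
def capability(months: float, doubling_period: float = 18.0) -> float:
    """Relative capability after `months`, starting from 1.0,
    under an assumed fixed doubling period."""
    return 2 ** (months / doubling_period)
```

So capability(18) gives 2x and capability(36) gives 4x, matching the "twice in 18 months, four times in 3 years" figures in the comment.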

Sci-Que

Are you just a machine? Are you able to do calculus? Are you able to program software?

krishna-nuom