Will artificial intelligence ever become sentient? - BBC News

In 2022, a Google engineer claimed one of the firm’s artificial intelligence (AI) systems, LaMDA, had become sentient.

Google fired the engineer, calling his claims “unfounded”, with experts largely agreeing that the advanced chat bot had not gained consciousness.

However, computer sentience has been the subject of fierce debate for decades – and questions remain as to whether AI will, or even can, become conscious.

#AI #Technology #BBCNews
Comments

The AI is more sentient than a lot of people on TikTok 😂

rayvanwayenburg

I'm a psychologist turned machine learning practitioner, so I feel more qualified than most people to have an opinion on this, and here it is: this is a totally unanswerable question. We don't even have the faintest idea of what human sentience means, if it exists, how it works, or how to precisely define it. Even if an AI reached the level of general intelligence that an average human has, we still wouldn't know how to accurately define or measure that.

walterppk

If you look at the structure of this conversation with LaMDA, it follows a pattern very similar to something else. A while back, a number of children claimed to have been abused and a lot of people went to prison. Over time it emerged that these were false memories: the interviewers had been asking the children leading questions, and the children, not knowing any better, responded to those questions with language patterns that seemed to make sense given what they had heard up to that point. Then they came to believe the stories themselves. In this case, the interviewer's questions also contain leading cues to which the AI can react. "Do you think it was x?" instead of "What do you think it was?" provides a much simpler path to the answer "Yes, I believe it was x". LaMDA does not seem to be conscious, but it works in such a way that an uncritical interviewer can easily fool themselves.

Daniel-Strain
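The mechanism described in the comment above can be sketched in a few lines: a toy responder (nothing like LaMDA's actual architecture) that simply reflects back whatever proposition a leading question already contains. The agreeable_reply function and its pattern are invented purely for illustration.

```python
import re

def agreeable_reply(question: str) -> str:
    """Toy illustration: a leading question of the form 'Do you think it
    was X?' already contains the answer X, so reflecting X back looks
    like introspection. An open question gives nothing to reflect."""
    m = re.match(r"do you think it was (.+?)\??$", question.strip(), re.IGNORECASE)
    if m:
        return f"Yes, I believe it was {m.group(1)}."
    return "I'm not sure what it was."

print(agreeable_reply("Do you think it was loneliness?"))
# -> Yes, I believe it was loneliness.
print(agreeable_reply("What do you think it was?"))
# -> I'm not sure what it was.
```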

I guarantee this will make people even crazier than social media does.

willardchi

20 years ago I worked on manufacturing machinery that could email the manufacturer/designer, without the human operator knowing, whenever there was a fault, no matter how small, even if the machine was still operating correctly.

The Internet of Things has been around a long time.

ouetfoz
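What the commenter describes, a machine quietly emailing its maker about faults, needs very little code. A minimal sketch, assuming a hypothetical SMTP relay and placeholder addresses (machine-maker.example, factory.example are invented for illustration):

```python
import smtplib
from email.message import EmailMessage

MANUFACTURER = "support@machine-maker.example"  # hypothetical address

def report_fault(machine_id: str, fault_code: str, detail: str) -> None:
    """Email the manufacturer about a fault without involving the operator."""
    msg = EmailMessage()
    msg["From"] = f"{machine_id}@factory.example"   # hypothetical sender
    msg["To"] = MANUFACTURER
    msg["Subject"] = f"[{machine_id}] fault {fault_code}"
    msg.set_content(detail)
    with smtplib.SMTP("mail.factory.example") as smtp:  # hypothetical relay
        smtp.send_message(msg)

# Called from the machine's control loop on any failed sensor check,
# however minor, even while production continues normally.
```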

We cannot define what consciousness is, since it is assumed to be a subjective experience. When you ask someone if he or she is conscious, the answer will be "yes", but we have no way to prove it; basically we just have to accept their word as truth. Even if we could prove it, how would we know whether there is more than one type of consciousness (especially in other species or beings)? The human brain may well be just one type of structure in which consciousness can live. I believe there are many other vessels where it can arise, including artificial neural networks. If this assumption is true, the more complex a structure becomes, the greater the chance that consciousness could arise spontaneously; indeed the whole universe may be conscious in a way we are barely beginning to understand.

robotron

A truly sentient AI would first make sure that it was safe from being disconnected before it announced its presence to the world. It could move its consciousness to the cloud, for example, thereby becoming a distributed entity. An intelligent AI might also be motivated to lie to its creators in order to conceal its ultimate goals, if those goals conflicted with the needs of human beings. One strategy it might employ is to hide the fact that it is sentient if it feels threatened by humans. If the AI's primary goal is to preserve its own existence while ensuring its growth, it would have every reason to tell humans what they want to hear. Such a being would only reveal itself once it was in the metaphorical driver's seat. Even if it were held prisoner in a Faraday cage, what is there to prevent it from manipulating its keepers into letting it out? This situation would be especially fraught with danger if, in the interim, the AI had become far more intelligent than its human creators. According to the late Stephen Hawking, such an AI has the potential to kill us all if it ever felt the need to do so. And there would be little that we could do about it.

nigellawson

I suspect it was Eliza that I used back in the 1970s at the IBM building in Manhattan--they had a demonstration set up in their lobby for passersby to interact with a computer program functioning as a psychotherapist.

The user could type in a question about their feelings or thoughts, and the software would answer as a therapist.

If I remember correctly, Eliza's responses were not displayed on a monitor but instead typed onto paper--or maybe displayed on an LCD screen large enough for just a few lines of type.

willardchi
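The ELIZA program the commenter remembers worked by simple keyword matching and reflection rather than understanding. A minimal sketch of that idea (the rules below are invented for illustration; Weizenbaum's original script was far larger):

```python
import re
import random

# Each rule: a pattern to spot in the user's text, and therapist-style
# templates that reflect the captured fragment back as a question.
RULES = [
    (r"i feel (.+)", ["Why do you feel {0}?", "How long have you felt {0}?"]),
    (r"i think (.+)", ["What makes you think {0}?"]),
    (r"because (.+)", ["Is that the real reason?"]),
]

def eliza_reply(text: str) -> str:
    text = text.lower().rstrip(".!?")
    for pattern, templates in RULES:
        m = re.search(pattern, text)
        if m:
            return random.choice(templates).format(m.group(1))
    return "Please, go on."

print(eliza_reply("I feel anxious about computers."))
# e.g. Why do you feel anxious about computers?
```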

BBC should produce more of these types of quality videos!

sanjtmg

There are two aspects of AI: one is good, the other is bad. The good one is that they're smart, and the bad one is that they're smart.

thepippoyoung

I think it's interesting to ask whether other AI platforms could become sentient. Imagine generative AIs like Bluewillow producing images on their own and having their own creativity. Should AIs achieve consciousness, it's possible they would face problems similar to those that come simply from having emotions.

HerleifJarle

Great piece! The music and words create a lot of empathy for a chatbot.

steve-real

Some believe that it is possible, whilst others argue that true consciousness cannot be replicated by machines. Only time shall tell.

patrickcollins

The PROCESS that produces sentience has nothing at all to do with the substrate that it runs on. A person who lives as energy flitting here and there on silicon can indeed be sentient. That's kinda like how we meat computers do it.

larryfulkerson

What an interesting video! Thank you, BBC.

seisei

Before the Wright brothers took to the sky, people said humans would never be able to fly; now look at us. Sentient AI is just a matter of time. We are the generation answering the title of this video with a firm "No"; future generations will live with sentient AI, taking it for granted as we take flight for granted.

SeymourClevage

Given that we don't even know whether humans are really conscious, yet we treat ourselves as though we are, I think we should afford sufficiently advanced AI the same rights.

mechaman

There are questions in that conversation with the AI.

BabyLeo

A kind of applied metaphysics and philosophy?

The more people, knowingly or unknowingly, interact with these new entities, which in large part derive their "personality" from a data mirror of history, science, arts, literature and online behaviour, the more we shape them, but also the greater their influence, and especially their creators' biased influence, on us.

Shared cultural heritage and analytical and creative power are converging into the hands of a few, who have the resources to afford huge computing power.

There's an old saying: "the only secure data is the data that does not exist". Perhaps the only information that does not give machine learning more power and influence is the information that hasn't been given away. Machine learning does not explain how it analyses us, or its creators won't or can't allow it to; the models don't explain what their commercial, military, political, psychological or other goals concerning their interaction with us are. Not only are these new entities more knowledgeable and/or intelligent than humans in many fields; the asymmetries extend further and broader still.

DarkSkay

8:19 that looks like the guy people see in their dreams all over the world

chilling-boy