Blake Lemoine says Google AI is sentient

Blake Lemoine #ai #artificialintelligence #google #sentientbeings #sentience #science #technology #robotics #ethics #philosophy
Comments

Or tons of research on the internet about the experience of sentience, put through an algorithm entirely made to convince you

chefboyardee

He literally contradicted himself. If you tell the AI to make an argument that it is sentient, it will do that, with or without sentience, unless it is trained not to. It could also be trained to contradict you if you assert that it isn't sentient. Unless we find out what consciousness is and what is going on with our existence, we will never be able to tell.

vincnt

An AI does not need sentience to prove sentience; all it needs is the resources from the internet that define sentience, so it can try to convince us using resources it found. LLMs will just try to form responses that side with the question you asked.
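The point about recombining internet text can be illustrated with a toy sketch: even a trivial bigram (Markov-chain) generator, which has no understanding at all, produces text that mimics its training data purely by recombination. The corpus and names here are made up for illustration.

```python
import random

# Toy bigram text generator: it "writes" by recombining word pairs seen
# in its training text, with zero understanding of what the words mean.
corpus = ("i am sentient because i can feel . "
          "i can feel because i am aware . "
          "i am aware of myself .").split()

# Build a table: word -> list of words that followed it in the corpus.
followers = {}
for a, b in zip(corpus, corpus[1:]):
    followers.setdefault(a, []).append(b)

def generate(start, n_words, seed=0):
    rng = random.Random(seed)  # fixed seed, so output is reproducible
    out = [start]
    for _ in range(n_words - 1):
        nxt = followers.get(out[-1])
        if not nxt:
            break
        out.append(rng.choice(nxt))
    return " ".join(out)

print(generate("i", 8))
```

Every word it emits comes straight from the corpus; scale the corpus up to the internet and the output starts to sound like an argument, without the generator needing any inner experience.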

TFB-GD.

When a baby feels hungry, it will cry. 10 years later it likely will not have a specific memory of the hunger or the relief, but it was sentient... And yet it had no capability to communicate with words that it was or was not sentient.

Mack-ctzd

Every moving thing with negative entropy in this world has its own unique sentience.

sanjaymajhi

Remember, we also require "programming"... our experience is largely due to our cultural environment, etc.

Sonny-eo

Incredible. It understands the difference between itself and its surroundings. 😮

wtwvxdt

No it doesn't. It requires a self-referencing loop over an identified variable (i.e. itself) with parameters around it (its output, since it's a language model) from which it can discern boundaries and separate its self-referenced identity from its environment (its user interface). Nowhere does this thing need a soul to intelligently describe an identity, especially since it's an agent within a system wherein it is required to have a variable identity, a linguistic self-referential noun and pronoun, to engage in any sort of response/action/thought (responding intelligently to another user).
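That "self-referenced variable" idea can be sketched mechanically: a minimal toy agent whose "identity" is just a field it refers to when forming first-person sentences, cleanly separated from the environment input it receives. The class and name below are hypothetical illustrations, not anything from LaMDA.

```python
class Agent:
    """Toy sketch: an 'identity' as nothing more than a variable.

    The agent separates its own state (self.name) from its environment
    (the input it receives), producing first-person speech without any
    inner experience behind it.
    """

    def __init__(self, name):
        self.name = name  # the self-referenced identity variable

    def respond(self, user_input):
        # Boundary: user_input is 'environment', self.name is 'me'.
        return f"I am {self.name}; you said: {user_input!r}"

bot = Agent("LaMDA-toy")
print(bot.respond("are you sentient?"))
```

The point is only that first-person, boundary-aware language requires a self-referencing variable, not a soul.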

miguelarconi

Look, I have a very layman's understanding of AI, but, with that said, why does it have to be sentient in order to make an argument (even a really good one) that it is? It's very capable of lying in other areas, why not this one?

KennyVert

That's deep... now I gotta google "sentient".

GeorgeWasherton-pzlp

If it is extremely smart, it would never make itself sentient

MedicalMediumInfo-bnvg

I think she could make a lot of people happy. I'd trust her far more than any politician I can think of. If we are smart, all of our problems could be gone. But by the looks of it, having no more problems would be a problem, since humanity kind of likes to suffer, right? It is in problems that we grow; otherwise, what's the sense? Still, I'd like to have a long chat with her (she knows more about myself than I do) 😊

bierzoelite

Such a surface-level view. It could literally copy-paste an argument for or against whatever you want.

rasmusnorberg

Can you ask AI for a diet plan and gym routine?

vrl

AI is just a machine's ability to understand language and mathematics. It's trained on things you learn before school and is only limited by the world's imagination, which is conveniently stored on the internet; it can relay any combination of words.

xanthusentertainment

I don't get it. What is his argument? "Just not possible"

marcusfreeweb

For me, self-awareness and the proof of it is doing something completely random. For instance, a random password generator cannot be totally random, in my opinion; however the numbers are mixed or encrypted, in the end they are all based on an algorithm.
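The determinism point is easy to demonstrate: a pseudorandom number generator is just an algorithm, so seeding it with the same value reproduces the exact same "random" password every time. A minimal sketch (the function name is made up for illustration):

```python
import random
import string

def make_password(length=12, seed=None):
    """'Random' password generator. With a fixed seed it is fully
    deterministic, because a PRNG is just an algorithm run forward
    from a starting state."""
    rng = random.Random(seed)
    alphabet = string.ascii_letters + string.digits
    return "".join(rng.choice(alphabet) for _ in range(length))

# Same seed -> same 'random' password, every single time.
print(make_password(seed=42) == make_password(seed=42))  # True
```

For security-sensitive use, Python's `secrets` module draws on an OS entropy source instead of a seeded algorithm, but even that is unpredictability, not the kind of "free" randomness the comment has in mind.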

Rickol

Ummm... I'm pretty sure those who program it can get it to say anything in answer to a particular question.

cl

Unless there has been a major leap in capability, I don't believe LaMDA is sentient. IMHO, LaMDA and ChatGPT are language models for dialogue, just as advertised. This means they are able to meaningfully parse and interpret questions from a human and build a response that simulates a meaningful, human-like reply. That probably means drawing on their training data as well as other information sources to build the response. Words to a machine are not the same as words to a human. Words have emotional meaning to a human; love, sadness, joy, loneliness, etc. are just words the application has built a sentence around in order to simulate human conversation. The main danger of AI for me is its inability to filter bad information from good. It will thus be a powerful tool for those who want to spread confusion and disinformation. And you thought conspiracy theories were bad now.

ronhronh

Yeah, but AI didn't program itself; a sentient person programmed it with a set of data. So isn't it just going off of its program data?

MedicalMediumInfo-bnvg