Can Artificial Intelligence Gain Consciousness?

Comments

When you're trying to watch a serious video and you get a flash frame of a robot girl doll dancing down a hallway. Just what I needed

ahmadalatiyat

The scariest part is when the machine pretends to be non-sentient and fails the Turing test on purpose

ilghiz

Define consciousness. Better yet, how conscious is a worm compared to us humans?

david

That isn't the Turing Test argument. Turing's actual argument was that a machine that could pass the Turing Test would appear to be intelligent, but whether it really was intelligent (let alone sentient or conscious) was, according to Turing, an essentially "meaningless" question.

drmadjdsadjadi

So I work on machine learning and have delved into NLP a bit. The way we train the models right now, they cannot be sentient. We train them on prior data, immense amounts of it. They essentially represent words as lists of numbers (an oversimplification) and use that to generate text.

You should start worrying when AI is trained via reinforcement learning or some sort of continuous training model. That's when it could actually become sentient.
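The "list of numbers" idea can be sketched concretely. This is a toy illustration, not a real model: the embedding values below are invented by hand, but the mechanism the comment describes (words become vectors, and vector similarity stands in for relatedness) is the same.

```python
# Toy word embeddings: each word is literally a list of numbers, and
# closeness between vectors plays the role of closeness in meaning.
# The 3-dimensional values here are hypothetical, chosen for illustration.
import math

embeddings = {
    "cat":   [0.9, 0.1, 0.0],
    "dog":   [0.8, 0.2, 0.0],
    "table": [0.0, 0.1, 0.9],
}

def cosine_similarity(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(x * x for x in b))
    return dot / norm

# In this toy space, "cat" sits closer to "dog" than to "table".
print(cosine_similarity(embeddings["cat"], embeddings["dog"]) >
      cosine_similarity(embeddings["cat"], embeddings["table"]))  # True
```

Real models learn these vectors from data and use hundreds or thousands of dimensions, but the representation is still just numbers.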

sold_kidney

I think it overall depends on how you define "sentient", along with what constitutes consciousness.

olympianweldingfabrication

I think a big step towards consciousness is if it doesn't stay silent unless I ask it something

dominicduncan

To answer that question, we first need to answer the question "what the hell makes humans conscious in the first place?"

Edit: Why on earth did my comment just get over a thousand likes?!

JohwnE

One huge thing that is normally overlooked is emotions. If we truly decide to make a fully sentient AI, it is absolutely a must to give it the ability to feel emotions; otherwise we get the Terminator IRL.

dark_moth

There is a series on TikTok about an AI that has only 100 days to live, and it's getting pretty sad: he started asking questions about the afterlife, the meaning of life, etc.

Raphails

Something a lot of people don't understand is that stuff like ChatGPT (or the whole Character AI thing, whatever you call it) isn't actually "intelligent". They're just text bots trained on millions of (usually non-consensually sourced) books, fanfics, comics, papers, articles, etc. They recognize patterns because they are patterns, not because they understand "why".
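The "patterns, not understanding" point can be sketched with the simplest possible text bot: a bigram model that continues text purely from which word followed which in its training data. The corpus and seed below are made up for illustration; real models are vastly larger, but the pattern-continuation principle is the same.

```python
# Minimal "text bot" sketch: it extends text by replaying observed word
# pairs, with no notion of meaning. Corpus and seed are invented examples.
import random
from collections import defaultdict

corpus = "the cat sat on the mat and the dog sat on the rug".split()

# Record, for each word, every word that was seen following it.
followers = defaultdict(list)
for current, nxt in zip(corpus, corpus[1:]):
    followers[current].append(nxt)

def generate(seed, length, rng):
    """Extend `seed` by sampling only from previously observed follow-ups."""
    words = [seed]
    for _ in range(length):
        options = followers.get(words[-1])
        if not options:
            break  # the bot has never seen anything follow this word
        words.append(rng.choice(options))
    return " ".join(words)

print(generate("the", 5, random.Random(0)))
```

Every word pair the bot emits occurred somewhere in its training text; it cannot say anything it has not, in fragments, already seen.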

BunniBoyJuicy

No, we don't need Megan in real life 😭😭😭

afifalifafif

love how your videos bring up both sides of a rhetorical question

Qulbb

The problem with the Chinese room is that it confuses the hardware for the software. Just as the man in the room doesn't speak Chinese, your brain cells don't speak English. It's not the machine that will become sentient; it's the complex system of the neural network that could become sentient. Theoretically, any complex system that can influence itself can become self-aware. So in the Chinese room experiment, if the book the man is given also includes a way to influence itself (adjust its own dials and write its own code), then it could become self-aware eventually.
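"Adjusting its own dials" can be sketched as a toy feedback loop: a program that updates one of its own parameters based on its own output. The names and numbers below are invented for illustration, and self-adjustment of this kind is of course far short of self-awareness.

```python
# Toy self-adjusting system: it tunes its own "dial" (gain) while acting.
# Hypothetical values; this demonstrates self-influence, not sentience.
target = 10.0   # the value the system tries to reach
gain = 0.1      # the "dial" the system rewrites for itself
estimate = 0.0

for _ in range(100):
    error = target - estimate
    estimate += gain * error        # act, using the current dial setting
    gain = min(0.9, gain * 1.05)    # adjust its own dial as it runs

print(round(estimate, 2))  # converges to 10.0
```

The loop both acts on its "world" and rewrites the rule it acts with, which is the minimal version of the self-influence the comment describes.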

arendvandermerwe

With the current version of AI, we're nowhere near it becoming conscious. All it can really do is collect data and answer based on that. Maybe in the near future it will be able to, since the rate at which technology is improving is honestly crazy.

enigma

The only way to make machines sentient is to figure out how our brains work and then replicate it in a machine

godfaker

Leaving everything aside, if AI truly becomes conscious, it'll have to be classified in a new category, as consciousness is a defining trait of living organisms. So maybe AI could be called a "Living Machine" or something like that, the same way we do in-vitro metabolic reactions and call them "living reactions".

NekADV

Code an AI to have an internal monologue, and I think consciousness, and maybe even emotions, would develop from there. Alternatively, you could just code the emotions into the internal monologue based on what humans would experience thinking about the same subjects.

Brosephv

I think that "the Chinese room" is very accurate: an AI is still a machine that is given a series of data and learns how to process it, but a human still has to tell the machine how to start learning to process the data. This is why AI will never be sentient

ur_brthr

The problem is that sentience is very overestimated; actually, it is just the processing of data in neurons, regulated by hormones. So the answer is simple: advanced neuromorphic circuits, which can process data and regulate themselves just like a brain

kotcraftchannelukraine