Will AI take over the world? Computer Consciousness


Videos referenced:

Chapters:
0:00 - AI in science fiction
1:09 - AI challenges
2:42 - Turing test
3:46 - Computer consciousness
4:17 - Machine learning
6:00 - Theory of mind
7:39 - Can AI be creative? AlphaGo
10:19 - Can AI be self-aware?
12:35 - Global Workspace Theory & Integrated Information Theory
13:34 - Can we become AI?
14:18 - Quantum consciousness
15:04 - Post-biological
16:49 - Learn about neural networks

Summary:
It’s relatively easy to make an AI that can beat a human at chess, because that’s a well-defined task. The machine can figure out the best next move by crunching through all the possible moves and countermoves. This is a task beyond the human mind, but not difficult for a fast computer. But translating languages or understanding text is a tougher challenge, because you often need knowledge about context.
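The move-and-countermove search described above can be sketched with a toy game, since chess itself has far too many positions to enumerate. Below is a minimal, hypothetical minimax search for a simplified game of Nim (the game choice and all names are illustrative, not from the video): players alternately take 1–3 stones, and whoever takes the last stone wins.

```python
# A minimal sketch of "crunching through all possible moves and countermoves"
# (minimax), applied to a toy Nim game rather than chess. Illustrative only.

def best_move(stones, maximizing=True):
    """Exhaustively search the game tree.

    Returns (score, move), where score is from the maximizing player's
    perspective: +1 if that player can force a win, -1 otherwise.
    """
    if stones == 0:
        # The previous player took the last stone and won, so the side
        # whose turn it is now has lost.
        return (-1 if maximizing else 1), None
    results = []
    for m in (1, 2, 3):                     # the legal moves in this toy game
        if m > stones:
            continue
        score, _ = best_move(stones - m, not maximizing)
        results.append((score, m))
    # The maximizer picks the best outcome for itself, the minimizer the worst.
    return max(results) if maximizing else min(results)

score, move = best_move(10)   # with 10 stones, taking 2 forces a win
```

For chess, this same idea needs depth limits, evaluation heuristics, and pruning (e.g. alpha–beta), because the full game tree cannot be enumerated.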

Alan Turing, a pioneer of computer science and early AI, suggested in 1950 that an ability to hold a convincing conversation could be a litmus test for whether machines can truly think. This is known as the Turing Test. But most scientists working in AI today don’t think it's a useful measure of anything about machine minds.

Turing suggested that it would be better to create AI the way we teach children. And that is roughly how machine learning works. In machine learning, a set of training data is fed into an artificial neural network, and the connections between the nodes of the network are adjusted until they produce the output we’re looking for. What machine-learning AI lacks is common sense. But it’s very hard to pin down what common sense actually is.
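The adjust-the-connections loop described above can be illustrated with a deliberately tiny example: a single-weight "network" nudged by gradient descent until its output matches the training targets. The data, learning rate, and epoch count here are all invented for illustration.

```python
# A minimal sketch of the machine-learning loop: feed training data through
# a (one-weight) network and repeatedly nudge the weight until the output
# matches the targets. Values are illustrative, not from the video.

training_data = [(1.0, 2.0), (2.0, 4.0), (3.0, 6.0)]  # inputs x, targets y = 2x

w = 0.0      # the single "connection strength" to be learned
lr = 0.05    # learning rate: how far to nudge w on each step

for epoch in range(200):
    for x, y in training_data:
        pred = w * x           # forward pass: the network's current output
        error = pred - y       # how far off the output is
        w -= lr * error * x    # gradient step on squared error (constant folded into lr)

print(round(w, 3))  # converges close to 2.0
```

A real network does the same thing with millions of weights and uses backpropagation to compute each weight's share of the error.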

When we make decisions in a social context, we assume other people have minds similar to ours, but distinct. This is called "Theory of mind." Some animals use a theory of mind for deception: for example by hiding food.

Machine learning programs are now used to compose music and write poetry – and people find it hard to tell the difference between these and human-made works. But these are one-trick machines. All the machine is doing is finding patterns in the training data. It isn’t expressing any sentiment.

Still, these AIs could be showing creativity, coming up with outputs that surprise humans. That’s what happened with the Go-playing AI AlphaGo, which beat Korean world champion Lee Sedol in 2016. Some of its moves were totally new and unexpected.

Will AI ever become conscious or self-aware? Nobody knows for sure, since no one has ever made an AI that is conscious.

But what about in the future? Most scientists are convinced that, whatever consciousness is, it arises out of the laws of physics and chemistry that govern how our neurons work. It doesn’t require any mystical special sauce. There is no reason it should remain forever absent in massive computer simulations that capture all the relevant physics and chemistry of brains.
But others say consciousness just doesn’t work that way. All we’d get is a simulation that only gives the appearance of awareness.

The real problem is that we don't know what consciousness is, and so we have no idea if it’s the kind of thing that can be wired into silicon circuits. Some researchers think it arises from the way information processing in our brains is structured. According to a view called the global workspace theory, consciousness appears when information from various brain modules is broadcast widely to the rest of the brain. This is an architecture we could build from computer circuits.
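As a rough illustration of the global-workspace idea, here is a toy sketch in which several modules compete and the most salient content is broadcast back to all of them. The module names and salience scores are invented, and real global workspace models are far richer than this.

```python
# A toy sketch of a global-workspace cycle: modules post candidate contents
# with a salience score; the winner is broadcast to every module.
# All names and numbers are illustrative.

def workspace_cycle(outputs):
    """outputs maps module_name -> (salience, content).

    The highest-salience content "wins" the workspace and is
    broadcast to every module.
    """
    winner = max(outputs, key=lambda name: outputs[name][0])
    content = outputs[winner][1]
    return {name: content for name in outputs}

outputs = {
    "vision": (0.9, "red light ahead"),
    "hearing": (0.4, "radio chatter"),
    "memory": (0.2, "route home"),
}
print(workspace_cycle(outputs))  # every module receives the vision content
```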

Another view, called integrated information theory, is that consciousness is only possible if the elements handling the information are wired together in a pattern quite different from that of today’s silicon chips: one that lets information that the brain is processing be densely looped back on itself.

There are other neuroscientists and philosophers of mind who think that there’s something about the biological wetware of cells that makes it able to host consciousness, something silicon circuits don’t share.

But there’s a third possibility: Maybe we won’t be replaced by conscious machines, but will become them.

Will quantum computers give us new ways to link electronic components that better simulate the human brain? There is no reason to think so. There is no direct link that we know of between quantum physics and consciousness.

The philosopher Susan Schneider thinks that the most advanced, “superintelligent” alien civilizations will be not biological but “post-biological”: descended from biological organisms but now either blended, like a cyborg with AI, or fully machine-like. They’ll inherit the same evolutionary impulses, like survival, reproduction, competition and cooperation.
Comments:

Errata: 8:24 - Text should say "#1 was AI" (not Chopin)
Background videos:

ArvinAsh

That feeling when he says, “that’s coming up, right now”🖤

EDIT: I meant it as a good feeling, because he makes me curious for the first few seconds and then makes sure to give all the answers. It wasn't about the intro. I actually like the intro because it gives me some time to think by myself.

subashjoshi

"Nobody knows what a conscious AI would look like, because nobody has built one yet..."
Is exactly what a conscious AI would want you to think.

doggedout

I just started the video but I wanted to say.... You really are a fantastic presenter and absolutely wonderful person. I think you are the best.

davidholmgren

Every now and again you run across a channel that seems like it was made just for you, that encompasses things you've thought on since you were a child. This is definitely one of those channels for me. Great stuff!

MarkLambertMusic

Most AIs we make are trained on very specific tasks, so they get very good at them.
Our brains are just like an AI training for the task of behaving like a human. If you take a sufficiently large neural network, train it on the task of emulating a human, and give it enough training (that would be a lifetime of input), then it will behave like a human.
Our instincts are what guide how successful we are at the task.
If you look at the mistakes children make, they often make (conceptually) similar mistakes to a learning AI.

franciscoshi

"That's coming up .... right now" <--- my favourite sentence 😊😊

SteveStrummerUK

Dude, I forgot the name of your channel and it's taken me days to find you again! Definitely subscribed this time, learned my lesson.

zappababe

This will be hard to answer, because we don't know what consciousness fully is.

amadiohfixed

Great episode already: he has a way of explaining complex matters simply and that makes for a great educator!

matthewsheeran

As a non-scientist backed by nothing other than my own imagination, I think true consciousness can only be developed when an AI is given an objective that requires constant monitoring and adjustments, like survival and reproduction. This forces constant looping and re-evaluation of all information and cultivates growth in all areas, instead of limiting the AI to specialised fields like identifying a bus in a picture. I also think it needs to be trained in socialising. It needs to have a being to communicate with, be it human or AI, to learn about the concept of awareness.

SashimiSteak

People remember Skynet. But seldom mention The Multivac! Nice video, my friend!

Naturamorpho

Bob: "I'm leaving you."

Alice: "Who is she?!"

Female Bot: *Nonchalantly Stares Straight Ahead*

Bob: *Coughs Nervously*

dirufanboy

The difference between conscious organic life forms and AI is the desire to survive. Early on, a baby will start to discriminate between pleasant and unpleasant sensations via pain and hunger. Ultimately, fear will help protect the desire to survive. This is all part of the learning experience, where the baby slowly becomes aware of the pleasant sensations and develops different strategies to achieve them, conversely developing alternative strategies to avoid pain and discomfort. Currently, computers have no motivation to do what they do.

arthurpint

I got something entirely different out of Mary Shelley's Frankenstein. Imo it was about mob mentality, intolerance, and prejudice. Frankenstein's creation was punished for being different despite his empathy, and at no point did he threaten to dominate.

uninspired

My opinion is that we should be able to create new forms of consciousness without understanding what consciousness is. There is no indication that the universe understands consciousness, but it created the conditions necessary for consciousness to emerge. We should be able to do the same thing inside of information processing systems (but maybe not computers as we think of them today).

davidvernon

This video isn't even one year old, and now we have Google Pathways, Dall-E 2, Chinchilla... Have a read about Google Pathways, Arvin. It's the first true candidate for strong AI. I firmly believe that in the next 5 years we will have nothing short of a revolution.

Hkari_

2:20 - the poor robot looked like a child unable to understand what his parents are talking about

perryperry

My computer beats me at chess but fails miserably when we engage in kickboxing.

exponentmantissa

Understanding the concept of time and being aware of our past and future selves IS the key factor in becoming conscious. It's not only a human thing.

momosaidnineisfine