The Shockingly Simple Way the BRAIN of an AI Works! It's Genius!

Purchase shares in great masterpieces from artists like Pablo Picasso, Banksy, Andy Warhol, and more.
How Masterworks works:
-Create your account and connect your traditional bank account
-Pick major works of art to invest in, or our new blue-chip diversified art portfolio
-Choose your investment amount
-Hold shares in works by Picasso or trade them in our secondary marketplace

WANT ALL YOUR QUESTIONS ANSWERED, guaranteed, and to provide input on video subjects?

CHAPTERS
0:00 What this video is about
1:12 What is a neural network?
3:42 How do neural networks work?
6:17 How nonlinearity is built into neural networks
10:47 How artificial intelligence can be "scary"
13:45 What is the real threat of AI?

SUMMARY
In this video, I explain in detail how AI really works. An artificial neural network, usually just called a neural network, is at its core a mathematical equation and nothing more. It's just math. The term comes from the analogy to neurons in our body: neurons in a neural network also receive and transmit signals, just like biological neurons. And as in the brain, we connect multiple neurons together to form a network, which we can then train to perform a task.

A neuron in a neural network is a processor, which is essentially a function with some parameters. This function takes in inputs and, after processing them, produces an output, which can be passed along to another neuron. Like neurons in the brain, artificial neurons are connected to each other via synapses. While an individual neuron is simple and might not do anything impressive, it's the networking that makes them so powerful. And that network is the core of artificial intelligence systems.

How do these artificial neurons work? The essence of an artificial neuron is nothing but a simple equation from elementary school, z(x) = w*x + b, where x is the input, w is a weight, b is a bias term, and z(x) is the output. This allows the AI system to map the input value x to some preferred output value z(x).
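As a minimal sketch of that equation (not code from the video; the parameter values here are made up purely for illustration), a single neuron fits in a few lines of Python:

```python
# A single artificial neuron: z(x) = w*x + b
def neuron(x, w, b):
    """Map the input x to an output using weight w and bias b."""
    return w * x + b

# Arbitrary parameters, chosen only for illustration
w, b = 2.0, 0.5
print(neuron(3.0, w, b))  # 2.0*3.0 + 0.5 = 6.5
```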

How are w and b determined? This is where training comes in. We have to train the parameters w and b, such that the input is mapped to the most appropriate or correct output. How is the training done? I work through a simple example in the video to illustrate it. The input is controlled and the desired output is known. If the actual output is not what it should be, then w and b are adjusted until the output does match. After many iterations, the network "learns" by adjusting the w and b values in the various nodes of the network.
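Here is one hedged sketch of what such a training loop can look like (the data, learning rate, and gradient-descent update rule are my own illustrative choices, not taken from the video):

```python
# Learn w and b so that z(x) = w*x + b reproduces known input/output pairs.
# Gradients of the mean squared error are derived by hand for this linear case.
xs = [1.0, 2.0, 3.0, 4.0]
ys = [3.0, 5.0, 7.0, 9.0]   # produced by the "true" rule y = 2x + 1

w, b = 0.0, 0.0             # start from arbitrary values
lr = 0.01                   # learning rate

for step in range(5000):
    # dL/dw and dL/db for the error L = mean((w*x + b - y)^2)
    dw = sum(2 * (w * x + b - y) * x for x, y in zip(xs, ys)) / len(xs)
    db = sum(2 * (w * x + b - y) for x, y in zip(xs, ys)) / len(xs)
    w -= lr * dw            # nudge the parameters "downhill" to reduce the error
    b -= lr * db

print(round(w, 2), round(b, 2))  # approaches 2.0 and 1.0
```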

Note that the equation above is linear, which is limiting. Nonlinearity is introduced into the network by adding a mathematical trick called an activation function. An example of such a function is the sigmoid function, which I demonstrate in the video. With an appropriate activation function, the AI can answer much more complex questions.
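A sketch of how the sigmoid breaks linearity (again my own illustrative code, not from the video):

```python
import math

def sigmoid(z):
    """Squash any real number into the range (0, 1)."""
    return 1.0 / (1.0 + math.exp(-z))

def neuron(x, w, b):
    """A neuron whose linear output is passed through an activation function."""
    return sigmoid(w * x + b)

# The output is no longer a straight line in x
print(neuron(3.0, 2.0, 0.5))  # sigmoid(6.5), roughly 0.998
```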
#artificialintelligence
#ai
#neuralnetworks
There is one thing about this neural network that some find scary. When a network is trained, the adjustments the system makes to the w and b values are a black box. When we train the system using known inputs and known outputs, we are having it self-adjust the internal results of its various nodes to match what the known result should be. But how exactly the network adjusts the various layers of intermediate outputs to achieve the final output we want is NOT really known. The input and output layers are known, but the stuff inside is not. That is why these intermediate layers of neurons are called "hidden" layers. The hidden layers are a black box.

We don’t really know what these various layers are doing. They are performing some transformation of the data which we don’t understand. We can find the calculated intermediate results, but these look meaningless.
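To make this concrete, here is a toy two-layer network with made-up weights (a sketch, not the network from the video): the hidden values are easy to compute, but they carry no obvious meaning.

```python
import math

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def layer(inputs, weights, biases):
    """One layer: each neuron computes sigmoid(weighted sum of inputs + bias)."""
    return [sigmoid(sum(w * x for w, x in zip(ws, inputs)) + b)
            for ws, b in zip(weights, biases)]

x = [0.7, 0.2]   # known input
hidden = layer(x, [[0.4, -1.3], [2.1, 0.6], [-0.9, 0.8]], [0.1, -0.5, 0.3])
output = layer(hidden, [[1.2, -0.7, 0.5]], [0.0])

print(hidden)  # three numbers between 0 and 1 with no obvious interpretation
print(output)  # the final answer, the only value we directly trained for
```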

No AI technology based on neural networks today could become something like Skynet in the Terminator movies, suddenly becoming conscious and threatening mankind. The real threat of AI is its power to do things that humans do today, and thus to potentially eliminate jobs.
COMMENTS

I recall reading about an AI program that was built to recognize wolves in pictures. They trained it with a bunch of pictures, but when they then showed it a picture of a wolf and asked it if this was a wolf, it failed. They also showed it pictures of dogs, and sometimes it would fail by saying it was a wolf. They decided to add code to determine what the AI was using to "learn" what a wolf was. They discovered that all the pictures of wolves they had used to train the AI had snow in the background, and the snow is what the AI picked up on. I think we need to be very careful introducing AI into society, to make sure it's not flawed in the hidden, black-box part.

michaelhouston

As someone who codes deep neural networks, I'd warn the layman viewer who watched this and thinks it clicked in their mind: *this video did not include an explanation of how DNNs work.*

I know this is squarely aimed at the layman and so should be simple, but this really is not a good explanation, I'm afraid to say... The individual facts are correct, but he totally missed out on _why it works._ The neurons and layers are beside the point. It's actually something called a matrix-vector transform: a geometric solution, the same one your graphics card uses to project a 3D computer game onto your screen. Think of it like taking a flat Mercator world map and transforming it into a globe. You take a geometric space of all possible inputs and transform it into a vector of outputs by twisting space.
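A rough sketch of that idea with made-up numbers (one dense layer really is just a matrix-vector transform):

```python
import numpy as np

# One dense layer = one matrix-vector transform: it warps the space of
# inputs into a new space, the same operation graphics cards are built for.
rng = np.random.default_rng(0)       # random weights, for illustration only
W = rng.normal(size=(3, 4))          # maps 4-dimensional inputs to 3 dimensions
b = rng.normal(size=3)

x = np.array([0.5, -1.0, 2.0, 0.1])  # a point in the input space
z = W @ x + b                        # its image in the transformed space
print(z)
```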

Think of a landscape where the valleys are bad solutions and hills are good ones (or vice versa), and of deciding which way to go by feeling the slope beneath your feet. There's an excellent video called "The Beauty of Linear Regression (How to Fit a Line to your Data)" by Richard Behiel. He's a physicist and doesn't mention DNNs; the video isn't about them, but it's a far better explanation than this one, in that it actually is an explanation.

Finally, the explanation of the risks of AI was really, really bad. If you're interested, there's a channel by Robert Miles, an expert on the topic, which explains it clearly. What you heard here was about as useful as your average opinion in a bar.

Hats off to this guy for doing some research for this video, but sadly it's clear he hasn't really understood the topic.

davidmurphy

I've been trying to inform myself about AI for a couple of months now, and I never really understood why or how people said "we don't understand how it works". Your video is the first that made me understand the black box. Great job my friend!

MartijnMuller

I've got a "well, actually ..." here for ya.
AI/ML engineer here - many of these larger networks actually DO do things they have not been trained to do. They often surprise their own developers with capabilities they were never trained to perform.

BlackbodyEconomics

The problem is, you train a neural network with a particular goal in mind, but it ends up doing more. It finds patterns in the data you were not able to foresee. When ChatGPT was trained, nobody thought it would be able to do math, even if it's just simple arithmetic with small numbers. Nobody knew it would be able to handle concepts or make generalizations.

It would be more useful to think of neural networks as function finders. They substitute for the function you are not able to explicitly define and write conventionally. The bad thing about training a neural network on vast amounts of information is that it ends up picking up the intentions behind the words. In a way, it finds the function of emotional outbursts or bad intentions. As long as the information was generated by humans with such flaws, the neural network is bound to pick those flaws up.

In the case of ChatGPT and Bing Chat, they had to train another neural network to block those types of responses. So in a way these unforeseen consequences are already happening. I think the issue here is that such big neural networks require lots of data, and it's not humanly viable to check all that data and sanitize it. Just search for *"Bing Chat Behaving Badly"* and you'll see what I'm talking about.

lamcho

You should also add a discussion on recurrent networks. Maybe neuromorphic ones too. The feed-forward networks are the most common, but these others are pretty interesting.

patrickmchargue

@ 11:40 I got my first computer in 2011. At first, I called it "a scary black box where magic happens." And now artificial intelligence literally fits that description.

Erik_Swiger

Emergence is possible even in neural networks. As we increase the number of parameters an AI uses, the functionality it acquires grows in unpredictable ways. For example: a network trained with, say, 6 billion parameters on the whole internet could predict the next word given some text, but it may not respond in an appropriate way if we give it text in question format (expecting a response in answer format). The same network with, say, 40 billion parameters could answer questions, create new articles, etc. In both cases, the training methodology and the amount of data may remain the same.

It's this emergence property that many fear. We cannot simply extrapolate what functionality AI acquires as we keep increasing parameters.

pavansonty

Good, clear explanation... of where we are just now.
However, where we are now is not close to where we'll be this time next year, even less so to where we'll be 5 years from now.
Even current language models are having their performance boosted - GPT-4 by 900% in some tasks, and it was released less than 3 months ago! People are finding ways to boost their abilities by copying some of the ways our own brains work, such as reflection, with stunning results. Meanwhile, Google's Gemini, an LLM developed by DeepMind and Google Brain, is being trained, while some other companies, including IBM, are developing various types of neuromorphic processors. These are processors that have physical artificial neurons and synapses that are analogue and will be capable of continuous learning, as we do. They will be much faster, more capable and power-efficient than the systems currently used, where the synapses are merely software simulations running on silicon transistors.

As the architecture of these models continues to develop, new, emergent abilities will start to appear, in a totally unpredictable way. So, any reassurances that anyone can give now are only good for the present. They may not apply 6 months from now.

Not trying to worry anyone needlessly but people should be aware of just how fast this field is not only progressing but also accelerating (exponentially). I don't see it slowing down any time soon.

antonystringfellow

If toe is the input then eot must be the output So my dear Ash get ready for the end of transmission by the broadcasting tenet

TheUnknown

What people are afraid of is AGI, or Artificial General Intelligence. While it looks like we have a long way to go to achieve AGI, some people think they saw some glimpses of AGI in NLP (Natural Language Processing) systems like ChatGPT. I personally don't think that's the case, but we'll see... They said they might give ChatGPT 5 a memory module, which will help it self-improve, which could lead to some AGI progress.

spider

@Arvin - Modern AI uses *Transformers* (Attention Networks), but most training videos on YouTube still teach Feed-forward Neural Networks (the older technology), just because there is more pre-existing training content and it's easier to understand. The concept of "Attention" should not be skipped by any modern video on AI/ML, nor should why splitting the weight matrix into Query, Key and Value matrices led to an AI breakthrough where ChatGPT can do such extreme magic using a sequence of Encoder and Decoder layers. Dropout and Normalization layers play as important a role as Linear transformation layers, but they never get their fair share of the limelight in YouTube videos the way the Linear (weight + bias) layer does. I wish this changed. Thanks, and just a reminder to consider during the making of any potential future video on this (generative AI) topic.
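For the curious, a minimal sketch of the attention step mentioned above (illustrative dimensions and random weights, not a real model):

```python
import numpy as np

def attention(X, Wq, Wk, Wv):
    """Scaled dot-product attention: project the input into Query, Key and
    Value matrices, then let every token attend to every other token."""
    Q, K, V = X @ Wq, X @ Wk, X @ Wv
    scores = Q @ K.T / np.sqrt(K.shape[-1])      # pairwise token affinities
    scores -= scores.max(axis=-1, keepdims=True) # for numerical stability
    weights = np.exp(scores) / np.exp(scores).sum(axis=-1, keepdims=True)  # softmax
    return weights @ V                           # weighted mix of the values

rng = np.random.default_rng(0)
X = rng.normal(size=(5, 8))                  # 5 tokens, 8-dimensional embeddings
Wq, Wk, Wv = (rng.normal(size=(8, 8)) for _ in range(3))
print(attention(X, Wq, Wk, Wv).shape)        # (5, 8)
```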

vishalmishra

It looks like a simple equation, but when you zoom out a thousand times, the power of AI is arguably the answer to the black box and free will ^~

Alazsel

Arvin! I come to you for elucidation that I can understand, suffering as I do from pontine-stroke dyscalculia that left my non-verbal speech intact… it's a long story that ended my 38-year teaching career in higher education. Nonetheless, I still have sufficient intellectual curiosity to continue my lifelong interest in cosmology.
Thank you for keeping me going.

Baka_Komuso

Arvin's videos are great, but more so, they're accurate. He really learns the depths of the areas he presents, and does an excellent job informing viewers. While this video had minor issues, it's very apropos for a general audience.

On whether AI is a threat, he realizes the answer is divisive, so either answer (Yes or No) can misinform. Telling people that it IS dangerous will prevent its adoption (since it IS useful and shouldn't be stopped). And saying it is not dangerous will minimize caution & regulations, which, again, top researchers warned us (~2 days ago) are required (they said it clearly poses "an existential risk" to life/humans unless it is sufficiently managed). Caution is required.

keep-ukraine-free

I am a computer engineer by profession and have programmed many complex systems in my life. The outputs of some of these vast deterministic programs are also sometimes difficult to control and understand, just because of their complexity. As a hands-on practitioner of data science, I am telling you: these self-learning algos cannot be controlled by the best of AI programmers.

ballyasdf

Correct that the AI models we have today could not become Skynet, mostly because they run in session-based environments. This prevents AI models from learning from their own experiences and planning for the future. A capacity for future planning, such as resource and power accumulation, has already been demonstrated using a presently available model with its safeguards removed. Even present publicly available models, with safeguards in place, are susceptible to jailbreaking. Once capable of planning, it's a whole different ballgame.

tehmtbz

Great explanation! You're awesome. Funny you're advertising art investing. I just saw a quote yesterday: AI is like reverse Hitler; we keep waiting for it to control the world, but all it's interested in is art. Point being, art has been completely democratized. Not sure old-world paintings will hold value as we move into virtual everything. People went to galleries to see unique images; buying a piece of art allowed you to own and identify with new ideas, but now we can flip through thousands of images a day. We can only hope that AI is able to enlighten us away from the age of greed into an age of meaning.

barryc

If 2 identical neural networks are trained identically and then made to do the exact same task, and then the values of a set of neural nodes are compared, should the neural nodes have the same values? If not, couldn't it be said that the neural nets are thinking?

rjm

Hey Arvin, another great video. Remember: "To win an argument with a smart person is tough, but against a dumb person it will be near impossible."

HunzolEv