What AI is -- and isn't | Sebastian Thrun and Chris Anderson

Educator and entrepreneur Sebastian Thrun wants us to use AI to free humanity of repetitive work and unleash our creativity. In an inspiring, informative conversation with TED Curator Chris Anderson, Thrun discusses the progress of deep learning, why we shouldn't fear runaway AI and how society will be better off if dull, tedious work is done with the help of machines. "Only one percent of interesting things have been invented yet," Thrun says. "I believe all of us are insanely creative ... [AI] will empower us to turn creativity into action."

The TED Talks channel features the best talks and performances from the TED Conference, where the world's leading thinkers and doers give the talk of their lives in 18 minutes (or less). Look for talks on Technology, Entertainment and Design -- plus science, business, global issues, the arts and more.

Comments

This is one of the greatest TED Talks I've seen in a while. Probably the best one of 2017.

sonnguyen-bkjw

A great take on AI and machine learning. It's refreshing to have some optimism and hope mixed into all of the fear and skepticism, and I believe it's also imperative. A healthy balance of skepticism and optimism is what we need to move forward, and currently fear and doubt seem to be overpowering the would be dreams of our future. Another amazing TED talk.

supernaut

What I think is that he is confining his thoughts on the implications of AI, and on everyone doing tasks that involve creativity, to a very narrow segment of our population.

A large percentage of the world's population still depends on agriculture and other rural activities, and if we want all of these people to move into jobs that involve creativity, it is going to take a long, long time. If AI enters the scenario and simply sweeps away these 'repetitive jobs', it would mean deep trouble for the people we like to call underprivileged.

I strongly believe that the development of AI is a must for human development, but we should bear in mind its consequences not only for the people around us but for people all around the globe. Strong decentralised control over AI and strict regulations are what we need.

xddxxd

Youtube comment philosophers vs. Stanford professor/tech CEO. Ready, Set, Go!

fastdollar

Nailed it right at the end. People do the general work until someone has a major breakthrough that makes that workflow simple, repetitive, and fast. Then all of that work shrinks into a task rather than a job. A constant progression of more opportunities.

glansberg

On crowdsourcing: using only university students seems to narrowly tap the power of the crowd available. Case in point: the new QAnon on 4chan is putting this power in the hands of anyone who wishes to participate. The results are quite amazing!

RSpence

What an awesome speech! He made some really great points about AI.

levarmitchell

That background moving when the speaker put his hands up.

ryanmeok

If the speaker really did say that with AI, children wouldn't have to learn how to spell, and that we wouldn't have to learn math because the AI would solve it for us, then how could we as humans invent new things without that basic learning?

evaaugusta

People are fearful because of unknowns. Especially when media continues to spread the dystopian view of AI. We need more clear headed discussions like this one.

kleemc

To think that the view count of ALL TED's videos is less than "Gotye: Somebody That I Used To Know" video.

abdalaez

This is quite simply the best thing I've heard today.

srinivasanreghuraman

"'Neural networks' is the technical term for these machine learning algorithms"? I hope he meant the ones shown, because in general "machine learning" doesn't necessarily mean "neural networks".

FunkyPrince

Optimism is great and all, and I really hope it works out, but what I see here is a man ultimately evading the real concerns of those who are worried about artificial general intelligence. AGI _could_ be implemented safely, but that will take a very careful and deliberate effort on our part. We won't make that effort unless we take the potential pitfalls seriously. Things won't just magically be okay, and this isn't something we can get wrong and then fix later.

Zeuts

Every time I feel stupid I read YouTube comments on tech/science videos, and I always feel better about myself.

yoders

So will the lenses on the cameras turn yellow, after they start putting the cheapest junk they can install at the factory on a car?

jeffbingaman

I can see this guy as my crew member on Star Trek Enterprise

relevants

While I agree that it would be great to have a lot of the repetitive, life-draining tasks automated so that we can pursue other meaningful goals, there are areas that should not be automated, such as animal farming. Part of raising healthy animals is the love that is expressed in their maturity. Maybe one day we'll have advanced far enough with technology that we will return to (for lack of a better term) an Amish life, a medieval lifestyle, or something traditional, because it's fulfilling in a way. I say this as an old Minecraft player. Players were jumping into a virtual computer world to do basic laboring tasks, and we had a lot of fun doing it.

madDragon

If you can't understand it, let me brief you on it.

Neural networks are networks of artificial 'neurons', which are essentially programming functions that receive a list of numbers (input), process those numbers, and produce a new number (output). A feed-forward neural network consists of many such functions arranged in layers. The first layer takes in the initial inputs, the things that will affect the computer's decisions (in this case that would be the position of the AI Shadow Fiend, the position of the enemy Shadow Fiend, all the creeps, towers, etc.). These inputs are then processed through multiple intermediate layers, called 'hidden layers', and finally the last neurons, called 'output neurons', produce the network's outputs (in this case the outputs would be movement, attacking, etc.). Every layer consists of a number of neurons that receive as inputs the outputs of the neurons in the previous layer. In this way a 'chain reaction' is formed, and the initial inputs pass through many layers before producing an output. (Hence the name Deep Learning: only neural networks with hidden layers are considered Deep Learning algorithms, because without them the network can't solve non-linear problems, but you can google that.) This allows the computer to make extremely complex calculations.
You might wonder exactly how the neurons 'process' the inputs. Well, the inputs are always translated into numbers (computers are really good at numbers), so the position would probably be X and Y coordinates, etc. Every neuron takes each input it receives and multiplies it by a unique number, called a weight. This determines the intensity of the neuron's signal, and it is what is actually 'trained' in the AI: initially the weights are random values between -1 and 1, and they have to be adjusted (trained) properly in order to produce intelligent behavior. Neurons also have a bias, one additional 'weight' that is simply added to the sum; roughly speaking, it shifts the neuron's threshold. It is easier for the network to learn a function of the form
a*b*c + d = 0
than one of the form
a*b*c = -d,
where d is the bias.

After it has multiplied every input by its weight, the neuron sums all of them together and sends the sum through an 'activation function'. This function differs from network to network, but 99.9% of the time it's one of these four: a hyperbolic tangent function, a logistic sigmoid function, a rectifier function or a step function. These functions scale the sum into a more manageable number. For example, the sigmoid function always returns values between 0 and 1; the hyperbolic tangent, between -1 and 1. That way, the output of a neuron can determine a decision (for example, the output neuron that corresponds to attacking a creep can trigger when its output is above 0, and not trigger when it is below 0). Every neuron does this same thing, forming an input-output chain reaction.
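The per-neuron computation described in the comment (multiply each input by its weight, add the bias, sum, then apply an activation function) can be sketched in plain Python. This is a generic illustration, not the network from the video; the layer sizes, weights, and the choice of a logistic sigmoid are all made up.

```python
import math

def sigmoid(x):
    # Logistic sigmoid: squashes any real number into (0, 1)
    return 1.0 / (1.0 + math.exp(-x))

def neuron(inputs, weights, bias):
    # Weighted sum of inputs plus bias, passed through the activation function
    total = sum(i * w for i, w in zip(inputs, weights)) + bias
    return sigmoid(total)

def layer(inputs, weight_matrix, biases):
    # One feed-forward layer: each neuron sees all of the previous layer's outputs
    return [neuron(inputs, w, b) for w, b in zip(weight_matrix, biases)]

# Tiny 2-input -> 3-hidden -> 1-output network with invented weights
hidden = layer([0.5, -1.0],
               [[0.1, 0.4], [-0.3, 0.2], [0.7, -0.6]],
               [0.0, 0.1, -0.1])
output = layer(hidden, [[0.5, -0.4, 0.3]], [0.2])
print(output)  # a one-element list; the value lies in (0, 1)
```

Chaining `layer` calls like this is exactly the input-output 'chain reaction' the comment describes.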

After the network is created, it has to be trained. This is usually the hardest part. The easiest approach is called Supervised Learning. It is used when you know the answer to the problem - for example, when training an AI to recognize faces, you know who each face belongs to. After each guess you can tell the network whether its answer is wrong; if so, it computes its error (the correct answer minus its answer) and adjusts all the weights throughout the neurons accordingly, using an algorithm called backpropagation, but that's beyond the scope of this comment.
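For a single neuron, the "correct answer minus its answer" update described above reduces to the classic perceptron/delta rule; full backpropagation extends the same idea through the hidden layers. A toy sketch, learning logical OR with a made-up learning rate:

```python
# Delta-rule sketch for one neuron: a heavily simplified stand-in
# for the backpropagation idea mentioned in the comment.
def step(x):
    return 1.0 if x >= 0 else 0.0

weights = [0.0, 0.0]
bias = 0.0
lr = 0.1  # learning rate (arbitrary choice)

# Supervised training data: logical OR, where the correct answer is known
data = [([0, 0], 0), ([0, 1], 1), ([1, 0], 1), ([1, 1], 1)]

for _ in range(20):  # a few passes over the data
    for inputs, target in data:
        guess = step(sum(i * w for i, w in zip(inputs, weights)) + bias)
        error = target - guess  # correct answer minus the network's answer
        # Nudge each weight (and the bias) in proportion to the error
        weights = [w + lr * error * i for w, i in zip(weights, inputs)]
        bias += lr * error

print([step(sum(i * w for i, w in zip(inputs, weights)) + bias)
       for inputs, _ in data])  # -> [0.0, 1.0, 1.0, 1.0]
```

OR is linearly separable, so this single neuron converges in a handful of passes; problems like XOR are exactly the non-linear cases that need the hidden layers discussed above.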

The other type of training is called Reinforcement Learning, which is harder to program than Supervised Learning. It is necessary when there is no obvious 'right answer' in a given situation, and it works on a punishment-reward principle. The network knows what a good outcome and a bad outcome look like, so it can retrace its calculations, determine which ones led to good decisions and which to bad ones, and adjust its weights to perform better next time. That is what is used here, combined with a genetic algorithm - something that works similarly to natural selection. In a genetic algorithm, a 'population' of creatures is created (which is why they had to run many games at the same time to train the AI), and after it dies out, the creatures that performed better have a much higher chance of passing their 'genes' on to the next 'generation'. In this case, the 'genes' are the weights of the neural network controlling the AI. That way, every generation gets better and better.
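The genetic-algorithm loop described above (evaluate a population, keep the fittest, mutate their 'genes' to form the next generation) can be sketched like this. The target vector and fitness function are invented stand-ins for "how well the network played a game"; population size, selection cutoff, and mutation rate are arbitrary.

```python
import random

random.seed(0)  # make the sketch reproducible
TARGET = [0.5, -0.3, 0.8]  # stand-in for an ideal set of weights

def fitness(genes):
    # Higher is better: negative squared distance from the target
    return -sum((g - t) ** 2 for g, t in zip(genes, TARGET))

def mutate(genes, rate=0.1):
    # Small random changes to a parent's genes
    return [g + random.uniform(-rate, rate) for g in genes]

# Initial population of random weight vectors ("creatures")
population = [[random.uniform(-1, 1) for _ in range(3)] for _ in range(30)]

for generation in range(100):
    # The fittest creatures pass their genes on to the next generation
    population.sort(key=fitness, reverse=True)
    parents = population[:10]
    population = [mutate(random.choice(parents)) for _ in range(30)]

best = max(population, key=fitness)
print(best)  # close to TARGET after enough generations
```

No gradient is computed anywhere: selection plus mutation alone push the population's weights toward higher fitness, which is why this approach works even when there is no labeled "right answer".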

So yes, we understand perfectly how it works, and it's absolutely fascinating and absolutely different from how WE learn or make decisions. A learning computer does thousands of extremely complicated calculations within MILLISECONDS. Every frame. A neural network with as few as 20 neurons can perform simple tasks that would still require humans literally MILLIONS of neurons to solve.

This is why it is so dangerous.

Anwrimos

Was hoping for some Allen Iverson highlights and discussion on his abilities.

tdh