3 Limits of Artificial Intelligence

AI has opened up so many new opportunities for people to create a positive impact in the world by building engineering solutions across every industry! However, AI is still evolving, and we have to address its limitations as well. In this video, I'll explain 3 major limits of AI - a lack of causal reasoning, vulnerability to adversarial examples, and a lack of interpretability. I'll also explain ways to address these limits and earn a profit doing so. The next time someone asks you what AI can't currently do, share this video with them. Enjoy!

Please Subscribe! And like. And comment. That's what keeps me going.

Code for this video:

Want more education? Connect with me here:

More educational resources:

Causal Reasoning challenge:

Adversarial example challenge:

Inference.VC blog:

Watch Me Build an Education Startup:

Watch Me Build a Finance Startup:

Make Money with Tensorflow 2.0:

How to Make Money with Tensorflow:

7 Ways to Make Money with Machine Learning:

Watch me Build an AI Startup:

Intro to Tensorflow:

Join us in the Wizards Slack channel:

Hit the Join button above to sign up to become a member of my channel for access to exclusive live streams!

Join us at the School of AI:

Signup for my newsletter for exciting updates in the field of AI:

And please support me on Patreon:
Comments

Another great video! Love and support from El Paso, Texas.

alr

So glad you are finally mentioning causality, since it's the only way to strong AGI. I really enjoyed The Book of Why.

alxleiva
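To make the correlation-vs-causation limit concrete, here is a minimal NumPy sketch (all numbers made up) where a hidden confounder makes a "treatment" look strongly predictive of an outcome even though its true causal effect is zero; adjusting for the confounder recovers the truth.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 100_000

# Hidden confounder Z drives both the "treatment" X and the outcome Y.
z = rng.normal(size=n)
x = 2.0 * z + rng.normal(size=n)  # X has NO causal effect on Y
y = 3.0 * z + rng.normal(size=n)  # Y depends only on Z

# Naive correlational model: regress Y on X alone.
naive_slope = np.cov(x, y)[0, 1] / np.var(x)

# Causal estimate: adjust for the confounder by regressing Y on X and Z.
features = np.column_stack([x, z])
beta, *_ = np.linalg.lstsq(features, y, rcond=None)

print(f"naive slope (correlation): {naive_slope:.2f}")  # ~1.2, spurious
print(f"adjusted slope (causal):   {beta[0]:.2f}")      # ~0.0, correct
```

A purely correlational learner would happily report the 1.2 slope; only a model that knows (or learns) the causal structure gets the zero.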

I just got a paper accepted to IJCAI (an A* AI conference) this August. It's about how to explain ANNs using case-based reasoning. I've been thinking about a startup to generalize the idea to all machine learning models. You've just inspired me haha.

tallwaters

Back in the old days when I started learning AI, I thought the same thing: no matter how well an AI is trained, it can never be as intelligent as a conscious human. Consciousness is the thing that gives us the sense of intelligence. The models of the brain being replicated in the form of AI are as real as our human brain, but what they lack is consciousness. Consciousness is what brings us to life and what brings intelligence to us. Whatever we do, human-level intelligence can never be created. Although causal reasoning can be created using expert systems, to evolve them with the real world we must use RL or deep RL; causal reasoning can be created by combining expert systems and reinforcement learning. Well, great video Siraj, keep going. Good wishes and love from India. BTW my name is Ayan.

ayanbahukhandi

#3 is actually a very big problem that people commonly overlook. I read of one black-box "AI" that determined whether prisoners applying for (I think) parole were likely to commit another crime. The problem that went unnoticed and unchallenged was that the network, whatever it was, was trained on a VERY biased dataset, with HEAVY bias towards specific ages, ethnicities, and genders, and people were denied the opportunity of parole because no one questioned or ever cared to understand why it was producing the results it was giving. It was just assumed that the output was correct, and that is a very dangerous problem that can ruin a lot of lives.

Readability and interpretability should be integrated from the beginning if possible, kind of like all the comments in a lot of the code I've seen, where someone who doesn't know how to code can still edit it; that's super popular with 3D printing firmware like Marlin and RepRapFirmware. I think a network that outputs why it got the result it got is a network that will gain in popularity. Too often ML appears like witchcraft. I'm genuinely studying it, and it still looks like witchcraft... no bueno!

kevin_delaney

Hello Siraj, taking your advice and watching everything at 2x, and it really does start making sense along the way.
You keep doing this right-left swaying thing; it looks pretty cool and distracting at that speed.

applesanish

It would be great if you made a video where you chronologically show us a "curriculum" for learning AI. I mean, we could just look it up on the internet, but if you posted it, it would be official!

Gapo

So dense, so relevant, so much perspective... thank you Siraj.

MMABeijing

From 7:49 to 10:09: this is the topic of my ongoing thesis/research project, "Adversarial attacks and defenses". I will try to make ML models more robust and secure.

AbhishekKumar-yvih
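For a concrete picture of what such an attack looks like, here is a minimal sketch of the classic Fast Gradient Sign Method (FGSM, Goodfellow et al., 2014) in TensorFlow 2. The `model`, `images`, and `labels` are placeholders you would supply yourself; the model is assumed to output logits and the images to live in [0, 1].

```python
import tensorflow as tf

def fgsm_attack(model, images, labels, epsilon=0.01):
    """Perturb `images` to increase the model's loss (FGSM).

    Assumes `model` returns logits and `labels` are integer class ids.
    """
    images = tf.convert_to_tensor(images)
    with tf.GradientTape() as tape:
        tape.watch(images)
        logits = model(images)
        loss = tf.keras.losses.sparse_categorical_crossentropy(
            labels, logits, from_logits=True)
    # Step each pixel in the direction that increases the loss.
    gradients = tape.gradient(loss, images)
    adversarial = images + epsilon * tf.sign(gradients)
    return tf.clip_by_value(adversarial, 0.0, 1.0)  # keep valid pixel range
```

A common defense is adversarial training: mix batches produced by a function like this into the training data so the model learns to resist the perturbations.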

You could tackle interpretability in vision, for example, by having a program analyze the images and the classifications used by the main classifier AI to see which pictures trigger failures and successes, then classify the differences itself (in whatever computer reasoning it comes up with), then count how many times certain features equate to certain classifications and come up with probability numbers. You'd end up with a program that mirrors the classifier's reasoning, and you could produce a chart describing that reasoning in detail. E.g., a distance greater than a certain ratio between the ears triggers a positive 98% of the time, versus 65% on a previous model, etc. Then we can build a visual chart showing the reasoning. Edit: a program would have to build the flow chart as well, because of all the different "triggers".

itsnotatoober
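This idea is close in spirit to occlusion sensitivity (Zeiler & Fergus, 2013): cover one image region at a time and record how much the predicted class probability drops. A rough sketch, assuming a `model` callable that maps a batch of images in [0, 1] to per-class probabilities:

```python
import numpy as np

def occlusion_map(model, image, class_idx, patch=8, stride=8, fill=0.5):
    """Heatmap of how much occluding each region lowers P(class_idx).

    `image` is an (H, W, C) array in [0, 1]; higher heatmap values
    mark regions the classifier relies on more heavily.
    """
    h, w, _ = image.shape
    base = model(image[None])[0][class_idx]  # unoccluded probability
    heat = np.zeros((h // stride, w // stride))
    for i, y in enumerate(range(0, h - patch + 1, stride)):
        for j, x in enumerate(range(0, w - patch + 1, stride)):
            occluded = image.copy()
            occluded[y:y + patch, x:x + patch, :] = fill  # gray patch
            prob = model(occluded[None])[0][class_idx]
            heat[i, j] = base - prob  # drop in confidence
    return heat
```

Regions with large drops are the features that actually drive the prediction, which is exactly the kind of reasoning chart described above.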

I was researching causal inference all weekend. What a coincidence that you made this video!

sphereron

It would be great if you made a video about profiting from custom novel architectures. Lots of people, including me, have run into the limitations of ML frameworks and figured out how to overcome them. I personally came up with some custom layer designs and I am looking for ways to monetize them while also offering them open source to contribute to the community. Basically, the broad topic of open-source business models. Thank you for your content! Martin

martincerny

Siraj, please do a video about: Request for Startups.

saad-ulmr

Thanks for giving me my next startup idea!

ayeoh

If you are interested in model interpretability, also check out Shapley values, which are a newer model interpretability technique than LIME.

GilbertTanner

I use SHAP for explaining models. Make a video about this library. It is useful.

eltiodata
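For reference, the basic SHAP workflow is only a few lines. A minimal sketch, using a stand-in scikit-learn dataset and tree model (swap in your own):

```python
import shap
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier

# Stand-in data and model; any tree ensemble works with TreeExplainer.
data = load_breast_cancer()
X, y = data.data, data.target
model = RandomForestClassifier(n_estimators=100).fit(X, y)

# TreeExplainer computes Shapley values efficiently for tree ensembles.
explainer = shap.TreeExplainer(model)
shap_values = explainer.shap_values(X)

# For each prediction, the Shapley values say how much each feature
# pushed the output above or below the model's average prediction.
shap.summary_plot(shap_values, X, feature_names=data.feature_names)
```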

@Polish law and credit scoring: I read an article on the case you raised, and it implies that the bank will not have to disclose the algorithm itself or the weights assigned to individual pieces of information. Such knowledge is considered a trade secret, and disclosure could expose banks to unnecessary risks. Instead, the bank is required to disclose what factors the algorithm took into account and what conclusions it drew, which is possible because we know what information we pass to the system and what the algorithm's result is.
It is not a question of "why?" but of which data has been used.
This law is meant to protect customers from unjust credit scoring resulting from incorrect (outdated) input data.

_nowayout

Summary:
1. AI currently captures correlation; the future is using AI to reason about causation.
2. Adversarial attacks on deep nets.
3. How do we make AI model predictions more explainable?

zhen

Other limitations:
No ability to predict "black swans" - events that are almost never seen and that dramatically change the scene.
No ability to invent a paradigm shift - to invent things like the iPhone when everyone says they want smaller cell phones with longer-lasting batteries.
Increased fragility of the system - as we trust it more and more, we will increase efficiency, which puts us much closer to the system's full-capacity limits and at greater risk of a total meltdown caused by a relatively small event.

Those are built-in limitations.

YuvalKarmi

Hello Siraj,
Video idea: "a biology curriculum for computer science".

benarousfarouk