AI, Machine Learning, Deep Learning and Generative AI Explained

Join Jeff Crume as he dives into the distinctions between Artificial Intelligence (AI), Machine Learning (ML), Deep Learning (DL), and Foundation Models and how these technologies have evolved over time. He also explores the latest advancements in Generative AI, including large language models, chatbots, and deepfakes - and clarifies common misconceptions, simplifies complex concepts, and discusses the impact these technologies have on various fields.

Comments

"Back when I was an undergrad, riding my dinosaur to class" 🤣🤣

SuperCassio

I never thought I would dive this deep into AI. I am so proud of myself and of everyone who is taking this seriously. So many people tell me that learning all these technologies is a waste of time, and I can't understand how some people don't realize that AI has been a part of our lives for a while now and is here to stay. This video was great.

thickivicki

I really love how you explain things. Thanks.

techwithjesus

Jeff Crume always breaks it down! Excellent analogy between music and LLMs: they don't simply regurgitate information, just as new music is not simply regurgitated notes.

moderncontemplative

Jeff Crume and Maya Murad are the best

jorgesanabria

Thanks for the video. Very informative. Clear and concise :)

naskar

More on large language models connected to cybersecurity, please 😊

lovewinseveryday

I really appreciate how crystal clear he makes this with his explanation.

sivareddy

A useful and welcome contribution - it's always great to try to agree on the language of the world of AI hype, especially if we're trying to deliver value in enterprises through 'AI'. I think this webinar might miss a key part: all those really useful ML models that aren't deep learning (categorization, time-series forecasting, anomaly detection, etc.) that work on governed 'structured data'.

thomasdave
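
A rough, hypothetical illustration of the non-deep-learning models mentioned in the comment above (anomaly detection on governed, structured data): a minimal scikit-learn sketch with made-up toy data, not anything shown in the video.

```python
# Minimal sketch with invented data: classical, non-deep-learning ML on structured
# records, flagging unusual rows with an isolation forest.
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(0)

# Pretend these are governed, structured records: (order_amount, items_per_order).
normal_rows = rng.normal(loc=[50.0, 3.0], scale=[10.0, 1.0], size=(200, 2))
odd_rows = np.array([[500.0, 1.0], [55.0, 40.0]])      # two deliberately unusual rows
X = np.vstack([normal_rows, odd_rows])

model = IsolationForest(contamination=0.01, random_state=0).fit(X)
labels = model.predict(X)                               # -1 flags an anomaly, 1 is normal
print("flagged row indices:", np.where(labels == -1)[0])
```

The same pattern covers the categorization and time-series examples from the comment: the model is fit directly on tabular features, with no neural network involved.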

Clear and easy to understand the differences between them. Thank you so much.

NagendrababuLakavathu

Thank you for sharing; great content, analogies, and explanation.

skffingtonai

Thank you for sharing in a way that is easy to digest.

davidbeaumont

For simplicity, could you say ‘DL’ == ‘Multi-layered ML’?

maikvanrossum
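
One rough way to picture the 'DL' == 'Multi-layered ML' question above: deep learning is usually described as machine learning in which the model is a neural network with several stacked layers. A minimal NumPy sketch with made-up layer sizes, purely illustrative and not from the video:

```python
# Illustrative sketch: "deep" learning as ML where the model stacks several layers,
# each one re-representing the output of the previous layer.
import numpy as np

rng = np.random.default_rng(0)

def dense_relu(x, w, b):
    """One fully connected layer followed by a ReLU non-linearity."""
    return np.maximum(0.0, x @ w + b)

x = rng.normal(size=(1, 4))                 # toy input with 4 features (invented)

# Three stacked hidden layers of width 8: the "multi-layered" part.
weights = [rng.normal(size=(4, 8)),
           rng.normal(size=(8, 8)),
           rng.normal(size=(8, 8))]
biases = [np.zeros(8) for _ in range(3)]

h = x
for w, b in zip(weights, biases):
    h = dense_relu(h, w, b)                 # pass the representation through each layer

y = h @ rng.normal(size=(8, 1))             # final linear output head
print(y.shape)                              # (1, 1)
```

A classic single-layer model would stop after one such transformation; stacking many of them (and learning all the weights from data) is what puts the 'deep' in deep learning.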

Thank you so much!!! Great video.
I wonder how AIs know they are AIs. They seem to know how they learn and to understand what they know, so they can explain something at very different levels.
Kind regards from a 67 y.o. retired woman from Argentina.

alimuchenik

One of the best and finest explanations 🤩👌👌👌 even a 4th-grade kid could relate...

catalystxu

It should be noted that much of the advancement in AI correlates with advances in raw computing power. One might also want to add another bullet under AI for robotics, where AI can interact with the physical world.

jrblihe

Great! I think autonomous agents are also one of the hottest topics in the area.

nachoeigu

As you said, putting information into a new format is called generating new content. I want to know: are these LLMs, or any other systems, able to put information into a genuinely NEW format, or are they just copying us?

AryaRishi

I've been watching your videos so far. I was just wondering: what is the definition of a foundation model? Is it similar to a multimodal model?

wvgtrod

I must confess I'm sceptical of the term "generative" too, and I think the comparison with music falls short. The analogue of music notes in the realm of textual content would be letters, and while recombining letters _could_ create some really new and innovative meaning, it doesn't _have_ to. The same goes for reassembling notes.
Something similar seems to apply to the "generativeness" of generative AI these days. In general, the generated content revolves around what has been learned from the training data. The "creativity", in the sense of leaving the main trail and coming up with something new _and_ meaningful, depends rather on the (extraordinary quality of the) input prompt. That said, I think genAI almost always only reflects and produces repetitions of the average with slight variations, and hence its "generative" contribution is overrated by today's hype. Look at what it is used for and where: the fun areas and also the more serious ones.
Yes, it is hard to define an objective measure of "generativeness" (I try to avoid the term "creativity" because of its broader use for, well, also pointless stuff), but if you take the idea of feeding the next generations of models only with content created by their predecessors, I am really afraid of effects of degeneration and degradation, whereas mankind has shown progress over generations - at least until now.
This stage of AI, where we still have to negotiate the meaning of the "I", hasn't found its place and rank in society yet. And I say this as someone who chose AI as a course program in computer science studies in the 90s, where we, I must say, already dealt with ML, but for the acute lack of computing power not with networks deeper than 3 layers (which technically are only 2, yes).

dirkp.
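
A toy, hypothetical sketch of the degradation concern raised in the comment above: a simple Gaussian stands in for a generative model, and each generation is fit only to samples drawn from the previous generation's model. Nothing here comes from the video.

```python
# Toy sketch (a Gaussian stands in for a generative model): when each generation is
# trained only on the previous generation's output, the fitted spread slowly collapses.
import numpy as np

rng = np.random.default_rng(0)
n = 20                                      # small training set per generation (arbitrary)
data = rng.normal(0.0, 1.0, n)              # generation 0 still sees "human" data

for generation in range(2001):
    mu, sigma = data.mean(), data.std()     # "train" this generation: fit mean and spread
    if generation % 400 == 0:
        print(f"gen {generation:4d}: mean={mu:+.3f}, std={sigma:.3f}")
    data = rng.normal(mu, sigma, n)         # the next generation sees only model output
```

Each finite re-fit loses a little of the original diversity and the losses compound, so the estimated spread drifts toward zero over many generations; it is a crude analogue of the degeneration the comment worries about, not a claim about any specific LLM.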