Generative AI vs. AGI: Strengths and Weaknesses of Large Language Models with Dr. Ben Goertzel

🌐 Explore the frontiers of AI with Dr. Ben Goertzel in "Generative AI vs. AGI: Strengths and Weaknesses of Large Language Models." This video contrasts the cognitive capabilities of Large Language Models (LLMs) like ChatGPT and GPT-4 with human intelligence, emphasizing the limitations of current AI in achieving true Artificial General Intelligence (AGI). Dr. Goertzel, a seasoned AI scientist and leader in the AI community, discusses integrating LLMs with tools like knowledge graphs and evolutionary learning for enhanced data analytics and reasoning. This video is a rich resource for anyone interested in AI, machine learning, natural language processing, or data science, offering insights into the future of AI, its applications, and AI careers. Join us for a thought-provoking journey into AI's potential and limitations.

🔔 Subscribe for more insights into AI advancements and discussions on machine learning and data science. Like, share, and engage in the AI revolution!

#AI #ArtificialIntelligence #MachineLearning #DeepLearning #NLP #DataScience #DataVisualization #MachineLearningCourses #ReinforcementLearning #GitHub #DataEngineering #AGI #GenerativeAI #LLM #ChatGPT #GPT4 #DataScienceCareers

Timecodes:
0:00 - Introduction
2:25 - LLMs
3:50 - LLM Shortcomings
12:01 - Standard Model of Mind
20:30 - WebAgent from DeepMind
24:43 - Hybridizing LLMs with Logical Inference
28:42 - In closing

You can also follow ODSC on:
Comments

Good contribution, sir ... amazing video ...

johnclay

Ben always dropping knowledge bombs in the most random YouTube channels lol

leonlysak

Lots of improvement is happening right now with LLMs, but people have to go to some effort to fully unlock their capabilities: using multiple models, uncensored models, peer-to-peer data and model training, and especially real-time data aggregation. Once these barriers are overcome we will see much better commercial-use LLMs, even if it happens gradually. The domain creep is real, and the limitations and compromises will diminish as we go from big-tech AI to truly open-source AI in the next 5 years. The hardware/software stacks have to mature and catch up as well, but it is clear that this will happen. CXL and other accelerators will have massive impacts, and once they filter down from data centers to prosumers, home labbers, and the small/medium business sector, we will see economies of scale kick in and much more innovation and development. Now is not quite the time, and although it may appear to be sort of a fever dream, it probably will happen. More I/O like PCIe 6.0 is going to help; USB5 and faster networking will also help.

shephusted

I'm an avid admirer of Ben's native language in computer neural networks. Not so much with his Noosphere linguistics (◔‿◔)

missh