AI Researcher Debunks Michio Kaku's Joe Rogan Episode

An AI research scientist breaks down Michio Kaku's claims on Joe Rogan's podcast #1980, debunking the idea of quantum computers as fact-checkers and correcting an inaccurate explanation of how LLMs like ChatGPT work!

Timestamps:
00:00 Introduction
00:55 How LLMs Work
02:57 Why It Matters
04:46 AI and Quantum Computing

ABOUT ME:
My name is Maxime and I’m an AI researcher. I studied at the intersection of AI and robotics for both my bachelor’s and master’s degrees at the University of California, Berkeley. I’m currently a research scientist in industry, and I was formerly at Berkeley Artificial Intelligence Research, one of the world’s foremost AI research groups.

I’ve worked on robotics, automated skin cancer detection, neural-network-based image compression, and behavioral modeling to detect financial crimes.
Comments

The part that doesn't make sense to me is how someone who is an expert in physics could not know that deriving absolute truth from a string of language is unsolved. He was clearly reading prewritten statements, and I think that they sounded like marketing buzz for research grants.

lucidvizion

Finally someone said it. Kaku has no idea what he is talking about the whole episode

maxlarionov

Given that he has already provided a lot of incorrect information in the field of physics, I would hesitate to label him as an expert in the subject.

ngaous

Nicely done man. You explained it well without making Kaku look bad. Appreciate your work.

rameenana

Glad that youtube recommended this video to me. Learned a lot, and as someone who would love to learn more about AI and was pleased by the way you explained it, I subbed to the channel.

samirkarki

Holy shit! As a normie that understands zero about computer programming, I had no idea Kaku was totally wrong. I just listened to the video and thought “oh, that explains it” and went about my day. Had I not seen this video, I’d still believe this stuff. Thank you for putting this video together.

thekingofthisworld

Please subscribe to the channel and share this piece with your friends! Let me know if you want more breakdowns like this of AI-related content. And of course, no disrespect to Michio Kaku -- he's an exceptional physicist and a great science communicator, but peer review like this is part of science!

maximejkb

Your content is sincerely great, my friend, so glad I found it. As a professional illustrator and concept artist, the way you explained why AI is not a tool helped me greatly to put into words why I don't use it for work (because it does the work in my stead, making everything about it meaningless). Keep it up man, super interesting!

Stalliere

I really liked your respectful approach and how you broke down and explained the topics so clearly. Thanks!

soysorray

Thank you for your time, knowledge, and valued opinion, brother -- keep going for us.

TARRAN_

120 likes, 2000 views, WTF? This video is such high quality content, high quality information, and high quality learning, I'm baffled really! I can't imagine for a second that you're not going to expand rapidly on youtube with videos as relaxing as this one, keep going man you got a subscriber😗✌️

eliastabuteau

We love these debunking videos... Nice work lad

UnknownUser-inok

Well said. It is so annoying to hear people outside the industry explain these things.

BryceChudomelka

Thank you for releasing this video, it clarified some questions I had about that episode!!

christinetran

All the string theorists are turning out to be hacks...

dexterdrax

from a stanford kid (go card!) this was a cool and refreshing vid. thanks for digging into this stuff and for pointing out the nonsense stated by some of our "thought leaders"

you touched on some interesting stuff regarding AI hallucination. i hope AI doesn't place a box around human imagination or dull future humans' desire to figure out the impossible. I can imagine a world 100 years from now where AI reinforces facts that are wrong or merely relative

ghost_in_the_machine

AI does not "internalise" anything, beyond saving a state of current weightings of a model. It is an enormous stretch to think of this as internalising something - especially when it so literally hinges on external verification. I imagine if you work with AIs to the point they become black boxes, it must feel very much like something has been internalised, but it's a philosophical nightmare to think about an LLM using such anthropocentric language.

cmck

if you prompt 'AI hallucination' as-is to ChatGPT, it returns "I don't have the capability to hallucinate or generate sensory experiences like hallucinations"... well, in fact you're wrong, you hallucinate without even realizing it :)

willclavel

I knew this guy was full of sht when he started predicting the future and saying we will have flying cars and stuff. He should be a writer at Marvel and not much else.

alpacamale

This is a huge problem with people like Kaku, Hossenfelder, etc. They think their opinion on everything matters and people seem to agree.

driesvanoosten