How Meta’s Chief AI Scientist Believes We’ll Get To Autonomous AI Models

Meta’s Chief AI Scientist Yann LeCun discusses why he supports open source large language models and why models need to live in the world to achieve autonomy.


Forbes covers the intersection of entrepreneurship, wealth, technology, business and lifestyle with a focus on people and success.
Comments

The interviewers were so busy trying to show how smart and relevant they are that they stepped on the interview quite a bit. I’m sure they are brilliant, but I would have appreciated it if they had let the interviewee speak for himself.

causeneffect

Welcome to Forbes: brilliant guest, terrible interviewer, and the guy who cut Yann off at the end was super disrespectful.

GatherVerse

The corporate idiots kept interrupting the smartest guy in the room.

wolfsblade

The announcer has assumed on his own that 2000 of the smartest people "on the planet" are sitting in his audience in Cambridge.

fhtggfp

Perfect example of why long-form podcasts like Joe Rogan's are so much better. Yann kept getting cut off. Would love to hear his complete views.

philtrubey

00:06 Discussion on the release of the advanced AI model Llama 3, trained on 15 trillion tokens
02:08 Open sourcing AI models enables new opportunities for startups.
04:11 Challenges in scaling up AI learning algorithms
06:17 Importance of open-source infrastructure in AI development
08:22 Advancing towards Autonomous Machine Intelligence
10:29 Training AI systems to understand the world like baby animals and humans.
12:33 Training autonomous AI models using encoder and predictor in representation space.
14:31 Future AI models will likely be one big modular system, with debate over early fusion vs late fusion for multimodal systems.
16:27 Advantages of rapid learning for AI systems

lootster

Amazing insights from Yann LeCun! THANK YOU so much for sharing!

breaktherules

Takeaway: we are closer to the beginning than the end of our LLM journey.

JustAThought

Hmm, I would have liked to hear Yann for another, hmmm, hour.

lighteningrod

Chapters (Powered by ChapterMe) -
00:00 - Amazing coincidence Llama 3 drops during meeting
01:04 - Llama 3 developer I deserve no credit
02:01 - Celebrated inventors pioneering work
02:58 - Training model with 30bn worth of GPUs staggering
04:09 - Opensourced AI Waiting for breakthroughs, no precedent
05:09 - Metas Open Source AI Faster, Secure, Communityled
08:00 - LLMs' lack of experience, V-JEPA's vision
08:31 - AI research longterm vision, open review
08:46 - Advanced machine intelligence limited reasoning, memory, planning
10:06 - Designs for AI systems that understand world
11:00 - Deep learning architectures for video prediction
13:48 - Intelligence through multimodal training
17:11 - Chia pet at Davos fun, optimistic, doomers
17:47 - Yann, well done, thank you

danecjensen

I was about to watch the "interview", but reading the comments changed my mind.

pladselsker

Sounds like we gotta combine V-JEPA with liquid networks. Liquid networks were able to drive a car with 19 neurons. Videos also tend to be continuous kinematics, since they follow the rules of physics.
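For readers wondering what "encoder and predictor in representation space" (the JEPA idea mentioned in the timestamps above) means in practice, here is a minimal NumPy sketch. All names, dimensions, and weights are illustrative stand-ins, not Meta's actual V-JEPA code; the point is only that the loss is computed between representations, not raw pixels.

```python
import numpy as np

rng = np.random.default_rng(0)

def encoder(x, W):
    """Toy encoder: a nonlinear map into representation space."""
    return np.tanh(x @ W)

def predictor(z, P):
    """Toy predictor: guesses the target representation from the context one."""
    return z @ P

# Illustrative dimensions: 64-dim "frame patches", 16-dim representations.
W = rng.normal(size=(64, 16)) * 0.1   # shared encoder weights
P = rng.normal(size=(16, 16)) * 0.1   # predictor weights

x_context = rng.normal(size=(8, 64))  # e.g. visible video patches
x_target = rng.normal(size=(8, 64))   # e.g. masked future patches

z_context = encoder(x_context, W)
z_target = encoder(x_target, W)       # in real systems, often a frozen/EMA copy

# The JEPA idea: the loss lives in representation space, unlike a
# generative model that reconstructs and scores raw pixels.
loss = float(np.mean((predictor(z_context, P) - z_target) ** 2))
print(f"representation-space loss: {loss:.4f}")
```

In a real training loop the encoder and predictor would of course be deep networks optimized by gradient descent; this sketch only shows where the prediction error is measured.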

adamgm

I question the % of the population that are actually fortunate enough to be able to make use of such "open source" models.

elhdizm

“How do we train the AI how the world works?” You let the AI play Grand Theft Auto, that’s how. 😂

eysmanobando

Running a 400 billion parameter dense model, even in an open model environment, requires substantial computational resources. The sheer size of these models demands high memory, powerful processing capabilities, and considerable costs. Given these requirements, it seems that using such a large dense model could defeat the purpose of an open model environment, which is meant to be more accessible. Does anyone else think that the accessibility of open models is compromised by these large-scale requirements?
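The commenter's point can be made concrete with back-of-envelope arithmetic. The snippet below estimates the memory needed just to hold the weights of a 400-billion-parameter dense model at a few common precisions; it deliberately ignores activations, KV cache, and framework overhead, so real requirements are higher.

```python
def inference_memory_gb(n_params, bytes_per_param):
    """Rough lower bound: parameter storage only, ignoring
    activations, KV cache, and framework overhead."""
    return n_params * bytes_per_param / 1e9

n = 400e9  # 400 billion parameters, dense

for name, bytes_per in [("fp16/bf16", 2), ("int8", 1), ("int4", 0.5)]:
    gb = inference_memory_gb(n, bytes_per)
    print(f"{name}: ~{gb:.0f} GB just for weights")
```

Even at aggressive 4-bit quantization that is roughly 200 GB of weights, far beyond a single consumer GPU, which supports the concern that "open" does not automatically mean "accessible" at this scale.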

kuf

By YouSum Live

00:02:24 Open Source Revolution.
00:05:08 Advancements in AI Architecture.
00:05:09 Importance of Open Source Infrastructure.
00:08:01 Future of Autonomous Machine Intelligence.
00:08:11 Joint Embedding Predictive Architecture (JEPA).
00:10:07 Enhancing AI with Real-World Experiences.
00:15:34 Modular Approach to Massive Models.
00:15:55 Striving for Common Sense in AI.
00:16:22 Potential of AI in Everyday Tasks.


ReflectionOcean

I believe the GPUs bought by Meta will be accounted for as assets, not expenses...

leeme

Some 12 year old will figure out how to use this and take over the planet.

drsuperhero

10:16 What makes him think there will be guarantees that ASI will be controllable?

geaca

Spacetime tokens, like in Sora, are the solution

spinningaround