Is AI Reasoning Key to Superintelligence? - Anders Sandberg

Anders Sandberg discusses AI reasoning, the new OpenAI model o1, and its reasoning capabilities.

0:00 Intro
0:08 What is interesting about OpenAI o1?
1:31 o1 scoring high on IQ tests
2:15 The 'g' factor: Human vs AGI?
7:34 Will the current LLMs lead to an AI takeoff?
8:52 AI has struggled with some problems
10:37 AI's methodology for problem solving is getting better
11:37 AI sanity checking its logic?
12:26 AI self-correcting reasoning and the slope of the intelligence explosion
13:04 AI investment returns may not track AI capability gains
13:56 AI accelerating research & the problem of hallucination
18:23 LLMs, reinforcement learning - hybrid AI
20:09 AI agents and AI safety
21:43 Hidden chain of thought reasoning
23:24 OpenAI System Card and AI Safety (biological & persuasiveness risks)
26:24 Indirect Normativity, metaethics, moral realism & AI safety
33:08 Evaluating AI for safety - translating moral truths
37:11 AI, governance, and coordination problems
43:59 Global coordination & AI
45:44 Book 'Grand Futures' in development
46:53 Book 'Law, AI and Leviathan' coming soon

#Strawberry #OpenAI #AGI

Many thanks for tuning in!
Please support SciFuture by subscribing and sharing!

Have any ideas about people to interview? Want to be notified about future events? Any comments about the STF series?

Kind regards,
Adam Ford
Comments

Many thanks to Anders Sandberg!

scfu

Ha ha hey hello Anders, what beer shall we have next? LOVE the hair.

Khannea