What is interesting to an AI agent?
AI professor Jeff Clune ruminates on open-ended evolutionary algorithms—systems designed to generate novel and interesting outcomes forever. Drawing inspiration from nature’s boundless creativity, Clune and his collaborators aim to build “Darwin Complete” search spaces, where any computable environment can be simulated. By harnessing the power of large language models and reinforcement learning, these AI agents continuously develop new skills, explore uncharted domains, and even cooperate with one another in complex tasks.
SPONSOR MESSAGES:
***
CentML offers competitive pricing for GenAI model deployment, with flexible options to suit a wide range of models, from small to large-scale deployments.
Tufa AI Labs is a brand-new research lab in Zurich started by Benjamin Crouzier, focused on reasoning and AGI. Are you interested in working on reasoning, or getting involved in their events?
They are hosting an event in Zurich on January 9th with the ARChitects; join if you can.
***
A central theme throughout Clune’s work is “interestingness”: an elusive quality that nudges AI agents toward genuinely original discoveries. Rather than rely on narrowly defined metrics—which often fail due to Goodhart’s Law—Clune employs language models to serve as proxies for human judgment. In doing so, he ensures that “interesting” always reflects authentic novelty, opening the door to unending innovation.
Yet with these extraordinary possibilities come equally significant risks. Clune says we need AI safety measures—particularly as the technology matures into powerful, open-ended forms. Potential pitfalls include agents inadvertently causing harm or malicious actors subverting AI’s capabilities for destructive ends. To mitigate this, Clune advocates for prudent governance involving democratic coalitions, regulation of cutting-edge models, and global alignment protocols.
Jeff Clune:
(Interviewer: Tim Scarfe)
TOC:
1. Introduction
[00:00:00] 1.1 Overview and Opening Thoughts
2. Sponsorship
[00:03:00] 2.1 Tufa AI Labs and CentML
3. Evolutionary AI Foundations
[00:04:12] 3.1 Open-Ended Algorithm Development and Abstraction Approaches
[00:07:56] 3.2 Novel Intelligence Forms and Serendipitous Discovery
[00:11:46] 3.3 Frontier Models and the 'Interestingness' Problem
[00:30:36] 3.4 Darwin Complete Systems and Evolutionary Search Spaces
4. System Architecture and Learning
[00:37:35] 4.1 Code Generation vs Neural Networks Comparison
[00:41:04] 4.2 Thought Cloning and Behavioral Learning Systems
[00:47:00] 4.3 Language Emergence in AI Systems
[00:50:23] 4.4 AI Interpretability and Safety Monitoring Techniques
5. AI Safety and Governance
[00:53:56] 5.1 Language Model Consistency and Belief Systems
[00:57:00] 5.2 AI Safety Challenges and Alignment Limitations
[01:02:07] 5.3 Open Source AI Development and Value Alignment
[01:08:19] 5.4 Global AI Governance and Development Control
6. Advanced AI Systems and Evolution
[01:16:55] 6.1 Agent Systems and Performance Evaluation
[01:22:45] 6.2 Continuous Learning Challenges and In-Context Solutions
[01:26:46] 6.3 Evolution Algorithms and Environment Generation
[01:35:36] 6.4 Evolutionary Biology Insights and Experiments
[01:48:08] 6.5 Personal Journey from Philosophy to AI Research
Shownotes:
We craft detailed show notes for each episode, with a high-quality transcript, references, and the best parts bolded.
CORE REFS:
[00:02:35] POET: Generating/solving complex challenges | Wang, Lehman, Clune, Stanley
[00:11:10] Why Greatness Cannot Be Planned | Stanley, Lehman
[00:17:05] Automated capability discovery in foundation models | Lu, Hu, Clune
[00:18:10] NEAT: NeuroEvolution of Augmenting Topologies | Stanley, Miikkulainen
[00:26:50] Novelty search vs objective-based optimization | Lehman, Stanley
[00:28:55] AI-generating algorithms approach to AGI | Jeff Clune
[00:41:10] Learning Minecraft from human gameplay videos (VPT) | Baker, Akkaya et al.
[00:44:00] Thought Cloning: Imitating human thinking | Hu, Clune
[01:15:10] Automated Design of Agentic Systems (ADAS) | Hu, Lu, Clune
[01:32:30] OMNI-EPIC: Open-endedness via Models of human Notions of Interestingness | Faldor, Zhang, Cully, Clune