POET: Endlessly Generating Increasingly Complex and Diverse Learning Environments and Solutions

From the makers of Go-Explore, POET is a mixture of ideas from novelty search, evolutionary methods, open-ended learning and curriculum learning.

Abstract:
While the history of machine learning so far largely encompasses a series of problems posed by researchers and algorithms that learn their solutions, an important question is whether the problems themselves can be generated by the algorithm at the same time as they are being solved. Such a process would in effect build its own diverse and expanding curricula, and the solutions to problems at various stages would become stepping stones towards solving even more challenging problems later in the process. The Paired Open-Ended Trailblazer (POET) algorithm introduced in this paper does just that: it pairs the generation of environmental challenges and the optimization of agents to solve those challenges. It simultaneously explores many different paths through the space of possible problems and solutions and, critically, allows these stepping-stone solutions to transfer between problems if better, catalyzing innovation. The term open-ended signifies the intriguing potential for algorithms like POET to continue to create novel and increasingly complex capabilities without bound. Our results show that POET produces a diverse range of sophisticated behaviors that solve a wide range of environmental challenges, many of which cannot be solved by direct optimization alone, or even through a direct-path curriculum-building control algorithm introduced to highlight the critical role of open-endedness in solving ambitious challenges. The ability to transfer solutions from one environment to another proves essential to unlocking the full potential of the system as a whole, demonstrating the unpredictable nature of fortuitous stepping stones. We hope that POET will inspire a new push towards open-ended discovery across many domains, where algorithms like POET can blaze a trail through their interesting possible manifestations and solutions.
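The abstract's core loop (pair each environment with an agent, mutate solved environments into new challenges, optimize each agent locally, and transfer agents between pairs when they score better elsewhere) can be sketched in a toy form. Everything below is illustrative: the names (`Pair`, `poet_step`, `optimize`) are made up, environments are reduced to a single difficulty scalar, agents to a single skill scalar, and a random hill-climber stands in for the evolution strategies used in the paper.

```python
import random

random.seed(0)

class Pair:
    """One environment-agent pair, POET's basic unit (toy version)."""
    def __init__(self, env, agent):
        self.env = env      # environment difficulty parameter
        self.agent = agent  # agent "skill" scalar

def evaluate(agent, env):
    """Score of an agent in an environment (higher is better)."""
    return agent - env

def optimize(pair, steps=10, lr=0.1):
    """Locally improve the agent on its own environment (stand-in for ES)."""
    for _ in range(steps):
        candidate = pair.agent + random.uniform(0, lr)
        if evaluate(candidate, pair.env) > evaluate(pair.agent, pair.env):
            pair.agent = candidate

def poet_step(population, max_pop=5):
    # 1. Mutate environments of solved pairs into harder child environments.
    children = []
    for p in population:
        if evaluate(p.agent, p.env) >= 0:  # environment is solved
            children.append(Pair(p.env + random.uniform(0, 0.5), p.agent))
    population.extend(children)
    population[:] = population[-max_pop:]  # cap the population size

    # 2. Optimize each agent within its paired environment.
    for p in population:
        optimize(p)

    # 3. Transfer: adopt another pair's agent if it scores better here.
    for p in population:
        best = max(population, key=lambda q: evaluate(q.agent, p.env))
        if evaluate(best.agent, p.env) > evaluate(p.agent, p.env):
            p.agent = best.agent

population = [Pair(env=0.0, agent=0.0)]
for _ in range(20):
    poet_step(population)

print(max(p.env for p in population))  # difficulty grows over the run
```

The transfer step is the part the abstract calls "critical": in this toy, skills earned on easy environments immediately seed the harder ones, which is the stepping-stone effect in miniature.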

Authors: Rui Wang, Joel Lehman, Jeff Clune, Kenneth O. Stanley

Links:
Comments:

I would be interested in seeing some kind of Differentiable Adversarial Trainer.

There would be two agents: a normal reinforcement learning agent and an adversarial trainer agent. The trainer would generate levels such that the normal agent gets a very specific score, and it gets penalized if the normal agent outperforms or underperforms on the level. This way it would keep things challenging but not impossible.
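The commenter's idea can be sketched in a few lines. All names here are hypothetical, and a closest-to-target search over candidate difficulties stands in for the differentiable trainer: the trainer's loss is the squared distance between the agent's score and a target, so it is penalized for levels that are too easy or too hard.

```python
TARGET_SCORE = 0.5  # the "very specific score" the trainer aims for

def agent_score(skill, difficulty):
    """Toy stand-in: performance drops linearly with level difficulty."""
    return max(0.0, skill - difficulty)

def trainer_loss(skill, difficulty):
    """Penalize both over- and underperformance relative to the target."""
    return (agent_score(skill, difficulty) - TARGET_SCORE) ** 2

def pick_difficulty(skill, candidates):
    """Trainer chooses the level whose expected score is closest to target."""
    return min(candidates, key=lambda d: trainer_loss(skill, d))

candidates = [i / 10 for i in range(11)]
print(pick_difficulty(1.0, candidates))  # prints 0.5: score lands on target
```

As the agent's skill grows, the chosen difficulty shifts upward to keep the score pinned near the target, which is exactly the self-adjusting curriculum the comment describes.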

herp_derpingson

Wow so many papers in rapid succession.

herp_derpingson

Thanks for introducing the paper! Learned a lot!

tenghuilai

Do you have a patreon or something we can use to support you?

petroschristodoulou

Amazing video as always Yannic! Looking forward to catching up tomorrow on this 😃

machinelearningdojo