Prey vs Predators - preparing a bigger simulation

Optimizing my prey vs predators project for future bigger simulations.

00:00 Introduction
02:00 Data optimization
04:00 Neural Network optimization
05:40 Space partitioning
06:30 Multithreading

Comments

I want to suggest something: adding objects and obstacles to the "arena" so that the "agents" can evolve to use them to their advantage, similar to how animals have evolved to use certain land features for cover or nesting.

Space_Reptile

As a partially colorblind person, I find the new colors harder to differentiate than the old ones. I would recommend that you use light orange and dark blue, as they are the most easily distinguished colors across all forms of color blindness. I'm still glad that you bothered to think about colorblind people in the first place, though :)

jackfrederiksen

I really, really liked the visualization of the concepts you discussed in this video; these videos are amazing.

sunbear

This video is awesome! The explanatory animations are icing on the cake. What software did you use to make them?

louisdalibard

As an ML engineer, I don't think topology is crucial for interesting behaviour in such small networks. Even with a fixed topology (but with mutating weights) you can get impressive results in supervised or reinforcement-learning tasks (see the hide-and-seek multi-agent project from OpenAI, though that's not evolution). But! With a fixed topology you can store the weights simply as matrices, and the forward pass becomes a matrix multiply. With a synchronized step for all agents, you can even step every environment at once (concatenate the weights from all agents and do the matrix multiply on the GPU). With such a setup, MASSIVE simulations are usually possible.

howuhh
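A minimal sketch of the batched forward pass the comment above describes, assuming NumPy and a fixed two-layer topology shared by all agents; the layer sizes and activations are illustrative choices, not the actual network from the video.

    import numpy as np

    N_AGENTS, N_IN, N_HID, N_OUT = 1000, 5, 8, 2

    rng = np.random.default_rng(0)
    # One 3D tensor per layer: each agent's weight matrix stacked along axis 0.
    w1 = rng.normal(size=(N_AGENTS, N_IN, N_HID))   # input -> hidden
    w2 = rng.normal(size=(N_AGENTS, N_HID, N_OUT))  # hidden -> output

    def step_all(inputs):
        # inputs: (N_AGENTS, N_IN) sensor values, one row per agent.
        # Batched matmul: (N, 1, IN) @ (N, IN, HID) -> (N, 1, HID)
        h = np.maximum(inputs[:, None, :] @ w1, 0.0)   # ReLU hidden layer
        out = np.tanh(h @ w2)                          # (N, 1, OUT)
        return out[:, 0, :]                            # (N_AGENTS, N_OUT) actions

    actions = step_all(rng.normal(size=(N_AGENTS, N_IN)))
    print(actions.shape)   # (1000, 2)

The same code can run on a GPU by swapping NumPy for an array library with the same interface, such as CuPy.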

Many people will take the animated explanations for granted, but they are amazing. They do a really good job of helping to explain the concepts.

themixedmaster

Really cool to see such a concrete example of optimization done at the right time -- when it's needed. You could've easily just hand waved this in the next video, but it's awesome that you took the time to make this interstitial video 🙌.

cognisent_

I know nothing about coding or programming, but your explanations are very clear and easy to understand!
Also big props for the production quality. Those graphics are really nice and help a lot in conveying what you are doing.
Keep it up!

verbalbbq

You can optimize your multithreading even further by taking a "complexity rating" into account when queueing up tasks: right now, a long task executed at the end blocks the frame until it finishes. If you can estimate how long tasks will take, assigning the longer tasks to workers first will improve the consistency and speed of frames. You can do this either by hand-"guessing", or dynamically, using some sort of profiler and giving the tasks that took long on one frame a higher priority on the next.

grimmauld
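A minimal sketch of the longest-task-first scheduling idea from the comment above, assuming Python's standard thread pool; the sleep calls and cost estimates stand in for real simulation work and for timings recorded on the previous frame.

    import time
    from concurrent.futures import ThreadPoolExecutor, wait

    def make_task(duration):
        def task():
            time.sleep(duration)   # stand-in for real per-region simulation work
            return duration
        return task

    # (estimated_cost, task) pairs; in practice the estimate could come from
    # profiling the same task on the previous frame.
    tasks = [(0.02, make_task(0.02)), (0.08, make_task(0.08)),
             (0.01, make_task(0.01)), (0.05, make_task(0.05))]

    def run_frame(tasks, workers=2):
        # Hand the most expensive tasks to workers first, so a long task is
        # never the last thing still blocking the end of the frame.
        ordered = sorted(tasks, key=lambda t: t[0], reverse=True)
        with ThreadPoolExecutor(max_workers=workers) as pool:
            futures = [pool.submit(task) for _, task in ordered]
            wait(futures)   # the frame ends once every task has finished
            return [f.result() for f in futures]

    run_frame(tasks)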

Love this stuff, helps me with my coding.

Sharlenwar

So glad you're continuing this project

Non-disjunction

I can’t even imagine how much work went into animating this video. Awesome job! Your videos are each masterpieces.

Frap

I loved the explanations of the optimizations. So informative and concise! Your voice is very soothing. I wish you had videos simply explaining different algorithms; with animations and production of this quality, computer science students around the world would eat that up.

andrewberntson

It seems like the optimization of the neural networks was needed because of their sparse nature. But since GPUs nowadays are heavily optimized for matrix multiplications, I wonder if it would be faster to make the network fully connected instead, with the unwanted connections' weights set to 0 and frozen during training, so that the weights of each layer become a 2D array and the multiplication can be done on the GPU. Then again, I don't think the neural network is the bottleneck here anyway.

jasonbourne
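A minimal sketch of the dense-with-frozen-zeros idea from the comment above, again assuming NumPy; the 0/1 mask records which connections exist, so mutation never revives an absent connection and the forward pass stays a plain dense matrix multiply.

    import numpy as np

    rng = np.random.default_rng(0)
    N_IN, N_OUT = 5, 4

    # Dense weight matrix plus a 0/1 mask marking which connections actually exist.
    mask = (rng.random((N_IN, N_OUT)) < 0.3).astype(float)
    weights = rng.normal(size=(N_IN, N_OUT)) * mask   # absent links start at 0

    def mutate(weights, scale=0.1):
        # Mutation only touches existing connections; absent ones stay frozen at 0.
        return weights + rng.normal(scale=scale, size=weights.shape) * mask

    def forward(x, weights):
        # Ordinary dense multiply; the zeroed connections contribute nothing.
        return np.tanh(x @ weights)

    weights = mutate(weights)
    print(forward(rng.normal(size=N_IN), weights))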

I can't explain how much I love optimisation; it's so satisfying.

exanc

Hi Pezzza,
Your first video was really great; it motivated me to play a bit with evolving agents too. I ran into exactly the same problem you have here: it gets slow with a lot of agents, and the majority of the time is spent calculating the networks.
The solution that worked for me was to do all network calculations on the GPU, which allowed 60k+ agents in real time (depending on net complexity, of course). It adds more complication with the memory management, but I would assume it is the only realistic way to get a high agent count in real time; otherwise just the number of floating-point operations required for the networks will probably hit the limit of the CPU.

johnblue
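A minimal sketch of keeping the whole population's forward pass on the GPU, as the comment above suggests, assuming PyTorch is available; the memory management it mentions amounts to keeping inputs, weights, and outputs resident on the device instead of copying them every frame. The shapes are illustrative.

    import torch

    device = "cuda" if torch.cuda.is_available() else "cpu"
    N_AGENTS, N_IN, N_HID, N_OUT = 60_000, 5, 8, 2

    # Every agent's weights stay resident on the GPU between frames.
    w1 = torch.randn(N_AGENTS, N_IN, N_HID, device=device)
    w2 = torch.randn(N_AGENTS, N_HID, N_OUT, device=device)

    @torch.no_grad()
    def step_all(inputs):
        # inputs: (N_AGENTS, N_IN), already on the device.
        h = torch.relu(torch.bmm(inputs.unsqueeze(1), w1))   # (N, 1, HID)
        return torch.tanh(torch.bmm(h, w2)).squeeze(1)       # (N_AGENTS, N_OUT)

    inputs = torch.randn(N_AGENTS, N_IN, device=device)
    print(step_all(inputs).shape)   # torch.Size([60000, 2])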

Inspiring! Optimization, when it works, is probably the most satisfying part of programming

dhudetz

My guy! What great videos! Can’t believe I haven’t found you til now!

Weberbros

This stuff is amazing! I can't believe I missed the upload. I love that you're making the simulation larger.

Scrawlerism

The visuals of the data structures are gorgeous!

MarcCastellsBallesta