16 - Simulation on the GPU

I teach you all you need to know to write your own physics simulations on the GPU! I use Python together with NVIDIA's Python extension Warp as a simple way to write GPU simulations.

Links:

Ten Minute Physics Page:
Nvidia Warp Documentation:
Warp on GitHub:
Installing Python with Visual Studio Code:
Python extensions to get PyOpenGL:
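
To give a feel for how compact a Warp-based GPU simulation can be, here is a minimal sketch of an integration kernel. This is not the code from the video; the kernel and variable names are illustrative, and it assumes Warp is installed (pip install warp-lang).

import warp as wp

wp.init()  # initialize Warp (uses CUDA if available, otherwise falls back to the CPU)

@wp.kernel
def integrate(pos: wp.array(dtype=wp.vec3),
              vel: wp.array(dtype=wp.vec3),
              dt: float):
    # one thread per particle: symplectic Euler under gravity
    tid = wp.tid()
    vel[tid] = vel[tid] + wp.vec3(0.0, -9.81, 0.0) * dt
    pos[tid] = pos[tid] + vel[tid] * dt

num_particles = 1024
pos = wp.zeros(num_particles, dtype=wp.vec3)
vel = wp.zeros(num_particles, dtype=wp.vec3)

# launch one GPU thread per particle and wait for the result
wp.launch(integrate, dim=num_particles, inputs=[pos, vel, 1.0 / 60.0])
wp.synchronize()
print(pos.numpy()[0])  # copy back to the host for inspection
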
Comments

This is really incredible content.
My deepest gratitude.

sean_vikoren

In a paper (and in the Eurographics '17 presentation slides by Macklin) I've seen the word 'tail' used in the context of a hybrid method, but I had no clue what that tail meant without seeing the distribution graph, and thus couldn't understand what "process everything else with Jacobi" meant in the slide. Finally I found the answer!
Your tutorial videos are like a gift to anyone who struggles to understand PBD on their own outside the academic field. I've read someone saying that they are a great supplement to the PBD papers, and I totally agree. And not only for PBD: the tutorials help me learn to implement physics simulations well in general. I deeply appreciate you sharing this knowledge.

steelrainbow

This is so great, thank you so much! It will be super helpful when I self-study physically based animation next semester.

aditya_a

Love it. Keep making more! Especially more stuff with shaders! Thanks.

imaginingPhysics

This series is truly amazing! Thank you for your effort and content!

vn

New subscriber here. I have to commend you for citing a previous video (#9) and for numbering your videos so that "#9" can actually be found! That's a great touch; I wish other YouTubers did that. 😎

fizixx

Awesome example! I just submitted a PR to the pages repo which improves the framerate by ~15fps on my 3080 (with a minor tweak to the targetFps timing code). Thank you for these informative examples!

sublucid

Looking forward to watching the next videos.

seeyouinthenextlifeprobably

What timing; just yesterday I left a comment wondering how parallelization issues like these are solved by the pros.
Okay, so now how do you do self-collision? I feel like you doing the hashing only every N substeps was foreshadowing something ...
But excellent stuff, thank you.

anomyymi

Really looking forward to the next tutorial 😶

okifunearl

Thank you very much, sir, for this amazing tutorial.

zeyingxu

This is exactly what I've been looking for! I couldn't find anything on doing physics on GPUs. I think one of the reasons is that the constraints contain branching, which GPUs aren't good at. I'm not sure if that's correct, so please let me know if I'm wrong there. Also, I'm wondering why you move the positions off the GPU in order to render them. Aren't they already in the perfect spot to be rendered? The GPU is where the particle shaders run, after all. If I'm confused, please let me know, thanks!

samuelyigzaw

I watched all of the previous tutorials to finally get here. I was hoping to see shader code and some fragment-only implementation (think shader code). Instead it looks like mostly Python wrapped to then run compute shaders?

Veptis

Why are you copying the positions back to the CPU, just to push them back to the GPU again?

DasAntiNaziBroetchen

I am new to this, but I just want to know why conjugate gradient can't be used to solve the constraints in parallel.

naztar

It seems that scaling the correction vectors by 1/4 is prone to explosions in highly detailed tetrahedral meshes. To compensate, I scale the correction vectors by 1/(num adjacent constraints), but as you state in the notes, this causes a violation of energy conservation (which I am observing in my implementation). Do you have any tips on how to recover from this while keeping the simulation stable?

oostenjadenhtv
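
Regarding the comment above, here is a minimal sketch of that 1/(number of adjacent constraints) scaling, written against plain Warp arrays. The kernel and array names are illustrative assumptions, not code from the video or the course notes.

import warp as wp

@wp.kernel
def count_adjacent(ids: wp.array(dtype=wp.int32),     # two particle indices per constraint
                   counts: wp.array(dtype=wp.int32)):
    cid = wp.tid()
    wp.atomic_add(counts, ids[2 * cid + 0], 1)
    wp.atomic_add(counts, ids[2 * cid + 1], 1)

@wp.kernel
def apply_corrections(pos: wp.array(dtype=wp.vec3),
                      corr: wp.array(dtype=wp.vec3),
                      counts: wp.array(dtype=wp.int32)):
    tid = wp.tid()
    if counts[tid] > 0:
        # divide the accumulated Jacobi correction by the number of
        # constraints touching this particle instead of a fixed 1/4
        pos[tid] = pos[tid] + corr[tid] / float(counts[tid])

For a fixed topology, counts only needs to be filled once; corr is zeroed before each solver pass.
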

Great video! Warp sounds like a wonderful invention. But the graph edge colouring is still a bit unclear to me. Could you make a separate video on that?

r.d.

It's unfortunate that you use X for position and P also for position. This is very confusing for me, because in physics P is usually used for momentum, which for a particle is the velocity vector times the mass.

perpetualrabbit

Matthias, great video. I would like to run the code on a Google Colab runtime. Do you have any hints? Thanks!

WinstonDodson

Really fantastic class. I've been following every new video. It's really generous of you; I've been learning a lot and I'm very grateful.
I have a question: I see that colored graphs are used to parallelize the spring solver without using "mutexes" (I'm not a GPU expert, so I'm not sure that's the right terminology).
When the Jacobi solver is used instead (for stretch/bend springs, from what I can understand), constraints are not colored, and the result is written to a temporary delta-corrections array right away.
What I don't understand is: if we don't use colored constraints here, won't I still run into race conditions when writing to the temporary corrections array? I didn't see any mutex in the code to avoid this in this specific case.
Thank you again!
EDIT:
Sorry, after re-watching the video, I realized that the atomic add command takes care of thread synchronization.

ZioBlu
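
To make the resolution in the EDIT above concrete, here is a minimal sketch of a Jacobi-style distance-constraint kernel that accumulates into a temporary corrections array with atomic adds, so no graph coloring is needed for this pass. It assumes plain Warp arrays; the names are illustrative and not taken from the video's code.

import warp as wp

@wp.kernel
def solve_distances_jacobi(pos: wp.array(dtype=wp.vec3),
                           inv_mass: wp.array(dtype=wp.float32),
                           ids: wp.array(dtype=wp.int32),        # two particle indices per constraint
                           rest_len: wp.array(dtype=wp.float32),
                           corr: wp.array(dtype=wp.vec3)):
    cid = wp.tid()
    i = ids[2 * cid + 0]
    j = ids[2 * cid + 1]
    d = pos[j] - pos[i]
    l = wp.length(d)
    w = inv_mass[i] + inv_mass[j]
    if l > 0.0:
        if w > 0.0:
            s = (l - rest_len[cid]) / (l * w)
            # many constraints may touch the same particle, so the threads
            # accumulate into a temporary array with atomic adds instead of
            # writing pos directly
            wp.atomic_add(corr, i, d * (s * inv_mass[i]))
            wp.atomic_sub(corr, j, d * (s * inv_mass[j]))

A second kernel then adds the accumulated corr array to pos (typically scaled, as discussed in an earlier comment), and the array is zeroed again, e.g. with corr.zero_(), before the next iteration.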