Accelerating Stable Diffusion Inference on Intel CPUs with Hugging Face (part 2) 🚀 🚀 🚀

In this video, you will learn how to accelerate image generation on an Intel Sapphire Rapids server. Using Stable Diffusion models, the Intel Extension for PyTorch, and system-level optimizations, we're going to cut inference latency from over 36 seconds to 5 seconds!

⭐️⭐️⭐️ Don't forget to subscribe to be notified of future videos ⭐️⭐️⭐️
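To make the "36+ seconds to 5 seconds" claim concrete, here is a minimal sketch of how inference latency is typically measured: a few untimed warm-up calls, then an averaged timed loop. The `generate` function below is a hypothetical stand-in (stdlib only, so it runs anywhere); the comments show where the real Stable Diffusion pipeline and the Intel Extension for PyTorch optimization would go, under the assumption that `diffusers` and `intel_extension_for_pytorch` are installed.

```python
import time

def benchmark(fn, *, warmup=2, runs=5):
    """Average latency in seconds over `runs` calls, after `warmup` untimed calls."""
    for _ in range(warmup):
        fn()  # warm-up: populate caches, trigger lazy initialization
    start = time.perf_counter()
    for _ in range(runs):
        fn()
    return (time.perf_counter() - start) / runs

# Hypothetical stand-in for one image-generation call. On a real setup this
# would be something like (names assume diffusers + IPEX are installed):
#   import torch, intel_extension_for_pytorch as ipex
#   from diffusers import StableDiffusionPipeline
#   pipe = StableDiffusionPipeline.from_pretrained("runwayml/stable-diffusion-v1-5")
#   pipe.unet = ipex.optimize(pipe.unet.eval(), dtype=torch.bfloat16)
#   with torch.inference_mode(), torch.autocast("cpu", dtype=torch.bfloat16):
#       image = pipe("a photo of an astronaut riding a horse").images[0]
def generate():
    time.sleep(0.01)  # placeholder for the actual inference work

latency = benchmark(generate, warmup=1, runs=3)
print(f"average latency: {latency:.3f} s")
```

Warm-up runs matter here: the first call through an optimized graph is often much slower than steady state, so timing it would inflate the reported latency.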

Comments

Beautiful optimization lesson, thanks!
Would it make sense to combine both approaches? Or is OpenVINO actually performing some of those optimizations behind the scenes?

luiztauffer

I can't believe you're running this on a Mac, omg. Is this M1 silicon?

mellio