GPU Audio: Building the Accelerated Audio Computing Industry of Tomorrow - ADC22

GPU Audio: Building the Accelerated Audio Computing Industry of Tomorrow - Alexander Talashov and Jonathan Rowden - ADC22

What do the GPU industry and pro audio have in common? For years, the answer was "not much" to just about everyone. But the dreamers kept dreaming, and quietly, a solution was being built over many years. In 2020, during the height of the pandemic, an entirely remote team of professional musicians, engineers, computer scientists, and GPU architects came out of the shadows to begin an open dialogue with both industries about how it was time for audio processing to get a serious upgrade. In this brief promotional talk, the co-founders of GPU Audio (Alexander Talashov and Jonathan Rowden) will share an overview of their vision of building an accelerated audio computing industry niche powered by GPUs, and invite the audience to be a part of that journey.
What's been done as of today?
- GPU Audio Tech Stack
- First products: Early access and beta
- First reveals: New products and features

What will we do in the midterm?
- SDK release (premiering a portion here at ADC Hands-On)
- Spatial Audio (Mach1 technologies collaboration)
- More integrations

How do we define the future of this technology?

What is required to bring real-time accelerated audio computing to ML-powered algorithms on the GPU?
- Machine Learning Frontend, Backend Implementation and API
- Developer Community to Launch by 2023

Growth and Opportunity
- Partnership Opportunities and Hiring
- Vertical integration: how GPU Audio is impacting audio broadly

Slides: link will be updated when available.
_

Alexander Talashov

GPU AUDIO
_

Jonathan Rowden

Hello ADC community, my name is Jonathan Rowden and I am the CBO and co-founder of GPU AUDIO, a new core-technology company focused on unlocking GPU-based parallel processing for greatly accelerated real-time and offline DSP tasks. Our mission is to provide a new backbone of processing for audio developers large and small, and to enable a new era of accelerated computing standards, empowering everything from traditional DSP models to Machine Learning- and AI-based plugins, DAWs, and cloud collaboration.
_

Special thanks to the ADC22 Team:

Lina Berzinskas
Sophie Carus
Derek Heimlich
Andrew Kirk
Bobby Lombardi
Tom Poole
Ralph Richbourg
Jim Roper
Jonathan Roper

#audiodevcon #audiodev #gpucomputing
Comments

For many years now, I have been involved with projects porting audio DSP algorithms to GPUs, running on NVIDIA Tesla hardware. We needed to port IIR code as well, and were already familiar with many techniques in that field.
The absolute ideal case for IIR algorithms on GPUs is when large delays are involved (like FDNs), and even then a 10x speedup is almost unattainable compared to a well-vectorized CPU implementation. This is because individual GPU threads are *really* slow, so your parallelization factor needs to be in the triple digits before any meaningful performance benefits start to appear.
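The sequential dependency behind this can be sketched in plain Python with a hypothetical one-pole filter (an illustration, not GPU Audio's code): every output sample feeds into the next one, so the time loop cannot simply be split across GPU threads without algorithmic reformulation.

```python
def one_pole_iir(x, a):
    """y[n] = x[n] + a * y[n-1] -- a one-sample feedback path.

    Each iteration reads the previous iteration's result, so the
    loop over time is inherently sequential; a GPU cannot assign
    one thread per sample here without restructuring the math.
    """
    y = []
    prev = 0.0
    for sample in x:
        prev = sample + a * prev  # depends on the previous output
        y.append(prev)
    return y

# Impulse response: each tap is `a` times the previous one.
print(one_pole_iir([1.0, 0.0, 0.0, 0.0], 0.5))  # [1.0, 0.5, 0.25, 0.125]
```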

Any other IIR algorithm (like filters) with feedback paths that are only a few samples (or one sample) long will run much slower on the GPU - roughly two orders of magnitude slower.
So if the cited 50x speedup was really attained for "traditionally sequential" IIR algorithms, this company has achieved a major scientific breakthrough with implications far beyond the GPU, one that will be cited alongside the invention of the FFT as one of the most significant advancements in the field of DSP. Looking forward to more info on that, and to any independent benchmarks/comparisons.

Of course, another way to achieve parallelization is to run many copies of the non-parallel algorithm. And by many, I mean several hundred. That might work for channel strips, but maybe not so well for modulation effects. You probably don't want hundreds of instances of the same chorus effect, at least not in a DAW.
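The instance-level parallelism described above can be sketched the same way (again a hypothetical one-pole filter): each channel's recurrence stays sequential in time, but the channels are independent of each other, so on a GPU the outer loop would map to the thread grid.

```python
def batch_one_pole(channels, a):
    """Run the same one-pole IIR over many independent channels.

    The outer loop is embarrassingly parallel (one GPU thread per
    channel); the inner loop is the unavoidable sequential time axis.
    Meaningful GPU utilization needs hundreds of channels.
    """
    out = []
    for x in channels:          # parallel axis: independent instances
        y, prev = [], 0.0
        for sample in x:        # sequential axis: feedback over time
            prev = sample + a * prev
            y.append(prev)
        out.append(y)
    return out

# Two independent channels, filtered with the same coefficient.
print(batch_one_pole([[1.0, 0.0], [0.0, 1.0]], 0.5))
# [[1.0, 0.5], [0.0, 1.0]]
```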

Modicto