A First Look At Raytraced Audio

Register your interest in Raytraced Audio here:

Source code, demos and animations are available here:

More free demos are available here:

I spent the past 7 years creating a game engine, and I figured out a way to use raytracing to get realistic 3D audio. It can also be used to visualise sound for deaf people.

Music is Shadows, Abberations and Irradiant by Disjoint Square:

Thank you to pyjamsy, Lachy and Nick Iles for helping with the audio for my videos:

Thank you to Rui for The Finals intro clip.

Timestamps
0:00 Intro
0:20 Tricking Your Brain
0:50 The Problem
1:40 Raytracing
2:40 Echo
4:17 Weather
5:07 Permeation
5:46 Deaf Visualisation
6:48 Requirements
7:15 Optimisation
7:37 Code

#gamedev #gamedevelopment #gameengine
Comments

This video already has 2100 comments; I will get to them all eventually! Here are some answers to the common questions:

- This video took me about 300 hours to make, and the animations are all done in my engine with C# and OpenGL

- There are many Raytraced Audio demos on my channel, but they're a few years old. They still sound awesome but I've made many improvements since then. I'll get a new demo ready, as well as release a sandbox for you to run and experiment with yourself

- One reason this runs on the CPU - and not a compute shader on the GPU - is that Raytraced Audio doesn't need many rays. These demos use 1000 rays, whereas raytraced graphics casts one ray per pixel; on a 1920x1080 screen, that's about 2 million rays per frame.
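
For a sense of how few rays that is, here's a minimal, hypothetical sketch (not the engine's actual code) of generating ~1000 evenly spread ray directions on the CPU with a Fibonacci sphere:

```csharp
// Hypothetical sketch: ~1000 uniformly distributed ray directions via a
// Fibonacci sphere. All names here are illustrative assumptions.
using System;
using System.Numerics;

static class AudioRays
{
    public static Vector3[] FibonacciSphere(int count)
    {
        var dirs = new Vector3[count];
        double golden = Math.PI * (3.0 - Math.Sqrt(5.0)); // golden angle
        for (int i = 0; i < count; i++)
        {
            double y = 1.0 - 2.0 * (i + 0.5) / count; // evenly spaced in -1..1
            double r = Math.Sqrt(1.0 - y * y);        // radius at that height
            double theta = golden * i;
            dirs[i] = new Vector3(
                (float)(Math.Cos(theta) * r),
                (float)y,
                (float)(Math.Sin(theta) * r));
        }
        return dirs;
    }
}
```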

- One issue with Rainbow 6 Siege's original audio system was that they changed the 3D position of the sounds based on the raytracing results. I used to do that in my earlier demos, but ditched it because it was confusing and unrealistic. In real life we receive sound waves from all different directions, but our brains are still able to accurately pinpoint where the sound is in 3D space. Changing the positions of sounds in games confuses us, and it's better to let our brains do the heavy lifting instead.

- Materials are on my list. Right now I blend between a wooden_smallroom and a wooden_hall reverb preset based on the length of the blue rays. I'd love for the rays to learn about the materials around the player and construct the reverb properties from scratch instead. Materials would also affect permeation, as different materials block different frequencies.
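
To illustrate that blending idea, here's a rough sketch; the preset fields, room-size constants and names are all made up, not the engine's actual code:

```csharp
// Sketch of lerping two reverb presets by average probe-ray length.
using System;

struct ReverbPreset
{
    public float DecayTime; // seconds of reverb tail
    public float WetLevel;  // 0..1 reverb mix

    public static ReverbPreset Lerp(ReverbPreset a, ReverbPreset b, float t) => new ReverbPreset
    {
        DecayTime = a.DecayTime + (b.DecayTime - a.DecayTime) * t,
        WetLevel  = a.WetLevel  + (b.WetLevel  - a.WetLevel) * t,
    };
}

static class ReverbBlend
{
    // Map average ray length onto a 0..1 blend between small room and hall.
    public static ReverbPreset FromRayLength(float avgRayLength,
        ReverbPreset smallRoom, ReverbPreset hall,
        float smallRoomSize = 4f, float hallSize = 30f)
    {
        float t = Math.Clamp(
            (avgRayLength - smallRoomSize) / (hallSize - smallRoomSize), 0f, 1f);
        return ReverbPreset.Lerp(smallRoom, hall, t);
    }
}
```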

- The voxel-ised version of the world will support rotated models, so that they match the angle of the ground/walls/etc and correctly reflect sounds. Since this Raytraced Audio system was built for Sector's Edge - a voxel game - I didn't need to consider rotated models until now. I have some pretty cool ideas to keep the raytracing super fast, even when everything isn't constrained to a single voxel grid.

- Diffraction (bendy rays) sounds very interesting, as does splitting rays into multiple rays when they bounce (e.g. rough surfaces vs shiny flat metal). I have added both to my list of features to explore.

Vercidium

You should also add material-based absorption, for example wood absorbing sound better than stone or metal. The more times a sound ray bounces off wood, the quieter it gets, and once a ray's energy drops below a certain threshold, it disappears.
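
A minimal sketch of that suggestion, with invented reflectivity values and cull threshold:

```csharp
// Per-material absorption: each bounce scales the ray's energy, and rays
// below a threshold are culled. All values here are illustrative guesses.
enum Material { Wood, Stone, Metal }

static class Absorption
{
    const float CullThreshold = 0.01f;

    // Fraction of energy *kept* after one bounce (wood absorbs the most).
    static float Reflectivity(Material m) => m switch
    {
        Material.Wood  => 0.6f,
        Material.Stone => 0.9f,
        Material.Metal => 0.95f,
        _ => 0.8f,
    };

    // Returns false when the ray is too quiet to matter and should be dropped.
    public static bool Bounce(ref float energy, Material surface)
    {
        energy *= Reflectivity(surface);
        return energy >= CullThreshold;
    }
}
```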

infernoplayzgames

I can't help but imagine this kind of audio finesse and the realism it would add to VR.

thesheep

the deaf accessibility is incredible, alongside everything else of course

aelamf

I love the fact that you considered deaf people and then even colorblindness on top of deafness

chybanie

Audio engineer here. Lower frequencies tend to propagate more evenly, and that's crucial for distance perception. Say we have a speaker playing a full-range song: higher frequencies come out in a narrower cone, while lower frequencies radiate far more evenly in all directions, and that's where the losses are. We use this effect to place instruments in the mix, adding/removing the "presence" effect. It really sells the distance.

I'm not sure how you'd do that, because the number of computations would increase dramatically (at least I think so), but I think you could approximate it by cutting lower frequencies up to around 400 Hz based on the distance the ray travelled.

Also, my guess is this is intended primarily for headphones, so to better sell the sense of direction you could apply post-processing that cuts/sums lower frequencies to mono, since in reality we can't hear low frequencies in only one ear, and with sub-bass (around 60 Hz and lower) we struggle to determine the direction of the sound.
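
A rough sketch of the distance-based low cut: a one-pole high-pass whose cutoff rises with travelled distance, capped near 400 Hz. All constants and names are guesses, and the mono bass summing would be a similar post-processing pass:

```csharp
// One-pole high-pass with a distance-driven cutoff (illustrative only).
using System;

static class DirectionPostFx
{
    // y[n] = a * (y[n-1] + x[n] - x[n-1]), a = RC / (RC + dt)
    public static void HighPass(float[] samples, float cutoffHz, float sampleRate)
    {
        float rc = 1f / (2f * MathF.PI * cutoffHz);
        float a = rc / (rc + 1f / sampleRate);
        float prevX = 0f, prevY = 0f;
        for (int i = 0; i < samples.Length; i++)
        {
            float y = a * (prevY + samples[i] - prevX);
            prevX = samples[i];
            samples[i] = prevY = y;
        }
    }

    // Cutoff rises with distance, capped at ~400 Hz (values are guesses).
    public static float CutoffForDistance(float metres) =>
        MathF.Min(400f, 20f + 4f * metres);
}
```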

Also, will there be HRTF? Ray tracing is great, but it doesn't simulate comb filtering (since it simulates particles, not waves), nor cross-feed based on skull bone absorption.

Anyway, this is some great tech and I'd love to try it in a first-person POV game!

bukafm

Another thing to take into account is that lower frequencies should penetrate materials more than higher frequencies. It would also be interesting to account for diffraction, where lower frequencies again diffract more than higher ones, though the effect may not be worth the added complexity/compute. Maybe you could split each ray, after some number of steps or after passing through a small opening, into multiple rays pointing in a similar direction with a diffraction angle offset.
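
A toy sketch of that splitting idea, with an invented 'spread' parameter standing in for the diffraction angle offset:

```csharp
// Replace one ray with several children scattered around the parent
// direction, e.g. after passing through a small opening. Illustrative only.
using System;
using System.Numerics;

static class RaySplit
{
    static readonly Random Rng = new Random();

    public static Vector3[] Split(Vector3 direction, float spread, int count)
    {
        var children = new Vector3[count];
        for (int i = 0; i < count; i++)
        {
            // Jitter the parent direction; larger spread = wider cone.
            var jitter = new Vector3(
                (float)(Rng.NextDouble() - 0.5),
                (float)(Rng.NextDouble() - 0.5),
                (float)(Rng.NextDouble() - 0.5)) * spread;
            children[i] = Vector3.Normalize(direction + jitter);
        }
        return children;
    }
}
```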

oDonglero

I made an audio ray tracer a while back which did exactly this, but used more of the tracer's output to get even more realistic audio. Not only did I get delay, echo and volume change, I got resonances, muffling, reverb etc. for free. Every effect could be represented this way. The algorithm that takes the ray tracing data and produces highly realistic sound, as if it were played in real life, doesn't use any approximation or panning based on the average direction. I called the algorithm DCDBR (Dual-Channel Dynamic Buffer Redistribution) and all it does, no trickery, is distribute buffers of audio within a larger buffer according to the ray tracing data. There is nothing really fancy about it either. The ray tracer also accounts for material absorption, and I've thought about adding transmission too.

To give some perspective on the performance: DCDBR runs at a roughly constant speed that doesn't depend on the maximum delay, only on how many ray tracing "samples" are used. For a small room it runs at 0.08 ms per 23 ms buffer (1024 samples at a 44.1 kHz sample rate); for a large hall with a 3-second max delay, about 0.12 ms. I've been developing this for some time now and part of it is already available in my game engine (not the ray tracer, only the DCDBR and the slightly worse DBR).

There is another thing I've worked on: deconvolving a reference audio signal against a real-life recording to create samples that can be fed into the DCDBR, making any audio sound exactly how it would have sounded if it were played from the same speaker as in the recorded reference. The nice part is that you only have to deconvolve a reference once, then take the result and use it in-game.

I think this could be game-changing, considering the performance and realism of the result.

Since I can't seem to post direct links, I edited this instead. The repo for the game engine is called PolyrayGameEngine and the algorithm is in the audio directory, called DCDBRNative.
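
For readers who want the gist without digging through the repo, here is a naive sketch of the general buffer-redistribution idea described above. This is not the actual DCDBR; all names are hypothetical:

```csharp
// Each traced path contributes a delayed, attenuated copy of the source
// buffer into a larger output buffer (a sparse impulse response, in effect).
struct TracedPath
{
    public int DelaySamples; // path length converted to samples
    public float Gain;       // energy remaining after bounces
}

static class BufferRedistribution
{
    // Assumes every path's DelaySamples <= maxDelay.
    public static float[] Render(float[] source, TracedPath[] paths, int maxDelay)
    {
        var output = new float[source.Length + maxDelay];
        foreach (var p in paths)
            for (int i = 0; i < source.Length; i++)
                output[i + p.DelaySamples] += source[i] * p.Gain;
        return output;
    }
}
```

Note that, as the commenter says of their algorithm, the cost of this naive version depends on the number of paths and samples, not on the maximum delay itself.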

alexlindgren

This is incredible stuff man, especially helping the deaf "see" sounds. People often forget how important sound is in games, love to see this.

Jasonbean

There's so much focus on graphical fidelity even though graphics have most definitely peaked and are plateauing. Sound design and the auditory experience in games is such an insanely underrated area!
Not just music, but the actual sound physics you hear in games is something I'd love to see more innovation in!

Psrj-ad

Everyone's already said how amazing the raytracing is, but I want to say how absolutely GORGEOUS your visuals are. They are so fluid and organic; it feels like I'm watching a futuristic holodeck come into and out of reality every time the floor is created. It's phenomenal, and so beautiful I can't even comprehend how long it all took. And the fact you wrote it all yourself? So cool. This above anything else inspires me to continue programming.

shovelsquid

Ironically, the whole visualizing-sound thing could lend itself to a Daredevil game someday.
Great stuff here mate, and a top-notch presentation of it too.

MisterSynonym

I'm almost 100% certain that raytraced audio will have a much bigger impact on the immersiveness and feel of games than any raytraced lighting.
Just like in movies: you won't care much if the image is subpar as long as the audio is great, but as soon as you cripple the audio, no amount of visual fidelity will keep you immersed.

igor_mkv

This is honestly more revolutionary for immersion than the graphical upgrades that get marginally better while demanding ridiculous computing power. Keep up the amazing work!

Deeren

As an audio engineer, I noticed one thing that seems to have been overlooked, and that I haven't seen anyone else address: the dropoff of your "echo". The rays seem to live forever, when they should be weakening, because sound waves gradually lose energy, even more so when they reflect off a surface.
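
A small sketch of that dropoff, with invented coefficients for air absorption and per-bounce loss:

```csharp
// Energy decays exponentially with travelled distance and loses a fixed
// fraction at every reflection. Both constants are illustrative guesses.
using System;

static class RayEnergy
{
    const float AirAbsorptionPerMetre = 0.02f; // made-up coefficient
    const float ReflectionLoss = 0.25f;        // fraction lost per bounce

    public static float AfterSegment(float energy, float metres) =>
        energy * MathF.Exp(-AirAbsorptionPerMetre * metres);

    public static float AfterBounce(float energy) =>
        energy * (1f - ReflectionLoss);
}
```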

minecraftWithDanielD

This sounds like what Aureal was trying to do back in the early 2000s. It was so good, in fact, that Creative Labs sued Aureal out of business and then bought up all their assets for next to nothing. Had Aureal not been a startup, and had they had the funds to fight the lawsuit, they would very likely have won, as Creative's premise was that "3D audio" was proprietary to them alone. You don't need a law degree to know that was clearly ridiculous, but it worked.

MDFGamingVideo

The accessibility for deaf people is INCREDIBLE!! I'm hard of hearing myself, but even if I wasn't, it would be nice to have that accessibility even while people around me are talking, or while there's construction outside or something.

psychoDon

Play around with an effect called "comb filtering". It fundamentally recreates what we hear when sound reflections collide: some frequencies add together, some cancel each other out. With a light touch, it should feel so familiar that it's like a cheat code. The point of diminishing returns comes quickly. You're onto something great.
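
For anyone who wants to experiment, a minimal feedforward comb filter is just y[n] = x[n] + g * x[n - d]; the delay and gain values below are illustrative:

```csharp
// Feedforward comb filter: mixes the signal with a delayed copy of itself,
// boosting some frequencies and cancelling others.
static class CombFilter
{
    public static float[] Apply(float[] x, int delaySamples, float gain)
    {
        var y = new float[x.Length];
        for (int n = 0; n < x.Length; n++)
        {
            float delayed = n >= delaySamples ? x[n - delaySamples] : 0f;
            y[n] = x[n] + gain * delayed;
        }
        return y;
    }
}
```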

Tofupancho

FINALLY SOMEONE TALKING ABOUT THIS!!
Raytracing has so much potential for audio processing, not just visual processing, and considering that games are literally an audio-visual medium, this could make a much bigger impact, since the industry has pretty much peaked in terms of graphics.

nullifier_

This is incredible work, instant sub. I'm eager to see what you come up with next

orrin-manning