Posted 15 February 2014 - 02:50 PM
No commercially available engine does what you're suggesting yet, as far as I know. Some of the tech is patented and there's a lot of research in the field. Sound engines for commercial games can be very complex, but that complexity isn't so much in the physics as in resource management. They fake a lot of it by pre-recording the proper samples and tagging the physical attributes of the scene correctly. That was the case on the last gen, where CPU resources were limited, but on the next gen it can probably all be done in real time through physics simulation (as you're suggesting).
Posted 16 February 2014 - 01:04 AM
All of those probably can. Essentially you're looking for whether they provide a plugin interface for a DSP engine to modify the audio stream. FMOD apparently can do this; see this page.
I'm sure XAudio2 and OpenAL have equivalent low-level interfaces as well, especially OpenAL, as it was written to be a low-level audio API. There are a few commercial libraries already available for plugging into those engines, so your best bet is there.
Here is a fun one
Posted 16 February 2014 - 01:42 AM
As far as I know, FMOD is more of a DAW than a sound-physics engine, so you can reconstruct the real effects (occlusion, reflection, diffraction, attenuation, auralization) by applying the right filters, but it's manual work on your part; there is no engine that does this automatically by evaluating how the sound interacts with the environment.
IIRC there's middleware such as AstoundSound, which provides extensions/plugins for the popular sound libraries like FMOD. This gives you more advanced sound mixing/physics while still using an FMOD/etc based engine.
Or am I just wrong, and FMOD does do all of that?
Like I said, you can use a standard bit of sound middleware FMOD/Wwise (which are like the core of a DAW), but then add a bunch of extensions to it.
e.g. both of the above can be extended with extra libraries to perform convolutions. The current state of the art (not standard on the previous gen, but some games with top-notch audio used it) is to go to real spaces and record an "impulse", such as a sine wave or a gunshot. The resulting recording can be processed to give you the impulse response of that place. You can then take the audio sources in your game and convolve them with this impulse response, altering those sources to sound as they would in that physical space. This gives you completely authentic reverb and early reflections. The more advanced solutions also give good directionality; e.g. the AstoundSound extension that I mentioned deals with HRTFs, to create the audio equivalent of "virtual reality" (this is basically another convolution: convolving the sounds against the impulse response of a human ear canal to accurately model directional effects, better than surround sound does!).
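That convolution step can be sketched in a few lines of NumPy. This is a toy illustration, not any library's actual API: the function name, the dry "click" signal, and the impulse-response values are all invented for the example.

```python
import numpy as np

def convolve_reverb(dry, impulse_response):
    """Apply a room's measured impulse response to a dry signal."""
    wet = np.convolve(dry, impulse_response)
    # Normalise so the wet signal cannot clip.
    peak = np.max(np.abs(wet))
    return wet / peak if peak > 0 else wet

# Toy example: a dry "click" and a crude IR (direct sound + two echoes).
dry = np.zeros(8)
dry[0] = 1.0
ir = np.array([1.0, 0.0, 0.5, 0.0, 0.25])
wet = convolve_reverb(dry, ir)
```

Because the dry signal here is a single impulse, the wet output is just the IR itself; with real game audio each source would be convolved against the IR of the space it is in.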
Even with pre-computed impulse responses, this is quite processor-intensive, which is why it is not yet common. On the next-gen consoles, or AMD GPUs on PC, you've got a completely programmable DSP within the graphics card, which can now be used to implement these effects much more efficiently (the process requires an FFT and an inverse FFT per sound source, with a width based on the duration of your reverb effect; stone corridors with long echoes are much more expensive than a typical muffled room).
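The FFT route described above can be sketched as: transform the dry signal and the IR, multiply the spectra, inverse-transform. This is a toy one-shot NumPy version under invented names and data; a production DSP would stream the audio through block-based (overlap-add) convolution instead.

```python
import numpy as np

def fft_convolve(signal, ir):
    """Convolve via the FFT: forward-transform both inputs, multiply the
    spectra, then inverse-transform. Cost scales with the IR length,
    i.e. the duration of the reverb tail."""
    n = len(signal) + len(ir) - 1
    size = 1 << (n - 1).bit_length()      # round up to a power of two
    spectrum = np.fft.rfft(signal, size) * np.fft.rfft(ir, size)
    return np.fft.irfft(spectrum, size)[:n]

sig = np.random.default_rng(0).standard_normal(1024)
ir = np.array([1.0, 0.5, 0.25])
wet = fft_convolve(sig, ir)   # matches direct convolution of sig and ir
```

For short IRs direct convolution is fine; the FFT form wins as the reverb tail (and hence the IR) gets long, which is exactly the "stone corridor vs. muffled room" cost difference mentioned above.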
So far, the stuff I've mentioned relies on measurements made of physical places in the real world.
Ideally, you would use ray-tracing within your virtual world to create impulse-responses for your virtual environments on the fly.
I'm not aware of any middleware that does this, but I'm not an audio specialist, so I might just be unaware of such a product.
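As a very rough illustration of generating an IR from geometry, here is a toy 1-D "image source" sketch in Python: each wall reflection is modelled as a mirror image of the source, contributing a delayed, attenuated spike. Every name and parameter here is invented for the example; real systems trace rays or image sources in full 3-D with frequency-dependent absorption.

```python
import numpy as np

def bake_impulse_response(room_length, source, listener,
                          wall_gain=0.6, max_order=5,
                          c=343.0, fs=44100):
    """Crude 1-D image-source method: sound bounces between two walls at
    x=0 and x=room_length; each image of the source adds a spike delayed
    by its distance and attenuated once per wall bounce."""
    ir = np.zeros(fs // 2)   # half a second of tail
    for k in range(-max_order, max_order + 1):
        # Image at 2kL + s has 2|k| bounces; image at 2kL - s has |2k - 1|.
        for image, bounces in ((2 * k * room_length + source, abs(2 * k)),
                               (2 * k * room_length - source, abs(2 * k - 1))):
            distance = abs(image - listener)
            sample = int(round(distance / c * fs))
            if sample < len(ir):
                ir[sample] += wall_gain ** bounces / max(distance, 1.0)
    return ir

ir = bake_impulse_response(room_length=10.0, source=2.0, listener=7.0)
```

The first nonzero sample is the direct path (5 m here); the later spikes are the early reflections, which is the part of the response games care most about.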
If your environments are fairly static, you could also perform this ray-tracing ahead of time, and "bake" out the impulse-responses and save them to disk, switching between them based on the area where the player is (much like you would when using data sourced from the real world).
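Baked per-zone IRs could then be selected at runtime with a simple lookup and the usual convolution. The zone names and IR values below are invented purely for illustration:

```python
import numpy as np

# Hypothetical impulse responses baked offline (e.g. by an acoustic
# ray-tracer), keyed by the zone the player is standing in.
baked_irs = {
    "stone_corridor": np.array([1.0, 0.0, 0.6, 0.0, 0.4, 0.0, 0.3]),
    "muffled_room":   np.array([1.0, 0.2]),
}

def apply_zone_reverb(dry, zone):
    """Look up the pre-baked IR for the current zone and convolve."""
    return np.convolve(dry, baked_irs[zone])

click = np.zeros(4)
click[0] = 1.0
corridor = apply_zone_reverb(click, "stone_corridor")
room = apply_zone_reverb(click, "muffled_room")
# The corridor's longer IR yields a correspondingly longer reverb tail.
```

In practice you would also cross-fade between zones as the player moves, rather than switching IRs abruptly.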
There are plenty of academic papers describing how to implement such a ray-tracing system, though! A few games on the previous-gen and on the new consoles have implemented this themselves in order to get completely dynamic, physically based early reflections automatically, appropriate for whatever environment the player is in.
Edited by Hodgman, 16 February 2014 - 01:48 AM.
Posted 16 February 2014 - 08:58 AM
"There's a few games on the previous-gen and on the new consoles that have implemented this themselves in order to get completely dynamic and physically based early reflections automatically, appropriate for whatever environment the player is in."
What games are doing that? Maybe they are using some engine or package I can use as well.