

Game engine with realistic sound physics



#1 davide445   Members   -  Reputation: 108


Posted 15 February 2014 - 03:10 AM

I want to start developing a simulation of a real environment.
Using a game engine such as Unity, Unreal or CryEngine will be fine for the graphics part, but I'm concerned about the sound.
From what I've read about FMOD and other sound systems, they appear limited in terms of sound physics; for example, sound reflection is not taken into account.
Are there any game engines with a good built-in sound physics engine, or do I need to integrate external specialized modules into these engines?


#2 Hodgman   Moderators   -  Reputation: 32069


Posted 15 February 2014 - 04:06 AM

IIRC there's middleware such as AstoundSound, which provides extensions/plugins for the popular sound libraries like FMOD. This gives you more advanced sound mixing/physics while still using an FMOD/etc based engine.

#3 davide445   Members   -  Reputation: 108


Posted 15 February 2014 - 08:09 AM

As far as I know, FMOD is more of a DAW than a sound physics engine: you can reconstruct the real effects (occlusion, reflection, diffraction, attenuation, auralization) by applying the right filters, but that's manual work. There is no engine that does this automatically by evaluating the sound's interaction with the environment.

Or am I just wrong, and FMOD does all of that?

#4 ddn3   Members   -  Reputation: 1331


Posted 15 February 2014 - 02:50 PM

No commercially available engine does what you suggest yet, as far as I know. Some of the tech is patented, and there is a lot of research in the field. Sound engines for commercial games can be very complex, but that complexity isn't so much in the physics as in resource management. They fake a lot of it by pre-recording the proper samples and tagging the physical attributes of the scene correctly. That was on the last gen, where CPU resources were limited; on next gen that could probably all be done in real time through physics simulation (as you're suggesting).
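The "tag the scene, pick a pre-recorded sample" approach can be sketched in a few lines. The material tags and file names below are invented for illustration; a real game would author these in its level editor:

```python
import random

# Scene geometry is tagged with a material; each material maps to a
# set of pre-recorded sample variations (names are made up).
footsteps = {
    "stone":  ["step_stone_01.wav", "step_stone_02.wav"],
    "wood":   ["step_wood_01.wav"],
    "carpet": ["step_carpet_01.wav"],
}

def footstep_sample(surface_material, rng=random.Random(7)):
    """Pick a random variation of the pre-recorded sample for the
    surface the character is standing on."""
    return rng.choice(footsteps[surface_material])

sample = footstep_sample("stone")
```

The "physics" here is entirely baked into the recordings and the tags; nothing is simulated at runtime.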



#5 davide445   Members   -  Reputation: 108


Posted 15 February 2014 - 04:35 PM

This was my fear.

What's currently the best (even if limited) sound engine that at least considers basic physics such as occlusion, reflection and attenuation? FMOD, XAudio, OpenAL, or others?

#6 ddn3   Members   -  Reputation: 1331


Posted 16 February 2014 - 01:04 AM

All of those probably can; essentially you're looking for whether they provide a plugin interface for a DSP engine to modify the audio stream. FMOD apparently can do this; see this page:

 

http://www.dspdimension.com/admin/using-dirac-with-fmod-to-change-pitch-and-speed-of-audio-in-real-time/#more-809

 

I'm sure XAudio and OpenAL have equivalent low-level interfaces as well, especially OpenAL, as it was written to be a low-level audio API. There are a few commercial libraries already available to plug into those engines, so your best bet is there.
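In generic terms (this is not any particular library's actual API), such a DSP plugin interface boils down to a chain of callbacks that each transform an audio block before it reaches the mixer. A minimal sketch:

```python
import numpy as np

def gain_dsp(gain):
    """DSP stage that scales the signal."""
    def process(block):
        return block * gain
    return process

def lowpass_dsp(alpha):
    """One-pole low-pass filter, a toy stand-in for an occlusion filter."""
    def process(block):
        out = np.empty_like(block)
        acc = 0.0
        for i, x in enumerate(block):
            acc += alpha * (x - acc)   # smooth toward the input
            out[i] = acc
        return out
    return process

# The engine would call each registered DSP in order on every block.
dsp_chain = [gain_dsp(0.5), lowpass_dsp(0.2)]

def render(block, chain):
    for dsp in chain:
        block = dsp(block)
    return block

out = render(np.ones(64), dsp_chain)
```

Real libraries expose the same idea through C callbacks or plugin objects rather than Python closures.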

 

Here is a fun one

 

http://www.littleendian.com/developers/

 

Good Luck!

 

-ddn



#7 Hodgman   Moderators   -  Reputation: 32069


Posted 16 February 2014 - 01:42 AM

IIRC there's middleware such as AstoundSound, which provides extensions/plugins for the popular sound libraries like FMOD. This gives you more advanced sound mixing/physics while still using an FMOD/etc based engine.

As far as I know, FMOD is more of a DAW than a sound physics engine: you can reconstruct the real effects (occlusion, reflection, diffraction, attenuation, auralization) by applying the right filters, but that's manual work. There is no engine that does this automatically by evaluating the sound's interaction with the environment.

Or am I just wrong, and FMOD does all of that?

Like I said, you can use a standard bit of sound middleware FMOD/Wwise (which are like the core of a DAW), but then add a bunch of extensions to it.
E.g. both of the above can be extended with extra libraries to perform convolutions. The current state of the art (not standard on the previous gen, but some games with top-notch audio used it) is to go to real spaces and record an "impulse", such as a sine sweep or a gunshot. The resulting recording can be processed to give you the impulse response of that place. You can then take the audio sources in your game and convolve them with this impulse response, altering those audio sources to sound as they would in that physical space. This gives you completely authentic reverb and early reflections. The more advanced solutions also give good directionality -- e.g. the AstoundSound extension I mentioned deals with HRTFs, to create the audio equivalent of "virtual reality" (this is basically another convolution: convolving the sounds against the impulse response of a human ear canal to model directional effects more accurately than surround sound does!).
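The convolution step described above can be sketched in a few lines, with a toy impulse response standing in for a measured one:

```python
import numpy as np

def convolution_reverb(dry, impulse_response):
    """Convolve a dry signal with a room impulse response.

    Both inputs are 1-D float arrays of samples at the same rate."""
    wet = np.convolve(dry, impulse_response)
    # Normalize to avoid clipping when writing back to an audio buffer.
    peak = np.max(np.abs(wet))
    return wet / peak if peak > 0 else wet

# Toy example: a click played in a "room" whose IR is the direct sound
# plus one delayed, attenuated reflection.
dry = np.zeros(100); dry[0] = 1.0
ir = np.zeros(50); ir[0] = 1.0; ir[30] = 0.5
wet = convolution_reverb(dry, ir)
```

With a measured impulse response in place of `ir`, the output sounds as if the dry signal had been played in the recorded space.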

 

Even with pre-computed impulse responses, this is quite processor-intensive, which is why it is not yet common. On the next-gen consoles, or AMD GPUs on PC, you've got a completely programmable DSP within the graphics card, which can now be used to implement these effects much more efficiently (the process requires an FFT and an inverse FFT per sound source, with a width based on the duration of your reverb effect -- stone corridors with long echoes are much more expensive than a typical muffled room).
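The FFT / inverse-FFT route mentioned above is how that convolution is made affordable: transform the block and the impulse response, multiply their spectra, and transform back. A sketch, assuming nothing about any particular engine:

```python
import numpy as np

def fft_convolve(block, ir):
    """Frequency-domain convolution: FFT both signals, multiply the
    spectra, inverse-FFT. The FFT size, and hence the cost, grows
    with the length of the impulse response (the reverb tail)."""
    n = len(block) + len(ir) - 1
    size = 1 << (n - 1).bit_length()   # next power of two
    spectrum = np.fft.rfft(block, size) * np.fft.rfft(ir, size)
    return np.fft.irfft(spectrum, size)[:n]

block = np.random.default_rng(0).standard_normal(512)
ir = np.exp(-np.arange(2048) / 400.0)   # synthetic decaying reverb tail
out = fft_convolve(block, ir)

# Matches direct time-domain convolution to floating-point precision.
reference = np.convolve(block, ir)
```

Streaming implementations split the IR into partitions and overlap-add the results so each audio block can be processed with low latency.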

So far, the stuff I've mentioned relies on measurements made of physical places in the real world.

Ideally, you would use ray-tracing within your virtual world to create impulse-responses for your virtual environments on the fly.
I'm not aware of any middleware that does this, but I'm not an audio specialist, so I might just be unaware of such a product.
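As a rough illustration of what such ray tracing computes, here is a first-order image-source sketch for a rectangular room -- a heavily simplified stand-in for a real geometric-acoustics solver, with invented absorption and room parameters:

```python
import numpy as np

def shoebox_ir(room, src, listener, speed=343.0, rate=48000, absorption=0.3):
    """First-order image-source model for an axis-aligned room with one
    corner at the origin: mirror the source across each of the six
    walls, then drop an attenuated, delayed impulse per path."""
    src = np.asarray(src, float)
    listener = np.asarray(listener, float)
    paths = [src]                                   # direct path
    for axis in range(3):
        for wall in (0.0, room[axis]):
            image = src.copy()
            image[axis] = 2.0 * wall - image[axis]  # mirror across the wall
            paths.append(image)

    dists = [np.linalg.norm(p - listener) for p in paths]
    ir = np.zeros(int(rate * max(dists) / speed) + 1)
    for i, d in enumerate(dists):
        gain = (1.0 / max(d, 1e-6)) * (1.0 if i == 0 else 1.0 - absorption)
        ir[int(rate * d / speed)] += gain           # delay = distance / speed
    return ir

ir = shoebox_ir(room=(5.0, 4.0, 3.0), src=(1.0, 1.0, 1.0),
                listener=(4.0, 3.0, 1.5))
```

A production system would trace many higher-order reflection paths against arbitrary geometry and handle frequency-dependent absorption, but the output is the same thing: an impulse response to convolve the sources with.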

 

If your environments are fairly static, you could also perform this ray-tracing ahead of time, and "bake" out the impulse-responses and save them to disk, switching between them based on the area where the player is (much like you would when using data sourced from the real world).
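That bake-and-switch scheme might look like the following. The zone names and the stand-in "bake" are invented; a real pipeline would run the offline ray tracer and stream the baked responses from disk:

```python
import numpy as np

def bake(reverb_seconds, rate=48000):
    """Stand-in for the offline bake: an exponentially decaying noise
    tail of the requested duration (a real bake would be ray-traced)."""
    n = int(reverb_seconds * rate)
    rng = np.random.default_rng(42)
    return rng.standard_normal(n) * np.exp(-np.arange(n) / (0.2 * rate))

# "Baked" at build time: one impulse response per authored zone.
zone_irs = {"stone_corridor": bake(1.5), "small_office": bake(0.3)}

def apply_zone_reverb(dry, zone):
    """Pick the IR for the player's current zone and convolve."""
    return np.convolve(dry, zone_irs[zone])

dry = np.zeros(256); dry[0] = 1.0
corridor = apply_zone_reverb(dry, "stone_corridor")  # long tail
office = apply_zone_reverb(dry, "small_office")      # short tail
```

In practice you would also crossfade between zone IRs as the player moves, to avoid audible jumps at zone boundaries.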

There are plenty of academic papers describing how to implement such a ray-tracing system, though! A few games on the previous gen and on the new consoles have implemented this themselves, in order to get completely dynamic, physically based early reflections automatically, appropriate for whatever environment the player is in.


Edited by Hodgman, 16 February 2014 - 01:48 AM.


#8 davide445   Members   -  Reputation: 108


Posted 16 February 2014 - 08:58 AM

.........
A few games on the previous gen and on the new consoles have implemented this themselves, in order to get completely dynamic, physically based early reflections automatically, appropriate for whatever environment the player is in.

What games are doing that? Maybe they are using some engine or package I could use as well.






