Medo Mex

Using XAudio2



I'm trying to use XAudio2.

 

The idea I have is that I should have the following in XAudio2:

- Listener

- SourceVoice

 

In an FPS game, the "Listener" should be the player, while a "SourceVoice" can be any other model making sound.

 

Am I right?

 

How do I set up multiple SourceVoices and play them all during the game? If a helicopter is getting close, the player should hear it; at the same time, if the player is taking fire, the player should also hear, for example, the sound of the bullets hitting the wall.

 

The player should hear EVERYTHING around at the same time.


The idea I have is that I should have the following in XAudio2:
- Listener
- SourceVoice
In an FPS game, the "Listener" should be the player, while a "SourceVoice" can be any other model making sound.
Am I right?

Sort of, but not really. As much as we might want XAudio2 to be a nice object-oriented library, that is simply not the case. You can set up multiple source voices to play multiple sounds simultaneously, but there is no intrinsic relationship between source voices and 3D positioning. It is probably better to think of source voices as very basic PCM streams. The Listener and the Emitter are not really objects in a modern stateful sense, but collections of parameters that are passed to X3DAudio's X3DAudioCalculate function.
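
Here is roughly what that per-frame calculation looks like. This is only a sketch under assumptions (hypothetical names like g_x3d and Update3DSound, a mono source, a stereo output, no error handling), but it shows that the "listener" and "emitter" are just structs you fill in and hand to X3DAudioCalculate, and that you then apply the results to the source voice yourself:

#include <xaudio2.h>
#include <x3daudio.h>

// One-time setup: X3DAudio is just a handle plus some math, not an object graph.
// Assumes a stereo output; in practice, query the real channel mask of your device.
X3DAUDIO_HANDLE g_x3d;
void Init3DAudio()
{
    X3DAudioInitialize(SPEAKER_FRONT_LEFT | SPEAKER_FRONT_RIGHT,
                       X3DAUDIO_SPEED_OF_SOUND, g_x3d);
}

// Per-frame update for one mono source voice. All parameters are hypothetical
// game-side values; error handling is omitted.
void Update3DSound(IXAudio2SourceVoice* pSourceVoice,
                   IXAudio2MasteringVoice* pMasteringVoice,
                   X3DAUDIO_VECTOR listenerPos, X3DAUDIO_VECTOR listenerVel,
                   X3DAUDIO_VECTOR emitterPos,  X3DAUDIO_VECTOR emitterVel)
{
    // "Listener" and "Emitter" are plain parameter structs, refilled every frame.
    X3DAUDIO_LISTENER listener = {};
    listener.OrientFront = { 0.0f, 0.0f, 1.0f };
    listener.OrientTop   = { 0.0f, 1.0f, 0.0f };
    listener.Position    = listenerPos;
    listener.Velocity    = listenerVel;

    X3DAUDIO_EMITTER emitter = {};
    emitter.OrientFront  = { 0.0f, 0.0f, 1.0f };
    emitter.OrientTop    = { 0.0f, 1.0f, 0.0f };
    emitter.Position     = emitterPos;
    emitter.Velocity     = emitterVel;
    emitter.ChannelCount = 1;
    emitter.CurveDistanceScaler = 1.0f;

    // X3DAudioCalculate only writes numbers into this struct...
    float matrix[2] = {};                  // 1 source channel x 2 output channels
    X3DAUDIO_DSP_SETTINGS dsp = {};
    dsp.SrcChannelCount     = 1;
    dsp.DstChannelCount     = 2;
    dsp.pMatrixCoefficients = matrix;

    X3DAudioCalculate(g_x3d, &listener, &emitter,
                      X3DAUDIO_CALCULATE_MATRIX | X3DAUDIO_CALCULATE_DOPPLER, &dsp);

    // ...and you apply them to the voice yourself.
    pSourceVoice->SetOutputMatrix(pMasteringVoice, 1, 2, dsp.pMatrixCoefficients);
    pSourceVoice->SetFrequencyRatio(dsp.DopplerFactor);
}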

How do I set up multiple SourceVoices and play them all during the game? If a helicopter is getting close, the player should hear it; at the same time, if the player is taking fire, the player should also hear, for example, the sound of the bullets hitting the wall.

The player should hear EVERYTHING around at the same time.


This example shows how to set up simple 3D audio in XAudio2: http://msdn.microsoft.com/en-us/library/windows/desktop/ee415798%28v=vs.85%29.aspx
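The "play them all at once" part is actually the easy bit: create one source voice per sound and start them all, and XAudio2 mixes everything into the mastering voice for you. A rough sketch under assumptions (hypothetical names; the WAVEFORMATEX and XAUDIO2_BUFFERs are assumed to have been loaded from your WAV files elsewhere; no error handling):

#include <xaudio2.h>

void PlayTwoSoundsAtOnce(const WAVEFORMATEX& format,
                         const XAUDIO2_BUFFER& helicopterBuffer,
                         const XAUDIO2_BUFFER& bulletBuffer)
{
    IXAudio2* pXAudio2 = nullptr;
    XAudio2Create(&pXAudio2);                 // older XAudio2 versions may need CoInitializeEx first

    IXAudio2MasteringVoice* pMaster = nullptr;
    pXAudio2->CreateMasteringVoice(&pMaster); // the single output ("speakers")

    // One source voice per sound that must be audible at the same time.
    IXAudio2SourceVoice* pHelicopter = nullptr;
    IXAudio2SourceVoice* pBulletHit  = nullptr;
    pXAudio2->CreateSourceVoice(&pHelicopter, &format);
    pXAudio2->CreateSourceVoice(&pBulletHit,  &format);

    pHelicopter->SubmitSourceBuffer(&helicopterBuffer);
    pBulletHit->SubmitSourceBuffer(&bulletBuffer);

    // Both voices run concurrently; XAudio2 mixes them into the mastering voice.
    pHelicopter->Start(0);
    pBulletHit->Start(0);
}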
It is a serious PITA to get right. XAudio2 is quirky and in a lot of ways half-baked. If this is your first time working with 3D audio you might want to start with a higher-level library (I have heard FMOD is solid though not free; BASS is fine, but it just wraps the older DirectSound APIs).
Hope this helps.


I'm not sure about the OP's notion of "at the same time", but I'll try.

The whole deal with XAudio2 is that we connect the various sources into a tree-like structure.

There are three kinds of voices.

  • Source voices are used to model sound emitters; these are connected to
  • Submix voices, which "blend" the various sources together. Submix voices I typically have in my systems are "music", "fx", "voice" and "think"; I use them for volume control.
  • The mastering voice. It's "the output".

MSDN: XAudio2 voices.
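
A rough sketch of building that tree (hypothetical names, no error handling; `format` is assumed to describe PCM data loaded elsewhere):

#include <xaudio2.h>

void BuildVoiceTree(IXAudio2* pXAudio2, const WAVEFORMATEX& format)
{
    IXAudio2MasteringVoice* pMaster = nullptr;
    pXAudio2->CreateMasteringVoice(&pMaster);        // "the output"

    // A submix voice acting as an "fx" bus for group volume control.
    IXAudio2SubmixVoice* pFxBus = nullptr;
    pXAudio2->CreateSubmixVoice(&pFxBus, 2, 44100);  // 2 channels, 44.1 kHz (assumed)

    // Route a source voice into the fx bus instead of straight to the mastering voice.
    XAUDIO2_SEND_DESCRIPTOR sendToFx = { 0, pFxBus };
    XAUDIO2_VOICE_SENDS sends = { 1, &sendToFx };

    IXAudio2SourceVoice* pGunshot = nullptr;
    pXAudio2->CreateSourceVoice(&pGunshot, &format, 0,
                                XAUDIO2_DEFAULT_FREQ_RATIO, nullptr, &sends);

    // Turning the whole fx group down is now a single call on the bus.
    pFxBus->SetVolume(0.5f);
}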

 

Once this tree is built, we don't play "the voices". We tick this tree instead: XAudio2 evolves the whole tree for us, and active voices update their state accordingly. So what do we call to tick XAudio2?

Nothing! It does everything by itself on its own thread.

This is nice and cool, but because it is asynchronous, voices that are supposed to be synchronized could start in different engine ticks. To solve this issue, look into operation sets and dispatch them using CommitChanges.
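
Something along these lines (hypothetical voices, no error handling): queue the changes under one operation-set ID, then commit them so they take effect together:

#include <xaudio2.h>

void StartVoicesInSync(IXAudio2* pXAudio2,
                       IXAudio2SourceVoice* pVoiceA,
                       IXAudio2SourceVoice* pVoiceB)
{
    const UINT32 opSet = 1;       // any ID other than XAUDIO2_COMMIT_NOW

    // These calls only queue the work; nothing is audible yet.
    pVoiceA->Start(0, opSet);
    pVoiceB->Start(0, opSet);

    // All queued operations with this ID take effect together on the next audio pass.
    pXAudio2->CommitChanges(opSet);
}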

Edited by Krohm

