Laval B

Audio System


Hello everyone.

 

I have started the development of an audio system that would be mostly oriented toward games, whether 3D or 2D. So far I have mostly been concerned with the algorithms for 3D sound, i.e. generating a multichannel (one channel per speaker) PCM buffer from multiple single-channel sources at different locations relative to a listener, just like OpenAL, FMOD and other libraries do.
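To give an idea of the per-source work, here is a stripped-down version of that mixing step for a plain stereo output, using constant-power panning (the Source struct and the names here are simplified placeholders, not my actual code):

#include <cmath>
#include <cstddef>
#include <vector>

// Hypothetical mono source: a block of samples plus an azimuth-derived pan.
struct Source {
    const float* samples;   // mono PCM, already resampled to the mix rate
    float        pan;       // -1 = hard left, +1 = hard right
};

// Accumulate one block of each source into an interleaved stereo mix buffer
// using constant-power panning (cos/sin gains keep L^2 + R^2 == 1, so the
// perceived loudness stays constant as the source moves across the field).
void MixBlock(const std::vector<Source>& sources, float* mix, std::size_t frames)
{
    for (const Source& s : sources) {
        const float angle = (s.pan + 1.0f) * 0.25f * 3.14159265f; // [0, pi/2]
        const float gainL = std::cos(angle);
        const float gainR = std::sin(angle);
        for (std::size_t i = 0; i < frames; ++i) {
            mix[2 * i + 0] += s.samples[i] * gainL;
            mix[2 * i + 1] += s.samples[i] * gainR;
        }
    }
}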

 

I have also been working on the real-time mixing of these sounds (of course). It's going very well up to that point. I still need to implement Doppler shift as well as distance attenuation, and possibly HRTF/binaural filtering. This part of the development is very interesting. I'm also learning a lot about SSE and AVX instructions.
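For reference, those two effects boil down to simple per-source scalars. Something like the following, using the inverse-distance model OpenAL defaults to and the textbook Doppler formula (the clamping constants are placeholders):

#include <algorithm>
#include <cmath>

// Inverse-distance rolloff, the same model OpenAL uses by default:
// full gain at refDist, then falling off roughly as 1/distance.
float DistanceGain(float distance, float refDist, float rolloff)
{
    distance = std::max(distance, refDist);
    return refDist / (refDist + rolloff * (distance - refDist));
}

// Classic Doppler pitch factor: f' = f * (c + vL) / (c - vS), where vL is the
// listener's speed toward the source and vS is the source's speed toward the
// listener (both are projections of the velocity vectors onto the
// source-listener axis).
float DopplerFactor(float speedOfSound, float vListenerTowardSource,
                    float vSourceTowardListener)
{
    // Clamp the denominator so a source moving at (near) the speed of sound
    // toward the listener doesn't blow the factor up.
    const float denom = std::max(speedOfSound - vSourceTowardListener, 1.0f);
    return (speedOfSound + vListenerTowardSource) / denom;
}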

 

Just as a note, the system will use XAudio2 on Windows and probably ALSA on Linux (macOS would need its own backend, Core Audio). I don't know yet about mobile, or even whether I will eventually port it to mobile (which would be great). One of my design goals is to use the platform-specific audio API minimally, only to send pre-processed samples to the device, so that porting will be easier. If it ever becomes decent enough, I might make it an open-source project, but I'm not there yet.
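Concretely, the platform layer I have in mind is a small interface the rest of the system never looks behind (just a sketch; the names are provisional):

#include <cstddef>
#include <cstdint>

// The mixer produces interleaved float blocks; the backend only moves them
// to the device. Everything else stays platform independent.
class IAudioBackend {
public:
    // Called from the backend's device thread/callback when it needs data.
    using RenderCallback = void (*)(float* interleaved, std::size_t frames, void* user);

    virtual ~IAudioBackend() = default;
    virtual bool Open(std::uint32_t sampleRate, std::uint32_t channels) = 0;
    virtual void Close() = 0;
    virtual void SetRenderCallback(RenderCallback cb, void* user) = 0;
};

// Concrete XAudio2Backend / AlsaBackend classes would implement this and
// contain all of the platform-specific code, so a port means one new class.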

 

Lately I have started thinking about how the system should communicate with the host application, and the more I think about it, the less sure I am. That's what I would like to discuss. My concerns are all related to multithreading.

 

So far, the application would deal with four classes (a rough skeleton is sketched after the list):

  1. AudioSystem is the class used for initialization/shutdown of the system and is the main API for resource management and updates.
  2. AudioSource basically represents the configuration of a sound in the scene, i.e. position, velocity, orientation, area of effect, etc.
  3. AudioBuffer represents the data of a sound. An AudioBuffer can of course be shared by multiple sources.
  4. Listener represents the point in the scene from which the sounds are heard. It has a position, velocity and orientation so far.
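Here is that rough skeleton (declarations only; every name and signature is provisional and will certainly change):

#include <cstddef>
#include <cstdint>

struct Vec3 { float x, y, z; };

class AudioBuffer {            // shared, immutable PCM data
public:
    const float* Data() const;
    std::size_t  Frames() const;
};

class AudioSource {            // per-instance playback state
public:
    void SetPosition(const Vec3& p);
    void SetVelocity(const Vec3& v);
    void SetBuffer(AudioBuffer* buffer);   // buffers can be shared
    void Play();  void Pause();  void Stop();
};

class Listener {
public:
    void Set(const Vec3& position, const Vec3& velocity,
             const Vec3& forward, const Vec3& up);
};

class AudioSystem {            // owns the mixing thread and all resources
public:
    bool Initialize();
    void Shutdown();
    AudioBuffer* CreateBuffer(const float* pcm, std::size_t frames, std::uint32_t rate);
    AudioSource* CreateSource();
    Listener&    GetListener();
    void Update();             // commits this frame's parameter changes
};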

The system basically has a mixing thread that cycles through the list of sources and prepares buffers for the API to consume when it needs them.

 

A typical real-time application would likely have (or at least I have to assume it would) multiple threads working on preparing/updating the scene, and each of these threads would update sources; that is where I'm not sure how to proceed. The operations performed by the application are the following:

  1. Add/remove sources from the list (or set the status to paused/stopped).
  2. Update the parameters of sources, such as velocity, orientation and position.
  3. Update the listener's parameters (position, velocity, orientation).

I'm trying to think of a way that would not impair the performance of either the application or the mixing thread. I have thought about using two lists: one is the "committed" list the mixing thread is working on, and the other is the list the application is working on; then I could "atomically" swap the two, or something like that. I don't know about locking; it could be fine if done properly, I guess. It is clear to me that the update of the list must be done as a transaction, only once "per frame", not as multiple updates during the composition of a frame, just like the graphics APIs do.
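In code, the two-list idea would look something like this (a rough sketch of what I mean, not something I have working):

#include <atomic>
#include <vector>

struct SourceParams { /* position, velocity, gain, ... */ };

// Two snapshots: the application fills one while the mixer reads the other.
// AudioSystem::Update() publishes the filled snapshot with one atomic swap.
class ParameterBoard {
public:
    std::vector<SourceParams>& Back() { return *back_; }   // application thread

    void Publish()   // application thread, once per frame
    {
        // Caveat: the list I get back may still be read by the mixing thread
        // for the block it is currently mixing. This is the part I'm not
        // sure how to make safe with only two lists.
        back_ = published_.exchange(back_, std::memory_order_acq_rel);
    }

    const std::vector<SourceParams>& Front()               // mixing thread
    {
        return *published_.load(std::memory_order_acquire);
    }

private:
    std::vector<SourceParams> a_, b_;
    std::vector<SourceParams>* back_ = &a_;
    std::atomic<std::vector<SourceParams>*> published_{ &b_ };
};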

 

With the atomic swap of lists, I'm afraid I could lose updates if one side is too fast, so I guess I would need to queue these updates...
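Thinking about it some more, most of these updates are absolute values (position, velocity), so losing all but the latest one should be acceptable; only the structural operations (add/remove/play/stop) must never be dropped. So maybe something like this per frame (again, just a sketch with placeholder names):

#include <cstdint>
#include <mutex>
#include <unordered_map>
#include <vector>

struct Command { enum Type { Add, Remove, Play, Stop } type; std::uint32_t source; };
struct SourceParams { float pos[3]; float vel[3]; };

class UpdateStaging {
public:
    void SetParams(std::uint32_t source, const SourceParams& p)
    {
        std::lock_guard<std::mutex> lock(mutex_);
        params_[source] = p;            // overwrite: latest value wins
    }

    void Push(const Command& c)
    {
        std::lock_guard<std::mutex> lock(mutex_);
        commands_.push_back(c);         // never coalesced, never lost
    }

    // Called once per frame to move everything into the committed snapshot.
    void Drain(std::vector<Command>& outCommands,
               std::unordered_map<std::uint32_t, SourceParams>& outParams)
    {
        std::lock_guard<std::mutex> lock(mutex_);
        outCommands.swap(commands_);  commands_.clear();
        outParams.swap(params_);      params_.clear();
    }

private:
    std::mutex mutex_;
    std::vector<Command> commands_;
    std::unordered_map<std::uint32_t, SourceParams> params_;
};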

 

Well, that's basically what I would like to discuss. I'm open to ideas and suggestions.

Edited by Laval B


You're on the right track. In my audio system I have a big update( const Scene& scene, float dt ) method that copies the current state of all listener/source/objects into internal data structures.

I'm also simulating sound propagation effects using path tracing, so that computation must be executed as a task on a separate thread. Once the sound propagation impulse responses are computed, I have to update the audio rendering thread with the new data. I do this by atomically swapping the IRs in a triple-buffered setup.

You can use a similar strategy: copy your parameters into one end of a triple-buffer, then use an atomic operation to rotate through them. One set of parameters is the current rendering thread interpolation state, another set is the target interpolation state, and the third is where the main thread writes the next set of parameters. The key is to only rotate through the buffers once the rendering thread has finished the previous interpolation operation (requires another atomic variable to signal completion). If the main thread updates the parameters more often than that, the update is just ignored. You only need to update audio information at 10-15 frames/second anyway. Anything faster is overkill perceptually.
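In rough C++ the rotation could look like this (a simplified sketch to show the idea, not my actual implementation; names and memory-ordering details are illustrative):

#include <atomic>

struct RenderParams { float listenerPos[3]; /* per-source params ... */ };

// Three slots: one is the renderer's current interpolation target (read_),
// one holds freshly published data (pending_), one is being written (write_).
class TripleBuffer {
public:
    RenderParams& WriteSlot() { return slots_[write_]; }   // main thread

    // Main thread: publish the freshly written slot. If the renderer has not
    // consumed the previous one yet, the update is simply dropped.
    void Publish()
    {
        if (ready_.load(std::memory_order_acquire))
            return;                         // renderer still busy: skip
        pending_ = write_;
        write_ = 3 - pending_ - read_;      // the remaining free slot
        ready_.store(true, std::memory_order_release);
    }

    // Render thread: once the current interpolation is finished, adopt the
    // pending slot as the new interpolation target.
    bool AcquireTarget(RenderParams& target)
    {
        if (!ready_.load(std::memory_order_acquire))
            return false;
        read_ = pending_;
        target = slots_[read_];
        ready_.store(false, std::memory_order_release);
        return true;
    }

private:
    RenderParams slots_[3] = {};
    int write_ = 0, pending_ = 1, read_ = 2;
    std::atomic<bool> ready_{ false };      // the completion-signal variable
};

The render thread then interpolates from its current state toward the acquired target over the next few output buffers, and only calls AcquireTarget again once that interpolation is done.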

Edited by Aressera


... One set of parameters is the current rendering thread interpolation state, another set is the target interpolation state, and the third is where the main thread writes the next set of parameters. The key is to only rotate through the buffers once the rendering thread has finished the previous interpolation operation (requires another atomic variable to signal completion). If the main thread updates the parameters more often than that, the update is just ignored. You only need to update audio information at 10-15 frames/second anyway. Anything faster is overkill perceptually.

 

Thank you very much for the answer. I was wondering how much I should queue versus the update rate.

 

I've just done some searching on Google about the method you're using for sound propagation; it's fascinating, and more sophisticated than the panning algorithm I'm using. Thanks again for the tips.

 

Just out of curiosity, do you have a fixed compile-time limit on the number of sources you can use?

Edited by Laval B
