Starfox

Developing a next gen audio api/pipeline


Hi everyone, I've been working on a new audio API, mainly due to the lack of portable programmable DSP effects in other APIs. I tried to add the programmable part as a bolt-on to OpenAL, but it seemed to be too much hassle, and it adds another layer of indirection on platforms that have non-native OpenAL libraries. While working on this, however, I ran into a pipeline-design problem. So far I have four proposed pipeline layouts, so would some audio gurus out there care to take a look and comment?

Proposed pipeline layouts so far:

TIA

Too much like a game of 'spot the difference'. What are the key differences between each approach?

The key difference lies in the first half of the pipeline, from the start until the point where the samples are handed off to the fixed-function processing.

In all approaches, source (or stream, more on that later) shaders can read from uniforms and constant buffers, and can read from and write to "varying" buffers that persist between invocations.
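To make that execution model concrete, here is a minimal C++ sketch of the per-invocation state. All of the names (`Uniforms`, `VaryingBuffer`, `run_source_shader`) and the one-pole filter body are illustrative assumptions, not part of the proposed API:

```cpp
#include <cassert>
#include <vector>

// Read-only parameters set by the host application (hypothetical fields).
struct Uniforms {
    float gain = 1.0f;
};

// A "varying" buffer: read/write state that persists between shader
// invocations, e.g. filter history for a custom DSP effect.
struct VaryingBuffer {
    std::vector<float> state;
};

// One shader invocation: applies the gain and averages with the previous
// sample kept in the varying buffer, so state carries over between calls.
float run_source_shader(const Uniforms& u, VaryingBuffer& v, float in_sample) {
    if (v.state.empty()) v.state.push_back(0.0f);
    float scaled = in_sample * u.gain;
    float out = 0.5f * (scaled + v.state[0]);
    v.state[0] = scaled;  // persists to the next invocation
    return out;
}
```

The point of the sketch is only that the varying buffer outlives a single invocation, which is what distinguishes it from a plain shader-local temporary.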

In the first approach, a source runs a single source shader, that can sample from N audio buffers (think a standard OpenAL source with more than one buffer bound to it).

In the second approach, the source shader has one set of samples as input, and that set comes from the result of blending samples from multiple audio buffers with a user-defined weight for each buffer.
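The difference between the first two layouts can be sketched like this; the buffer representation, the weight handling, and the shader bodies are all assumptions made for illustration, not the actual design:

```cpp
#include <cassert>
#include <cmath>
#include <cstddef>
#include <vector>

using Buffer = std::vector<float>;

// Approach 1: the source shader itself samples from N bound buffers and
// decides how to combine them (here it simply picks the loudest sample).
float shader_approach1(const std::vector<Buffer>& buffers, std::size_t pos) {
    float best = 0.0f;
    for (const Buffer& b : buffers)
        if (pos < b.size() && std::abs(b[pos]) > std::abs(best))
            best = b[pos];
    return best;
}

// Approach 2: fixed-function blending happens *before* the shader runs,
// using a user-defined weight per buffer...
float blend_inputs(const std::vector<Buffer>& buffers,
                   const std::vector<float>& weights, std::size_t pos) {
    float mixed = 0.0f;
    for (std::size_t i = 0; i < buffers.size(); ++i)
        if (pos < buffers[i].size()) mixed += weights[i] * buffers[i][pos];
    return mixed;
}

// ...so the shader only ever sees one pre-mixed input stream.
float shader_approach2(float premixed_sample) {
    return premixed_sample;
}
```

In the first layout the shader has full freedom over how the N buffers combine; in the second, that combination is fixed-function and the shader's job shrinks to per-sample processing.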

In the third approach, a source (emitter) consists of N streams. Each stream samples from exactly ONE audio buffer, and each runs a possibly different stream shader. The samples output by all the streams are fed into a blending unit, with a weight for each, and merged accordingly.

The fourth approach is the same as the third, except that the blending unit is replaced with a summing unit (think of it as all blend weights fixed at 1). Any per-stream weight can instead be applied inside the stream shader.
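Layouts three and four might look like this from the host side; `Stream`, the shader signature, and the merge functions are hypothetical names for illustration only:

```cpp
#include <cassert>
#include <cstddef>
#include <functional>
#include <vector>

using StreamShader = std::function<float(float)>;

struct Stream {
    std::vector<float> buffer;  // each stream samples exactly ONE buffer
    StreamShader shader;        // and may run a different stream shader
};

// Approach 3: run every stream's shader, then a fixed-function blending
// unit merges the outputs with a per-stream weight.
float emit_approach3(const std::vector<Stream>& streams,
                     const std::vector<float>& weights, std::size_t pos) {
    float out = 0.0f;
    for (std::size_t i = 0; i < streams.size(); ++i)
        out += weights[i] * streams[i].shader(streams[i].buffer[pos]);
    return out;
}

// Approach 4: the blending unit becomes a plain summing unit; if a weight
// is wanted, the stream shader applies it itself before outputting.
float emit_approach4(const std::vector<Stream>& streams, std::size_t pos) {
    float out = 0.0f;
    for (const Stream& s : streams)
        out += s.shader(s.buffer[pos]);
    return out;
}
```

With identity shaders and external weights, approach 3 produces the same result as approach 4 with the weights folded into the shaders, which is exactly the trade-off being asked about: fixed-function weighting versus shader-side weighting.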

Let me know if anything needs more explanation.

TIA

