Getting 32 independent audio channels out of a game engine

Recommended Posts


I am an audio researcher developing new audiovisual technologies, currently interested in new applications for games, especially in areas such as VR arcades, large immersive spaces, 360-degree installations, and even escape rooms.

I am wondering if anyone has ideas on how to get 32 or more independent channels of audio output from a game engine like Unity in real time. The channels would be spatially mapped to the XY coordinates of virtual objects on a screen, or to the XYZ coordinates within a spatial enclosure. The difficulty is that most game engines only allow fixed, pre-defined output formats such as stereo, 5.1 or 7.1.

I have an executive summary of the technology online at (the link also includes my email address): http://bit.ly/pixelphonics




I'd imagine a lot of sound APIs work like this: play sound at 3D position A, listener at 3D position B.

They do exactly as you propose and mix the sounds according to the speaker configuration they support. In theory it would be very easy for them to output the channels individually, because they compute them internally, but whether any of them expose this I don't know. If you want to find out whether you can get access to this data, you need to look at the docs of the various sound APIs that might be used in the games you are interested in. You are most likely to have success with something like OpenAL, because it is open source.
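As a rough illustration of the per-speaker mix such an API computes internally before collapsing to stereo/5.1/7.1, here is a minimal Python sketch. The inverse-distance rolloff and the power normalization are my assumptions for the example; real APIs offer several configurable distance models.

```python
import math

def speaker_gains(source_pos, speaker_positions, rolloff=1.0):
    """Gain of one sound source at each speaker position.

    source_pos and speaker_positions are (x, y, z) tuples in the same
    coordinate space. Each gain falls off with distance from the source
    (inverse-distance model assumed here); the gains are then normalized
    so total output power stays constant as the source moves.
    """
    gains = []
    for sp in speaker_positions:
        d = math.dist(source_pos, sp)
        gains.append(1.0 / (1.0 + rolloff * d))
    norm = math.sqrt(sum(g * g for g in gains))
    return [g / norm for g in gains]
```

With a conventional stereo or 5.1 layout the API only keeps 2 or 6 of these gains; nothing in the math itself limits the speaker count to that.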

You may also be able to write a compatibility/shim layer that intercepts calls to the sound API and does your own processing with them. But if the game doesn't use a shared library for sound, all bets are off.

You could do it as a proof of concept with an open source game; that is most likely to succeed, as there are a number of hurdles to overcome which you may not have the technical chops for (the fact that you ask the question suggests this).

Overall I'd question how 'innovative' this whole thing really is. That's not to say it isn't worth exploring, but it isn't new. The whole media-sound business is about exactly what you describe; the industry has simply figured out that it works better to have a few speakers and use the balance between them to position a sound (since we only have two ears). Having a bunch of speakers placed around an area or surface was probably the first thing they tried, and it has a number of disadvantages, as I'm sure you are aware.
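The balance trick described above is usually a constant-power pan law. A minimal Python sketch (mapping the pan position onto a quarter-circle is the standard convention, not anything specific to this thread):

```python
import math

def constant_power_pan(pan):
    """pan in [-1, 1]: -1 is full left, +1 is full right.

    Returns (left_gain, right_gain) with left^2 + right^2 == 1,
    so perceived loudness stays constant as the sound sweeps
    between the two speakers.
    """
    theta = (pan + 1.0) * math.pi / 4.0  # map [-1, 1] onto [0, pi/2]
    return math.cos(theta), math.sin(theta)
```

At pan 0 both speakers get about 0.707, which is the familiar -3 dB center.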


In Unity I suspect you could call GetOutputData on each AudioSource, but then you're left performing all your attenuation and panning yourself. I don't believe there is a way to create an arbitrary number of output channels based on different listening and output positions, which is a shame.
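If you did go the GetOutputData route, the per-object work each frame amounts to computing a gain per speaker and expanding each mono sample block into your 32 output channels. A hypothetical Python sketch of that expansion step (the function names and the interleaved output layout are my assumptions; a real Unity implementation would do this in C#, e.g. inside an OnAudioFilterRead callback):

```python
def mix_to_channels(mono_block, gains):
    """Expand one mono sample block into N output channels.

    mono_block: list of float samples for one source (e.g. what
    AudioSource.GetOutputData would hand you).
    gains: per-speaker gain list, one entry per output channel
    (e.g. 32), recomputed from the object's position each frame.
    Returns channels[ch][sample]; summing the results for all
    active sources gives the final multichannel mix.
    """
    return [[g * s for s in mono_block] for g in gains]

def interleave(channels):
    """Interleave channels[ch][n] into the frame-ordered stream
    that most multichannel audio device APIs expect."""
    out = []
    for frame in zip(*channels):
        out.extend(frame)
    return out
```

The hard part in practice is not this math but getting a 32-channel device exposed to the engine's output stage at all.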


