Several questions related to OpenAL

Started by
7 comments, last by Gorax 18 years, 5 months ago
I've been reading up on OpenAL documentation and tutorials, but I'm still left with a lot of questions that I haven't been able to find answers to. In particular...

1. ALUT is deprecated: so what's a good way to load WAV files? After wondering why no calls to ALUT functions appeared in the official documentation but showed up in the tutorials I found, I finally figured out from a mailing list that ALUT is deprecated. So what do you use to load WAV files in its place? Do you use a library, or just write your own decoder? (I'm looking for a good cross-platform solution here.)

2. Is there no single call to stop/pause all currently playing audio? I'm surprised that I couldn't find a call that would stop/pause all audio sources that are playing. Is there really no function to do this, meaning I'd have to iterate through all my active sources, figure out which are playing, and stop/pause those on an individual basis?

3. Changing global volume levels? This goes along with question #2. To increase/decrease the volume of all your audio, do you have no option other than iterating through all your sources and changing their gain appropriately?

4. Is there a limit on the number of sources you can have at one time? I read an anonymous post somewhere that said the number of sources you can create is very limited (the poster said he could only have a maximum of 31 at a time). Is this really true, and if so, why? I don't know much about audio devices/programming, so I don't quite understand the underlying mechanisms behind this limitation, if it exists (I have some guesses, though).

5. If the answer to the above is yes, what about multiple contexts? For example, if I can only have 31 active sources at a time, is that on a global basis or a per-context basis?

6. Can the number of channels of an audio file be changed at run-time? For example, suppose I have a dual-channel OGG file I want to load. Could I do something like `alBufferf(AL_CHANNELS, 1);` to make it act like single-channel audio, and thus allow 3D spatialization effects to apply to it? (I'm pretty sure the answer is no, but I haven't found any solid evidence to convince myself that I'm right.)

Phew, well that's a list for starters. I may have some more questions later on and I'll post them to this thread as they come to me. Thanks in advance! [smile]

Hero of Allacrost - A free, open-source 2D RPG in development.
Latest release June, 2015 - GameDev announcement

Quote:1. ALUT is deprecated: so what's a good way to load WAV files?

Is it now? All the demo files for the newest OpenAL SDK (1.1) still use alutLoadWAVFile/alutUnloadWAV, so I'd say, why not use it? I never had problems with it back in 1.0. That is, if you really want to use .wav files in your application/game. If you'd rather not use those functions, a good way to load and use WAV files is shown in the "OpenAL 1.1 SDK\samples\playstream\SourcePlayStream.cpp" file: it gives you the WAVE_Struct and then loads in the file. In that example it loads the file in portions for streaming purposes, but with very few changes you can make it load the entire file at once. Very straightforward, but if you need help with that, let us know.

Quote:2. Is there no single call to stop/pause all current playing audio?

No, there's not, but you don't need to go through all the trouble of checking which sources are playing and which are not. All you need to do is stop/pause *all* sources at once, then call the get-error function afterwards to clear any error from stopping/pausing a source that was not playing. My other post helping someone with OpenAL covers the system I promote when using OpenAL; with that, it'd be very easy to write this functionality in. Having done it myself, I'll tell you it's so trivial that there's no reason for OpenAL to provide a function for it, just because of how OpenAL is designed.
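A minimal sketch of that pattern. Since the real calls need a live OpenAL context, `FakeAL` stands in for the AL error state here, and `sourceStop`/`getError` play the roles of `alSourceStop`/`alGetError`; everything else is hypothetical scaffolding for illustration:

```cpp
#include <cassert>
#include <vector>

// Stand-ins for the real OpenAL calls so this sketch runs without an AL
// context. SourceHandle plays the role of ALuint.
using SourceHandle = unsigned int;

struct FakeAL {
    std::vector<SourceHandle> stopped;  // records which handles we stopped
    int lastError = 0;                  // stand-in for the AL error state
    void sourceStop(SourceHandle s) { stopped.push_back(s); }
    int getError() { int e = lastError; lastError = 0; return e; }  // reading clears it
};

// Drew's suggestion: stop every source you own, then read the error state
// once to discard any error left by stopping a source that wasn't playing.
void stopAll(FakeAL& al, const std::vector<SourceHandle>& sources) {
    for (SourceHandle s : sources)
        al.sourceStop(s);
    al.getError();  // clear, per the advice above
}
```

In real code the loop body would be `alSourceStop(s);` followed by a single `alGetError();` after the loop.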

Quote:3. Changing global volume levels?

Same as number 2. The main reason there are no global functions is that OpenAL neither stores nor cares how you store your sources or how many you have. You can think of it as a C-style design, where you have to make it work in an OOP style yourself. Once again, very trivial to implement.

Quote:4. Is there a limited number of created sources you can have at one time?

Yes, read my other link. I've found that 16 works well for just about everyone. I know my computer can do 32, others can do 48, and one person could even do 64! If you want to be safe, use 16, but you can easily make a dynamic system that adapts to each computer. The process is: allocate one source, check for an error, and repeat until you get an error. Then you just store all those sources in an array and that's it.
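That "allocate until it fails" loop can be sketched like this. The `genOne` callback stands in for a wrapper around `alGenSources(1, &src)` plus an `alGetError()` check, and the `hardCap` safety net is a hypothetical addition, not something OpenAL requires:

```cpp
#include <cassert>
#include <cstddef>
#include <functional>
#include <optional>
#include <vector>

// "Allocate one source, check for an error, repeat until you get an error."
// genOne returns std::nullopt when allocation fails; hardCap is a
// hypothetical upper bound so a permissive driver can't run away.
std::vector<unsigned> allocateAllSources(
        const std::function<std::optional<unsigned>()>& genOne,
        std::size_t hardCap = 256) {
    std::vector<unsigned> sources;
    while (sources.size() < hardCap) {
        std::optional<unsigned> s = genOne();
        if (!s) break;          // allocation failed: the pool is full
        sources.push_back(*s);
    }
    return sources;
}
```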

Quote:5. If the answer to the above is true, what about multiple contexts?

You can only have one context as far as I know. I've tried to mess with having more than one, and had nothing but crashes and failures. You can play around with it some more if you want, but I was unsuccessful.

Quote:6. Can the number of channels of an audio file be defined at run-time?

Not sure! I worked with OpenAL for about 3-5 months at the beginning of this year, but I never used that functionality. I say, try it and see [smile]

Quote:Phew, well that's a list for starters. I may have some more questions later on and I'll post them to this thread as they come to me.

We'll be around for when you do [wink]
Thanks a lot Drew! That is the most helpful reply I've ever received on these forums. [grin] Wow, I'm glad you told me all that, because it completely changes the way I need to think about designing this new audio engine. I figured there would be a limit on the number of sources that could play simultaneously, but I never would have imagined there was such a low limit on just creating them. This also (almost) eliminates my need for multiple contexts. But I think I have a few good ideas for abstracting things enough that they're easy to manage, borrowing some of your ideas from that other post. [wink]


1) After creating a device and context, create sources until you can't anymore (you get an error) and stick those in a vector, save for one which will serve as a source for music.

2) Allow the user to create instances of a "SoundObject" class, which holds the OpenAL buffer data. Initially, no buffers have sources allocated to them.

3) When the user wants to play the audio in a SoundObject, first we check if we have a valid source assigned to it. If not, we call the AudioManager (a singleton class representing the audio engine core) to get a source for it. (We'll assume it always returns a source that isn't playing, since I doubt we'll need more than 15 active sounds playing in my game).

4) Keep a map of SoundObject pointers, with the filenames of the data they reference as the key. This way we'll be sure to never load a sound more than once.
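Steps 1-4 above could be sketched roughly as follows. All the names (`AudioManager`, `SoundObject`, the filenames) are hypothetical, and plain unsigned ints stand in for OpenAL source/buffer handles so the sketch runs without an AL context:

```cpp
#include <cassert>
#include <map>
#include <memory>
#include <string>
#include <vector>

struct SoundObject {
    unsigned buffer;        // would hold the ALuint buffer id
    unsigned source = 0;    // 0 == no source currently assigned (step 2)
};

class AudioManager {
public:
    // Step 1: constructed with whatever pool of sources we managed to create.
    explicit AudioManager(std::vector<unsigned> pool) : _free(std::move(pool)) {}

    // Step 4: map keyed by filename, so no sound is ever loaded twice.
    SoundObject* getSound(const std::string& filename) {
        auto it = _sounds.find(filename);
        if (it != _sounds.end()) return it->second.get();
        auto obj = std::make_unique<SoundObject>();
        obj->buffer = _nextBuffer++;          // real code would load the file here
        return (_sounds[filename] = std::move(obj)).get();
    }

    // Step 3: hand out a free source on demand.
    bool acquireSource(SoundObject& s) {
        if (s.source != 0) return true;       // already has one
        if (_free.empty()) return false;      // pool exhausted
        s.source = _free.back();
        _free.pop_back();
        return true;
    }

private:
    std::vector<unsigned> _free;
    std::map<std::string, std::unique_ptr<SoundObject>> _sounds;
    unsigned _nextBuffer = 1;
};
```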


I was also thinking of keeping a stack of "virtual" contexts, which do nothing more than save the current listener attributes and which sources point to which buffers. This way when we do a game state transition (from map exploration to a battle for example), when we return to the map mode we can restore the audio state as we left it. I'm wondering if this would do any good though, because if I return to map mode and try to play a sound, I'll just see that it doesn't have a source anyway and grab one. It might be useful as far as saving/restoring the listener properties and distance model though.


There's also the question of how to assign sources to buffers. The simplest way would be to keep a counter that iterates through and wraps around a vector, but that would be a performance hit because we could be swapping frequently used sounds in and out unnecessarily. I think the best algorithm would be an LRU scheme: whenever a source gets played, you update its timestamp to now, and when you need to evict a source, you choose the oldest one.
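The timestamp-based LRU idea above might look like this; `LruSourcePool` is a hypothetical name, and plain unsigned ints again stand in for source handles:

```cpp
#include <cassert>
#include <cstdint>
#include <map>

// Track a monotonically increasing "clock" stamp per source; evict the
// source with the smallest (oldest) stamp.
class LruSourcePool {
public:
    void touch(unsigned source) { _lastUsed[source] = _clock++; }  // "played now"

    // Pick the source with the oldest timestamp for eviction.
    unsigned evictOldest() const {
        unsigned victim = 0;
        std::uint64_t oldest = UINT64_MAX;
        for (const auto& [src, t] : _lastUsed)
            if (t < oldest) { oldest = t; victim = src; }
        return victim;
    }

private:
    std::map<unsigned, std::uint64_t> _lastUsed;
    std::uint64_t _clock = 0;
};
```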


Ok I'm done with my babbling about that. [smile] Here's another question. With all this source sharing being done, unless you're careful you're going to be accidentally sharing source properties of the buffer which last held the source. For example, if buffer A sets the source position to {xA, yA, zA} and later buffer B gets control of the same source, that source's position will still be {xA, yA, zA}. So, to prevent this do you either:

1) Reset all source properties to an initial state once a new buffer gains control of it? (Seems like it would be a costly thing to do!)

2) Delete and then re-generate the source to do an easy reset of the source properties to its default values?

3) Something else I haven't thought of?


Thanks once again. If I was able to give you rep more than once for that post, I would have. [lol]


Sorry for the double post, but I just thought of something. Would changing the gain attribute of the listener object be equivalent to changing the global volume? If not, then what is listener gain for?


Quote:Original post by Roots
1) After creating a device and context, create sources until you can't anymore (you get an error) and stick those in a vector, save for one which will serve as a source for music.

2) Allow the user to create instances of a "SoundObject" class, which holds the OpenAL buffer data. Initially, no buffers have sources allocated to them.

3) When the user wants to play the audio in a SoundObject, first we check if we have a valid source assigned to it. If not, we call the AudioManager (a singleton class representing the audio engine core) to get a source for it. (We'll assume it always returns a source that isn't playing, since I doubt we'll need more than 15 active sounds playing in my game).

4) Keep a map of SoundObject pointers, with the filenames of the data they reference as the key. This way we'll be sure to never load a sound more than once.


Sounds great! If you really wanted to get creative, you could also expand your system to have a set of channels reserved for 'important' sounds. Basically, if an important sound needs to play, it will kick out one of the less important sounds, should there happen to be contention over the 16 sources.

Quote:I was also thinking of keeping a stack of "virtual" contexts, which do nothing more than save the current listener attributes and which sources point to which buffers. This way when we do a game state transition (from map exploration to a battle for example), when we return to the map mode we can restore the audio state as we left it. I'm wondering if this would do any good though, because if I return to map mode and try to play a sound, I'll just see that it doesn't have a source anyway and grab one. It might be useful as far as saving/restoring the listener properties and distance model though.


That sounds good, but also consider this: if you don't make your sound class a singleton, you could instead make two audio managers, one representing exploration and one the battle. Then, in your game logic, you just use a pointer to a sound manager rather than a specific instance, so you can easily switch between the two by simply reassigning the pointer. Something you could try; I'm not sure how well it'll work.

Quote:There's also the question of how to assign sources to buffers. The simplest way would be to keep a counter that iterates through and wraps around a vector, but that would be a performance hit because we could be swapping frequently used sounds in and out unnecessarily. I think the best algorithm would be an LRU scheme: whenever a source gets played, you update its timestamp to now, and when you need to evict a source, you choose the oldest one.

What I had done a while back was make a function, GetNextSource(), that simply iterates through the sources and returns the first available one. If none was available, then based on settings, it would either add the sound to a queue or just drop it altogether. I'd say do that first, then profile the performance, then see if you can speed it up with your LRU idea. You always want a baseline to compare against.

Quote:Here's another question. With all this source sharing being done, unless you're careful you're going to be accidentally sharing source properties of the buffer which last held the source. For example, if buffer A sets the source position to {xA, yA, zA} and later buffer B gets control of the same source, that source's position will still be {xA, yA, zA}. So, to prevent this do you either:

1) Reset all source properties to an initial state once a new buffer gains control of it? (Seems like it would be a costly thing to do!)

2) Delete and then re-generate the source to do an easy reset of the source properties to its default values?

3) Something else I haven't thought of?


Well, you definitely do not want to do #2; remember, we already created all our sources at the beginning [wink] OpenAL is weird in that sense: you can generate all your sources, delete them all, and NOT be able to generate them again. I had that problem when I last used it; you could check whether it still exists. If it doesn't happen for you, make sure to test across various computers to rule out a hardware issue.

I'd say use #1 and #3, hehe. What you can do is add an extra flag to the sound object, doReset, initially set to false. If you call a function that modifies the source, you set that flag to true. Then, before you change sources, reassign one, or stop one (thus removing the buffer from it), you can do if(doReset){ reset_source(); }. That avoids the performance hit of needlessly resetting a source and leaves only the necessary work.
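A small sketch of that doReset flag. `reset_source()` is a hypothetical helper that would restore default OpenAL source properties (position, gain, etc.); here a counter stands in for it so the sketch is self-contained:

```cpp
#include <cassert>

// Only pay for a reset when the source was actually modified.
struct PooledSource {
    unsigned handle = 0;
    bool doReset = false;   // set whenever a property is changed
    int resetCount = 0;     // for illustration: how many resets actually ran

    // Any property setter flips the flag; real code would also call
    // alSource3f(handle, AL_POSITION, x, y, z) here.
    void setPosition(float, float, float) { doReset = true; }

    // Called before the source is handed to a new buffer.
    void prepareForReuse() {
        if (doReset) {
            resetCount++;   // reset_source() would go here
            doReset = false;
        }
    }
};
```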

Quote:Would changing the gain attribute of the listener object be equivalent to changing the global volume? If not, then what is listener gain for?


Hmmm, I wasn't sure, so after a little searching, here's something interesting:
Quote:
Description: GAIN defines a scalar amplitude multiplier. As a Source
attribute, it applies to that particular source only. As a Listener
attribute, it effectively applies to all Sources in the current Context.
The default 1.0 means that the sound is un-attenuated. A GAIN value of 0.5
is equivalent to an attenuation of 6 dB. The value zero equals silence (no
output). Driver implementations are free to optimize this case and skip
mixing and processing stages where applicable. The implementation is in
charge of ensuring artifact-free (click-free) changes of gain values and is
free to defer actual modification of the sound samples, within the limits
of acceptable latencies.

GAIN larger than 1 (amplification) is permitted for Source and Listener.
However, the implementation is free to clamp the total gain (effective gain
per source times listener gain) to 1 to prevent overflow.


So you might have to play around with that to see whether it works correctly. I'm not sure what has been fixed or updated in 1.1 yet, so also take a look at the change log for anything that might be addressed. Good luck!
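In code terms, the spec excerpt above boils down to a multiply with an optional clamp. A tiny sketch (the clamp is something an implementation *may* do, per the spec, not something it must):

```cpp
#include <algorithm>
#include <cassert>

// Listener gain multiplies every source's gain; an implementation is free
// to clamp the product to 1.0 to prevent overflow.
float effectiveGain(float sourceGain, float listenerGain, bool clampToOne = true) {
    float g = sourceGain * listenerGain;
    return clampToOne ? std::min(g, 1.0f) : g;
}
```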
Quote:Original post by Drew_Benton
Sounds great! If you really wanted to get creative, you could also expand your system to have a set of channels reserved for 'important' sounds. Basically, if an important sound needs to play, it will kick out one of the less important sounds, should there happen to be contention over the 16 sources.


Yeah I thought about that, but I don't think I want to assign any priority to certain sounds, because that just becomes more work for the API user, and plus it is very difficult to gauge how important sounds are relative to one another. If we (somehow) get 16 sources playing at a time and another piece of audio requests to play that doesn't have a source, I just won't play it.


Quote:
That sounds good, but also consider this: if you don't make your sound class a singleton, you could instead make two audio managers, one representing exploration and one the battle. Then, in your game logic, you just use a pointer to a sound manager rather than a specific instance, so you can easily switch between the two by simply reassigning the pointer. Something you could try; I'm not sure how well it'll work.


Yeah that's an idea, but I'd rather stick with only having one audio manager singleton class. That's just the way the rest of our engine works and I'd like to stick to convention. [smile]

Quote:What I had done a while back was make a function, GetNextSource(), that simply iterates through the sources and returns the first available one. If none was available, then based on settings, it would either add the sound to a queue or just drop it altogether. I'd say do that first, then profile the performance, then see if you can speed it up with your LRU idea. You always want a baseline to compare against.


Yeah I might go ahead and do that, but if you think about it LRU really shouldn't be that expensive to implement. When a source is either played or assigned to a buffer, that source becomes the first element of the vector (MRU). When a sound buffer wishes to play and doesn't have a source assigned to it, it takes the last element of the vector (LRU), unless that element is playing (and if it is playing, usually that means that every other source is playing).
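The move-to-front variant described above could be sketched like this: keep the vector in MRU order, so the back is always the least recently used source (handles are plain ints for illustration):

```cpp
#include <algorithm>
#include <cassert>
#include <vector>

// When a source is played or assigned, move it to the front (MRU).
void markUsed(std::vector<unsigned>& sources, unsigned src) {
    auto it = std::find(sources.begin(), sources.end(), src);
    if (it != sources.end()) {
        sources.erase(it);
        sources.insert(sources.begin(), src);  // becomes MRU (front)
    }
}

// The least recently used source sits at the back.
unsigned takeLru(const std::vector<unsigned>& sources) {
    return sources.back();
}
```

Note the erase/insert pair is O(n) per play; for the 16-64 sources discussed in this thread, that cost is negligible.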

Quote:
Well, you definitely do not want to do #2; remember, we already created all our sources at the beginning [wink] OpenAL is weird in that sense: you can generate all your sources, delete them all, and NOT be able to generate them again. I had that problem when I last used it; you could check whether it still exists. If it doesn't happen for you, make sure to test across various computers to rule out a hardware issue.

I'd say use #1 and #3, hehe. What you can do is add an extra flag to the sound object, doReset, initially set to false. If you call a function that modifies the source, you set that flag to true. Then, before you change sources, reassign one, or stop one (thus removing the buffer from it), you can do if(doReset){ reset_source(); }. That avoids the performance hit of needlessly resetting a source and leaves only the necessary work.


Awesome, that sounds like a great solution. [grin]



Quote:
Hmmm, I wasn't sure so after a little searching, here's something interesting:
Quote:
Description: GAIN defines a scalar amplitude multiplier. As a Source
attribute, it applies to that particular source only. As a Listener
attribute, it effectively applies to all Sources in the current Context.
The default 1.0 means that the sound is un-attenuated. A GAIN value of 0.5
is equivalent to an attenuation of 6 dB. The value zero equals silence (no
output). Driver implementations are free to optimize this case and skip
mixing and processing stages where applicable. The implementation is in
charge of ensuring artifact-free (click-free) changes of gain values and is
free to defer actual modification of the sound samples, within the limits
of acceptable latencies.

GAIN larger than 1 (amplification) is permitted for Source and Listener.
However, the implementation is free to clamp the total gain (effective gain
per source times listener gain) to 1 to prevent overflow.


So you might have to play around with that to see if it will work correctly or not. I'm not sure what has been fixed or updated in 1.1 yet, so so also take a look at the change log for anything that might be addressed. Goodluck!


Hmm interesting. Okay, I'll play around with this once I get the rest of my engine put together and see what happens (and post what I find back here). Thanks for finding that! [wink]


Okay, I'm ironing things out in the design now and things are coming along well, but I have one feature that I'm not sure if it would be a good thing to have. First let me ask a simpler, faster question though. [wink]

Question (1): Ogg looping in OpenAL
My composers want to be able to create their music so that it plays an initial portion until the end, then loops back to some part in the middle of the song and loops from that point until the end over and over. For example, imagine we have the following song.

              A      B                C
Music: {start} ------------------------- {end}


What we want to do is have the song play like this:
ABC
BC
BC
BC
...

I didn't see any easy way to do that with OpenAL in the documentation. Is this something that can be accomplished with the Ogg Vorbis libraries? For instance, could I set a callback function so that when the music sample finishes playing, it sees "Oh, okay, I'm supposed to seek to position B now and keep playing"? If it's not possible there either, I have enough functionality already to implement this myself, I guess, by setting a property on a music source and constantly querying it during play, so that I can seek and resume when it reaches the stopped state. Just curious. [smile]
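The "query during play" fallback amounts to tracking the playback cursor yourself and wrapping it back to B instead of the start when the track ends. A minimal sketch with hypothetical sample positions (no actual audio decoding involved):

```cpp
#include <cassert>

// Intro-then-loop playback: play {start}..C once, then loop B..C forever.
struct LoopingTrack {
    long loopStart;   // position of B, in samples
    long length;      // position of C (the end), in samples
    long cursor = 0;  // current playback position

    // Advance by n samples, wrapping into the [loopStart, length) region.
    void advance(long n) {
        cursor += n;
        while (cursor >= length)
            cursor = loopStart + (cursor - length);
    }
};
```

With libvorbisfile, the wrap step would be a seek call back to the loop point rather than arithmetic on a counter.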


Question (2): Saving and Restoring Audio State; A Worthy Investment?
The issue is with saving a snapshot of the audio state. The way my game engine works (in general) is that there is a game mode stack, where the top-most stack element is the active game state. Game modes may be for maps, battles, world exploration, etc., and as you might guess each will want their own audio state. So what I am working on implementing is a call that saves the audio state for when that game mode state change happens. The things saved are:

- The listener attributes
- The attenuation distance model
- A list of which sources are assigned to which buffers
- All source attributes
- For sources which were playing, the position they were playing at


Then when the game mode that saved the state is restored as the active game mode on the stack, it can restore all this state information. The advantage to this is that we can resume the audio of that game mode as if it was never interrupted. And also by saving the state, we allow the next game mode to have full-use of the maximum number of available sources.


But I think I might be overdoing this. That seems like *a lot* of information to save (the delay could of course be hidden by loading times between game modes), and I'm also wondering if it makes any sense. If I just stop all audio when a game mode change happens, the next game mode will still have all the sources available. It's just that when returning to the original state, I'd have to re-assign sources and their properties and begin playing audio from the starting position again.


Hmm I dunno. I'm just thinking out loud here. [smile] But if you have any comments on that idea I welcome it.


Here's something interesting I found out. Unlike what has been said around these forums, I can create thousands of sources in my OpenAL app at a time, and never have a problem with creation. I was totally like WTF, so I asked in the #openal channel and a person there told me that "yeah, that's more of a windows restriction" (I run Debian Linux).


Stupid windows makes my job of writing this engine so much harder. Argh. Well I guess instead of trying to create sources until I run out, when I get up to 128 or so, that should be enough. This engine is going to be performing much better on Linux than on Windows. [grin]


Quote:Original post by Roots
Question (1): Ogg looping in OpenAL
              A      B                C
Music: {start} ------------------------- {end}

What we want to do is have the song play like this:
ABC
BC
BC
BC
...


Easy enough to do: just create two buffers (A and BC), queue both (giving ABC), then queue BC again whenever the number of queued-but-unprocessed buffers drops to zero (i.e., while BC is playing). Obviously you'd want the BC buffer to be at least 3 seconds long, so that if there are any delays in your app you don't have to tell the source to start playing again. Alternatively, you can queue both, then once the intro has been processed, remove the processed buffer (A in this case) and set the source to loop (although I'm not too sure how that method would turn out).
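The queueing trick can be simulated without any audio at all; here string labels stand in for OpenAL buffer handles, and `update()` models the periodic check an app would do with `alGetSourcei(..., AL_BUFFERS_QUEUED, ...)` and `alSourceQueueBuffers`:

```cpp
#include <cassert>
#include <deque>
#include <string>
#include <vector>

struct MusicQueue {
    std::deque<std::string> pending;         // queued, not yet played
    std::vector<std::string> played;         // processed buffers, in order

    void start() { pending = {"A", "BC"}; }  // queue both -> plays A, B, C

    // Called periodically from the update loop. For the simulation, each
    // call "finishes" one buffer; when the queue runs dry, append another BC.
    void update() {
        if (!pending.empty()) {
            played.push_back(pending.front());
            pending.pop_front();
        }
        if (pending.empty())
            pending.push_back("BC");         // keep the loop section coming
    }
};
```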

Quote:
Question (2): Saving and Restoring Audio State; A Worthy Investment?


Most games avoid the problems you've raised by ensuring the game is either saved only in certain places, or that everything happening at the moment of the save is simply disregarded (explosions that occur mid-save don't show up when the game is loaded). Besides, if there are explosions going off mid-save that would kill you once the save completes, there's no way you'd avoid dying if the explosion continued after loading, so chances are somebody will fall victim to it somewhere along the line. If you can find a good reason to save full audio state, that would justify doing it, but in most cases it would be unnecessary, and probably annoying in the long run. Then again, you could do it just to prove you can.

