Sound in complex environments
Hi,
About sound... how do games manage sound in 'complex' environments? I have scenes with lots of walls, chambers, doors, air ducts, outside areas, and so on. With OpenAL and EAX I can modify the sound by looking at the sector where the sound was triggered (echoes in a bathroom, for example). But what about obstacles and adjusting the volume? For example, another guy is walking on a wooden floor near the player. There is a wall in between, so the footsteps should be less noisy. Or maybe there is an air duct between the two chambers, so the noise from the other room should have a 'metallic' sound.
I was thinking about using the sectors and portals that are also used for the rendering part. I could specify for each portal how much sound comes through (big open hole = 100%, thick glass or a closed door reduces it by 50%, etc.). This also allows me to adjust the volume reduction when a door closes/opens. The disadvantage is that not all sectors are directly connected via a portal, while there might be only a very thin wall in between.
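Roughly what I have in mind, as a sketch (the Sector/Portal structs here are made up for illustration, not actual engine code): a sound floods through the portal graph from its start sector, multiplying each portal's transmission factor along the way.

```cpp
#include <queue>
#include <vector>

// Hypothetical structures; names are illustrative only.
struct Portal { int targetSector; float transmission; }; // 1.0 = open hole, 0.5 = thick glass
struct Sector { std::vector<Portal> portals; };

// Propagate a sound's volume from its start sector through the portal graph.
// Returns the loudest volume reaching each sector (0 if inaudible).
std::vector<float> propagateSound(const std::vector<Sector>& sectors,
                                  int startSector, float volume, float cutoff = 0.05f)
{
    std::vector<float> heard(sectors.size(), 0.0f);
    heard[startSector] = volume;
    std::queue<int> open;
    open.push(startSector);
    while (!open.empty()) {
        int s = open.front(); open.pop();
        for (const Portal& p : sectors[s].portals) {
            float v = heard[s] * p.transmission;   // attenuate through the portal
            if (v > cutoff && v > heard[p.targetSector]) {
                heard[p.targetSector] = v;         // louder path found, keep flooding
                open.push(p.targetSector);
            }
        }
    }
    return heard;
}
```

A closing door would then just lower the transmission value of its portal at runtime, and the next propagated sound automatically gets quieter behind it.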
And then there is AI. Enemies should listen to the noise the player makes. So far I have a simple model where I check the distance between the sound source and all its listeners. If it's < "range", the sound will be heard. Not very realistic, of course. If there is a 5-meter concrete wall in between, the enemy can still hear my little farts. The other issue is speed. What if there are 200 enemies listening? Checking all of them can be quite some work, especially if I were to use a sector/portal tree to trace the sound 'path'... Any ideas?
Greetings,
Rick
For both performance and realism I split my game levels into simple sectors, and sounds have a sector radius. Only players within a sound's sector radius will hear it. It's faster than doing a distance check against every object for every sound.
As for the rest of your question, altering sounds based on the environment is pretty complex and game-dependent. A good starting place is zone-based sound settings. Most level editors that split levels into zones, like those for UT, HL, and Quake, allow you to alter the sound settings in each zone. It's a good way to do underwater effects, outdoor echoes, tunnels, etc.
If I were you, I'd first determine the accuracy that you actually need. A distance-only check is not the most realistic approach, but is that a problem if the results are fair enough (that is, the difference goes unnoticed by most players)? Combined with a modifier for direct line of sight, you'll probably have a pretty decent solution.
Anyway, if you go with your portal-based approach, will the thin-wall case really be an issue? If it's not directly connected via portals to the other side, do you even want sounds to affect enemies behind that wall? Will it result in interesting situations, gameplay-wise, or do you get a horde of enemies storming out of the house, so that, after the player has defeated them, there's no action anymore, no pacing, no tension, nada? Or, if it is important, isn't it possible to create audio-only portals?
As for having 200 enemies, how likely is that scenario? You seem to be going for smart enemies, so you probably won't need that many. Either way, some high-level culling (sectors, octree or grid nodes, etc.) should be sufficient to keep the number of checks down to reasonable numbers.
@FippyDarkpaw: I can't speak for Quake and UT, but HL didn't do zone splitting for its sound effects. It used ambience modifier entities that simply changed the ambience to their selected type as soon as the player got close enough to that entity. Well, you could call these spherical zones, but it's actually more of a trigger-based system.
The sound module is not aimed at one specific scenario, as I'd like to create a more general solution instead of re-programming the sound system again and again for my hobby projects. So, a Command & Conquer-like thing could use it as well (that's why I mentioned 200 enemies), although that could do with a far simpler model.
However, the first practical usage will be in a first-person game with a small number of smart enemies. Sneaking is an important element; that's why I'd like to pay extra attention to this. It's annoying that the enemies have ultra-sensitive eyes and ears in some games. Instead of attacking enemies, you should try not to alert them (push over a vase, slam a door, walk on an old wooden floor). If the enemy hears noise, he will try to track you. On the other hand, the player needs a realistic sound model as well. In order to flee from the enemies, it's important that I can hear them walking, for example, even if they are not in the same room.
"Audio portals" sound interesting, although I'm not sure yet how to create those. I mean, each and every wall is a potentional portal. Maybe instead I could define the connection between 2 sectors (and they don't have to be directly connected via a door). Sector A to B has a reduction of 30% on the volume... something like that. Basically each sector would get a list of "nearby" sectors and a volume reduction value, and eventually an sound modifier effect. When a sound is spawned in sector X, I only have to check for listeners inside the connected sectors. Only problem is that it's hard to tell how many sectors to connect(in which radius?)... Most of the sounds can only travel 1 or 2 sectors, but gunfire or explosions could have a far wider range. Maybe I should allow 2 types of sounds:
A. Soft/Medium sounds --> look via a sector relation table
B. Loud sounds --> just do the good old distance check
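As a rough sketch of that two-tier idea (all names hypothetical, and the relation table would be filled in per level):

```cpp
#include <cmath>
#include <map>
#include <utility>

// Hypothetical sketch of the two-tier scheme above; names are made up.
enum class Loudness { Soft, Loud };

struct Vec3 { float x, y, z; };

// Precomputed per level: volume factor from sector A to sector B (absent = inaudible).
using RelationTable = std::map<std::pair<int,int>, float>;

float audibleVolume(Loudness type, float volume,
                    int sourceSector, int listenerSector,
                    const Vec3& src, const Vec3& dst,
                    float range, const RelationTable& table)
{
    if (type == Loudness::Soft) {
        // A. Soft/medium sounds: look up the sector relation table.
        auto it = table.find({sourceSector, listenerSector});
        return (it != table.end()) ? volume * it->second : 0.0f;
    }
    // B. Loud sounds: the good old distance check with linear falloff.
    float dx = dst.x - src.x, dy = dst.y - src.y, dz = dst.z - src.z;
    float dist = std::sqrt(dx*dx + dy*dy + dz*dz);
    return (dist < range) ? volume * (1.0f - dist / range) : 0.0f;
}
```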
Portals like doors could increase/decrease the volume. Walking from one chamber into another can cause a sudden change in the volume, but okay... I don't think there is much I can do about that, and neither do other games, I guess. I don't know; this is the first time I'm looking deeper into sound.
Thanks for helping!
Rick
Another idea might be to use a deferred renderer to draw a cube map around the source. You could then use the depth buffer to test whether there's something between the source and the listener and make the sound more muffled. Material properties could be used to transform the sound in some way, like making it more metallic or cutting off higher frequencies. I haven't done audio programming either, but this is something I would try.
Quote:Original post by spek
The sound module is not directly for a specific scenario, as I'd like to create a more general solution instead of re-programming the sound system again and again for my hobby projects. So, a command & conquor kinda like thing could use it as well (that's why I mentioned 200 enemies), although it can do with a far more simple model in that case.
I don't see why an RTS should use the exact same sound system as an FPS. The underlying core could be the same (playing sound), but portal-based sound checks already sound quite specialized to me (and not really in the realm of actually playing sound - this is game-specific code that affects how sound will be played, and how the AI will react to it). Code reuse is a good thing, but don't do it where it doesn't make sense. Besides, if this is the first time you're doing more with sound, then use this project to explore this particular area. If you're still unfamiliar with something, you can't really develop a generic solution. That's something that takes experience (prototyping could be useful here!).
Quote:However, the first practical usage will be in a first person game with a small number of smart enemies. Sneaking is an important element, that's why I'd like to pay extra attention to this.
So, focus on those aspects. Don't try to generalize your sound (or actually, sound reaction) module, because it won't be used in a generic situation here.
Quote:"Audio portals" sound interesting, although I'm not sure yet how to create those. I mean, each and every wall is a potentional portal.
If you want it to apply to every wall, then my suggestion doesn't make much sense, because you'll be 'portalizing' half your level, at which point it's easier to just trace a ray and check how much air there is between the sound source and the listener, and what kind of material is obstructing them.
Of course, if the two are too far apart, there's no need to do these extra checks because the listener won't hear anything anyway. Either way, are you familiar with high-level optimizations such as space partitioning? This would be an excellent situation to apply them, to reduce the number of potentially expensive sound-listener checks.
Just a side note: if an RTS would require such detailed sound-reaction behavior, it could be useful to make squads the listeners, rather than individual soldiers.
Hi again,
I know methods like octrees, but I haven't used them in combination with sound so far. Maybe I'm wrong, but do you mean I could do raycasting for sound? Maybe it's not such a bad idea at all, since chances are that the listener count is really small. And with a couple of simple checks we can exclude far-away listeners...
There are a couple of problems though. Since the world will be pretty big, I'll have to share the graphics data with the sound data. So far the sound system is a DLL that can be used with any project. That is still possible, but the 'host program' has to provide specialized 'sound-raycast' functions in that case, which makes the code a little bit more spaghetti. If possible, I'd like to keep the sound module as stand-alone as possible. Loading that big world again in the sound module would be a waste though. It's either dirty-but-efficient or clean-but-memory-consuming programming...
Another little in-depth problem could be the raycasting itself. Basically I'll have to measure the air/solid balance between points A and B. But in my particular case it's quite hard to tell what is air and what is solid. Scenes are loaded from 3D files that also allow non-volumetric shapes (a quad for a window or fence, for example). When hitting a polygon, it doesn't automatically mean that I've entered "solid space". I could identify that with a material setting though.
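With a material setting it could look something like this (a made-up sketch with guessed transmission values; a window quad or fence is then just one hit with a factor, instead of an entry into solid space):

```cpp
#include <vector>
#include <cmath>

// Sketch of tagging hit polygons with a material flag. All names are hypothetical.
enum class Material { Air, Glass, Wood, Concrete, Metal };

struct RayHit { Material mat; };   // what a (hypothetical) raycast returns per polygon

// Transmission factor per material: how much sound passes through one surface.
float materialTransmission(Material m) {
    switch (m) {
        case Material::Glass:    return 0.6f;
        case Material::Wood:     return 0.4f;
        case Material::Concrete: return 0.05f;
        case Material::Metal:    return 0.2f;
        default:                 return 1.0f;  // air, or untagged geometry
    }
}

// Multiply the factors of everything the ray passed through between A and B.
float rayTransmission(const std::vector<RayHit>& hits) {
    float t = 1.0f;
    for (const RayHit& h : hits) t *= materialTransmission(h.mat);
    return t;
}
```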
Hmm, interesting. Say, do commercial games also use such complex models? I guess not, but then again, I really have no idea what they do use. The idea for my game hasn't really been made yet either, so I guess I'm slightly in unexplored space here :) Have to be careful not to be too ambitious and end up creating nothing though.
@antistar
Excuse me, I first thought you had posted in the wrong place. Deferred renderers, cube maps :) That's a creative way to look at it! Well, my renderer is very busy already, so rendering even more for a firing machine gun is probably not a good idea. But besides that, unlike visual stuff, sound can travel through walls. If I trigger an explosion, I would have to draw a big part of the surrounding world with transparency and depth peeling enabled... It's certainly an interesting approach, but I think it's too slow for practical usage right now.
Greetings,
Rick
Quote:Original post by spek
I know methods like octrees, but I haven't used them in combination with sound so far. Maybe I'm wrong, but do you mean I could do raycasting for sound?
No. Let me put this straight: how enemies react to sound has nothing to do with how the sound is actually played, e.g. how it comes out of your speakers. All you need to tell them is where that sound started and how loud it is. This is separate from actually playing the sound.
So, a grenade explodes. It tells the sound system to play a sound. It also tells the in-game sound reaction system that it's playing a sound at that location, with that volume. The sound reaction system checks what listeners are close enough to possibly hear it (it could use an octree or grid to quickly determine who is nearby). For those listeners, it performs a check, tracing a ray between them and the sound position, performing some logic that takes walls etc. into account, and then it informs those listeners of the sound event, and how loud it is to them (and possibly also what kind of sound) - if, of course, there's still some volume left.
Note that you can leave out the actual sound playing and still have enemies react correctly to sound events. Also note how you can play sounds without automatically having enemies react to it: just don't inform the sound reaction system.
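A minimal sketch of that flow (every name here is illustrative, and the occlusion test would really be delegated to your collision system):

```cpp
#include <functional>
#include <vector>
#include <cmath>

struct Vec3 { float x, y, z; };

struct Listener {
    Vec3 pos;
    float lastHeardVolume = 0.0f;
    void onSound(float volume) { lastHeardVolume = volume; }  // AI reacts here
};

// occlusion(src, dst) returns a factor in [0,1]: 1 = clear line, 0 = fully blocked.
void dispatchSoundEvent(const Vec3& src, float volume, float range,
                        std::vector<Listener>& listeners,
                        const std::function<float(const Vec3&, const Vec3&)>& occlusion)
{
    for (Listener& l : listeners) {
        float dx = l.pos.x - src.x, dy = l.pos.y - src.y, dz = l.pos.z - src.z;
        float dist = std::sqrt(dx*dx + dy*dy + dz*dz);
        if (dist >= range) continue;               // broad phase: too far to hear
        float v = volume * (1.0f - dist / range)   // distance falloff
                         * occlusion(src, l.pos);  // walls etc.
        if (v > 0.0f) l.onSound(v);                // inform the listener
    }
}
```

Note how neither the broad phase nor the occlusion factor involves the audio playback code at all.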
Quote:There are a couple of problems though. Since the world will be pretty big, I'll have to share the graphics data with the sound data.
Why would you need to share that data? What does the sound have to do with the graphics data? Only the sound reaction code needs to be informed about sound events (but, as I explained above, that doesn't mean it needs to know about the sound system) and it needs to perform some raytracing (which it can likely delegate to the collision system).
As for collisions, using graphics data for that may not always be such a good idea. Usually, this data contains quite a lot of detail - bushes, grass, pipes, and so on. This means you could have to check against a lot of polygons. This also means that players and enemies could be blocked by tiny stuff. Often, games use a (highly) simplified mesh for their collision tests (collision hull).
Now, professional game development teams that can afford to write their own tools can usually generate such hulls from their visual data. For single developers, that may not always be possible. Sometimes, it's worth it to create these hulls manually, but that's a tedious process and it's easy to forget to keep things synchronized. So, I'm just telling you this to make you aware of it. :)
Either way, it's hardly ever necessary to load the same data twice. If a sound reaction system needs to do some collision tests, let it ask the collision system. Don't load the collision data and write collision tests in the sound reaction system - that's both duplication of code and of data. Bad stuff indeed.
Quote:When hitting a polygon, it doesn't automatically mean that I've entered "solid space". I could identify that with a material setting though.
Using materials sounds like a nice solution to me. But hey, test it and see how well it works. :)
Quote:The idea for my game hasn't really been made yet either, so I guess I'm slightly in unexplored space here :) Have to be careful not to be too ambitious and end up creating nothing though.
Wise words. :)
For the client implementation (audio coming from the speakers):
I would just go with a normal raytrace from source to listener (using whatever collision system you are using). If it hits something, allow your collision data to have a property called 'obstruction' or something, and pass this value into the sound system, together with the distance.
Use this together with the portal system so you don't play sounds that the listener can't hear. (Doom 3 uses the portal system for optimizing audio as well.)
If this becomes a performance hit, you can build a simpler collision tree for the sound system.
For the monsters and their reactions you should just use RPG-style rules like Captain P suggested.
Sorry for the late reply, busy busy busy. It's rude forgetting to thank you guys :)
I think I'll go for the low-poly version of my world for 'sound-wave raytracing'. You're right about the lots o' work creating that second variant of the world, but I needed that mesh anyway (physics and realtime ambient lighting).
I'm still a little bit afraid of the raytracing performance. Calculating a couple of rays is no problem, but what if there are 4 persons (or more) firing a Gatling cannon? Of course, not every sound should perform a check, and enemies that have already been alerted don't have to listen continuously. For example, I could say that characters can only register 1 sound per x seconds, unless a much louder (higher-priority) sound is played. Together with some other tricks to prevent unnecessary rays, this should be a good solution, I think.
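The "one sound per x seconds, unless a louder one comes along" rule could be as simple as this (hypothetical sketch):

```cpp
// Per-character hearing throttle; all names are made up for illustration.
struct Ear {
    float cooldown = 0.0f;       // time (in seconds) until we listen again
    float lastPriority = 0.0f;   // priority of the last registered sound

    // Returns true if this character should run the (expensive) ray check.
    bool shouldRegister(float priority, float now, float interval = 1.0f) {
        if (now < cooldown && priority <= lastPriority)
            return false;        // still deaf to equal or quieter sounds
        cooldown = now + interval;
        lastPriority = priority;
        return true;
    }
};
```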
Since the collision data is used by the physics and the renderer as well, I still need to use the DLL in such a way that the data is shared. To keep things a little bit modular, I think I'll go for a callback function like
"int listener_CanHear( sourcePos, targetPos, maxRange ) // result > 0 means there is still 'volume left'"
I don't know; I'll have to look at that. This is the first time I'm NOT mixing all game elements (rendering, physics, sound, game-specific stuff) into 1 big messy 'engine', so it would be nice not to make both modules dependent on each other :)
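The callback idea could look roughly like this (just a sketch; SoundModule and the member names are made up, mirroring the listener_CanHear signature above):

```cpp
#include <functional>
#include <utility>

struct Vec3 { float x, y, z; };

// The sound DLL never sees the world geometry; the host program injects
// the raycast as a callback.
class SoundModule {
public:
    using CanHearFn = std::function<int(const Vec3&, const Vec3&, float)>;

    void setCanHearCallback(CanHearFn fn) { canHear = std::move(fn); }

    // result > 0 means there is still 'volume left' at the target.
    int volumeLeft(const Vec3& src, const Vec3& dst, float maxRange) const {
        return canHear ? canHear(src, dst, maxRange) : 0;  // no callback = hear nothing
    }
private:
    CanHearFn canHear;
};
```

This keeps the dependency one-directional: the host knows about the sound module, but the sound module only knows about a function signature.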
--edit--
I forgot 1 important detail here. About those 4 guys firing a Gatling gun... Enemy characters can skip sounds to prevent lots of ray-casting, but the player him/herself still has to check each and every sound in order to calculate the proper volume (and effect for the sound). Unless I make a caching system that reuses previous results or something.
Thanks for all the tips guys!
Rick
[Edited by - spek on December 2, 2008 1:27:48 PM]