Delay in hearing sound

Started by
5 comments, last by Grecco 16 years, 9 months ago
Imagine an aircraft starts flying at 3400 meters from you and flies toward you at 340 m/s (roughly the speed of sound). In this case you shouldn't hear the sound of its engines for the first 10 seconds. But when I implement this with DirectSound, the sound of the aircraft's engines is heard before those 10 seconds have passed. Could you help me?
Quote:Original post by ahmadi86
but when I implement this with DirectSound, the sound of the aircraft's engines is heard before those 10 seconds have passed. Could you help me?
And how do you implement it in DirectSound? Can we either see some code, or an explanation of which functions and algorithms you use?
I just instantiate a sound source and a listener, then I set their positions and velocities at each update interval.
Quote:Original post by ahmadi86
I just instantiate a sound source and a listener, then I set their positions and velocities at each update interval.
Well if you want a delay to match some form of physics-based simulation then you need to implement it yourself. I'm not aware of DirectSound having a delay feature built in as it's trivial to just, well, start playing the sound N milliseconds later [smile]
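To illustrate the "just start playing the sound N milliseconds later" approach: a minimal sketch (my own names, not actual DirectSound calls) that computes the propagation delay from the source's distance at emission time and tells you when to start the buffer:

```cpp
// Sketch only: compute the propagation delay yourself, then start the
// DirectSound buffer once that delay has elapsed. Names are hypothetical.
#include <cassert>

const double SPEED_OF_SOUND = 340.0; // m/s, matching the example in the thread

// Delay (in seconds) before a listener hears a sound emitted 'distance' meters away.
double propagationDelay(double distance)
{
    return distance / SPEED_OF_SOUND;
}

// Call each frame; returns true once the buffer should start playing.
// 'emissionTime' and 'now' are simulation times in seconds.
bool shouldStartPlaying(double emissionTime, double distanceAtEmission, double now)
{
    return (now - emissionTime) >= propagationDelay(distanceAtEmission);
}
```

With the numbers from the original post (3400 m, 340 m/s), `propagationDelay` returns 10 seconds, so the engine sound would not start until 10 seconds of simulation time have passed.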

Or am I misunderstanding you?

Jack

Jack Hoxley [ Forum FAQ | Revised FAQ | MVP Profile | Developer Journal ]

The velocity and position of buffers and listeners only affect Doppler effects and volume / pan, respectively. DirectSound isn't a proper simulation of the real world. If you want delay added, you'll have to do that yourself; which, as Jack said, is pretty trivial.
And to say it differently:

To DirectSound you are not saying, as you add sounds, "these sounds started here at this time" ... you are saying "the listener (wherever they are) is currently experiencing these sounds, at these locations."

So it gives you part of a real-world simulation (amplitude loss over distance and the Doppler effect due to velocity).

There may also be some material and reflection approximations ... but the real world has many complex effects (reverb, material- and space-based shifts, and delay due to distance) that DirectSound simply doesn't address.

Realize that if it did do this, you'd need the ability to override it (because when I want everyone in my game to hear the voice of god in their head from above, I don't want to have to synchronize this effect by traveling backwards in time to start it :).

But it sure would be awesome if such a feature were optionally available.
I think I have encountered this issue before, so let me outline my solution; you might find it helpful:

On top of DirectSound, I implemented a kind of 'SoundManager' system that tracks what sounds exist in the simulated world, and who can hear them. In case of the player, the sounds audible to him/her are then played by DirectSound.

To have the effect you described, sounds are modelled as follows:
When starting, a sound source's position is noted and a sphere is created that expands at the speed of sound (the soundfront). Listeners not in that sphere cannot (yet) hear the sound.

If the sound is non-continuous, the moment it stops, another sphere is created that also expands at the speed of sound (the soundback). If a listener is between those two spheres, he/she will hear the sound. The exact location determines which part of the sound is heard.
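The soundfront/soundback idea above can be sketched as follows (a simplified illustration with my own names, not the poster's actual SoundManager code): each sound grows a front sphere from its start time and a back sphere from its stop time; a listener between the two spheres hears the sound, and the offset into the sound follows from the geometry:

```cpp
// Sketch of the soundfront/soundback scheme described above.
// All names are hypothetical; this is not DirectSound code.
#include <cassert>
#include <cmath>

const double SPEED_OF_SOUND = 340.0; // m/s

struct Sound {
    double startTime;                  // when emission began (seconds)
    double stopTime;                   // when emission ended
    double srcX, srcY, srcZ;           // source position (assumed stationary here)
};

// Returns the offset (seconds) into the sound that the listener currently
// hears, or a negative value if the listener is outside the front/back shell.
double audibleOffset(const Sound& s, double lx, double ly, double lz, double now)
{
    double dx = lx - s.srcX, dy = ly - s.srcY, dz = lz - s.srcZ;
    double dist = std::sqrt(dx * dx + dy * dy + dz * dz);

    double frontRadius = (now - s.startTime) * SPEED_OF_SOUND;
    double backRadius  = (now - s.stopTime)  * SPEED_OF_SOUND; // negative while still emitting

    if (dist > frontRadius || dist <= backRadius)
        return -1.0; // soundfront not arrived yet, or soundback already passed

    return (now - s.startTime) - dist / SPEED_OF_SOUND;
}
```

For a 2-second engine burst emitted at the origin at t = 0, a listener 3400 m away hears nothing until t = 10, hears offset 1.0 s into the sound at t = 11, and hears nothing again after t = 12, matching the expanding-shell picture.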

I've made a small movie with a (very) old demo that visualizes the soundfront bounding boxes; you can download it here.
Simulation & Visualization Software
GPU Computing
www.organicvectory.com

This topic is closed to new replies.
