Do any games factor in the speed of sound?

quote: Original post by AndreTheGiant
I think RTCW did this.
Sometimes if you were getting shot by a sniper from super far away, you would lose health and about 1 second later you would hear the gunshot.


Indeed, and it was a nice effect. You could see the flash of the sniper rifle in the tower, and the report came a short time later. I don't know how accurate it was, but then again I've never been shot at with a Mauser rifle at a great distance.

That game had great sound effects all the way around.

Yeah, you can get OpenAL to do speed-of-sound calculations, doppler shift and all that sound goodness with a remarkably small number of function calls. http://www.openal.org/
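For reference, the amount of code really is small. A rough sketch, assuming an OpenAL 1.1 implementation (alSpeedOfSound is a 1.1 call; older 1.0 libs exposed alDopplerVelocity instead), with device/context setup and buffer loading omitted:

#include <AL/al.h>

// Configure global doppler behaviour, then feed the listener and a
// source their positions and velocities; OpenAL derives the shift.
void setupDopplerDemo(ALuint source)
{
    alDopplerFactor(1.0f);    // 1.0 = unexaggerated doppler
    alSpeedOfSound(343.3f);   // metres per second (the 1.1 default)

    ALfloat listenerPos[] = { 0.0f, 0.0f, 0.0f };
    ALfloat listenerVel[] = { 0.0f, 0.0f, 0.0f };
    alListenerfv(AL_POSITION, listenerPos);
    alListenerfv(AL_VELOCITY, listenerVel);

    // A source 100 units ahead, closing at 50 m/s: pitched up as it nears.
    alSource3f(source, AL_POSITION, 0.0f, 0.0f, -100.0f);
    alSource3f(source, AL_VELOCITY, 0.0f, 0.0f, 50.0f);
    alSourcePlay(source);
}

Note this buys you doppler and distance attenuation, not the delayed start from the RTCW example; more on that below.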

-me
The MMOG "WW2 Online" uses the speed of sound to great effect (in fact, it makes it really immersive). When you hear an explosion you can pretty much judge how far away it was due to the way the sound also distorts over the distance. Gun fire becomes simple cracks (if you ever watch actual war footage on the news you'll know what I mean by that). So if you are running towards a battle and see flashes of light and hear the crack of gunfire, it really makes you feel you are there.
I think a new flight sim called "Lock On" did it. Basically if you flew above Mach 1 and were in an external view outside of the shock "cone" you wouldn't hear anything - at least not the plane engine anymore.
Actually I suspect that many of the games that use real-time doppler etc. (e.g. Battlefield, to name one) probably do this to some extent, because the calculations would be similar... I can't say for sure, but I suspect a few do and we just don't notice.

My guess would be that this type of thing will become entirely the realm of the sound hardware in future years... i.e., I will tell DirectSound to play this sound as if it were coming from this exact position in space, relative to where I am, and it will figure out all the other fun stuff.
3D audio cards already do this - they definitely do doppler shift etc., and I think the result of the correct convolutions is a gross attenuation (silencing) of the audio that works out to the speed-of-sound calculation.


There already is a 3D portion to DirectSound too.
- The trade-off between price and quality does not exist in Japan. Rather, the idea that high quality brings on cost reduction is widely accepted.-- Tajima & Matsubara
This would be an absolute nightmare to implement on your own, I would imagine. Using your sound sphere idea, you'd have to store a list of players for every sound, because once someone is inside the sphere you can't keep playing the same sound for them over and over. And you can't assume that a player isn't going to be moving into the sphere anytime soon (if you implement some sort of teleportation method).

Also, if you teleport right beside the object, you're inside the sphere but the sound has already travelled past you, so you can't hear it anymore; you'd have to be at the edge.

I'd call it a nightmare not worth the effect =P
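For what it's worth, the bookkeeping can be folded into a single per-listener crossing test instead of a stored player list. A hypothetical sketch (all names made up): a listener hears the sound only on the frame the expanding wavefront sweeps past them, which also handles both teleport cases above.

#include <cmath>

struct SoundEvent
{
    float origin[3];
    double emittedAt;   // game time when the sound was generated
    float range;        // beyond this radius the sound is inaudible
};

// True only on the frame the wavefront crosses the listener. A player
// who teleports inside the sphere after the front has passed fails the
// "distance > radiusPrev" half, so they correctly hear nothing.
bool wavefrontCrosses(const SoundEvent& e, const float pos[3],
                      double now, double frameDt,
                      float speedOfSound = 340.0f)
{
    float dx = pos[0] - e.origin[0];
    float dy = pos[1] - e.origin[1];
    float dz = pos[2] - e.origin[2];
    float distance = std::sqrt(dx*dx + dy*dy + dz*dz);

    float radiusNow  = speedOfSound * float(now - e.emittedAt);
    float radiusPrev = speedOfSound * float(now - frameDt - e.emittedAt);

    return distance <= e.range
        && distance <= radiusNow
        && distance >  radiusPrev;
}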


I'm pretty sure both OpenAL and DirectSound are capable of dealing with those effects, so you shouldn't have to code it yourself anymore. People have already mentioned some very good games that make use of it.
SlimDX | Ventspace Blog | Twitter | Diverse teams make better games. I am currently hiring capable C++ engine developers in Baltimore, MD.
There's a difference between doppler/localization, and start delay because of propagation. Most sound APIs do the first, but not the second; you have to do that yourself.

The best way of doing it is probably to just queue an event to play the sound in the future. If you move really quickly towards the sound source, the event will start too late, but that's probably acceptable.

Also, sound is really quite poky in air (about 340 m/s) -- you probably want to model sound that travels faster, for increased gaming enjoyment.
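A hypothetical sketch of that queue (playSound and the constants are stand-ins; the deliberately faster-than-real default speed follows the suggestion above):

#include <queue>
#include <vector>

struct QueuedSound
{
    double playAt;   // absolute game time the wavefront reaches the listener
    int soundId;
};

struct LatestLast
{
    bool operator()(const QueuedSound& a, const QueuedSound& b) const
    {
        return a.playAt > b.playAt;   // min-heap: earliest arrival on top
    }
};

std::priority_queue<QueuedSound, std::vector<QueuedSound>, LatestLast> pending;

void playSound(int soundId);   // hand-off to OpenAL/DirectSound, elsewhere

// Schedule a sound heard "distance" metres away; 1000 m/s instead of
// ~340 m/s keeps the delay noticeable but snappy.
void scheduleSound(int soundId, float distance, double now,
                   float speedOfSound = 1000.0f)
{
    pending.push({ now + distance / speedOfSound, soundId });
}

void updateSounds(double now)
{
    while (!pending.empty() && pending.top().playAt <= now)
    {
        playSound(pending.top().soundId);
        pending.pop();
    }
}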
enum Bool { True, False, FileNotFound };
It shouldn't be that hard ... just hand a specialized event to every actor that can "hear": a structure that holds the origin, the velocity of the wave (if it isn't constant), the velocity of the source (if it isn't a sudden, point sound), and the time of the sound's generation. If the actor's distance is within the sound's range, the event resolves and gets cleared. Kinda like a mousetrap. You should then have the distance, both velocities for the doppler, and a convenient way to store any additional agent data.
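A sketch of the doppler half, using the two stored velocities (this is the classic textbook formula, not any particular API's):

#include <cmath>

// Classic doppler: f' = f * (c + vListener) / (c + vSource), with both
// speeds measured along the listener-to-source direction.
float dopplerPitch(const float srcPos[3], const float srcVel[3],
                   const float lisPos[3], const float lisVel[3],
                   float c = 340.0f)
{
    float d[3] = { srcPos[0] - lisPos[0],
                   srcPos[1] - lisPos[1],
                   srcPos[2] - lisPos[2] };
    float len = std::sqrt(d[0]*d[0] + d[1]*d[1] + d[2]*d[2]);
    if (len < 1e-6f)
        return 1.0f;            // coincident: no shift

    for (int i = 0; i < 3; ++i)
        d[i] /= len;            // unit vector, listener -> source

    float vListener = lisVel[0]*d[0] + lisVel[1]*d[1] + lisVel[2]*d[2]; // toward source
    float vSource   = srcVel[0]*d[0] + srcVel[1]*d[1] + srcVel[2]*d[2]; // away from listener

    return (c + vListener) / (c + vSource);  // >1 pitches up, <1 down
}

The result could be fed straight into something like alSourcef(source, AL_PITCH, pitch) in OpenAL.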
If you want to get really complicated, have the sound do a variation on raycasting -- determine vectors that will end up reflecting off various surfaces, and generate new sound objects that are virtually behind those surfaces. Look at a mirror and gauge where an object appears to be, behind the mirror, against where it actually is, for a real-life example.
Generating echoes based on surfaces *without* first tracing whether they'll actually be heard would likely cause too much overhead. Rather than one check per sound per cycle, it'd be one check per sound per face per cycle. Ouch.
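The mirror trick itself is cheap, though. Reflecting the emitter across a wall plane (given here as a point plus unit normal; names hypothetical) yields the virtual source, and the longer path to it gives the echo's extra delay:

struct Vec3 { float x, y, z; };

// Reflect point p across the plane through planePoint with unit normal n.
// The result is where the echo *appears* to come from, exactly like an
// object seen in a mirror.
Vec3 mirrorAcrossPlane(const Vec3& p, const Vec3& planePoint, const Vec3& n)
{
    float d = (p.x - planePoint.x) * n.x
            + (p.y - planePoint.y) * n.y
            + (p.z - planePoint.z) * n.z;   // signed distance to the plane
    return { p.x - 2.0f * d * n.x,
             p.y - 2.0f * d * n.y,
             p.z - 2.0f * d * n.z };
}

The expensive part is exactly what's described above: checking, per face, whether the reflected path is actually unobstructed.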

-"Sta7ic" Matt

