DirectSound and software


Let's say I want to do all the sound effects and calculations in software and then only write the end result into the main sound buffer (the one that is actually being played). Then I thought: the audio sample rate is much higher than the game's frame rate, so I will probably need to do the sound calculations in another thread and somehow synchronize it with the game's main loop, which will be a lot slower. Is that true? Thanks in advance.

Hi,

If you're doing a lot of CPU intensive work then switching it over to another thread might well be a good thing (especially if you're interfacing with the hardware or plan on utilizing multi-core setups), but I doubt you'd actually need it.

I'm not entirely sure quite what you're hoping to achieve (surely the runtime/driver/hardware will be better at mixing/compositing sounds?), but consider some sort of "write ahead" mechanism where you create the audio for several frames ahead of where the game currently is. That should protect you from some hiccups where the audio requires data quicker than the game thread can provide it...
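To make the write-ahead idea concrete, here is a minimal sketch of the circular-buffer arithmetic such a scheme needs. This is plain C++ with no actual DirectSound calls; the function name and parameters are invented for illustration:

```cpp
#include <cassert>
#include <cstddef>

// Hypothetical helper: given the buffer's current play cursor and our own
// write cursor in a circular buffer of `bufferSize` bytes, return how many
// bytes we must render now to stay `leadBytes` ahead of the play position.
std::size_t bytesToRender(std::size_t playCursor,
                          std::size_t writeCursor,
                          std::size_t bufferSize,
                          std::size_t leadBytes)
{
    // Circular distance from the play cursor to our write cursor,
    // i.e. how much already-rendered audio is still waiting to play.
    std::size_t ahead = (writeCursor + bufferSize - playCursor) % bufferSize;
    return (ahead >= leadBytes) ? 0 : leadBytes - ahead;
}
```

Each time the game loop (or an audio thread) wakes up, it would query the play cursor, call something like this, and mix that many fresh bytes in behind its write cursor.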

hth
Jack

Quote:
Original post by jollyjeffers
Hi,

If you're doing a lot of CPU intensive work then switching it over to another thread might well be a good thing (especially if you're interfacing with the hardware or plan on utilizing multi-core setups), but I doubt you'd actually need it.

I'm not entirely sure quite what you're hoping to achieve (surely the runtime/driver/hardware will be better at mixing/compositing sounds?), but consider some sort of "write ahead" mechanism where you create the audio for several frames ahead of where the game currently is. That should protect you from some hiccups where the audio requires data quicker than the game thread can provide it...

hth
Jack


But there is a problem with that. If I want to do 3D effects, for instance, then the sound output depends on the state of the game's physics.
Plus, if the game's video stutters, let's say because the video card isn't fast enough, it might affect the sound output.
I am not sure I am actually going to do anything like this; I just wonder how it should be done.

The Hz of the sound has nothing to do with the question of whether it should be in the main thread or its own thread.

You are not going to write a single 16-bit sound sample to the sound buffer at a time, so the sample rate is irrelevant. You are going to have to mix your audio in chunks, then copy each chunk to the sound buffer.

So the question about threads is really: can you separate the logic of the audio rendering so that it has only loose synchronization needs with the rest of the game? For instance, if you simply need to lock around gathering a few pieces of data driven by the game engine, and then run your calculations from there, then a separate thread is definitely the way to go. If, however, you need to repeatedly read game-state values during the audio computations, then a separate thread not only doesn't help, but the constant locking (or a single outer lock) is going to add overhead and complication to your game engine.
----
Note that you will be PRE-rendering your audio, because you will be rendering audio that goes into the buffer ahead of the current play position. So there is a certain latency between game-state changes and audio feedback in such a situation. (There is always latency, but this kind of setup forces a direct confrontation between rendering accurate audio with low latency and rendering batches of audio efficiently without significant synchronization costs.)
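A sketch of that "lock around the gather" pattern in plain C++. The struct and function names are invented for illustration, and the source sample is a constant placeholder rather than real mixing:

```cpp
#include <cassert>
#include <cstdint>
#include <mutex>
#include <vector>

// Loose synchronization: the audio thread copies the few game-driven
// parameters under a short lock, then renders a whole chunk from that
// snapshot without ever touching live game state.
struct AudioSnapshot {
    float gain;  // e.g. distance attenuation computed by the game
    float pan;   // -1 = hard left, +1 = hard right
};

std::mutex g_stateLock;
AudioSnapshot g_shared{1.0f, 0.0f};  // the game thread updates this

std::vector<int16_t> renderChunk(std::size_t frames)
{
    AudioSnapshot snap;
    {
        std::lock_guard<std::mutex> lock(g_stateLock);  // brief lock only
        snap = g_shared;                                // copy, then release
    }
    std::vector<int16_t> out(frames * 2);  // interleaved stereo
    for (std::size_t i = 0; i < frames; ++i) {
        int16_t s = 1000;  // placeholder source sample
        out[2 * i]     = static_cast<int16_t>(s * snap.gain * (1.0f - snap.pan) * 0.5f);
        out[2 * i + 1] = static_cast<int16_t>(s * snap.gain * (1.0f + snap.pan) * 0.5f);
    }
    return out;
}
```

The lock is held only for the copy, so the game thread is never blocked for the duration of a whole chunk render.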

Well, it is possible to do the following:
On the main thread, in the main loop, pre-render one second of sound according to the current state.
In the next frame, if less than one second has passed, pre-render another second, but only overwrite the part of the main sound buffer that hasn't been played yet.
Do you think this would work? It might have a problem if the frame rate drops below 1 FPS.
Maybe one second is not the optimal rendering time; maybe it could be adjusted to the duration of the last rendered frame, or to the screen's current maximum refresh rate.

The general idea is sound ... you will have to tweak the numbers and details to get it to work well on different computers. I know because I just completed a piece of software that controls the radio SHARK (a USB AM/FM radio), and it took some trial and error to get sample sizes and code that kept feeding audio correctly enough of the time to sound good (there will always be skips or silence if Windows starves your application for too long; nothing can be done about that).

Currently I use 1/8th-second chunks (mainly because that's what the DirectSound sample used, I believe, and also because we wanted our audio to be accurate to approximately 1/2 second, and we needed a few chunks rendered ahead to deal with the case of missing updates). I believe you will find that latencies greater than 1 or 2 seconds sound audibly late, and chunk sizes smaller than 1/15th or 1/30th of a second require more messaging and accuracy than you are likely to be able to maintain (although you could use 1/24th or 1/30th-second sizes and just keep 5-10 chunks rendered ahead of time, for the cases of being starved).
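For reference, the back-of-envelope numbers behind those chunk sizes, assuming a typical 44.1 kHz, 16-bit stereo format (the format is an assumption; the post doesn't state one):

```cpp
#include <cassert>
#include <cstddef>

// Assumed output format: 44.1 kHz frames, 16-bit samples, 2 channels.
constexpr std::size_t kSampleRate    = 44100;  // frames per second
constexpr std::size_t kBytesPerFrame = 2 * 2;  // 2 bytes * 2 channels

// Bytes in a chunk covering 1/divisor of a second of audio.
constexpr std::size_t chunkBytes(std::size_t divisor)
{
    return (kSampleRate / divisor) * kBytesPerFrame;
}
```

So a 1/8th-second chunk is about 22 KB, while a 1/30th-second chunk is under 6 KB; the smaller the chunk, the more often you must wake up and refill on time.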
