Processing DirectSoundCapture data

This topic is 4379 days old which is more than the 365 day threshold we allow for new replies. Please post a new topic.


Hi all,

I'm trying to use captured sound in my game. I've managed to get sound data from DirectSound fine, but I don't know what to do with the captured wave data. Does anyone know of any good libraries/references for processing the wave data returned in DirectSoundCapture buffers? I've looked online a lot but can't find anything useful.

Ideally I'd like to be able to break the sound down into frequency ranges and measure how much sound there is within each range - like a graphical equaliser.

Thanks for any help,
Duncan

Thanks for that :) I've found a library called FFTReal... though I don't really get how to use it. Do I just bung the sound signal into an array, feed it in, and something magically pops out? Has anyone used this lib before? The test samples are so wrapped in template garbage that I can't see how to call it.

Really, I don't actually know how to deal with the bytes in a capture buffer, and I haven't been able to find any tutorials that do anything more interesting than dumping them straight into a WAV file...

... does anyone know any tutorials that do something non-trivial with capture buffer data?
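The first step is just interpreting the bytes. Here's a minimal Python sketch of turning raw capture-buffer bytes into usable sample values, assuming the common case of 16-bit signed little-endian mono PCM - the actual layout depends on the WAVEFORMATEX you gave DirectSoundCapture, so check that before decoding:

```python
# Sketch: decode raw capture-buffer bytes, assuming 16-bit signed LE mono PCM.
import struct

def bytes_to_samples(raw: bytes) -> list[float]:
    """Convert 16-bit signed little-endian PCM bytes to floats in [-1.0, 1.0)."""
    count = len(raw) // 2                      # two bytes per sample
    ints = struct.unpack("<%dh" % count, raw[:count * 2])
    return [s / 32768.0 for s in ints]

# Example: two samples, 0x0000 (silence) and 0x4000 (half amplitude).
print(bytes_to_samples(b"\x00\x00\x00\x40"))  # -> [0.0, 0.5]
```

For stereo formats the samples are interleaved (left, right, left, right, ...), so you'd de-interleave after decoding.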

The core of any FFT library is the transform and inverse transform functions.

The forward transform, given a time-domain sample (the byte array of audio data in this case), returns a frequency-domain sample: an array of complex numbers describing the amplitudes of the frequency bands and their phases.
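To make "complex frequency bins" concrete, here is a textbook radix-2 FFT in Python (this is NOT the FFTReal API, just the underlying math): a pure sine fed in comes out concentrated in a single bin, whose magnitude is the amplitude and whose angle is the phase.

```python
# Sketch: a textbook recursive radix-2 FFT - time domain in, complex bins out.
import cmath
import math

def fft(x):
    n = len(x)                       # n must be a power of two
    if n == 1:
        return [complex(x[0])]
    even = fft(x[0::2])              # transform of even-indexed samples
    odd = fft(x[1::2])               # transform of odd-indexed samples
    out = [0j] * n
    for k in range(n // 2):
        t = cmath.exp(-2j * cmath.pi * k / n) * odd[k]
        out[k] = even[k] + t
        out[k + n // 2] = even[k] - t
    return out

# A pure sine completing one cycle over 8 samples lands in bin 1:
signal = [math.sin(2 * math.pi * k / 8) for k in range(8)]
bins = fft(signal)
print(abs(bins[1]))          # ~4.0: the sine's energy, concentrated in bin 1
print(cmath.phase(bins[1]))  # the phase of that component
```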

The amplitudes of the frequency bands are essentially what you see in an audio frequency visualizer. Both the values and their band positions are usually scaled logarithmically, so as to weight the amplitudes of the different frequencies onto a common scale for display (you'll know what I mean once you see the raw, unfiltered values).
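The amplitude half of that log scaling is just conversion to decibels; a small sketch (the -96 dB floor is an arbitrary choice for illustration):

```python
# Sketch: map raw bin magnitudes onto a decibel scale for display.
import math

def to_db(magnitude: float, floor_db: float = -96.0) -> float:
    """20*log10 of magnitude, clamped so silence doesn't go to -infinity."""
    if magnitude <= 0.0:
        return floor_db
    return max(floor_db, 20.0 * math.log10(magnitude))

print(to_db(1.0))    # -> 0.0 dB (full scale)
print(to_db(0.1))    # roughly -20 dB
print(to_db(0.0))    # -> -96.0 dB (clamped floor)
```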

The inverse FFT reverses this process - it takes the array of frequency band amplitude and phase information and combines it back into a time-domain sample of sound.

If you want to do - for example - band filtering or boosting, first transform the sample to frequency data, then scale the frequency bands' amplitudes by your own multiplier array, and then transform the data back to a sample.
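That whole recipe - forward transform, scale the bins, transform back - can be sketched end-to-end. A naive O(n²) DFT is used here for clarity rather than a real FFT library like FFTReal, and every name is illustrative:

```python
# Sketch: band filtering by scaling frequency bins between a forward and
# inverse transform. Naive DFT for brevity; use a real FFT in production.
import cmath

def dft(x, inverse=False):
    n = len(x)
    sign = 1 if inverse else -1
    out = []
    for k in range(n):
        s = sum(x[j] * cmath.exp(sign * 2j * cmath.pi * k * j / n)
                for j in range(n))
        out.append(s / n if inverse else s)
    return out

def filter_bands(samples, gains):
    """Scale each frequency bin k by gains[k], then transform back."""
    bins = dft([complex(s) for s in samples])
    shaped = [b * g for b, g in zip(bins, gains)]
    return [c.real for c in dft(shaped, inverse=True)]

# Zero every bin except DC: a 4-sample signal collapses to its mean.
out = filter_bands([1.0, 2.0, 3.0, 4.0], [1, 0, 0, 0])
print([round(v, 6) for v in out])   # -> [2.5, 2.5, 2.5, 2.5]
```

A graphical-equaliser boost is the same idea with gains above or below 1.0 on the bands you want to emphasise or cut.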

