DirectSound - CaptureBuffer to SecondaryBuffer?

1 comment, last by CrushingHellHammer 19 years, 9 months ago
Hello all, I'm new to DirectSound and have been trying to experiment some. I'm attempting to write an application that, for now, does the following:

1. Plays a pre-recorded WAV file
2. Records the mic input to a WAV file
3. Streams the mic input to the output (speakers)

Ideally, I'd like to combine 2 and 3, so that the mic input is always heard over the speakers, with a checkbox indicating whether I want to record it or not. I've managed to write 1 and 2 with a little help from the documentation and examples provided. My problem is with 3: how do I get the SecondaryBuffer (used for playback) to read the contents of the CaptureBuffer? Ideas and/or code snippets would be much appreciated! Thank you!

P.S.: I'm using C# .NET, if that makes any algorithmic difference.
You are looking at what is known as the full duplex problem. I have seen some examples in the SDK documentation, but they haven't told me a thing about how it really works or why things are done the way they are. There also tend to be remarks about whether or not your sound card supports full duplex.

I have a Sound Blaster Live!, and the software that comes with it allows what you record to be heard, but that doesn't answer your problem.

I have similar needs and have run across the following (sorry, I know nothing about C# .NET or MFC).

I am writing a digital radio decoder and encoder. The radio continuously listens to input (DirectSoundCapture) for a signal. Every once in a while I also have to transmit (DirectSound).

I wanted to (and do) handle both in a single high-priority thread. I set up notification events every x milliseconds in the DirectSoundCapture buffer. When I want to transmit, I send x milliseconds' worth of data to the DirectSound playback buffer.
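
Roughly, the notification setup looks like this (a minimal sketch against Managed DirectX 1.1, since you mentioned C#; chunkSize, numChunks and applicationCaptureBuffer are placeholder names, not from a real listing):

using System.Threading;
using Microsoft.DirectX.DirectSound;

// One notification per chunk of captured audio; the capture buffer
// is sized to hold numChunks chunks and loops continuously.
int chunkSize = 8820;    // e.g. 100 ms of 22050 Hz, 16-bit stereo
int numChunks = 4;

AutoResetEvent notificationEvent = new AutoResetEvent(false);
BufferPositionNotify[] positions = new BufferPositionNotify[numChunks];
for (int i = 0; i < numChunks; i++)
{
    positions[i].Offset = (i + 1) * chunkSize - 1;   // end of each chunk
    positions[i].EventNotifyHandle = notificationEvent.Handle;
}

Notify notify = new Notify(applicationCaptureBuffer);
notify.SetNotificationPositions(positions, numChunks);
applicationCaptureBuffer.Start(true);   // loop the capture

// The high-priority thread then blocks on notificationEvent.WaitOne()
// and processes exactly one chunk per wakeup.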

Now, as long as the sample rates are the same, everything should be fine. However, if you want to play out what you record, there will be a delay: first you have to wait x milliseconds to capture the data, and only then can you send it. So the minimum delay (assuming no DSP work) is the time between your notification events.

However, there is more trouble than that. I discovered that even though the sample rates for both the playback and capture buffers were set to be the same, in practice they weren't!

I still don't understand why. Rumor has it that the sample rates for the capture buffer are only accurate at specific values; all others are somehow interpolated, and that leads to small differences. If you are happy with 22050 or 44100 Hz sampling rates, those two don't have that problem.

What I did for other sample rates was to 'skip' a load into the playback buffer or load twice, depending upon whether the capture or playback buffer was running faster. My application required sampling rates other than 22050 or 44100 Hz, so I had to solve that mess somehow.
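
In code, the compensation boils down to something like this (a sketch only; secondaryBuffer, capturedChunk, nextPlayOffset, chunkSize and bufferSize are placeholders, and the thresholds are just illustrative):

// On each capture notification, check how far our next write offset
// leads the play cursor, then skip or double-load accordingly.
int playPos = secondaryBuffer.PlayPosition;
int lead = (nextPlayOffset - playPos + bufferSize) % bufferSize;

if (lead > 2 * chunkSize)
{
    // Playback clock is running slower than capture: drop this chunk.
}
else if (lead < chunkSize / 2)
{
    // Playback clock is running faster than capture: load twice.
    secondaryBuffer.Write(nextPlayOffset, capturedChunk, LockFlag.None);
    nextPlayOffset = (nextPlayOffset + capturedChunk.Length) % bufferSize;
    secondaryBuffer.Write(nextPlayOffset, capturedChunk, LockFlag.None);
    nextPlayOffset = (nextPlayOffset + capturedChunk.Length) % bufferSize;
}
else
{
    // In step: write exactly one chunk.
    secondaryBuffer.Write(nextPlayOffset, capturedChunk, LockFlag.None);
    nextPlayOffset = (nextPlayOffset + capturedChunk.Length) % bufferSize;
}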

There may be more to this full duplex issue... I made lots of remarks and asked questions about it on this site earlier, but no one was able to give any explanations or answers.


Brian Reinhold
Hi Brian,

Thanks for your reply. Your application sounds interesting! I'll let you know if I get my app to do what I want it to do, and I'd be willing to share the code if you'd like it.

I'm attempting to write an application that (apart from the playback and record functions) allows you to stream the audio input to (as a start) the speakers.

A further aim of this application is to use this data (either in playback or stream mode) to perform some "real-time" audio analysis.

I understand there is an inherent latency using DirectSound, but at this stage that isn't terribly critical for me. I'm trying to understand the basics of DSP programming, and functionality is more important to me at this point.

Thank you for the information regarding the sample rates and the potential problems there. I hadn't considered that. 44.1 kHz should do for now.

I've been working on this since yesterday and I think I *almost* have it.

My code reads the CaptureBuffer as follows:

// Read StreamLockSize bytes from the capture buffer, starting at the current offset
StreamCaptureData = (byte[])ApplicationCaptureBuffer.Read(NextCaptureOffset, typeof(byte), LockFlag.None, StreamLockSize);

where:

byte[] StreamCaptureData = null;   // the array into which I push the CaptureBuffer (ApplicationCaptureBuffer) data
int NextCaptureOffset;             // advanced after each read; note it must also wrap at the capture buffer size, since the buffer is circular
NextCaptureOffset += StreamCaptureData.Length;

My question is: what kind of SecondaryBuffer do I need to create so that it can play back from StreamCaptureData? There are seven overloads for the SecondaryBuffer constructor... which one works best?

The seven overloads take as their first argument either:

1. a FileName
2. a Stream
3. a BufferDescription

Am I correct in ruling out 1 and 3 in this case, since the CaptureBuffer data goes into the array StreamCaptureData?

Since that leaves only option 2, do I need to create a Stream that reads from the array StreamCaptureData?
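
In case it helps to see where I'm headed, here's a rough sketch of what I imagine the BufferDescription route (option 3) would look like if it turns out to be usable after all, with the captured bytes pushed in via Write() (assuming Managed DirectX 1.1; BufferSize, NextPlayOffset and ApplicationDevice are placeholder names):

// Sketch: a looping SecondaryBuffer whose format matches the capture
// format, filled chunk by chunk from the capture side.
WaveFormat format = new WaveFormat();
format.FormatTag = WaveFormatTag.Pcm;
format.SamplesPerSecond = 44100;
format.BitsPerSample = 16;
format.Channels = 2;
format.BlockAlign = (short)(format.Channels * format.BitsPerSample / 8);
format.AverageBytesPerSecond = format.SamplesPerSecond * format.BlockAlign;

BufferDescription desc = new BufferDescription(format);
desc.BufferBytes = BufferSize;          // same size as the capture buffer
desc.ControlPositionNotify = true;
desc.GlobalFocus = true;

SecondaryBuffer playbackBuffer = new SecondaryBuffer(desc, ApplicationDevice);
playbackBuffer.Play(0, BufferPlayFlags.Looping);

// After each read from the CaptureBuffer:
playbackBuffer.Write(NextPlayOffset, StreamCaptureData, LockFlag.None);
NextPlayOffset = (NextPlayOffset + StreamCaptureData.Length) % BufferSize;

The idea would be to keep the write offset a chunk or two ahead of the play cursor while the buffer loops. But I don't know if that's the intended use of that overload, hence the question.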

Thanks once again, in advance, to anybody that can help!

