Gyannea

Members
  • Content count

    143
  • Joined

  • Last visited

Community Reputation

122 Neutral

About Gyannea

  • Rank
    Member
  1. Looks like an earlier version of a post I have just responded to. It looks like the provided code is supposed to be some function that you call each time you want to record some data. In that case, some questions and remarks: Do you create the capture buffer every time you call this function? You don't destroy it when you are done. You don't set the position when you start; not a problem on the first call, but after the first stop the record position is going to be wherever that stop was. How do you determine where to read? Is it safe? It's not clear what 'doesn't happen' on your second call. Do you get just garbage, or nothing (have you simply reread what's already in the buffer)? Brian
  2. The behavior you describe sounds like playback problems, not recording problems. Once you start the recording process you have to be very careful about timing issues and where you read the data in your buffer (with the ->Lock() and ->Unlock() functions), so that you are not reading from a part of the buffer that the hardware is writing to. To ease the timing issues, and the problems Windows gives you with its non-real-time performance, you can use the notification routines, which will wake a waiting thread when the buffer reaches certain points (then you can read up to those points). Without seeing any code of what you actually did, there is no way to judge what is going on. Brian
  3. Oops! A confusing typo: I meant "put in the CORRECT flags", not "put in the write flags", in that last paragraph. That typo could really lead to confusion! Brian
  4. The problem with loading a wave file into a static buffer is that you have to know about the wave file. When you use DirectSound at a low level (like I do), with no classes to bury all the dirty work, you have to set all the features of your buffer yourself: for example, stereo or mono, 8-bit or 16-bit, sample rate, etc. There is a bunch of junk. And at the header of every WAV file is all that junk. So you have to read in the wave file, examine the header, format your DirectSound buffers accordingly, and then from the header find where the actual data starts. Once you have your buffers formatted properly, you can read in the data and start the old 'Play()' function. That's a lot of annoying work. I do not know the details of the WAV file header, but all classes which read in a wave file and play it in a DirectSound buffer have to go through all those stages. The only addition when considering streaming data is that you have to set up some kind of notification (be careful...put in the write flags when you create your buffer) that will signal your write thread to load more data into the DS buffer. You have to do this far enough ahead of time so that you don't disrupt the flow. That also means keeping track of where you are in the file so you can read the next bunch of data. Hope this makes sense. Brian PS: I have not looked at this site for a couple of months, so I just saw your message today.
  5. How to learn DirectMusic/Sound

    Actually, "learning" all that initialization stuff isn't too bad because it's mostly cookbook. Who knows what really goes on under the hood? I don't, but many times I wish I had access to that. It goes as follows:

    /*=========================================================================*/
    /*                                                                         */
    /*                  INITIALIZE DIRECT SOUND OBJECTS                        */
    /*                                                                         */
    /*=========================================================================*/
    long int Msoundcard::InitDirectSoundAndCapture(void)
    {
        WAVEFORMATEX srxwave;

        //--- INITIALIZE COM: -------------------------------------------------
        if (FAILED(CoInitialize(NULL)))
        {
            return (DSC_COINIT_FAIL);
        }
        //--- CREATE CAPTURE OBJECT: ------------------------------------------
        if (FAILED(DirectSoundCaptureCreate(NULL, &pdsRXOb, NULL)))
        {
            return (DSC_CREATE_FAIL);
        }
        //--- GET CAPTURE OBJECT CAPABILITIES: --------------------------------
        dsRXCaps.dwSize = sizeof(DSCCAPS);
        if (FAILED(pdsRXOb->GetCaps(&dsRXCaps)))  // Capabilities in structure 'dsRXCaps'
        {
            return (DSC_CAPS_FAIL);
        }
        //--- CREATE DIRECT SOUND (SEND) OBJECT: ------------------------------
        if (FAILED(DirectSoundCreate(NULL, &pdsTXOb, NULL)))
        {
            return (DS_CREATE_FAIL);
        }
        //--- GET SEND OBJECT CAPABILITIES: -----------------------------------
        dsTXCaps.dwSize = sizeof(DSCAPS);
        if (FAILED(pdsTXOb->GetCaps(&dsTXCaps)))  // Capabilities in structure 'dsTXCaps'
        {
            return (DS_CAPS_FAIL);
        }
        //--- SET COOPERATIVE LEVEL: ------------------------------------------
        /* One must specify a handle to some window for the cooperative level.
           The question is which window. This will need some experimentation.
           I could set it to the DSCMain handle since that window is always
           present as long as the program runs. How about the desktop? */
        if (FAILED(pdsTXOb->SetCooperativeLevel(DSCMain->GetRealWindowHandle(),
                                                DSSCL_PRIORITY)))
        {
            return (DS_SET_COOP_FAIL);
        }
        //--- PRIMARY BUFFER: -------------------------------------------------
        /* These steps are for full-duplex behavior (simultaneous transmit and
           receive) that I don't fully understand. The goal is to set the
           format of the primary buffer to that of the send object. Skipping
           this step has not affected the behavior of this program...that I
           know of. All secondary buffer objects created will have the same
           format as the primary buffer. */
        ZeroMemory(&dsTXbd, sizeof(DSBUFFERDESC));
        dsTXbd.dwSize  = sizeof(DSBUFFERDESC);
        dsTXbd.dwFlags = DSBCAPS_PRIMARYBUFFER;

        //--- CREATE PRIMARY BUFFER: ------------------------------------------
        if (FAILED(pdsTXOb->CreateSoundBuffer(&dsTXbd, &pdsTXPriBufOb, NULL)))
        {
            return (DS_CREATE_PRIBUF_FAIL);
        }
        //--- SET PRIMARY BUFFER FORMAT: --------------------------------------
        if (FAILED(pdsTXPriBufOb->SetFormat(&rxwave)))
        {
            return (DS_FORMAT_PRIBUF_FAIL);
        }
        //--- GET PRIMARY BUFFER FORMAT: --------------------------------------
        if (FAILED(pdsTXPriBufOb->GetFormat(&srxwave, sizeof(WAVEFORMATEX), NULL)))
        {
            return (DS_FORMAT_PRIBUF_FAIL);
        }
        //--- CHECK THEY ARE THE SAME: ----------------------------------------
        else if (memcmp(&srxwave, &rxwave, sizeof(WAVEFORMATEX)) != 0)
        {
            return (DS_FORMAT_PRIBUF_FAIL);
        }

        return (ALL_OKAY);
    }

    Okay, you have to have documentation for the correct names of all the structures used in these initialization routines, but that too is "follow the recipe". Above I have an extra step of dealing with the so-called primary buffer, but one usually skips that step and just deals with creating secondary buffers. The secondary buffers are where you actually store the sound data that is sent to the soundcard. (I also record from the sound card, and for that I need a capture buffer.) What I do is set up the capture object and the playback object as above. Then I create a capture buffer from the capture object and secondary buffers from the playback object. All that junk is cookbook and done only once.

    Then comes the real stuff: creating your data for playback (or loading it from a data file) and loading it into the playback buffer. There is a function to do this called ->Lock(). Once you have loaded your data you call ->Unlock(). And that's it. Those two functions do all the work. The challenge is timing, of course: writing a routine that continuously loads data into the playback buffers and sends it to the soundcard in such a way that there are no gaps in your sound. In DOS we used interrupts for that job: when the soundcard ran out of data, or got to a certain point, an interrupt was signalled, which caused your interrupt handler to load the next batch of data into the soundcard buffer, and so on. In Windows the closest approximation you can get to that is "notification" events. You use another function to load "notification points" into your playback or capture buffer. That's another DirectX object. When the soundcard hardware reaches those points it signals your routine for loading (or recording) the data. Windows uses the infamous "WaitFor...Object()" functions to do that. You end up having a thread with:

    while (1)
    {
        WaitForSingleObject(/* whatever the event handle is */, INFINITE);
        // ...loading code...
    }

    However, don't think you can do anything in accurate real time. Due to multitasking and other chores, the time between the buffer "interrupt" and your event being signalled is not 100% predictable. So for playback you have to be sure to load the data far enough ahead so that the soundcard will always have something to play. I do this for sending and receiving digitally encoded data over radio. Brian
  6. Has anyone had experience with DirectInput for serial communications? How about over the ubiquitous USB port? My understanding is that DirectInput is also good for output. I am now reading, but if anyone knows something about this, help would be greatly appreciated! Brian
  7. Does anyone know how much memory the creation of a DirectSound object takes? For example, all one needs to do to create several sounds is to create the DirectSound object and then create secondary sound buffers from that object. I would like to wrap all that into a class. However, if the amount of resources and memory it takes to create the base sound object is large, then I will have to pass that "base" object to each class instance (after creating the base object as a global just once). The latter is more efficient (only one copy of the base object) but results in messier code. Brian
  8. So it looks like it's time to shop in the standard Windows multimedia API. I can see that having such features in the DirectX libraries would perhaps be overkill. You certainly don't need speed for such setups! Brian
  9. The Windows volume control is able to control much more than just volume. It can control what is recorded and played back. For example, you select whether the "Line In", CD, or mic signals are played to the speakers or muted. There are similar controls for what is recorded. Does anyone know how that is done? There is nothing in DirectSound or DirectSoundCapture that I can find that gives me this control. It is important because I use the sound channels to send and receive digital data over a radio. The last thing I need is for the "Line In" (received) data to be sent back out as the "Line Out" (transmitted) data! Brian
  10. Dsound.h problems with DevCpp

    I remember having that problem...but what I don't remember is how I solved it. I think it had something to do with the setup of the Visual C++ library options, or maybe even the directories. I think Visual C++ comes with its own DirectX stuff, and what you may need is in a newer version (which you may have on your system, I don't know). That's where the directory stuff comes in. The "settings" options are under the "Project" menu; the "directories" list is under "Tools/Options". See what you have there for include-file directories and make sure that there is a DirectX directory first in the list...the correct version of DirectX! If you are using another compiler, you'll have to figure out on your own how to include the correct libraries and include files. I'm sorry, but I don't know what DevCpp is. Brian
  11. Directsound help

    I don't know about MFC, but if you do it directly using the DirectSound API functions, then what you load into the DirectSound buffer object is the sound data itself. However, you must specify the format and other tidbits of information in a structure when creating these objects: type of data, sample rate, channels (stereo/mono), bit depth (8 or 16 bit), bytes per second (sample rate * channels * bit depth / 8), etc. It's quite straightforward and there is nothing hidden from you. When you use shells you have to be very sure that you follow instructions exactly to a tee, and your supplied data must comply equally rigidly. Brian
  12. Hello, fellow sound programmer. I am sorry to say that I am totally unfamiliar with the wrappers you are using; I don't know what they do or what the parameters mean. I am using C++ and have to create the basic DirectSound and DirectSoundCapture objects and use their basic API functions. That means in order to read the data I have to call the "->Lock()" function first, read the buffer, and then "->Unlock()" it. There is NO way that you can avoid a delay if you are going to process the data in real time. You would have to read each sample as it is created, which would mean diving into the sound card buffer 44100 times per second, handling the sample, and placing the result in the playback buffer 44100 times per second. All your processing would have to be done in that short interval. I don't think multitasking Windows could handle such a rapid notification rate. I have already found by experimentation that there is a random delay between the signalling of the notification event and the time the thread that handles the notification gets the okay. You can see this delay by calling an API function that gives the current position of the hardware write cursor and comparing it with the notification positions (taking into account the 'safe' distance, of course). I have working code that sets up the capture and playback buffers. It involves creating the DirectSoundCapture object and then creating the DirectSoundCapture buffer from that object. You have to feed these functions structures that give information about the size of the buffer, the format (which includes sample rates and all that stuff), etc. Once the capture buffer is created you have to create a notification object based on that buffer. Then you pass a structure loaded with an array that holds the places in the capture buffer where you want notifications. That also means creating a Windows event object so that the notification signals this object.
And I'll say this: you're better off placing your notification events in the capture buffer rather than the playback buffer. In order to do the latter you have to specify special flags when creating the playback buffer, and those could lose you hardware acceleration. AFAIK that doesn't happen with the capture buffer. So I let the capture buffer notifications give me the times to load the playback buffer as well. Once all the garbage is set up, you have a thread in an infinite loop. The first function in that loop is one of the famous Windows "WaitFor...Object()" functions. The thread sleeps until the capture buffer hits a notification point, and then the wait function in the thread is freed. At that point you call the Lock function, which gives you back a structure full of info about reading the buffer: two pointers that you can then "memcpy" the data from. Two are needed since the buffer is circular and there may be wrap-around. Then you Unlock the buffer and the whole thing goes on and on. Any wrapper class that you may have has to perform these tasks. Fortunately, the setup is done only once, and the final loop involves only calling the Lock and Unlock API functions, so it's pretty efficient. That's VERY important if you are going to do some serious DSP work! Brian
  13. You are looking at what is known as the full-duplex problem. I have seen some examples in the SDK documentation, but they have not told me a thing about how it really works or why things are done the way they are. There also tend to be remarks about whether or not your sound card supports full duplex. I have a Sound Blaster Live, and there is software with it that allows what you record to be heard, but that does not answer your problem. I have similar needs and have run across the following (sorry, I know nothing about C NET or MFC). I am writing a digital radio decoder and encoder. The radio continuously listens to input (DirectSoundCapture) for a signal. Every once in a while I also have to transmit (DirectSound). I wanted to (and do) do both in a single high-priority thread. I set up notification events every x milliseconds in the DirectSoundCapture buffer. When I wanted to transmit, I would send x milliseconds' worth of data to the DirectSound playback buffer. Now, as long as the sample rates were the same, everything should be fine. However, if you want to play out what you record, there will be a delay: first you have to wait x milliseconds to capture the data, and then you send it. So the minimum delay (assuming no DSP work) is the time between your notification events. However, there is more trouble than that. I discovered that even though the sample rates for both the playback and capture buffers were set to be the same, in practice they weren't! I still don't understand why. Rumor has it that the sample rates for the capture buffer are only accurate at specific values; all others are somehow interpolated, and that leads to small differences. If you are happy with 22050 or 44100 Hz sampling rates, those two don't have that problem. What I did for other sample rates was to 'skip' a load into the playback buffer, or load twice, depending upon whether the capture or playback buffer was going faster.
My application required sampling rates other than 22050 or 44100, so I had to solve that mess somehow. There may be more to this full-duplex issue...I made lots of remarks and asked questions about it on this site earlier, but no one was able to give any explanations or answers. Brian