
About Xangis

  • Rank
    Member
  1. Buried deep in the darkest recesses of the documentation I've found my answer: http://msdn.microsoft.com/library/default.asp?url=/library/en-us/dmusprod/htm/fileoutputinanaudiopath.asp
  2. I have an application that plays 8 channels of audio via DirectMusic Segments, each with its own AudioPath (so I can control effects and volume separately), all using a single DirectMusic Performance. It works beautifully, but there's something I'd like to do and I'm not sure whether it's possible or whether there's a sensible solution: I'd like the audio that is being rendered to the speakers (a.k.a. the final mix) to also be captured and written to a wave file. Are there any special hooks I can latch onto to get the rendered audio? Some way I can snoop a copy of the output buffer? Maybe some sort of callback? Perhaps a super-secret undocumented logging flag I can enable? A reflexive capture mode? Any hints, suggestions, or creative ideas would be greatly appreciated.
  3. OpenAl in VR Application

    For velocity and orientation you should be able to use these:

     alSource3f( PlaybackHandle, AL_VELOCITY, x, y, z );
     alSource3f( PlaybackHandle, AL_DIRECTION, x, y, z );

    OpenAL doesn't handle mass. That is something you will have to handle on your own. Same goes for angular velocity, but you can track linear velocity in OpenAL.
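     A minimal sketch of that idea, assuming an already-current OpenAL context, a valid source handle named PlaybackHandle, and that you derive velocity yourself from successive positions (OpenAL only stores the values you hand it):

     #include <AL/al.h>

     // Updates one source's position, velocity, and facing direction.
     // Velocity is computed by the application from the last two positions;
     // mass and angular velocity stay in your own simulation code.
     void UpdateSource( ALuint PlaybackHandle,
                        float px, float py, float pz,          // current position
                        float lastX, float lastY, float lastZ, // position last frame
                        float dirX, float dirY, float dirZ,    // facing direction
                        float dt )                             // seconds since last frame
     {
         if( dt <= 0.0f )
             return;

         // Linear velocity = change in position over time; OpenAL uses it
         // mainly for Doppler shift.
         float vx = ( px - lastX ) / dt;
         float vy = ( py - lastY ) / dt;
         float vz = ( pz - lastZ ) / dt;

         alSource3f( PlaybackHandle, AL_POSITION, px, py, pz );
         alSource3f( PlaybackHandle, AL_VELOCITY, vx, vy, vz );
         alSource3f( PlaybackHandle, AL_DIRECTION, dirX, dirY, dirZ );
     }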
  4. Running 2 sound cards/devices in one pc

    I have five sound devices on my home PC:

    1) Audiophile 24/96 - for recording, but it really is a terrible card. I actually keep a WinME system just so I can still use my Lexicon Core 2 card.
    2) Yamaha YMF744 card - mainly for XG tinkering.
    3) Onboard AC97 - I don't actually use it for anything at all; it sounds awful.
    4) SB Live Value - for gaming audio.
    5) Monster Sound MX400 - for no apparent reason; I should probably remove it.

    They all coexist peacefully. You may want to get a mixer or a switchbox to be able to hear all the cards at once (I have an A/B/C pushbutton switchbox, even though I pretty much always use the Live for output). The only real trouble with mixing sound cards is applications that don't enumerate devices and let you choose; even then, all you need to do is set the correct card as the default in the Control Panel before starting the app (see the enumeration sketch below). I typically use separate cards for recording, playback, and MIDI, and as long as you have stable drivers it's easy to deal with. I've had at least three sound cards in every PC since Win98 and haven't had trouble.
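     Since the snag mentioned above is applications that don't enumerate devices, here is a minimal sketch of DirectSound device enumeration; the callback name and the printf-style listing are just for illustration, and the chosen GUID would normally be passed on to DirectSoundCreate:

     #include <windows.h>
     #include <dsound.h>
     #include <cstdio>

     // Called once per installed device; lpGuid is NULL for the default device.
     BOOL CALLBACK EnumDevicesCallback( LPGUID lpGuid, LPCSTR lpcstrDescription,
                                        LPCSTR lpcstrModule, LPVOID lpContext )
     {
         printf( "Device: %s\n", lpcstrDescription );
         // A real app would copy lpGuid into a list so the user can pick a card.
         return TRUE; // keep enumerating
     }

     // Lists the installed DirectSound devices. The selected GUID (or NULL for
     // the default card) is then what you hand to DirectSoundCreate.
     void ListSoundCards()
     {
         DirectSoundEnumerateA( &EnumDevicesCallback, NULL );
     }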
  5. I was originally setting lpwfxFormat to NULL and tried actually setting values for it as a test, but the result was the same either way. It turns out that CreateSoundBuffer didn't like my adding anything at all to DSBCAPS_PRIMARYBUFFER (I had originally tried it without DSBCAPS_CTRLPAN, DSBCAPS_CTRLVOLUME, and DSBCAPS_CTRLFREQUENCY). The real problem was my adding DSBCAPS_GLOBALFOCUS. So the working corrections are:

     dsbd.lpwfxFormat = NULL;
     dsbd.dwFlags = DSBCAPS_PRIMARYBUFFER;

     And for the secondary buffer, I use:

     dsbd.dwFlags = DSBCAPS_GLOBALFOCUS | DSBCAPS_CTRLPAN | DSBCAPS_CTRLVOLUME | DSBCAPS_CTRLFREQUENCY;

     With that, everything works as I had originally intended. Thanks to Evil Steve for the tip about the debug slider, and thanks to Dave Hunt for your post - it was indeed part of the solution. :)
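     Pulling the original post and this fix together, a condensed sketch of the working sequence (variable names as in the later post; the SetFormat call is the usual way to apply the 44.1 kHz / 16-bit format once lpwfxFormat is left NULL, and it requires the DSSCL_PRIORITY cooperative level already set):

     // Primary buffer: no extra flags, no format in the description.
     DSBUFFERDESC dsbd;
     ZeroMemory( &dsbd, sizeof(DSBUFFERDESC) );
     dsbd.dwSize = sizeof(DSBUFFERDESC);
     dsbd.dwFlags = DSBCAPS_PRIMARYBUFFER;
     dsbd.lpwfxFormat = NULL;

     LPDIRECTSOUNDBUFFER primaryBuffer = NULL;
     if( SUCCEEDED( m_pDS->CreateSoundBuffer( &dsbd, &primaryBuffer, NULL ) ) )
     {
         // Set the output format on the primary buffer itself.
         WAVEFORMATEX wfx;
         ZeroMemory( &wfx, sizeof(WAVEFORMATEX) );
         wfx.wFormatTag = WAVE_FORMAT_PCM;
         wfx.nChannels = 2;
         wfx.nSamplesPerSec = 44100;
         wfx.wBitsPerSample = 16;
         wfx.nBlockAlign = wfx.nChannels * ( wfx.wBitsPerSample / 8 );
         wfx.nAvgBytesPerSec = wfx.nBlockAlign * wfx.nSamplesPerSec;
         primaryBuffer->SetFormat( &wfx );
     }

     // Secondary buffers get the control flags instead
     // (plus dwBufferBytes and lpwfxFormat for their actual data format).
     dsbd.dwFlags = DSBCAPS_GLOBALFOCUS | DSBCAPS_CTRLPAN | DSBCAPS_CTRLVOLUME | DSBCAPS_CTRLFREQUENCY;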
  6. I've explored the ALSA API a bit, and there are some great examples at alsa-project.org that cover the basics of creating a playback buffer and dumping sound into it, or creating a capture buffer and reading data from it. However, the documentation I've found for anything beyond the basic samples is fairly thin (as in, Doxygen-only). Doxygen is great as a class reference, but it doesn't fill in conceptual gaps or implementation details. My main questions about ALSA are:

     1) Are you on your own for panning (i.e., you have to calculate the level of each sample before sending it to the buffer), or is there something like a buffer->setPan() equivalent?
     2) Same question for volume: do you calculate it yourself before filling the buffer, or is there a buffer->setVolume() equivalent?
     3) Is there any good documentation available for using recording callbacks?
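     For what it's worth, with the plain PCM interface you generally do scale the samples yourself before handing them to snd_pcm_writei(); ALSA's separate mixer API covers master/card volume, but per-stream pan and gain are typically done in software. A rough sketch, assuming interleaved signed 16-bit stereo and an already-opened, already-configured PCM handle (setup and error handling omitted):

     #include <alsa/asoundlib.h>
     #include <cmath>
     #include <cstdint>

     // Applies an overall gain (0..1) and a pan (-1 = left, +1 = right) to
     // interleaved 16-bit stereo frames, then hands them to ALSA.
     void WriteWithGainAndPan( snd_pcm_t* pcm, int16_t* frames,
                               snd_pcm_uframes_t frameCount,
                               float gain, float pan )
     {
         // Constant-power pan law; other curves would work just as well.
         float angle     = ( pan + 1.0f ) * 0.25f * 3.14159265f; // 0..pi/2
         float leftGain  = gain * cosf( angle );
         float rightGain = gain * sinf( angle );

         for( snd_pcm_uframes_t i = 0; i < frameCount; ++i )
         {
             frames[i * 2]     = (int16_t)( frames[i * 2]     * leftGain );
             frames[i * 2 + 1] = (int16_t)( frames[i * 2 + 1] * rightGain );
         }

         // snd_pcm_writei() takes a frame count, not a byte count.
         // A real app would check the return value for underruns (-EPIPE).
         snd_pcm_writei( pcm, frames, frameCount );
     }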
  7. I'm having a bit of trouble with the following DirectSound initialization code. I'm trying to create the primary buffer and set its format (secondary buffers will be created later). In the call to CreateSoundBuffer at the end, I get a failure with the ever-so-uninformative E_INVALIDARG error. Any clues as to what I might be doing wrong?

     m_parentWindow = (HWND)parentWindow;
     HRESULT hr;
     LPDIRECTSOUNDBUFFER primaryBuffer;
     int count;

     // Initialize COM
     if( FAILED( hr = CoInitialize( NULL ) ) )
     {
         return false;
     }

     // Create IDirectSound using the primary sound device
     if( FAILED( hr = DirectSoundCreate( NULL, &m_pDS, NULL ) ) )
     {
         MessageBox( NULL, DXGetErrorString8(hr), "DirectSoundCreate Error", MB_OK );
         return false;
     }

     // Set coop level to DSSCL_PRIORITY
     if( FAILED( hr = m_pDS->SetCooperativeLevel( m_parentWindow, DSSCL_PRIORITY ) ) )
     {
         MessageBox( NULL, DXGetErrorString8(hr), "DirectSound SetCooperativeLevel Error", MB_OK );
         return false;
     }

     // Primary buffer and format settings
     WAVEFORMATEX wfx;
     DSBUFFERDESC dsbd;
     ZeroMemory( &dsbd, sizeof(DSBUFFERDESC) );
     dsbd.dwSize = sizeof(DSBUFFERDESC);
     dsbd.dwFlags = DSBCAPS_PRIMARYBUFFER | DSBCAPS_GLOBALFOCUS | DSBCAPS_CTRLPAN | DSBCAPS_CTRLVOLUME | DSBCAPS_CTRLFREQUENCY;
     // Bytes must be 0 for a primary buffer.
     dsbd.dwBufferBytes = 0;
     dsbd.dwReserved = 0;
     dsbd.guid3DAlgorithm = GUID_NULL;

     // Set primary buffer format to 44.1kHz and 16-bit stereo output.
     ZeroMemory( &wfx, sizeof(WAVEFORMATEX) );
     wfx.wFormatTag = WAVE_FORMAT_PCM;
     wfx.nSamplesPerSec = 44100;
     wfx.wBitsPerSample = 16;
     wfx.nChannels = 2;
     wfx.nBlockAlign = wfx.nChannels * ( wfx.wBitsPerSample / 8 );
     wfx.nAvgBytesPerSec = wfx.nBlockAlign * wfx.nSamplesPerSec;
     dsbd.lpwfxFormat = &wfx;

     if( FAILED( hr = m_pDS->CreateSoundBuffer( &dsbd, &primaryBuffer, NULL ) ) )
     {
         MessageBox( NULL, DXGetErrorString8(hr), "DirectSound CreateSoundBuffer Error", MB_OK );
         return false;
     }
  8. Thank you, that is pretty much exactly what I'm looking for. I take it this is all raw data in 16-bit format, since 1201 x 1201 data points at 16 bits each adds up to 2,884,802 bytes, and that's the file size... So I guess that's why I couldn't find a whitepaper detailing all of the header info. Cheers, Xangis
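     A minimal loader sketch for a file like that, assuming it really is a headerless 1201 x 1201 grid of 16-bit samples and that the samples are big-endian (as with SRTM-style elevation tiles; if the heights come out garbage, drop the byte swap):

     #include <cstdio>
     #include <cstdint>
     #include <vector>

     // Reads a headerless 1201x1201 grid of 16-bit height samples.
     // The big-endian byte order is an assumption; swap only if needed.
     bool LoadRawHeightmap( const char* filename, std::vector<int16_t>& heights )
     {
         const size_t width = 1201, height = 1201;
         heights.resize( width * height );

         FILE* fp = fopen( filename, "rb" );
         if( !fp )
             return false;

         size_t read = fread( &heights[0], sizeof(int16_t), heights.size(), fp );
         fclose( fp );
         if( read != heights.size() )
             return false;

         // Swap big-endian samples to little-endian host order.
         for( size_t i = 0; i < heights.size(); ++i )
         {
             uint16_t v = (uint16_t)heights[i];
             heights[i] = (int16_t)( ( v >> 8 ) | ( v << 8 ) );
         }
         return true;
     }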
  9. I'm working on an RTS project set in Vietnam. I would like to have accurate maps, so I've been playing with GIS data, mainly the DTED Level 0 format (just to get familiar with it, I even wrote a loader/converter for the format, even though freely available viewer/converter utilities exist). It's interesting, but not terribly accurate or high-resolution - in short, not really useful for what I want to do, which is generate a terrain heightmap for all of Vietnam. In fact, if you compare DTED Level 0 side by side with Google Earth, GE wins hands down. I see there are quite a few GIS data formats available (DEM, NED, DLG, E00, GTOPO30, NLCD, LULC, SDTS), but "the good stuff" appears to be either for military use only or part of hyper-expensive private datasets, and much of the data only covers the US and its territories. What I would like is something with a resolution better than 30 arc seconds; something like 3 arc seconds might be ideal. Something that contains ground-cover information, or maybe a pair of datasets I could combine to get both heightfield and ground-cover information, would be best. Any recommendations? What are my options for data in the $60-or-less price range? Is there anything freely available that is likely to suit my needs? (I'm using the Torque engine, so any information specific to that engine would also be helpful.) Cheers, Xangis
  10. Has anyone used RakNet with a Linux-based server? If so, is there anything I should be aware of or warned about? I'm working on a game that will use Linux on the server side and Windows on the client side. The tutorials on the site assume the MS Visual C++ compiler, so that leaves me mostly in the "figure it out for myself" camp. Creating a full-featured UDP engine with packet reliability really is a daunting task, so if RakNet will keep me from reinventing the doorknob, I'm all for it. I had been writing the network engine myself using a multithreaded model, and I plan to stick with a multithreaded approach with RakNet. I've just cracked the vacuum seal on it and it still has that new-code smell... I just hope it's the right size. Cheers, Xangis
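     For reference, a minimal listen-loop sketch of the sort of thing the Linux server side would run. This uses the RakPeerInterface API from later RakNet releases (older versions went through RakNetworkFactory), so treat the exact calls, the port, and the connection limit as assumptions to check against the version actually in use:

     #include "RakPeerInterface.h"
     #include "MessageIdentifiers.h"

     int main()
     {
         RakNet::RakPeerInterface* peer = RakNet::RakPeerInterface::GetInstance();

         // Listen on UDP port 60000 (arbitrary choice) for up to 32 clients.
         RakNet::SocketDescriptor sd( 60000, 0 );
         peer->Startup( 32, &sd, 1 );
         peer->SetMaximumIncomingConnections( 32 );

         for( ;; )
         {
             for( RakNet::Packet* packet = peer->Receive();
                  packet;
                  packet = peer->Receive() )
             {
                 switch( packet->data[0] )
                 {
                 case ID_NEW_INCOMING_CONNECTION:
                     // A client connected; game-specific handshake goes here.
                     break;
                 case ID_DISCONNECTION_NOTIFICATION:
                 case ID_CONNECTION_LOST:
                     break;
                 default:
                     // Game-specific message IDs start at ID_USER_PACKET_ENUM.
                     break;
                 }
                 peer->DeallocatePacket( packet );
             }
             // Sleep or do other server work here instead of spinning.
         }

         RakNet::RakPeerInterface::DestroyInstance( peer );
         return 0;
     }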