XAudio2 and OpenAL

6 comments, last by HeWhoDarez 15 years, 5 months ago
Ok, I'm not sure, but why would anyone go for XAudio2 over OpenAL? I might be mistaken, but it doesn't seem to offer anything except some codecs that Microsoft already holds. Looking at both, one says it has programmable effects, but that's true for OpenAL too. I don't see it, sorry.
Bring more Pain
I find XAudio2 a much saner API. It uses a data-'pull' style of interface, while OpenAL draws much of its API design from a primarily data-'push' style.

For this reason, among others, I found OpenAL to be very difficult and unintuitive to use, and now use XAudio2.

Visit my website, rawrrawr.com

Quote:Original post by owiley
Ok, I'm not sure, but why would anyone go for XAudio2 over OpenAL?
A better question might be why anyone chooses to use OpenAL. Updates are very rare, hardware drivers (in the rare cases where they exist at all) are flaky, and the core library offers very little functionality.

If OpenSL ever catches on in the desktop space, it might be nice, but in the meantime, FMOD and IrrKlang are both pretty nice, as long as you are creating free software (or can afford the license fees).

Tristam MacDonald. Ex-BigTech Software Engineer. Future farmer. [https://trist.am]

From what I read, FMOD actually uses OpenAL. The way I see it, OpenAL is easy to deal with: you just create a context and a device, and after that you can start using it. I don't see what's hard about that. To me they're both easy; it's just that you're not always getting hardware acceleration.

About drivers not being updated: it's not as if the industry needs updated drivers just because someone wants to add features. XAudio2 won't even need driver updates, since it all runs on the CPU; the only changes I see are codecs being added. DSP isn't hardware-bound, so I can't think of any real reason to update.
What I'm really wondering is: why create a new API that doesn't do anything different from the old one? DirectSound, which XAudio2 replaces, worked just fine for what they wanted. Yes, it was harder to code for, but nowhere near impossible.
So why the change?
Bring more Pain
Quote:Original post by owiley
From what I read, FMOD actually uses OpenAL. The way I see it, OpenAL is easy to deal with: you just create a context and a device, and after that you can start using it. I don't see what's hard about that. To me they're both easy; it's just that you're not always getting hardware acceleration.
The hard part is that it doesn't always work. Unlike OpenGL, OpenAL does a very bad job of managing internal memory, so you spend a lot of time shuffling memory buffers around (where most other APIs take care of it for you), and it is very easy to crash. The Mac implementation was completely crippled for some years, and Linux versions often aren't much better.

Quote:About drivers not being updated: it's not as if the industry needs updated drivers just because someone wants to add features. XAudio2 won't even need driver updates, since it all runs on the CPU; the only changes I see are codecs being added. DSP isn't hardware-bound, so I can't think of any real reason to update.
There used to be a lot of consumer-level hardware sound cards out there, although these seem to be becoming rare (at least in part due to the terrible driver support).

Quote:What I'm really wondering is: why create a new API that doesn't do anything different from the old one? DirectSound, which XAudio2 replaces, worked just fine for what they wanted. Yes, it was harder to code for, but nowhere near impossible.
So why the change?
I would guess it was something to do with moving to a new driver model for Vista. I believe Microsoft was attempting to roll audio support back into their own control, after many years of being plagued by broken 3rd-party audio drivers.

Tristam MacDonald. Ex-BigTech Software Engineer. Future farmer. [https://trist.am]

Quote:I would guess it was something to do with moving to a new driver model for Vista. I believe Microsoft was attempting to roll audio support back into their own control, after many years of being plagued by broken 3rd-party audio drivers.

That sounds so like them.

So audio is the one component that's lagging in today's market. What are some things we, as developers, could do to make it better? Compared to graphics and even physics, sound in games is in good shape, but it could be pushed a little harder. Things like programmable environment fields might be nice, or some new open sound standard for game SFX?
Bring more Pain
Quote:Original post by swiftcoder
I believe Microsoft was attempting to roll audio support back into their own control, after many years of being plagued by broken 3rd-party audio drivers.


I think it's more to take advantage of Vista's new audio processing model - DirectSound was a low-level API which could crash the kernel if something went wrong on XP. XAudio2 can only crash the application, not cause a system meltdown. I think that's the main reason they needed a whole new API rather than just a bolt-on fix for DirectSound - it's a complete architectural change under the hood, even if the feature set is similar.
Construct (Free open-source game creator)
Hey guys,

I am glad I found this thread as you guys seem like you may be able to help me.


I am in the process of writing a small demo app that will apply spatialisation effects to in-game voice communications.

I originally envisioned working in OGRE using DirectSound (which is still a possibility), but when I read about the features of XAudio2, I got excited at the idea of being able to work with reflections, obstruction and occlusion.

Which is really like trying to run before I can walk.

However, I have found nothing in the API that talks about capturing input from a microphone. I think it's because I don't know what to look for, and obviously there is little to no documentation on XAudio2 that might take me through the process. Is there anywhere I can go for this information?



In the meantime, my student advisor has advised me to use Juce, which looks like it will tick all the boxes.

Now it seems that I don't necessarily have to build a 3D environment for my demo app, as I could use a simple 2D window to represent spatialisation on the X and Z planes, à la the XAudio2 sample app in the DirectX SDK.

I am reluctant to use Juce because I desperately want to get my hands dirty integrating this technology into a 3D game, and the OGRE/XAudio2 route looks great, albeit complicated.

Am I being unreasonable considering the circumstances?

[Edited by - HeWhoDarez on November 5, 2008 7:45:56 AM]
Perfection is a product of progress not an alternative.

This topic is closed to new replies.
