
OpenAL: why is there no group working on it?


Old topic!
Guest, the last post of this topic is over 60 days old and at this point you may not reply in this topic. If you wish to continue this conversation start a new topic.

33 replies to this topic

#1 EddieV223   Members   -  Reputation: 1363

2 Likes

Posted 18 March 2013 - 02:30 PM

Why is OpenAL not being developed? We need a hardware-accelerated, cross-platform API for audio, like OpenGL is for graphics!

 

I will never forgive Microsoft for removing the audio HAL from Windows.


Edited by EddieV223, 18 March 2013 - 02:36 PM.

If this post or signature was helpful and/or constructive please give rep.

 

// C++ Video tutorials

http://www.youtube.com/watch?v=Wo60USYV9Ik

 

// Easy to learn 2D Game Library c++

SFML2.1 Download http://www.sfml-dev.org/download.php

SFML2.1 Tutorials http://www.sfml-dev.org/tutorials/2.1/

 

// SFML 2 book

http://www.amazon.com/gp/product/1849696845/ref=as_li_ss_tl?ie=UTF8&camp=1789&creative=390957&creativeASIN=1849696845&linkCode=as2&tag=gamer2creator-20

 



#2 Kylotan   Moderators   -  Reputation: 3324

2 Likes

Posted 18 March 2013 - 05:48 PM

People don't care about audio in the way they care about graphics.



#3 TheChubu   Crossbones+   -  Reputation: 3617

2 Likes

Posted 18 March 2013 - 06:12 PM

Because Creative loves you. That's why.


"I AM ZE EMPRAH OPENGL 3.3 THE CORE, I DEMAND FROM THEE ZE SHADERZ AND MATRIXEZ"

 

My journals: dustArtemis ECS framework and Making a Terrain Generator


#4 Kylotan   Moderators   -  Reputation: 3324

2 Likes

Posted 18 March 2013 - 06:43 PM

Oh, to be able to go back to 1998 and give Aureal better lawyers...



#5 BGB   Crossbones+   -  Reputation: 1545

1 Like

Posted 19 March 2013 - 12:42 AM

People don't care about audio in the way they care about graphics.

The graphics library is also often a lot more critical.

Hardware-accelerated graphics: necessary for good graphical quality and/or playable framerates.

Hardware-accelerated audio: neither particularly critical nor commonly available on end-user systems (in other words, it doesn't work with typical onboard audio chipsets).

So audio processing generally ends up being done in software.

#6 Kylotan   Moderators   -  Reputation: 3324

2 Likes

Posted 19 March 2013 - 10:29 AM

hardware accelerated audio: neither particularly critical nor is the relevant hardware commonly available on end-user systems

 

But that's a circular argument. Hardware-accelerated graphics weren't necessary for most of the 1990s, and we enjoyed games then. But we realised it would be cool to have more powerful graphics. More demanding software inspires more powerful hardware, which permits even more demanding software, and so on.

 

There are several ways in which we could be making good use of hardware-accelerated audio, and I listed several in this post. But until developers and researchers start attempting these things, and make it clear to hardware manufacturers that they want more power, we won't see much movement.



#7 samoth   Crossbones+   -  Reputation: 4466

2 Likes

Posted 19 March 2013 - 01:22 PM

I think the main reason there is no huge demand for audio hardware is that it's perfectly possible to render 20-30 three-dimensional sources in real time in software, in CD quality, without totally killing the CPU. The audible difference between 20 sources, 200 sources, and 2000 sources is very small, if perceptible at all, so it's conceivable to get away with fewer.

Monitor speakers and headsets are often of embarrassingly low quality too, so even if the sound isn't the best possible quality, a lot of people won't notice at all (and they won't notice the difference between the most expensive sound card and the onboard one, either).

 

It is, on the other hand, not trivially possible to do a similar thing with 3D graphics (not at present-day resolutions, and not with state-of-the-art quality, anyway). The difference between 20, 200, and 2000 objects on screen is immediately obvious. Displays are usually quite good, so the difference between good graphics and bad graphics is immediately obvious, too.

 

That doesn't mean that OpenAL is not being developed at all, however. The OpenAL Soft implementation, which is something of a de facto standard (compared to the dinosaur reference implementation), undergoes regular updates and implements several useful extensions of its own.


Edited by samoth, 19 March 2013 - 01:32 PM.


#8 Kylotan   Moderators   -  Reputation: 3324

0 Likes

Posted 19 March 2013 - 04:12 PM

I'm not convinced that the number of objects was a big factor. For the first five years of consumer graphics card availability, pretty much every game that could use a GPU needed a software fallback. You had to be able to show the same number of objects whether you used hardware or software, just at a different degree of quality. The same would apply to sound now. (And by quality in the audio context I don't mean using 96 kHz / 24-bit sound; I mean simulating reverb, occlusion, etc. - things you can't do very cheaply but which you can discern on even the cheapest headphones.)



#9 GeneralQuery   Crossbones+   -  Reputation: 1263

0 Likes

Posted 19 March 2013 - 05:36 PM

Oh, to be able to go back to 1998 and give Aureal better lawyers...

I just read up on that court case. Wow, just... wow.



#10 EddieV223   Members   -  Reputation: 1363

0 Likes

Posted 19 March 2013 - 05:37 PM

Back in the day I had an X-Fi Extreme Music and a headset with three speakers in each ear for real 5.1 surround sound. People thought I cheated all the time in CoD and Medal of Honor, because I would turn and face people through walls and buildings and be ready for them before they turned corners. It was really just because I could clearly hear their footsteps and gear jingling from far away. With regular software/motherboard audio this doesn't happen at all.

 

Since Microsoft removed the audio HAL, hardware-accelerated audio pretty much died instantly.


Edited by EddieV223, 19 March 2013 - 05:48 PM.


#11 GeneralQuery   Crossbones+   -  Reputation: 1263

0 Likes

Posted 19 March 2013 - 05:37 PM

The next big thing in audio has to be the real-time modelling of acoustic spaces. The extra dimension of realism this would add would be eye-opening.



#12 Hodgman   Moderators   -  Reputation: 27026

2 Likes

Posted 19 March 2013 - 05:57 PM

32 sources, compared to hundreds of sources with proper occlusion and implicit environmental effects (i.e. they echo if they happen to be next to a stone wall, not because you explicitly told the source to use the 'stone room' effect), is an unimaginably huge difference. Audio really has been stagnating.

A lot of people have shitty PC speakers, yeah, but a lot of people also have cinema-grade speakers and/or very expensive headsets. Surround sound headsets are becoming very common with PC gamers at least.

Is it possible that in the future, instead of a dedicated audio processing card, we'll just be able to perform our audio processing on the (GP)GPU?

#13 Kylotan   Moderators   -  Reputation: 3324

0 Likes

Posted 19 March 2013 - 06:05 PM

GeneralQuery, there's certainly some interesting work happening in that area - for example the 'aural proxies' work here - http://gamma.cs.unc.edu/AuralProxies/ - but they are calling 5-10 ms on a single core "high performance", and I would suggest they need to do better than that for it to be widely accepted, especially since none of their examples shows how the system scales up to double-digit numbers of sound sources.

 

Hodgman, there was some talk of the GPU over in the other thread that I linked to above. From what I understand opinion is a bit divided as to whether the latency will be an issue. One poster there said he could get it down to 5ms of latency, but that was reading from audio capture, presumably a constant stream of data, to the GPU; going the other direction from CPU -> GPU -> PCIe audio device may not be so quick, and even just a 10ms delay will ruin the fidelity of a lot of reverb algorithms.



#14 BGB   Crossbones+   -  Reputation: 1545

0 Likes

Posted 19 March 2013 - 07:42 PM

Is it possible that in the future, instead of a dedicated audio processing card, we'll just be able to perform our audio processing on the (GP)GPU?

I had considered this before (using the GPU for some audio tasks), but haven't done much with it.

A person probably doesn't need to realistically calculate every sample; many effects (echoes, muffling, ...) can be handled by feeding the samples through an FIR (or IIR) filter.

The problem then is mostly the cost of realistically calculating and applying these filters for a given scene.

Some of this could possibly be handled by enlisting the GPU, both for calculating the environmental effects and for applying the filters (perhaps using textures and a lot of special shaders, or maybe OpenCL, or similar).

I have a few ideas here, mostly involving OpenGL, but they aren't really pretty. OpenCL or similar would probably be better.

In my case, for audio hardware, I have an onboard Realtek chipset and mostly use headphones.

#15 TheChubu   Crossbones+   -  Reputation: 3617

0 Likes

Posted 20 March 2013 - 09:09 AM

Is it possible that in the future, instead of a dedicated audio processing card, we'll just be able to perform our audio processing on the (GP)GPU?

I've seen a few VSTs for real-time processing (convolution reverbs, if I recall correctly) being accelerated with CUDA. I dunno how well they would work in a video game.

 

Searching Google for "cuda vst" turns some things up: http://www.liquidsonics.com/software_reverberate_le.htm




#16 GeneralQuery   Crossbones+   -  Reputation: 1263

0 Likes

Posted 20 March 2013 - 09:18 AM

I've seen a few VSTs for real-time processing (convolution reverbs, if I recall correctly) being accelerated with CUDA. I dunno how well they would work in a video game.

The latency is not such a problem for audio engineering but becomes problematic for real-time interactive applications.



#17 TheChubu   Crossbones+   -  Reputation: 3617

0 Likes

Posted 20 March 2013 - 09:23 AM

 

The latency is not such a problem for audio engineering but becomes problematic for real-time interactive applications.

How much is too much latency?

 

At least from what I've seen, latency is a problem in audio engineering and music production; people prefer to work with DAWs at <10 ms latency for maximum responsiveness (especially when dealing with MIDI controllers). Is 10 ms too much?


Edited by TheChubu, 20 March 2013 - 09:23 AM.



#18 GeneralQuery   Crossbones+   -  Reputation: 1263

0 Likes

Posted 20 March 2013 - 10:15 AM

 

 

How much is too much latency?

At least from what I've seen, latency is a problem in audio engineering and music production; people prefer to work with DAWs at <10 ms latency for maximum responsiveness (especially when dealing with MIDI controllers). Is 10 ms too much?

Latency in a DAW is not a problem (I'm not talking about MIDI latency but the playback latency of what is heard); even a few hundred milliseconds is certainly liveable. The problem with real-time, interactive applications like games is that latency between what is seen and what is heard will pose problems and ruin the illusion.



#19 samoth   Crossbones+   -  Reputation: 4466

1 Like

Posted 20 March 2013 - 10:22 AM

10ms

I'm no expert, but considering the speed of sound (ca. 300 m/s) and the size of a head (ca. 0.3 m), the difference between "sound comes from far left" and "sound comes from far right", which is pretty much the most extreme case possible, is about 1 ms. The ear is able to pick that up without any trouble (and obviously it's able to pick up much smaller differences too -- we can hear in a lot more detail than just "left" and "right").

 

In that light, 10ms seems like... huge. I'm not convinced something that coarse can fly.

 

Of course we're talking about overall latency (on all channels), but the brain has to somehow integrate that with the visuals too. And seeing how delicately it apparently does that, at ultra-high temporal resolution, I think it may not work out.


Edited by samoth, 20 March 2013 - 10:25 AM.


#20 Olof Hedman   Crossbones+   -  Reputation: 2618

0 Likes

Posted 20 March 2013 - 11:27 AM

In that light, 10ms seems like... huge. I'm not convinced something that coarse can fly.

 

 

If all sounds are delayed the same, I think it might work. A 10 ms delay means the sound starts while the correct frame is still being displayed.

You usually have some delay in any sound system from when you tell a sound to start playing until it actually plays, but I don't know how long it usually is... longer on mobile devices, at least.

As long as it's below 100 ms or so, I think most people will interpret it as "instantaneous".

 

Phase shifts and such, in the same sound source reaching both ears, are another matter.

 

It would be pretty easy to test...

 

Edit:

Also, to simulate sound-to-visual sync properly, you should add some delay: if someone drops something 3 m away, the sound should be delayed by about 10 ms.

I think this is good news. A minimum delay of 10 ms just means you can't accurately delay sounds closer than 3 m, but that shouldn't be much of a problem, since 3 m is close enough that you wouldn't really notice the delay in real life either.


Edited by Olof Hedman, 20 March 2013 - 11:35 AM.




