sound engine (multiple sounds simultaneously?)

Started by
8 comments, last by DekuTree64 14 years, 10 months ago
Hello, I want to write my own sound engine as a hobby project. Right now I'm messing around with the Windows waveOut functions from winmm.lib. I don't want to go any higher level than that (no OpenAL or other sound libraries, for example). Is winmm the way to go for maximum control? Or is there something lower level that I can take advantage of? So far, I can play sound and use buffers to stream audio.

I also wanted to know: how are multiple sounds usually played at the same time? Is software mixing always used, or is there a number of hardware channels you can take advantage of before you must use software mixing? How is that stuff done, in general?

I'm pretty new to this. I just have a bit of high-level programming experience, so I don't really have a feel for how hardware is treated in Windows programming. Thanks!
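As background for the mixing question above, the core of a software mixer can be sketched in a few lines of portable C: sum the samples of every active voice in a wider integer type, then clamp (saturate) the result back to the 16-bit range. All names here are illustrative, not from winmm or any library:

```c
#include <stdint.h>
#include <stddef.h>

/* Mix several 16-bit PCM streams into one output buffer.
 * This is the essence of what a "software mixer" does. */
static void mix_pcm16(const int16_t *const *voices, size_t voice_count,
                      int16_t *out, size_t samples)
{
    for (size_t i = 0; i < samples; ++i) {
        int32_t acc = 0;                      /* wide accumulator avoids overflow */
        for (size_t v = 0; v < voice_count; ++v)
            acc += voices[v][i];
        if (acc > INT16_MAX) acc = INT16_MAX; /* clamp instead of wrapping */
        if (acc < INT16_MIN) acc = INT16_MIN;
        out[i] = (int16_t)acc;
    }
}
```

A handful of full-scale voices will clip hard at 16 bits, so real mixers usually attenuate each voice (volume scaling) or mix in floats before converting back.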
Quote:Original post by bogfrog
I want to write my own sound engine as a hobby project. Right now I'm messing around with the windows waveOut functions from winmm.lib. I don't want to go any higher level than that (no openAL or other sound libraries, for example.)


...okay. That's a rather tightly constrained set of requirements, but it's your time to waste.

Quote:
Is software mixing always used, or is there a number of hardware channels you can take advantage of before you must use software mixing?


To my knowledge, any vaguely modern sound card will have some hardware channels available.

Quote:
How is that stuff done, in general? I'm pretty new to this. I just have a bit of high level programming experience, so I don't really have a feel for how hardware is treated in Windows programming.


I'm not sure how many people here will know the low-level details. Almost everyone has high-level programming experience, and the existing sound libraries are some of the most fully fleshed-out options available. You might be better off getting hold of people who've worked on those libraries (or even those mucking about with their more complex features). OpenAL or FMOD are good candidates.
Quote:Original post by bogfrog
Is winmm the way to go for maximum control? Or is there something lower level that I can take advantage of?


winmm is no more low-level than, say, DirectSound or the nearest equivalent. Basically you want some API that abstracts away hardware-specific issues (like OpenGL or Direct3D does for graphics). The waveOut functions do this, but there are certainly layers below them that are more low-level. Eventually, though, you would of course have to deal with hardware-specific issues yourself... and imagine how many different pieces of sound hardware there are.

As for maximum control: WinMM will give you maximum portability (among Windows OSes), if that's what you mean. You will have to do everything yourself in software, and any hardware acceleration you could use will most likely not be taken advantage of. However, most sound libraries these days are leaning back toward software mixing, so this isn't necessarily a bad thing, and it's much more flexible.

With winmm and the waveOut functions, if system sounds play in Windows, then your stuff will too (assuming you use it correctly). Simple as that. But also note that winmm will tend to give you latencies of 60ms or more on average, so it's not that great for real-time gaming... but if done right it could be useful for playing sample-based music, for instance (like tracker mods from "back in the day").
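As a sketch of what a tracker-style replayer does per channel: each voice steps through its sample data with a 16.16 fixed-point phase accumulator, so playback pitch is just the step size. This is illustrative code under those assumptions, not taken from any particular replayer:

```c
#include <stdint.h>
#include <stddef.h>

/* Resample a PCM sample to an arbitrary pitch with a 16.16 fixed-point
 * phase accumulator -- the classic tracker-replayer technique.
 * step = (source_rate << 16) / output_rate; step > 0x10000 pitches up. */
static size_t resample_voice(const int16_t *src, size_t src_len,
                             uint32_t *phase, uint32_t step,
                             int16_t *out, size_t out_len)
{
    size_t written = 0;
    while (written < out_len && (*phase >> 16) < src_len) {
        out[written++] = src[*phase >> 16]; /* nearest-sample, no interpolation */
        *phase += step;
    }
    return written; /* samples produced before the source ran out */
}
```

Linear interpolation between `src[phase >> 16]` and the next sample is the usual next step up in quality.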

Anyway, if you really intend to try this, winmm would be a very good fallback interface for compatibility purposes, but you should look at other interfaces to the sound hardware as well (whether that's DirectSound, OpenAL used only for primary sound buffer access, or whatever).

Also, as a final suggestion: instead of trying to duplicate what OpenAL, FMOD, etc. do, it may be better to come up with a better solution for something like dynamic music. Anyway, have fun and good luck.
Quote:Original post by popsoftheyear
winmm is no more low-level than, say, DirectSound or the next equivalent. Basically you want some API that abstracts away any hardware specific issues (like opengl or direct3d does). The waveOut functions do this, but there are certainly some layers below it that are more low level. But then eventually, you would, of course, have to deal with hardware-specific issues... and imagine how many different sound hardwares there are.......


Thanks for that info. What would be the lowest-level way to do it? Did the OpenAL programmers, for example, write a bunch of code for a ton of specific sound hardware? In other words, did they read the specs for these sound cards and then write code that talks to each of them directly?

I'm not so ambitious that I think I'm going to make something that competes with OpenAL or whatever; I'm just wondering how these libraries work. It would be fun to see if I could do it just for my own hardware setup, for example.

Quote:it's not that great for real time gaming... but if done right it could be useful for playing sample-based music for instance (like tracker mods from "back in the day")


The memories. :) .s3m's, .xm's, .mod's. I played with all that stuff back in the day. "Scream Tracker", hahaha.

Actually, the last time I did any low-level programming was for DOS, so I'm sort of coming from that context. I remember having lots of control over the hardware. Coming to Windows, I'm sort of confused about where all the low-level stuff went.

I'm curious how that stuff works in the Windows environment.
Dealing with hardware is the kernel's job.
The kernel then provides a unified interface to all hardware of the same type, and all that has to be done to support new hardware is to write an implementation of that interface for it, which is what is known as a driver.

Quote:To my knowledge, any vaguely modern soundcard will have some hardware channels for use.

From my experience, only dedicated sound cards support hardware mixing.
Onboard audio doesn't support it, for example.

Quote:What would be the lowest-level way to do it? Did the OpenAL programmers, for example, write a bunch of code for a ton of specific sound hardware? In other words, did they read the specs for these sound cards and then write code that talks to each of them directly?

A part of OpenAL is EFX, which is made by Creative and provides hardware acceleration for their own cards.
EFX is the only thing that accesses the hardware directly here.
Quote:Original post by loufoque
A part of OpenAL is EFX, which is made by Creative and provides hardware acceleration for their own cards.
EFX is the only thing that accesses the hardware directly here.


So that's it? For example, my motherboard has an onboard Realtek audio chipset that supposedly supports 8 hardware audio channels. Those 8 channels are never used, since it's not a Creative product? Everything would be software mixed?

How often are the hardware channels on my Realtek chipset even used?


Hardware mixing is done by the kernel audio driver. At least, that's the case on Linux (where it's done by ALSA); I don't know about Windows, but I suppose it's the same.

Quote:For example, my motherboard has an onboard Realtek audio chipset that supposedly supports 8 hardware audio channels. Those 8 channels are never used, since it's not a Creative product? Everything would be software mixed?

I have no idea.

The creative stuff is only for hardware acceleration of sound effects, not mixing.

[Edited by - loufoque on May 31, 2009 5:12:02 PM]
If you just want to set up an audio stream with a ring buffer and mixer callbacks, then check out PortAudio or RtAudio. That's what I use in my software audio library (a software mixer and a MOD/MTM/S3M/XM replayer).
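The ring-buffer arrangement mentioned here can be sketched as a single-producer/single-consumer FIFO: the game thread writes mixed samples in, and the audio callback reads them out. This is a simplified illustration (no atomics or locking, so it assumes the two sides are serialized), not code from PortAudio or RtAudio:

```c
#include <stdint.h>
#include <stddef.h>

/* SPSC ring buffer for PCM samples.  Capacity is a power of two so the
 * monotonically increasing positions can be wrapped with a simple mask. */
typedef struct {
    int16_t data[1024];   /* power-of-two capacity */
    size_t  read, write;  /* positions only ever increase */
} ring_t;

static size_t ring_write(ring_t *r, const int16_t *src, size_t n)
{
    size_t free_space = 1024 - (r->write - r->read);
    if (n > free_space) n = free_space;    /* don't overwrite unread audio */
    for (size_t i = 0; i < n; ++i)
        r->data[(r->write + i) & 1023] = src[i];
    r->write += n;
    return n;                              /* samples actually accepted */
}

static size_t ring_read(ring_t *r, int16_t *dst, size_t n)
{
    size_t avail = r->write - r->read;
    if (n > avail) n = avail;              /* underrun: return what we have */
    for (size_t i = 0; i < n; ++i)
        dst[i] = r->data[(r->read + i) & 1023];
    r->read += n;
    return n;
}
```

A real-time-safe version would use atomic loads and stores on `read` and `write` so the audio callback and the game thread can run concurrently without locks.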

Have fun!
If you are new to audio programming, I suggest you start with FMOD! It's a very easy audio API and will give you fewer headaches than DirectSound, while being cross-platform.
I got started with sound programming on the Game Boy Advance. There you actually get to mess with the hardware directly, and there are only 2 hardware PCM channels, which are generally used for playing left and right software-mixed buffers. I even wrote a tutorial on it a while back, if you want to check that out.

Now I do basically the same thing on PC, using SDL to feed my buffers to the hardware. Sunray's library suggestions sound good too, if you don't want all of SDL.

My general rule of thumb for "how low level is low enough" is when you get access to PCM buffers. The timing of callbacks and the feeding of buffers to the hardware is usually where things get hardware-specific, so if you handle everything above that point, it's about as portable as it gets.
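That boundary can be made concrete: whatever the platform layer is (SDL, PortAudio, waveOut, ...), it ultimately calls back asking you to fill a buffer of PCM samples. Below is a portable sketch of that callback shape, with a trivial square-wave generator standing in for a real mixer; all names are made up for illustration:

```c
#include <stdint.h>
#include <stddef.h>

/* The portable side of the boundary: the platform layer periodically asks
 * for 'samples' worth of PCM, and the engine's only job is to fill it.
 * Here the "engine" is a square wave; a real one would run its mixer here. */
typedef struct {
    uint32_t phase;     /* sample position within the current cycle */
    uint32_t period;    /* samples per full square-wave cycle */
    int16_t  amplitude;
} square_state;

static void audio_callback(square_state *s, int16_t *buffer, size_t samples)
{
    for (size_t i = 0; i < samples; ++i) {
        buffer[i] = (s->phase < s->period / 2) ? s->amplitude
                                               : (int16_t)-s->amplitude;
        if (++s->phase >= s->period)
            s->phase = 0;              /* wrap to start the next cycle */
    }
}
```

In a real engine, the callback body would pull already-mixed samples from the software mixer or a ring buffer instead of generating a tone, but the shape of the interface stays the same.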

