Timing sound when being generated in software



I'm writing an emulator, and so need to emulate graphics and sound hardware. The timing is really confusing me when it comes to sound, however.

The way I'm handling sound is the conventional method: a callback function, called by the sound API (in this case, FMOD Ex), fills a buffer with a number of samples. Those samples are generated by software emulation of the Master System's sound hardware. I've tried two methods to handle the timing.

Elaborate method

Writes to the sound hardware are queued up along with a timestamp (in this case, the number of emulated CPU cycles that have been executed). When the callback function is called to fill the sound buffer, I know three things:
  • The number of emulated CPU cycles that have been executed thus far.
  • The number of emulated CPU cycles that had been executed the last time this callback function was called.
  • A queue of sound hardware writes with times.
I can now generate the required samples, spacing the hardware writes out by linearly interpolating the timestamps of the queued events across the length of the sample buffer, and running the emulated sound hardware for a few ticks per sample.

The problem with this technique is that, for some reason, the unevenness of calls to the callback means that the sound hardware is updated in lumps (about every half second). To illustrate:

wwwwwwwwwwwwCCCCCCCCCCCwwwwwwwwwwwwCCCCCCCCCCC...

...where 'w' is a write to hardware and 'C' is a call to the callback. All the writes end up being lumped into the first callback 'C'. It sounds like this as opposed to this - the latter uses the second method.

Simpler method

Handle all emulation timing in the callback. That is, when the callback is called, interleave running the emulator for a few CPU cycles with running the sound hardware for a few cycles as samples are generated. This produces perfect audio, but choppy video, as the video now only gets updated erratically. It is also dependent on the slow emulation of the whole system, not just on the fast emulation of the sound hardware.

Any bright ideas?

