Multimedia Timer for Linux

Started by
10 comments, last by Dwiel 17 years, 1 month ago
So. I've got a program that I've been working on which uses the multimedia timer in Windows, and I would like to port it to Linux. The program is used to play back MIDI and other very time-sensitive events. The way I had it set up before, the timer checked a queue every ms for messages that needed to be sent that ms. At a high level, what I really need is the ability to efficiently do something like this:

//          ms, callback
DoThisLater(4, function)
DoThisLater(8, function)
DoThisLater(11, function2)


Does anybody know how this can be accomplished without polling the timer? I can't be wasting CPU cycles; I've got a lot of other processes that need all the CPU time they can get. P.S. I am also looking for a solution that will work on OS X. Thank you! Dwiel
I'd use:
gettimeofday() and usleep()
with some priority queue for scheduling events.
I second deffer's choice.

Use of gettimeofday should produce time information accurate to 1 microsecond on (some) Linux machines. Certain architectures might not have it available that accurately, in which case you will see it increase in larger increments than 1us.

Setting up a priority queue of events is pretty trivial. Then just have the thread sleep (e.g. with usleep) until the first event in the queue.

Mark
SDL provides exactly what you need. Not only that, you can use the exact same code on Windows, Linux, and Mac OS X. That's the thing about portability libraries, they let you write portable code.

You will need to write some glue code for SDL, however, since you need to be able to tell SDL to fire off timer events from the main event loop. Most people (hence most example code you'll find) seem to like polling the timer and interweaving event loop calls and timer poll calls. Trust me: this will lead to bad news and headaches down the road. You want one loop to rule them all and in the darkness bind them.

--smw

Stephen M. Webb
Professional Free Software Developer

Bregma: I checked out the SDL timers first, actually, because I was already using the video and thread subsystems for exactly the cross-platform reasons you describe. However, all of the documentation I found said that the sleep call (and timers in general: http://www.libsdl.org/intro.en/usingtimers.html ) is not guaranteed to return at the specified time; at small intervals (< 10 ms on most machines) it will delay for at least one time slice. I vaguely remember verifying this on Windows. Is there some way to get around this?

It looks like gettimeofday and usleep are probably the way to go on Linux. Does anyone know if they will work on OS X? That combination seems to make porting what I was doing before (the 1 ms callback and a priority queue) fairly trivial. I found someone else who was limited to about 4 ms on Linux due to the 250 Hz system clock, but was able to get around it with some extra hacking and by scheduling the process as real-time with a high priority.

I'll see how these work out.

Again, Bregma, if you know how to get past the limitations I mentioned earlier, it would definitely make this whole thing a lot easier.

Thanks

Quote:Original post by Dwiel
Again, Bregma, if you know how to get past the limitations I mentioned earlier, it would definitely make this whole thing a lot easier.


No. My use of SDL is limited to what I need in games, and I never need a millisecond-resolution timer. I would consider millisecond resolution to be hard realtime. I have no good experience using Unix for realtime (although I could offer you some bad experience, and I don't want to go there).

The realtime Unix folk these days seem to enjoy using the realtime signals and sigtimedwait() or sigwaitinfo().

On the other hand, much time-sensitive stuff is often implemented at the driver level, and the userspace API just feeds data into buffers. Userspace timing is just too vague in Unix to be reliable at the micro level.

--smw

Stephen M. Webb
Professional Free Software Developer

I'll check those functions out too.

Thanks
I ended up using clock_gettime and clock_nanosleep. I was unable to get past the 4 ms barrier, although it turns out this is acceptable (so far), so I didn't push as hard as I could have. I did try requesting real-time scheduling, both from within the code using the sched API and with the set_rlimits utility; neither worked. Hopefully it will not become an issue. I noticed that Hydrogen, the drum sampler software, also gives me a warning message saying it can't find a clock faster than 250 Hz, so I guess it couldn't figure out how to do it either (although they have to keep it generic, and I would like to as well).

Hi Dwiel,

I believe your problem comes down to the timer frequency your kernel was compiled with. There are (with Gentoo at least) three possible options: 100 Hz (for servers), 250 Hz, and 1000 Hz (desktop machines). It could be that you (or your distribution) chose the 250 Hz option as a compromise between responsiveness and problems caused by too many interrupts.

The kernel compile option is available under "Processor type and features" and then "Timer frequency" (or to set it manually the symbols are CONFIG_HZ_100, ... etc).

Hope this helps.
Timer tick frequency and sleep granularity are different things.

When a usleep() finishes, the thread will be ready to run again. This does not mean it will run because:

- If the scheduling is happening in tick interrupts, one might not have happened yet
- Some other process may be on the CPU

So in practice, the soonest it's likely to happen is the next scheduling pass after the usleep() is due to finish.

The very latest Linux kernels (2.6.21+) may have a feature called dynticks which allows timeouts to potentially be very accurate indeed (on some architectures) without creating much overhead. This feature is not likely to be available on other OSes, but it will be used transparently if it is.

Mark

