Multimedia Timer for Linux

This topic is 3924 days old which is more than the 365 day threshold we allow for new replies. Please post a new topic.

If you intended to correct an error in the post then please contact us.


So. I've got a program that I've been working on which uses the Multimedia Timer in Windows, and I would like to port it to Linux. The program is used to play back MIDI and other very time-sensitive events. The way I had it set up before, the timer was set to check a queue every millisecond for messages that needed to be sent that millisecond. At a high level, what I really need is the ability to efficiently do something like this:
//          ms, callback
DoThisLater(4, function)
DoThisLater(8, function)
DoThisLater(11, function2)


Does anybody know how this can be accomplished without polling the timer? I can't be wasting CPU cycles; I've got a lot of other processes that need all the CPU time they can get. P.S. I am also looking for a solution that will work on OS X. Thank you, Dwiel

I second deffer's choice.

gettimeofday() should produce time information accurate to 1 microsecond on (some) Linux machines. On architectures where the clock isn't available at that resolution, you will see it advance in increments larger than 1 us.

Setting up a priority queue of events is pretty trivial. Then just have the thread sleep (e.g. with usleep()) until the first event in the queue is due.

Mark

SDL provides exactly what you need. Not only that, you can use the exact same code on Windows, Linux, and Mac OS X. That's the thing about portability libraries: they let you write portable code.

You will need to write some glue code for SDL, however, since you need to be able to tell SDL to fire off timer events from the main event loop. Most people (hence most example code you'll find) seem to like polling the timer and interweaving event loop calls and timer poll calls. Trust me: this will lead to bad news and headaches down the road. You want one loop to rule them all and in the darkness bind them.

--smw

Bregma: I actually checked out the SDL timers first, because I was already using the video and thread subsystems for exactly the cross-platform reasons you gave. However, all of the documentation I found said that the sleep call (and timers in general: http://www.libsdl.org/intro.en/usingtimers.html ) is not guaranteed to return at the specified time; at small intervals (< 10 ms on most machines) it will delay for at least one time slice. I vaguely remember verifying this on Windows. Is there some way to get around this?

It looks like gettimeofday() and usleep() are probably the way to go on Linux. Does anyone know if they will work on OS X? That combination seems to make porting what I was doing before (the 1 ms callback and a priority queue) fairly trivial. I found someone else who was limited to about 4 ms on Linux due to the 250 Hz system clock, but was able to get around it with some extra hacking and by scheduling the process as realtime with a high priority.
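On Linux, the realtime-priority part is usually requested with sched_setscheduler() and SCHED_FIFO. A minimal sketch; note this normally fails with EPERM unless the process runs as root or has been granted a realtime rlimit (which is what utilities like set_rlimits arrange):

```c
#include <errno.h>
#include <sched.h>

/* Try to put the calling process under SCHED_FIFO realtime scheduling.
   Returns 0 on success, or the errno value on failure (typically EPERM
   when the process lacks realtime privileges). */
int try_realtime(int priority)
{
    struct sched_param sp;
    sp.sched_priority = priority;   /* 1..99 for SCHED_FIFO on Linux */
    if (sched_setscheduler(0, SCHED_FIFO, &sp) == -1)
        return errno;
    return 0;
}
```

A realtime SCHED_FIFO thread is not preempted by ordinary timeshare processes, which is what lets the 1 ms wakeups arrive on time under load.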

I'll see how these work out.

Again, Bregma, if you know how to get past the limitations I mentioned earlier, it would definitely make this whole thing a lot easier.

Thanks

Quote:
Original post by Dwiel
Again, Bregma, if you know how to get past the limitations I mentioned earlier, it would definitely make this whole thing a lot easier.


No. My use of SDL is limited to what I need in games, and I never need a millisecond-resolution timer. I would consider millisecond resolution to be hard realtime. I have no good experience using Unix for realtime (although I could offer you some bad experience, and I don't want to go there).

The realtime Unix folk these days seem to enjoy using the realtime signals and sigtimedwait() or sigwaitinfo().

On the other hand, much time-sensitive stuff is implemented at the driver level, and the userspace API just feeds data into buffers. Userspace timing in Unix is just too vague to be reliable at that fine-grained a level.

--smw

I ended up using clock_gettime() and clock_nanosleep(). I was unable to get past the 4 ms barrier, although it turns out this is acceptable (so far), so I didn't push as hard as I could have. I did try setting realtime scheduling both from within the code using sched and with the set_rlimits utility; neither worked. Hopefully it will not become an issue. I noticed that Hydrogen, the drum sampler software, also gives me a warning saying that it can't find a clock faster than 250 Hz, so I guess it couldn't figure out how to do it either (although they have to keep it generic, and I would like to as well).

Hi Dwiel,

I believe your problem comes down to the timer frequency your kernel was compiled with. There are (with Gentoo, at least) three possible options: 100 Hz (for servers), 250 Hz, and 1000 Hz (desktop machines). It could be that you (or your distribution) chose the 250 Hz option as a compromise between responsiveness and the overhead of too many interrupts.

The kernel compile option is available under "Processor type and features" and then "Timer frequency" (or, to set it manually, the symbols are CONFIG_HZ_100, ... etc.).

Hope this helps.

Timer tick frequency and sleep granularity are different things.

When a usleep() finishes, the thread becomes ready to run again. That does not mean it will actually run, because:

- if scheduling happens on tick interrupts, a tick might not have occurred yet
- some other process may be on the CPU

So in practice, the soonest the thread is likely to run is the first scheduling opportunity after the usleep() is due to finish.

The very latest Linux kernels (2.6.21+) may have a feature called dynticks, which allows timeouts to be very accurate indeed (on some architectures) without creating much overhead. This feature is not likely to be available on other operating systems, but it will be used transparently where it is.

Mark

Thanks for the info. I am currently using Ubuntu Edgy and have confirmed that the system clock is 250 Hz. I think what I am going to do is switch distros to Ubuntu Studio (for audio and video work) when it comes out. My guess is that it will use the higher system clock frequency.

I haven't seen the dyntick feature around, I'll have to check it out.

Thanks

That dynticks feature definitely looks useful. I spent most of today getting my application working smoothly on Mac OS X and didn't end up with anything too acceptable. I switched the sleep to nanosleep() instead of clock_nanosleep() and used gettimeofday() instead of clock_gettime(). This got it close, as the timer seemed to be 1000 Hz if not higher. However, the timing was not consistent: I would get fairly periodic jumps where I would ask for 1-4 ms and be held back for 10-20 ms, a few requests in a row. This was after I changed the priority to realtime (priority 96-128). I am not entirely sure my priority was getting set right...

I was using a function I found here:

// Reschedules the indicated thread according to new parameters:
//
// machThread   The mach thread id. Pass 0 for the current thread.
// newPriority  The desired priority.
// isTimeShare  false for round robin (fixed) priority,
//              true for timeshare (normal) priority
//
// A standard new thread usually has a priority of 0 and uses the
// timeshare scheduling scheme. Use pthread_mach_thread_np() to
// convert a pthread id to a mach thread id.
kern_return_t RescheduleStdThread( mach_port_t machThread,
                                   int newPriority,
                                   boolean_t isTimeshare )
{
    kern_return_t result = 0;
    thread_extended_policy_data_t timeShareData;
    thread_precedence_policy_data_t precidenceData;

    // Set up some variables that we need for the task
    precidenceData.importance = newPriority;
    timeShareData.timeshare = isTimeshare;
    if( 0 == machThread )
        machThread = mach_thread_self();

    // Set the scheduling flavor. We want to do this first, since
    // doing so can alter the priority.
    result = thread_policy_set( machThread,
                                THREAD_EXTENDED_POLICY,
                                &timeShareData,
                                THREAD_EXTENDED_POLICY_COUNT );

    if( 0 != result )
        return result;

    // Now set the priority
    return thread_policy_set( machThread,
                              THREAD_PRECEDENCE_POLICY,
                              &precidenceData,
                              THREAD_PRECEDENCE_POLICY_COUNT );
}



However, when I compiled this (after adding some includes), I was told that thread_policy_set() expected an integer_t* as its third parameter instead of the two types I was giving it (the types I was passing are what is documented all over the internet). So I changed the types and let the compiler cast both an int and a bool to integer_t. In retrospect this seems like the most probable cause of my problems. Does anyone know what is going on here?

Or maybe there is a better way to be doing this on OS X.

Thanks


