In the process I've written a multi-threaded system that runs a fixed-timestep physics simulation alongside a maximum-rate renderer. It works, in that it's independent of framerate (or, more precisely, of the timer I use to kick the physics system), but it seems to be behaving a bit oddly (i.e. it works because my test case is simple, but it would break down in a real situation).
It's modelled on what I learned in the Intel Hyperthreading talk at EDF last September. Your gamestate updater runs at a fixed time interval, which makes things like networking somewhat easier; object states are double-buffered, so you always have the state for both the current and previous iterations of the system. The renderer runs on a separate thread, and when it wants to render an object it basically interpolates between the two sets of data. This lets the renderer produce, say, three frames between updates and still give smooth movement, or lets updates run several times before the GPU is ready for the next frame.
The problem is that my rendering thread seems to be requesting data for frames a long way ahead of what I've actually computed (on the order of 10 seconds), which suggests that my updater thread is getting left behind.