I've just worked out how I can do this! (I'm gonna have a terrible time explaining how this darn newfangled thing works, but trust me, it's dead simple.)
Say I keep the high-frequency layer (of the quadtree) just rolling through.
Each layer of the quadtree covers 2x2 pixels and 4 frames' worth of video.
Then, for the layer after, I randomize (mutate) the spatial state so it can belong to any 4-frame section (temporal state): it can be any of the 4 frames.
Then I pass this to the next layer, and now I need to find the spatial states that have any of these temporal states as an input (using the proximal link).
This way I can nullify permutations that weren't part of the original recording, and stop it from splitting off into noise!
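To make the pruning step concrete, here's a minimal sketch of one layer of that constraint propagation. All the names (`propagate_layer`, `proximal_links`) are my hypothetical labels, not anything from the post; a "state" is just an opaque token, and the proximal link is modeled as a lookup from each parent state to the set of child states that actually fed into it in the original recording.

```python
def propagate_layer(candidate_children, proximal_links):
    """Return the parent states whose inputs include a surviving candidate.

    candidate_children: set of child states still considered valid.
    proximal_links: dict mapping parent state -> set of child states that
                    were its inputs in the original record.
    """
    survivors = set()
    for parent, children in proximal_links.items():
        # Keep the parent only if it was actually built from one of the
        # candidate child states in the original recording; anything else
        # is a permutation that never occurred, i.e. noise to nullify.
        if children & candidate_children:
            survivors.add(parent)
    return survivors
```

Each call knocks out every parent state that none of the surviving children could have produced, which is the "nullify permutations" step.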
Because there are always fewer states the further up the quadtree you go, it stands to reason that we'll knock out a lot of the noisy options.
I keep going until I basically hit 2 possibilities, then I randomize between them, and I get infinite playback out of the linear recording.
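The whole climb up the tree can be sketched as a loop: prune layer by layer, and the moment we're down to 2 (or fewer) possibilities, randomize between them. Again, this is my own illustrative code under assumed data shapes, not the author's implementation.

```python
import random

def prune_until_two(layers, initial_candidates, rng=random):
    """Walk up the quadtree, pruning states per layer; once at most two
    possibilities remain, pick one at random for 'infinite playback'.

    layers: list of dicts, one per quadtree layer, each mapping a parent
            state -> set of child states that fed into it.
    """
    candidates = set(initial_candidates)
    for links in layers:
        # Same pruning rule as before: a parent survives only if one of
        # its original inputs is still a live candidate.
        survivors = {p for p, kids in links.items() if kids & candidates}
        if len(survivors) <= 2:
            # Breaking point reached: randomize between what's left.
            return rng.choice(sorted(survivors)) if survivors else None
        candidates = survivors
    # Ran out of layers with more than two options left; pick one anyway.
    return rng.choice(sorted(candidates))
```

If the survivor set ever empties out, the sketch returns `None`, which corresponds to the "might not break that nicely" case mentioned below.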
Past this is the breaking point, and some video might not break down that nicely (you're better off with a still camera position, I'm pretty sure), but it is guaranteed to keep things interesting and fresh every play.
Actually, going for 2 possibilities every frame might be a little too much; maybe every 32nd frame would already produce interesting enough results, not sure.
If I were adding controls, I would keep finding possibilities and stop feeding forward as soon as I didn't get the control option I needed. Doing that would probably garble the screen, because it would theoretically be touch-sensitive per frame (which would be a little too good to be true).
So it gathers as far as it can while keeping to the constraints.
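A rough sketch of that control-constrained gathering, under the same assumed data shapes as before. The `control_of` mapping (state -> control label) is entirely my invention to make the idea runnable; the point is just that we feed forward layer by layer and stop the moment no surviving state carries the wanted control option.

```python
def gather_with_control(layers, initial_candidates, wanted_control, control_of):
    """Feed candidates forward layer by layer, stopping as soon as none of
    the surviving states offers the control option we need.

    control_of: hypothetical mapping from state -> control label.
    Returns the last candidate set that still satisfied the constraint.
    """
    candidates = set(initial_candidates)
    for links in layers:
        survivors = {p for p, kids in links.items() if kids & candidates}
        if not any(control_of.get(s) == wanted_control for s in survivors):
            break  # stop feeding forward; keep the last satisfying set
        candidates = survivors
    return candidates
```

So it gathers as far up the tree as the control constraint allows, then hands back whatever possibilities it had at that point.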
It's a lot like a chess algorithm: you start with many possibilities and slowly knock them off one at a time until you get the desired result.