What advantage would such a system have over animating the way a normal video game does? Other than perhaps allowing for really realistic graphics -- given that it's actually spliced-together video of real events -- isn't this just limiting what can be included and what inputs are possible? The realism could also suffer if you weren't extremely careful about maintaining exact positioning, angles, lighting conditions, etc. for each alternative, or if the blending between alternatives wasn't absolutely perfect.
It also seems like you could end up with an enormous file size if you elected to provide footage for a non-trivial number of events, and since you'd need to record live action for every option you want to include, creating content would probably be very time-consuming.
Interesting idea, but I just don't see how it would actually be useful -- if I'm understanding correctly, it seems like an expensive way of producing lower-quality content with more limitations.
You can have multiple frames playing at the same time in different parts of the screen.
We can do that pretty easily with traditional animation.
And you have a balance between how "flicky" the changes are and how interactive it is: the more "flicky" it is, the more interactive; the less flicky, the less interactive.
We can do that with traditional animation too if we want, and it's fairly trivial -- you would just need to restrict inputs or clamp them to specific discrete values.
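For what it's worth, here's a minimal sketch of what I mean by clamping to discrete values -- this assumes a joystick axis normalized to [-1, 1] (as most input libraries report it) and snaps it to one of N pre-recorded branches; the function name and branch-count parameter are just illustrative:

```python
def snap_axis(value, num_branches):
    """Map a continuous joystick axis in [-1, 1] to one of
    num_branches discrete branch indices (0 .. num_branches - 1)."""
    # Clamp to the valid range first, in case of driver noise.
    value = max(-1.0, min(1.0, value))
    # Rescale [-1, 1] onto [0, num_branches) and floor to an index.
    index = int((value + 1.0) / 2.0 * num_branches)
    # The extreme value 1.0 lands exactly on num_branches; pull it back.
    return min(index, num_branches - 1)
```

With something like this in front of the playback logic, any analog input is reduced to the handful of alternatives you actually have footage for, so the "interactivity" is exactly as granular as your recorded branches.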
The more you "pool" (destroy/shrink data between regions), the more it'll find similarities in differences.
I'm not sure I understand this one -- could you try to explain it more clearly?
I'm developing this kind of weird new way to control video with a joystick.
Is this purely conceptual at the moment, or are you actually working on an implementation? It might be easier to understand if you could show a video demonstrating the technique in action or something...