Quote: Original post by Charles B
Quote: ... by making more calls to UpdatePhysics.
Yes, it is actually required if you want constant accuracy whatever the angular speed.
But it does not necessarily have to be a full integration. To be more precise, one can still subdivide the physics into two functionalities:
- The dynamics integration (maybe 50 FPS), with updated forces that may depend on the first-order body state (example: damping).
- The kinetics integration, which assumes both linear accelerations and angular velocities (or accelerations) are constant during 1/fq.
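The two-level split above can be sketched in a few lines. This is a hypothetical minimal illustration (the 1-D linear state, planar rotation and damping model are our assumptions, not code from the post): a dynamics step that re-evaluates state-dependent forces, and a cheap kinetic evaluation that treats the linear acceleration and the angular velocity as constant over one step.

```cpp
#include <cassert>
#include <cmath>

// Sketch only: 1-D position plus a planar angle stand in for a full body state.
struct BodyState {
    double pos = 0, vel = 0, accel = 0;  // linear state for illustration
    double angle = 0, omega = 0;         // planar rotation
};

// Dynamics integration (maybe 50 FPS): forces that depend on the
// first-order body state are re-evaluated here (damping, for example).
void integrateDynamics(BodyState& b, double dt) {
    const double damping = 0.1;   // assumed damping coefficient
    b.accel = -damping * b.vel;   // state-dependent force model
    b.vel   += b.accel * dt;
    b.pos   += b.vel * dt;
    b.angle += b.omega * dt;      // omega held constant over the step
}

// Kinetic evaluation: State(Body)(t) anywhere inside a step, exact under
// the constant-acceleration / constant-omega assumption.
BodyState stateAt(const BodyState& b, double t) {
    BodyState s = b;
    s.pos   = b.pos + b.vel * t + 0.5 * b.accel * t * t;
    s.angle = b.angle + b.omega * t;
    return s;
}
```

The point is that `stateAt` can be called as many times as the prediction needs between two (more expensive) `integrateDynamics` calls.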
What counts for me is the kinetic part. What I want is to be able to retrieve State(Body)(t) enough times. Depending on the implementation, using quaternions and slerps maybe, it should handle cycloid vertex movements accurately. (Imagine a thrown dart: you want to know which part hits the wall, the blade end or not.) I suppose these computations do not approximate sine and cosine. Else it's a problem of the integration, not the prediction.
I mentioned a context where the dynamic parameters (accelerations and angular velocities) are bounded, as an example. Then the physics fq (both dynamics and kinetics all at once) can be set once and for all for a given precision (at compile time for instance), and the prediction also works mainly at the same fq. The interval of search is the same as the physics frame, except for the tmax optimization I have mentioned.
I also said that in the general case the (kinetic) fq could be locally adaptable. Think of typical engines that manage multiple constraints and collisions: they also request more physics updates from time to time (and usually they do it for the whole scene, which causes huge stalls). Also consider that when you deal with contacts, you have to call the integration anyway. There is nothing new here. Still, using prediction instead of detection guarantees that 400 FPS is almost always sufficient for 24 bits of precision (40 FPS for 10 bits). Plus, in each interval, operations are done in quasi-linear time.
So let's take a context where the upper bound of w is not known. A vertex path can then be a portion of a cycloid during your 10 milliseconds (say fq = 100 Hz).
If you keep your physics (integration) frequency constant and only subdivide the prediction part, then the quadratic approximation of the kinetics between two integration steps might be too imprecise. As I explained, with rotations the metric error grows as t cubed! So if you don't call back more integration steps and retrieve 'fresh' body states, this error exists whatever the number of subdivisions you make in your collision prediction (your stance, if I understood you well). And it may well not be acceptable.
It's basically the math I have studied in detail that tells me to beware in choosing the right physics (kinetic) update frequency. In exact collision prediction, the physics and collision frequencies are the same.
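To make the t-cubed claim concrete, here is a small numeric check (our illustration, not code from the thread): approximate a unit-radius point on a body spinning at angular speed w by the second-order Taylor expansion of the rotation, and measure the metric error. Since the leading error term is cubic in t, halving the step should divide the error by roughly 2^3 = 8.

```cpp
#include <cassert>
#include <cmath>

// Metric error of quadratic (2nd-order Taylor) kinetics for a point at
// radius 1 on a body rotating at angular speed w, after time t.
double quadRotationError(double w, double t) {
    double a  = w * t;
    double ex = std::cos(a), ey = std::sin(a);  // exact rotated position
    double qx = 1.0 - 0.5 * a * a, qy = a;      // quadratic approximation
    return std::hypot(ex - qx, ey - qy);        // distance between the two
}
```

Running this with a step of 10 ms versus 5 ms at w = 10 rad/s shows the ~8x error ratio, which is why subdividing only the prediction, without fresh integration states, cannot recover the lost precision.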
yes, indeed, we now both understand each other i think. however, what do you mean by not being able to achieve sufficient precision with a large timestep? (that's MY holy grail to be exact; algorithmic optimality comes a close second for me :))
in my case you're indeed looking at an arbitrary cycloid movement: the distance function can in theory fluctuate strongly. however, if you decide on the amount of timestep subdivision at this moment instead of at the highest level, you're able to pick the right amount of subdivision of the interval into separate quadratics, instead of the same amount between all objects even where it's really not needed. indeed it requires finding a good condition on which to base your timestep size, but i think it should be possible, and certainly worth the effort. more on this later.
Quote:
Quote: ... simply calcing the distance on THREE time intervals (does require one more state evaluation, quat->matrix (ouch) though) ...
But this does not change the required physics fq for a given context and precision tolerance.
This is a possibility; it's just a matter of implementation and interface between the physics and the collision prediction modules. But it also heavily impacts the underlying algorithms:
Here is what you do:

  M0             Mhalf           M1
--|--------------|---------------|--> t
  0              0.5             1
- You do not need to retrieve full body states. Just 3 frames cached, giving you any point at t=0, t=0.5 and t=1. You need a small computation to get the quadratic (easy). The problem to me is that it requires 3 instants of physics (or just kinetic) evaluation.
- When you request P(t) you need 3 matrix*vert plus the construction of the vertex quadratic.

Here is what I do:

  M0 + M1*t + M2*t*t
--|------------------------------|--> t
  0                              1
- For the same time step, I need no body state in advance. The quadratic of matrices requires just one body state, at t=0.
- When you want P(t) you need three matrix*vert
- I can also get V(t) with two matrix*vert.
- Crucial: I can get the path of Pb(t) relative to the kinetic frame of the other solid. This is done with 3 matrix multiplications for the whole interval, and only once. Cf. my solution for moving faces against moving vertices.
Both solutions are equivalent in theory, except for the last crucial point. Yours gives the polynomial by 'control points' and mine more directly by its coefficients. It's mainly a question of convenience and implementation; it depends on your engine architecture. But think of the face-vert case again.
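For concreteness, the 'control points' form converts to the coefficient form with a tiny Lagrange fit. A sketch with our own names (a scalar coordinate stands in for the matrices M0, Mhalf, M1):

```cpp
#include <cassert>
#include <cmath>

// Quadratic path over t in [0, 1] in coefficient form: c0 + c1*t + c2*t*t.
struct Quadratic { double c0, c1, c2; };

// Convert the control-point form (samples at t = 0, 0.5, 1) into the
// coefficient form via Lagrange interpolation through the three points.
Quadratic fromSamples(double p0, double ph, double p1) {
    return { p0,
             -3.0 * p0 + 4.0 * ph - p1,        // c1
              2.0 * p0 - 4.0 * ph + 2.0 * p1 };// c2
}

// Evaluate the quadratic at t (Horner form).
double eval(const Quadratic& q, double t) {
    return q.c0 + (q.c1 + q.c2 * t) * t;
}
```

Both representations describe the same polynomial; which one is more convenient depends, as said above, on the engine architecture and on cases like face-vert where the coefficient form composes more directly.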
yes, that's a good summary of both our methods. mine does require more state evaluations, but as you rightly point out you can cache one state matrix so two evaluations per frame are all that's needed, and the evaluation of the state at t=1 is needed anyway.
so it's a case of one extra state evaluation vs finding the first and second derivatives of the distance function to construct a taylor polynomial. again i'm not sure how you plan to go about finding these derivatives in an elegant manner for all the different cases. one extra state evaluation sure seems easier to me, if potentially slower. besides, the error function has three roots at t = 0, 0.5, 1 for my method, giving a quite equally spread out error, whereas with a taylor polynomial the error would be distributed less favorably.
besides i had an idea regarding the adaptation to smaller timesteps where needed and this interpolation method. aside from the state at the end of the last frame, the derivative found there with the quadratic approximation can also be cached. if this derivative is too different from the one found using the quadratic at the start of this frame, our approximation is poor, so we split the interval in two, do two new state evaluations at t = 1/4 & 3/4 and recurse.
you can recurse up to the precision of 400hz, or even further, but most of the time you will have sufficient precision without any recursion.
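a rough sketch of that recursion, under a stopping rule of our own choosing (the names, the probe-based error test and the depth cap are our assumptions): fit the quadratic through the endpoints and midpoint, probe it with the two extra state evaluations at t = 1/4 and 3/4, and split the interval when the approximation is poor.

```cpp
#include <algorithm>
#include <cassert>
#include <cmath>
#include <functional>

// Recursively count how many quadratic sub-intervals are needed to
// approximate a vertex path f on [t0, t1] to within tol, probing the fit
// at the 1/4 and 3/4 points (the two new state evaluations).
int countIntervals(const std::function<double(double)>& f,
                   double t0, double t1, double tol, int depth = 0) {
    double tm = 0.5 * (t0 + t1);
    double p0 = f(t0), ph = f(tm), p1 = f(t1);
    // quadratic through the three samples, in local s in [0, 1]
    double c1 = -3.0 * p0 + 4.0 * ph - p1;
    double c2 =  2.0 * p0 - 4.0 * ph + 2.0 * p1;
    auto q = [&](double s) { return p0 + (c1 + c2 * s) * s; };
    double h = t1 - t0;
    double err = std::max(std::fabs(q(0.25) - f(t0 + 0.25 * h)),
                          std::fabs(q(0.75) - f(t0 + 0.75 * h)));
    if (err < tol || depth >= 8)
        return 1;  // good enough (or recursion cap hit): keep one quadratic
    return countIntervals(f, t0, tm, tol, depth + 1)   // left half
         + countIntervals(f, tm, t1, tol, depth + 1);  // right half
}
```

On a slowly curving path this returns 1 (no recursion at all, as noted above), while a fast cycloid-like path triggers splits only where they are actually needed, instead of refining every object pair uniformly.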
Quote: Last word:
On the largest scale, to me the goal of a physics engine mostly based on prediction is to have a computational complexity linear in the number of objects (inputs) and linear in the discrete events it has to treat (outputs): the changes of contact states, impulses, starts/ends of constraints, triggered events (e.g. entering a teleport). The goal is thus algorithmic optimality.
as i said, i certainly think linear time complexity and efficiency are important goals. however, to me, robustness and elegance (no hacks like capping speeds) are even more important.
ideally we could have both :)