I am currently implementing a 3D interaction technique for manipulating objects. The first step is to enable 2DOF translation across the camera plane. The basic approach I use is to compute the intersection of the mouse ray with the camera plane, then use that intersection point to determine where to place the object. Just for clarity: if the user starts dragging a cube from, say, its bottom-left corner, that offset is taken into account.
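To make the approach concrete, here is a minimal sketch of what I mean, in Python with NumPy. The function names and the grab-offset convention are illustrative, not from any particular engine:

```python
import numpy as np

def ray_plane_intersection(ray_origin, ray_dir, plane_point, plane_normal):
    """Return the point where the ray hits the plane, or None if the ray
    is parallel to the plane or the hit is behind the ray origin."""
    denom = np.dot(plane_normal, ray_dir)
    if abs(denom) < 1e-6:          # ray (nearly) parallel to the plane
        return None
    t = np.dot(plane_normal, plane_point - ray_origin) / denom
    if t < 0.0:                    # intersection behind the camera
        return None
    return ray_origin + t * ray_dir

# At drag start: grab_offset = object_position - hit_point
# Each frame while dragging: object_position = hit_point + grab_offset
```

The plane passes through the object's position and faces the camera, so the intersection tracks the cursor across the camera plane.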
Now, I'm not sure if there's anything wrong with my code, but there seem to be noticeable variations in the resulting plane intersection point even though the mouse rays originate just a few pixels apart. Here's a brief excerpt from my log to show what I mean:
[11:06:08] Info (UI) P[655,553] - T[0.22,0.09,0.00]
[11:06:09] Info (UI) P[655,552] - T[0.22,0.27,0.00]
[11:06:09] Info (UI) P[656,552] - T[0.24,0.27,0.00]
[11:06:09] Info (UI) P[656,551] - T[0.24,0.18,0.00]
As you can see, the second entry is just one pixel above the first, yet the resulting Y coordinate is 0.27 instead of 0.09. Visually, this makes the object look like it is jumping around. I suspect this has something to do with precision issues. What I think would help is, instead of snapping the object to an absolute position each frame, moving it towards the new position rather than "teleporting" it to the new location.
I guess this could be done by interpolating the current position towards the new position. However, I am not sure what kind of step parameter to use for the interpolation, because in this situation I don't need the object to arrive at its destination in a fixed amount of time; I just need to fill in the missing positions between frames.
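What I have in mind is something like exponential smoothing, sketched below. The `rate` constant here is a hypothetical tuning parameter I made up for illustration; the `1 - exp(-rate * dt)` factor is one way to make the step frame-rate independent, since two small steps then compose into exactly one larger step:

```python
import math

def smooth_toward(current, target, rate, dt):
    """Move `current` toward `target` with exponential smoothing.

    `rate` (per second, hypothetical tuning constant): larger values
    converge faster. Using 1 - exp(-rate * dt) as the lerp factor makes
    the result depend only on total elapsed time, not on frame rate.
    """
    alpha = 1.0 - math.exp(-rate * dt)
    return current + (target - current) * alpha
```

Whether this is the right kind of step parameter for my case is exactly what I'm unsure about, but it seems to match the requirement that there is no fixed arrival time, only a smoothing strength.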