HUD: motion compensation for multiple targets

Hi, I'm trying to create a sort of augmented reality for a first-person shooter where information about targets appears to float on the targets from the user's point of view, i.e. the information sticks to the target, so regardless of how the user moves, the tag stays centered on the target.

In my current implementation, if the user does not move or turn at all, the tags appear to follow the targets perfectly.

The problem occurs when the user moves or rotates: the tags tend to overshoot the target in the direction of the movement.

To compensate for the user's translational movement, I calculate the target's tag coordinates like so:
compensated_Target_Location = target_Location - (user_Velocity_Vector * deltaTime), and that works, i.e. regardless of how the user moves, via the W, A, S, D or jump keys, the tags stay correctly centered on the targets.
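
(For illustration, that compensation amounts to something like the minimal C++ sketch below; the Vector3 type and names are placeholders, not anything from the actual engine.)

```cpp
// Minimal sketch of the translational compensation described above.
// The Vector3 type and names are placeholders, not from the actual engine.
struct Vector3 {
    float x, y, z;
    Vector3 operator-(const Vector3& o) const { return { x - o.x, y - o.y, z - o.z }; }
    Vector3 operator*(float s)          const { return { x * s, y * s, z * s }; }
};

// Pull the tag back by the distance the user moved during the last frame.
Vector3 CompensateTranslation(const Vector3& targetLocation,
                              const Vector3& userVelocity,
                              float deltaTime)
{
    return targetLocation - userVelocity * deltaTime;
}
```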

The second part of the problem is compensating for the user's mouse movements, i.e. the user's angular velocity. So my question is: how do I compensate for that?

An example of the problem: I see some tags on targets, everything is stationary, and it looks perfect, but if I move my mouse to the right the tags overshoot the target to the right, following the mouse. If I pitch the mouse down, the tags overshoot below the target. When I stop moving, the tags slingshot back to the targets. So somehow I need to compensate for that delta yaw and delta pitch.
Wait, are you just trying to find the screen coordinates for an object? There is a lot of information out there about converting between object coordinates, world coordinates and camera coordinates (sometimes using the word "space" instead of "coordinates", although I think that's a horrible name).

For instance, I seem to remember that the OpenGL red book had a decent description of how this all works.
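
As a point of reference, the conversion being described boils down to the usual world -> clip -> NDC -> screen projection, roughly like the sketch below; the matrix layout, types, and names here are assumptions for illustration, not any particular API.

```cpp
// Sketch of the standard world-to-screen pipeline: transform by the combined
// view-projection matrix, perspective-divide, then map to pixel coordinates.
#include <cmath>

struct Vector4 { float x, y, z, w; };
struct Matrix4 { float m[4][4]; };        // row-major: m[row][col]

Vector4 Transform(const Matrix4& M, const Vector4& v)
{
    return {
        M.m[0][0]*v.x + M.m[0][1]*v.y + M.m[0][2]*v.z + M.m[0][3]*v.w,
        M.m[1][0]*v.x + M.m[1][1]*v.y + M.m[1][2]*v.z + M.m[1][3]*v.w,
        M.m[2][0]*v.x + M.m[2][1]*v.y + M.m[2][2]*v.z + M.m[2][3]*v.w,
        M.m[3][0]*v.x + M.m[3][1]*v.y + M.m[3][2]*v.z + M.m[3][3]*v.w
    };
}

// Returns false if the point is behind the camera (w <= 0), in which case
// no tag should be drawn for it.
bool WorldToScreen(const Matrix4& viewProj, const Vector4& worldPos,
                   float screenW, float screenH, float& outX, float& outY)
{
    Vector4 clip = Transform(viewProj, worldPos);   // worldPos.w should be 1
    if (clip.w <= 0.0f) return false;

    // Perspective divide to normalized device coordinates in [-1, 1].
    float ndcX = clip.x / clip.w;
    float ndcY = clip.y / clip.w;

    // Map NDC to pixel coordinates (y flipped so 0 is the top of the screen).
    outX = (ndcX * 0.5f + 0.5f) * screenW;
    outY = (1.0f - (ndcY * 0.5f + 0.5f)) * screenH;
    return true;
}
```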

Hi, I'm trying to create a sort of augmented reality for a first-person shooter where information about objects of interest is displayed on the user's HUD, centered at the origin of the objects. The problem I'm having is figuring out how to account for the player's angular camera movement in order to properly place the information tag.

Translational compensation was easy to solve, i.e. compensated_Target_Origin = target_Origin - (player_Velocity_Vector * deltaTime)

Angular compensation for rapid pitch and yaw movement (roll isn't a factor in this game)... well, I'm a bit stumped. I'm sure there's a solution, since aircraft HUDs deal with this: http://upload.wikime..._symbology.jpeg


Let me see if I understand.

In the world frame, the player is moving, and we're going to pretend that the targets are stationary. This means that in the player frame, the player (the origin) is stationary, and the targets are moving. You want to help the player lead his shots at targets that, in the player frame, are moving. The assumption you want to make is that the player frame's translational and angular velocities w.r.t. the world frame are constant. The way you want to help the player is by telling him what heading vector to shoot in; you will display this as a label on his HUD at the corresponding position.

Fair description?

The only thing I'm unsure of is whether you actually want to help the player lead shots per se (i.e., solve the "bullet and target reach the same point at the same time" problem), or whether you want only to predict where the targets will appear "dt" seconds from now in the field of view.

If it's the former: I think it'll be easier to think about this problem, as I hinted, if you work in the player frame and think of the objects as moving. Then you'll find lots to help you around these forums, under headings like "leading shots," etc.
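
For reference, the calculation usually discussed under that heading looks something like the sketch below: in the shooter's frame, a target at relative position p with relative velocity v, and a projectile of speed s, give a quadratic in the intercept time t. The types and names here are illustrative assumptions, not a specific engine's API.

```cpp
// Classic "leading shots" intercept: solve |p + v*t| = s*t for the smallest t > 0,
// i.e. (v.v - s^2) t^2 + 2 (p.v) t + p.p = 0.
#include <cmath>

struct Vector3 {
    float x, y, z;
    Vector3 operator+(const Vector3& o) const { return { x + o.x, y + o.y, z + o.z }; }
    Vector3 operator*(float k)          const { return { x * k, y * k, z * k }; }
};

static float Dot(const Vector3& a, const Vector3& b) { return a.x*b.x + a.y*b.y + a.z*b.z; }

// Returns true and writes the aim point (in the shooter's frame) if an intercept exists.
bool LeadTarget(const Vector3& p, const Vector3& v, float s, Vector3& aimPoint)
{
    float a = Dot(v, v) - s * s;
    float b = 2.0f * Dot(p, v);
    float c = Dot(p, p);

    float t;
    if (std::fabs(a) < 1e-6f) {                 // projectile and target speeds match
        if (std::fabs(b) < 1e-6f) return false;
        t = -c / b;
    } else {
        float disc = b * b - 4.0f * a * c;
        if (disc < 0.0f) return false;          // target can never be hit
        float root = std::sqrt(disc);
        float t1 = (-b - root) / (2.0f * a);
        float t2 = (-b + root) / (2.0f * a);
        t = (t1 > 0.0f && (t1 < t2 || t2 <= 0.0f)) ? t1 : t2;   // smallest positive root
    }
    if (t <= 0.0f) return false;

    aimPoint = p + v * t;                       // heading to shoot along
    return true;
}
```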

If it's the latter: You have an easier problem, and you can solve it as follows. The translational and angular velocities of an object, taken together, are called a twist. You can take a twist and an amount of time, and compute the transformation (rotation and translation) that corresponds to moving with the velocities described by that twist, for that amount of time, starting from the origin. The operation that does this is called matrix exponentiation, or Lie group exponentiation, and the formula for the case of rotations can be found here. All you'd do is construct these transformations, apply them to your objects, and render labels at the transformed points. I can elaborate if necessary, but that's the idea.
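
To make that concrete for this thread's case (stationary targets, a moving player, with the player's linear and angular velocities expressed in the player's own frame and assumed constant over dt), a sketch of that prediction might look like the following; all names are illustrative, not from any particular engine.

```cpp
// Predict where a target currently at camera-frame position p will appear
// dt seconds from now, given the player's linear velocity v and angular
// velocity w (both in the player frame, assumed constant over dt):
//     p' = R(-|w|*dt, w/|w|) * (p - v*dt)
// where R is built with Rodrigues' formula (the rotation exponential).
#include <cmath>

struct Vector3 {
    float x, y, z;
    Vector3 operator+(const Vector3& o) const { return { x + o.x, y + o.y, z + o.z }; }
    Vector3 operator-(const Vector3& o) const { return { x - o.x, y - o.y, z - o.z }; }
    Vector3 operator*(float k)          const { return { x * k, y * k, z * k }; }
};

static float Dot  (const Vector3& a, const Vector3& b) { return a.x*b.x + a.y*b.y + a.z*b.z; }
static Vector3 Cross(const Vector3& a, const Vector3& b) {
    return { a.y*b.z - a.z*b.y, a.z*b.x - a.x*b.z, a.x*b.y - a.y*b.x };
}

// Rodrigues' rotation formula: rotate p by 'angle' radians about unit axis 'k'.
// p' = p*cos(angle) + (k x p)*sin(angle) + k*(k . p)*(1 - cos(angle))
static Vector3 Rotate(const Vector3& p, const Vector3& k, float angle)
{
    float c = std::cos(angle), s = std::sin(angle);
    return p * c + Cross(k, p) * s + k * (Dot(k, p) * (1.0f - c));
}

Vector3 PredictInPlayerFrame(const Vector3& p, const Vector3& playerVel,
                             const Vector3& playerAngVel, float dt)
{
    Vector3 translated = p - playerVel * dt;         // undo the player's translation

    float wMag = std::sqrt(Dot(playerAngVel, playerAngVel));
    if (wMag < 1e-6f) return translated;             // no rotation to account for

    Vector3 axis = playerAngVel * (1.0f / wMag);
    return Rotate(translated, axis, -wMag * dt);     // undo the player's rotation
}
```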
Oh, I think I get it now. I didn't understand what you meant by "compensation" at first. So yeah, compute where you can hit the target using target leading and then project that point onto the screen.
Sorry for the confusion. I revised the first post to be more clear. The "user" is your avatar, and everything else that I want the HUD to identify is now called a "target".
So did we interpret correctly that you are trying to do target leading and that's why you need to compensate for the velocity? Do you still have a question?

I didn't think of it as a target-leading problem, but it is one if I assume that the turning user isn't turning but rather that the stationary objects are moving. That, combined with what Emergent describes in the bottom half of his post, certainly gives me enough to try for a while. No, I don't have a question at the moment. :P

