
# OpenGL Indicator arrow for offscreen objects

## 4 posts in this topic

Hello,

Perhaps I'm searching for the wrong terms, but I'm not finding an understandable answer to this question; of the threads I can find, none really answer it, at least not in a way I can make sense of. So, my genuine apologies if this question has been asked many times and I'm just failing to find those threads.

I'm trying to rotate a little 3D indicator arrow in the HUD so that it points to an object, especially when the object is "offscreen." I understand how to rotate one object toward another generally speaking (in world coordinates), but I'm having difficulty because this arrow needs to take into account the ViewMatrix rotation, or screen space.

In my initial attempts, I tried to use glm::project to get the screen coordinates and rotate toward those. This works great if the object is in view of the camera.

Something like this:

```cpp
ProjectionMatrix = glm::perspective(45.0f, 4.0f / 3.0f, 0.1f, 100000.0f);
glm::vec3 projectedObject = glm::project(objPosition, ViewMatrix, ProjectionMatrix, glm::vec4(0, 0, 1920, 1050));

glm::quat rotation1 = RotationBetweenVectors(glm::vec3(0, 0, 1), glm::vec3(projectedObject.x - 960, projectedObject.y - 525, projectedObject.z));
glm::mat4 Rotation = glm::mat4_cast(rotation1);

ModelMatrix = glm::translate(glm::vec3(0, -0.5, -20)) * Rotation;

MVP = ProjectionMatrix * ViewMatrix * ModelMatrix;
```

The RotationBetweenVectors function returns a quaternion representing the rotation toward the second object. From here.
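For anyone without the link handy, here's a minimal self-contained sketch of what such a function does: a shortest-arc rotation between two vectors, which is what the tutorial version computes. I've used plain structs instead of GLM types just so it stands alone:

```cpp
#include <cmath>
#include <cassert>

struct Vec3 { float x, y, z; };
struct Quat { float w, x, y, z; };   // w + xi + yj + zk

static float dot(Vec3 a, Vec3 b) { return a.x * b.x + a.y * b.y + a.z * b.z; }
static Vec3 cross(Vec3 a, Vec3 b) {
    return { a.y * b.z - a.z * b.y, a.z * b.x - a.x * b.z, a.x * b.y - a.y * b.x };
}
static Vec3 normalize(Vec3 v) {
    float len = std::sqrt(dot(v, v));
    return { v.x / len, v.y / len, v.z / len };
}

// Shortest-arc rotation taking `start` onto `dest`.
Quat rotationBetweenVectors(Vec3 start, Vec3 dest) {
    start = normalize(start);
    dest  = normalize(dest);
    float cosTheta = dot(start, dest);
    if (cosTheta < -1.0f + 1e-4f) {
        // Vectors are opposite: any axis perpendicular to `start` works.
        Vec3 axis = cross({ 0, 0, 1 }, start);
        if (dot(axis, axis) < 1e-6f) axis = cross({ 1, 0, 0 }, start);
        axis = normalize(axis);
        return { 0.0f, axis.x, axis.y, axis.z };   // 180-degree rotation
    }
    Vec3 axis = cross(start, dest);
    float s = std::sqrt((1.0f + cosTheta) * 2.0f);
    return { s * 0.5f, axis.x / s, axis.y / s, axis.z / s };
}
```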

But, if the object is offscreen, it doesn't work quite right: the arrow predictably flips back and forth as the coordinates go negative. The z-axis also, predictably, isn't working properly, as glm::project just gives a sort of positive/negative value with little in between. In hindsight, I understand why this isn't working. I also tried the glm::lookAt() function, but since it doesn't take into account the current rotation of the ViewMatrix, it isn't quite working either.
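To see why the projected position flips once the object goes behind the camera, it helps to look at the perspective divide itself. This is a hand-rolled toy, not GLM: the projection matrix makes clip-space w equal to -z_eye, so behind the camera the divide is by a negative number and x/y come out mirrored.

```cpp
#include <cassert>

// In eye space the camera looks down -Z, and the standard OpenGL
// perspective projection sets clip.w = -z_eye. A point behind the
// camera has z_eye > 0, so w is negative and the perspective divide
// mirrors the projected x and y -- hence the back-and-forth flip.
struct Ndc { float x, y; };

Ndc toyProject(float xEye, float yEye, float zEye) {
    float clipX = xEye;      // focal scale omitted; it doesn't affect the sign
    float clipY = yEye;
    float clipW = -zEye;     // what the projection matrix's last row produces
    return { clipX / clipW, clipY / clipW };
}
```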

I can finagle things to work a little better by doing:

```cpp
glm::quat rotation1;
if (projectedObject.z < 1)
    rotation1 = RotationBetweenVectors(glm::vec3(0, 0, 1), glm::vec3(projectedObject.x - 960, projectedObject.y - 540, projectedObject.z));
else
    rotation1 = RotationBetweenVectors(glm::vec3(0, 0, 1), glm::vec3(-projectedObject.x - 960, -projectedObject.y - 540, -projectedObject.z));
```

But this is pretty inelegant, and it only works for the X/Y axes, not really the Z.

I guess I'm having difficulty visualizing what it is I need to do. Or rather, I can visualize it; I'm just struggling to put it into an equation. In a nutshell, I need to take the current rotation of the ViewMatrix and then calculate the rotation toward the destination object in screen space (or offscreen space). And this is for the full spectrum of rotations, as the game is in space, so there is no preferred "up" vector.

Anyway, if anyone has any insight on how to go about this, I'd be truly grateful for the help. Let me know if you need clarification or additional code, as this was probably a little rambling.

In the meantime, I'm going to go read articles on billboarding, as I feel like it would be the same principle: instead of always facing the camera, the object would always face the targeted object.

Thanks all,

Edited by Misantes

##### Share on other sites

Maybe I'm confused about exactly what it is you're trying to achieve, but could you not just do all of the calculations in world space?

If this indicator is a 3D arrow and the object you're pointing at is in the world, then just set the arrow to point toward the object in world space. Then, each update, make sure the 3D arrow is placed in the same position relative to the camera, so that it stays in the same part of the HUD.

Unless I'm missing something?

Maybe you can link a video of a game that already has this feature, so there's a better idea of what you want?

Edited by JordanBonser

##### Share on other sites

Well, I originally tried what you suggested, but the issue is that, being part of the HUD, the arrow doesn't exist in world space, per se. I could pass it the ship/player's coordinates, but those world-space coordinates don't change when the camera/view matrix rotates. That would be fine, I think, if I could figure out how to pass in the ModelView matrix of the object to be rotated toward. But if I simply use the world coordinates and rotate the indicator toward the object, it won't matter which way I move the camera; the arrow will still point in the same direction (usually not the object's position in camera space), since the world-space coordinates won't have changed at all.

So, essentially, the indicator arrow has a sort of constant view matrix, while the object to be rotated toward uses the ViewMatrix that all the objects in world space use. It's a matter of figuring out how to account for that, I think.

Here's a screenshot of a simple version of what I have working now. It's correct for the X/Y axes and will point in the right direction regardless of where you rotate the camera/ship (using the workarounds in the original post). It's just a little confusing: since it doesn't rotate toward the Z axis at all, it can be difficult to tell whether you're headed quite in the right direction. You can suss it out eventually by just following where it points, but it's a less-than-ideal solution. I'd like it to point along all axes, to give a better idea of where the object is relative to where the player is facing.

Apologies for the awful graphics; it's very much still a work in progress. The indicator is the red icon below the ship (not the blue circle, that's a rotating selection thingermabobber, which looks better in motion). Apologies for all the UI clutter. When I get the Z axis working, I'll need to redo the indicator arrow, as it won't really be visible as it currently stands when pointed toward the camera. But that's another issue.

So, right now, in the picture below, regardless of where I rotate, the arrow will point to the object, but only along the X/Y axes. If I were to just fly forward past it, the arrow would keep pointing down (or left, or up, or whatever, until I rotated around to face it). I'd like it to calculate the Z axis too, so that if I flew past the object, the arrow would point back toward the player, indicating the object was behind me, if that makes sense.

Edited by Misantes

##### Share on other sites

Seems like it's even more straightforward if you want the z-axis involved too. Then your arrow indicator is just a regular 3D object that you render along with the rest of your 3D objects (or, if you want it "always on top", render it separately after clearing the depth buffer).

You just need to figure out the world matrix to apply the rotation to your arrow. You basically already have this: RotationBetweenVectors returns a quaternion and you can get a rotation matrix from that. One vector will be the "base" direction your arrow model points in, and the other will be some vector that points from an arbitrary world position* to the world position of the offscreen object.

Then use your regular view and projection matrices when drawing your arrow.

* The only subtlety then is choosing the world position for your arrow.
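For concreteness, here's a plain-float sketch of the quaternion-to-matrix conversion that mat4_cast performs for the rotational part of that world matrix (hand-rolled rather than GLM, so it's self-contained; only the 3x3 block is shown, and I've written it row-major here, whereas GLM stores column-major):

```cpp
#include <cmath>
#include <cassert>

struct Quat { float w, x, y, z; };

// Standard unit-quaternion to 3x3 rotation matrix conversion
// (row-major here, so v' = M * v with v as a column vector).
void quatToMat3(Quat q, float m[9]) {
    float xx = q.x * q.x, yy = q.y * q.y, zz = q.z * q.z;
    float xy = q.x * q.y, xz = q.x * q.z, yz = q.y * q.z;
    float wx = q.w * q.x, wy = q.w * q.y, wz = q.w * q.z;
    m[0] = 1 - 2 * (yy + zz); m[1] = 2 * (xy - wz);     m[2] = 2 * (xz + wy);
    m[3] = 2 * (xy + wz);     m[4] = 1 - 2 * (xx + zz); m[5] = 2 * (yz - wx);
    m[6] = 2 * (xz - wy);     m[7] = 2 * (yz + wx);     m[8] = 1 - 2 * (xx + yy);
}
```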


##### Share on other sites

On both your advice, I'm implementing this now. There are a few hiccups, though; I'll try to update as I go along.

The first issue I'm having is that I can't use the same ViewMatrix as the rest of the objects, because then the indicator arrow acts as a stationary object along with everything else (it just behaves like an object in the world). I can counter this by translating the indicator's ModelMatrix against it, but then, predictably, it doesn't follow when I move the camera around, and still acts like a world object I'm looking at/around.

Additionally, I think I may be using RotationBetweenVectors() incorrectly. For the base vector, should I pass in the rotated "forward" vector (the camera forward), or is vec3(0,0,1) sufficient? The direction ought to simply be objPosition - playerWorldCoordinates, I would think. But this doesn't result in the arrow tracking the object at all.

Perhaps I'm calculating my View and Model matrices incorrectly. I'm passing the movement of the player into the objects' ViewMatrix, if that makes sense, and using the objects' ModelMatrix to calculate their movement relative to each other (orbits and that sort of thing). I.e., all the objects' ViewMatrix represents the player's movement (if the forward key is pressed, the objects' ViewMatrix is translated backward). This might not be the ideal way to do things, but it was far simpler than trying to calculate their orbits and the player's movement together. I'm happy to post the calculation of the ViewMatrix, but it's rather a lot of code (and rather messy to boot), as it includes the control scheme.

Thanks for your patience, I'm sure I'm explaining things poorly.

edit*

If I make a second ViewMatrix that doesn't include the player movement, and translate the indicator to where I want it on the screen after the rotation, I get closer to what you're talking about. I think my model is imported a little screwy now, so I'll try to suss that out.

Hm, nope. It seemingly works, but it isn't tracking the object correctly. I think perhaps I'm using RotationBetweenVectors() wrong:

```cpp
rotation1 = RotationBetweenVectors(glm::vec3(0, 0, 1), objPosition - player.worldCoordinates);
```

Then I'm passing in a ViewMatrix that is based on the rotations in the ViewMatrix the other objects use, but without the translation to world coordinates.

edit**

And, I'm just an idiot: my worldCoordinates values are negative, since I use them to offset the position... So, I just call:

```cpp
rotation1 = RotationBetweenVectors(glm::vec3(0, 0, 1), objPosition + player.worldCoordinates);
```

So, as long as I use a ViewMatrix that only includes the camera's quaternion rotations, but not the translation, and then use the above function for the ModelMatrix rotation, everything works beautifully!
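In case it helps anyone who lands here: with GLM, a rotation-only copy of the view matrix can be built with the common idiom glm::mat4(glm::mat3(ViewMatrix)), which keeps the upper-left 3x3 and drops the translation. A hand-rolled sketch of the same idea, assuming a column-major 4x4 as GLM stores it:

```cpp
#include <cassert>

// Zero the translation column of a column-major 4x4 matrix, leaving
// only the rotational part -- effectively what glm::mat4(glm::mat3(V))
// produces for a view matrix that is a pure rotation plus translation.
void stripTranslation(float m[16]) {
    m[12] = 0.0f;   // x translation
    m[13] = 0.0f;   // y translation
    m[14] = 0.0f;   // z translation
}
```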

Thanks for all your advice, you guys; it's truly appreciated. I think I would have banged my head on this like a monkey trying to fit a square peg through a round hole for another week without your help. I had discarded this method originally because things didn't work out quite right, and I don't think I would have come back to it, as in my head it seemed like it should be more complex. But the solution was right under my nose >.<

I wish I could upvote more than once

Cheers, and thanks again!

It's difficult to tell from a still frame, but it's working.

Edited by Misantes
