# OpenGL 2D icons in a 3D projection?

## Recommended Posts

Grumple    177
Hi,

I'm working on a project where I need to draw 2D icons in a 3D perspective projection. The idea is to position the icons correctly in the 3D scene, but to draw them with a consistent pixel size regardless of distance from the camera. If you picture 'points of interest' in Google Maps, you will see what I'm after. Regardless of the map zoom level or camera 'distance from map', the points-of-interest icons maintain constant pixel size.

The icons themselves will just be standard textured billboards, always oriented to face the camera directly.

As far as I can tell, there are two basic approaches. One is to draw them as part of the perspective scene render, but calculate their height/width to cancel out distance from the camera (i.e. make them bigger the farther from the camera they are, using a reversal of the perspective projection). The other is to do a second render pass in an actual ortho projection, where the widths and heights can be consistent. However, if my head is wrapped around the problem correctly, I wouldn't really save effort with an ortho projection, as I would then have to do the math to position things in a perspective-projected manner within the viewport.

Would I be correct to say the logical method is to render the icons in perspective mode with the rest of the scene, but adjust the width/height of the billboards in a 'reverse perspective' calculation to compensate for distance from camera? Is there any other simple way to do this?

Thanks!

Edit: In hindsight this probably should have gone in the "Graphics Programming and Theory" forum instead of OpenGL.. Edited by Grumple

##### Share on other sites
FXACE    182
So, what you need to do is:
1. Project your icon's 3D position into window space (you can use gluProject, for example).
2. Now you have a position in window space. Switch to an orthogonal projection and draw it:
[code]
glOrtho(0,vp[2],0,vp[3],0,1); // NOTE: vp is the "int viewport[4]" you passed to gluProject
//double wpos[3]; - output from gluProject (window position)
//double iconSize; - half-size in pixels
glBegin(GL_QUADS);
glVertex3d(wpos[0] - iconSize, wpos[1] - iconSize, wpos[2]);
glVertex3d(wpos[0] + iconSize, wpos[1] - iconSize, wpos[2]);
glVertex3d(wpos[0] + iconSize, wpos[1] + iconSize, wpos[2]);
glVertex3d(wpos[0] - iconSize, wpos[1] + iconSize, wpos[2]);
glEnd();
// this draws a quad at the projected 3D position with a fixed size in pixels
[/code]
That's a little example of how to do that.

Best wishes, FXACE. Edited by FXACE

##### Share on other sites
Grumple    177
Thanks a lot, FXACE....gluProject definitely looks useful for what I want.

Just as a follow-up question, I assume the window Z coord returned by gluProject would be depth into the screen from the camera's perspective? I still want my 2D icons to be depth tested and culled by any geometry that was rendered in the perspective projection and is 'closer' to the camera. Would depth testing still work for the billboards while rendering in ortho using the winZ value from gluProject?

Edit:
Upon doing some gluProject reading, it seems the returned z coordinate is in 'normalized' depth buffer coordinates. Here is an example I've found describing my main problem quite accurately, with what looks like a good solution.
[url="http://stackoverflow.com/questions/8990735/how-to-use-opengl-orthographic-projection-with-the-depth-buffer/8991624#8991624"]http://stackoverflow.com/questions/8990735/how-to-use-opengl-orthographic-projection-with-the-depth-buffer/8991624#8991624[/url]

That thread has a formula for converting between the initial gluProject winZ and a z value matching that depth in ortho mode. However, if I follow the formula correctly, by setting my ortho znear to 0 and zfar to 1, I can directly use the z value returned by gluProject as my billboard Z, just negated to fall down the -z axis (OpenGL).

Does that make sense? The only part that is still confusing me is whether or not the billboard corners can correctly have the same z value as the center point that was returned through gluProject? I think I might be imagining it wrong, but in perspective mode if I ran a ray from the camera to the center of the billboard, it would get a different distance than a ray from camera to a corner, so I would think the depth values would be different as well.

Does the perspective depth buffer somehow make the depths 'normalized' from a plane the camera is on perpendicular to its line of sight, or do the depths represent distance from the actual spot the camera sits? Edited by Grumple

##### Share on other sites
FXACE    182
That formula would be useful for you in the near future, but for this you don't really need it.
Let me show you complete example:
[code]
// Set up projection
glMatrixMode(GL_PROJECTION);
glLoadIdentity();
gluPerspective(...);
glMatrixMode(GL_MODELVIEW);
// place your view to look at
DrawScene();

double proj[16];
double mv[16];
int vp[4];
glGetDoublev(GL_MODELVIEW_MATRIX, mv);
glGetDoublev(GL_PROJECTION_MATRIX, proj);
glGetIntegerv(GL_VIEWPORT, vp);

double pos[3] = {0, 0, 0}; // position of your icon in 3D space
double wpos[3];
gluProject(pos[0], pos[1], pos[2], mv, proj, vp, &wpos[0], &wpos[1], &wpos[2]);
wpos[2] *= -1; // negate z because OpenGL's forward axis is -Z
double iconSize = 50; // half-size in pixels

// switch to an orthogonal projection
glMatrixMode(GL_PROJECTION);
glLoadIdentity();
glOrtho(0, vp[2], 0, vp[3], 0, 1);
glMatrixMode(GL_MODELVIEW);
glLoadIdentity();

glBegin(GL_QUADS);
glVertex3d(wpos[0] - iconSize, wpos[1] - iconSize, wpos[2]);
glVertex3d(wpos[0] + iconSize, wpos[1] - iconSize, wpos[2]);
glVertex3d(wpos[0] + iconSize, wpos[1] + iconSize, wpos[2]);
glVertex3d(wpos[0] - iconSize, wpos[1] + iconSize, wpos[2]);
glEnd();
[/code]
And finally you get a nice quad drawn just as you wanted (depth tested and culled).

[quote name='Grumple' timestamp='1337019764' post='4940156']
Does that make sense? The only part that is still confusing me is whether or not the billboard corners can correctly have the same z value as the center point that was returned through gluProject? I think I might be imagining it wrong, but in perspective mode if I ran a ray from the camera to the center of the billboard, it would get a different distance than a ray from camera to a corner, so I would think the depth values would be different as well.

Does the perspective depth buffer somehow make the depths 'normalized' from a plane the camera is on perpendicular to its line of sight, or do the depths represent distance from the actual spot the camera sits?
[/quote]
Nope. Every vertex is clipped by the six planes (left, right, bottom, top, near, far). In an orthogonal projection, the distances from the center and the corners of the billboard to the near (and far) plane are equivalent, so they all receive the same depth. OpenGL doesn't have a camera; it has a viewport, into which you move/transform all vertices using matrices, etc.
But with shaders you can construct a depth buffer as you wish...

Best wishes, FXACE. Edited by FXACE

##### Share on other sites
Grumple    177
Thanks for the detailed explanation! I was definitely mixing up the difference between the concept of a camera and the OpenGL viewport. Cheers!
