# OpenGL gluUnProject


## Recommended Posts

Hi, I'm building a 3D Earth simulation using Qt and OpenGL. I have a problem with the function gluUnProject, which I want to use to convert screen cursor coordinates to world coordinates: in my application the depth range isn't [0,1] but is variable (there is a zoom function). I read that gluUnProject works well only if winz is in the [0,1] range. I then want to test whether the point clicked by the cursor is on the sphere. How can I do this? Thanks

##### Share on other sites
You could do the following:

Call gluUnProject twice: once with winz = 0 and once with winz = 1.

i.e.
```cpp
GLdouble nx, ny, nz, fx, fy, fz;
// Note: the window y coordinate may need flipping first
// (mousey = viewport[3] - mousey), since GL window coordinates
// have their origin at the bottom left.
gluUnProject(mousex, mousey, 0.0, modelview, projection, viewport, &nx, &ny, &nz); // at near plane
gluUnProject(mousex, mousey, 1.0, modelview, projection, viewport, &fx, &fy, &fz); // at far plane
```

Now, having nx,ny,nz (the world position at the near plane) and fx,fy,fz (the world position at the far plane), you can create a ray from the near-plane position to the far-plane position.
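For example, a minimal sketch of building the pick ray from those two points (my addition; the `dir`/`len` names are hypothetical):

```cpp
// Direction from the near-plane point towards the far-plane point.
GLdouble dirX = fx - nx, dirY = fy - ny, dirZ = fz - nz;
GLdouble len  = sqrt(dirX*dirX + dirY*dirY + dirZ*dirZ);
dirX /= len; dirY /= len; dirZ /= len; // normalize
// The pick ray is (nx, ny, nz) + t * (dirX, dirY, dirZ), t >= 0.
```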
Then you do a simple Ray/Triangle collision detection with the triangles of your sphere.

This should be the most precise solution, as far as I know.

Edit:
If you really only need to know IF the mouse is on the sphere, and not WHERE it is on the sphere, you could alternatively do a Ray/Box collision test (with a bounding box around your sphere). This saves some performance.
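For reference, here is one way such a test could look: a minimal slab-method ray/AABB sketch of my own (not Hydrael's code; `intersectRayAABB` and its parameters are hypothetical names):

```cpp
#include <algorithm> // std::min, std::max
#include <utility>   // std::swap

// Slab-method ray/AABB test: returns true if origin + t*dir (t >= 0)
// hits the axis-aligned box [boxMin, boxMax]. Zero direction components
// produce +/-inf via IEEE division, which the comparisons handle.
bool intersectRayAABB(const double origin[3], const double dir[3],
                      const double boxMin[3], const double boxMax[3])
{
    double tmin = 0.0, tmax = 1e30;
    for (int i = 0; i < 3; ++i) {
        double inv = 1.0 / dir[i];
        double t0  = (boxMin[i] - origin[i]) * inv;
        double t1  = (boxMax[i] - origin[i]) * inv;
        if (t0 > t1) std::swap(t0, t1);
        tmin = std::max(tmin, t0);
        tmax = std::min(tmax, t1);
        if (tmin > tmax) return false; // the three slab intervals don't overlap
    }
    return true;
}
```

For a sphere, though, a direct ray/sphere test (like the one posted later in this thread) is just as cheap.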

##### Share on other sites
Thanks Hydrael!
I have the following doubt: can I do what you said even if my near and far planes have winz values that depend on the distance from the sphere?

Also, I come from DirectX and I'm new to OpenGL, so I don't know how to do Ray/Triangle collision detection with the triangles of my sphere.
To build my sphere I used gluSphere.
Can you help me?

In case it helps, here is the code that I want to port from DirectX to OpenGL:

```cpp
D3DVIEWPORT9 Viewport;
pD3DDevice->GetViewport( &Viewport );

Viewport.Width  = m_pos.m_windowWidth;
Viewport.Height = m_pos.m_windowHeight;

// Unproject the cursor at the near and far ends of the depth range.
D3DXVECTOR3 p1( screenX, screenY, Viewport.MinZ );
D3DXVec3Unproject( &p1, &p1, &Viewport, &m_mProj, &m_mView, &m_mWorld );

D3DXVECTOR3 p2( screenX, screenY, Viewport.MaxZ );
D3DXVec3Unproject( &p2, &p2, &Viewport, &m_mProj, &m_mView, &m_mWorld );

// Ray/sphere intersection against a sphere of radius m_fWorldRadius
// centered at the origin: solve |p1 + t*(p2 - p1)|^2 = r^2
// as a*t^2 + b*t + c = 0.
float a = (p2.x - p1.x) * (p2.x - p1.x) + (p2.y - p1.y) * (p2.y - p1.y) + (p2.z - p1.z) * (p2.z - p1.z);
float b = 2 * ((p2.x - p1.x) * p1.x + (p2.y - p1.y) * p1.y + (p2.z - p1.z) * p1.z);
float c = p1.x*p1.x + p1.y*p1.y + p1.z*p1.z - m_fWorldRadius * m_fWorldRadius;

float discriminant = b*b - 4 * a * c;

if( discriminant <= 0 )
    return false;

// Take the smaller root: the intersection nearer to the viewer.
float t1 = ( -b - sqrt( discriminant ) ) / ( 2 * a );

v.x = p1.x + t1 * (p2.x - p1.x);
v.y = p1.y + t1 * (p2.y - p1.y);
v.z = p1.z + t1 * (p2.z - p1.z);

return true;
```

Thanks!!!!

##### Share on other sites
Quote:
Original post by lory
Thanks Hydrael! I have the following doubt: can I do what you said even if my near and far planes have winz values that depend on the distance from the sphere?

I don't know if I got you right, but I think you misunderstood one thing.
The first gluUnProject call (with winz = 0) determines the world position of your mouse cursor as if its depth were right in front of your nose (the near plane).
The second one (winz = 1) does the same, except that it assumes the cursor's depth to be all the way at the back of your scene (the far plane).

What I want to say is: winz does not represent a third axis for the 2D->3D conversion (it does not stand for a world-space "z"). It tells where the two-dimensional point (the mouse position) lies within your depth range, which is view dependent; and this depth range reaches from 0.0 (near plane) to 1.0 (far plane).
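To illustrate with the earlier snippet's variables (my own example, not from the original reply): unprojecting at an intermediate winz gives a point on the same near-to-far ray, though for a perspective projection it is not the segment's midpoint, because window-space depth is non-linear in eye space.

```cpp
GLdouble mx, my, mz;
gluUnProject(mousex, mousey, 0.5, modelview, projection, viewport, &mx, &my, &mz);
// (mx, my, mz) lies on the segment from (nx, ny, nz) to (fx, fy, fz),
// but typically much closer to the near plane than halfway along it.
```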

Regarding the collision detection:
I haven't worked with gluSphere yet, so I can't help you there, sorry :/
But there are plenty of sources on the net that explain Ray/Triangle, Ray/Box, and Ray/Sphere collision detection routines.

I don't have access to my source code right now, but when I get home from work I can post a Ray/Sphere function.

Greets

Chris

Edit again:
Looking at your DX code... you only need to know IF the mouse cursor is on the sphere, not WHERE, right?
If so, the function I will post when I get home will be exactly what you need ;)

##### Share on other sites
Thanks Hydrael!!!

There is one thing I don't understand: winz = 0 and winz = 1 are the near and far planes, but are 0 and 1 the distances from the viewer?
My near and far planes are different from 0 and 1 (if you look at my code, Viewport.MinZ and Viewport.MaxZ depend on a zoom factor and their values are always >> 1). Can I pass their values as the winz parameter of gluUnProject?

Thanks for the Ray/Sphere function...I'm waiting for it!

Greets,
Lory

##### Share on other sites
This is the Ray/Sphere collision function.

RayOrigin = Array of 3 floats (x,y,z), representing the origin of the ray (nx,ny,nz in the previously posted code snippet)
RayDir = Array of 3 floats, representing the normalized ray direction (calculated from nx,ny,nz and fx,fy,fz)
Center = Array of 3 floats (x,y,z), representing the center coordinate of your sphere

Returns TRUE if the ray hits the sphere (meaning: the mouse cursor is on the sphere).

```cpp
// Note: FarPlane is assumed to be defined elsewhere (the far plane
// distance). The test checks the whole line, so it does not reject
// intersections behind the ray origin.
BOOL intersectRaySphere(float *RayOrigin, float *RayDir, float *Center, float Radius)
{
    float dx, dy, dz, x1, y1, z1, x0, y0, z0, cx, cy, cz, R;

    x0 = RayOrigin[0];
    y0 = RayOrigin[1];
    z0 = RayOrigin[2];

    // End point of the ray segment, FarPlane units along the direction.
    x1 = x0 + FarPlane * RayDir[0];
    y1 = y0 + FarPlane * RayDir[1];
    z1 = z0 + FarPlane * RayDir[2];

    cx = Center[0];
    cy = Center[1];
    cz = Center[2];
    R  = Radius;

    dx = x1 - x0;
    dy = y1 - y0;
    dz = z1 - z0;

    // Coefficients of |origin + t*d - center|^2 = R^2, i.e. a*t^2 + b*t + c = 0.
    float a = dx*dx + dy*dy + dz*dz;
    float b = 2*dx*(x0 - cx) + 2*dy*(y0 - cy) + 2*dz*(z0 - cz);
    float c = cx*cx + cy*cy + cz*cz + x0*x0 + y0*y0 + z0*z0
            - 2*(cx*x0 + cy*y0 + cz*z0) - R*R;

    // A real root exists iff the discriminant is non-negative.
    if (b*b - 4*a*c >= 0)
        return TRUE;
    return FALSE;
}
```
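For example, it could be wired up with the unprojected near/far points from earlier like this (a sketch of mine, not part of the original post; `earthRadius` is a hypothetical variable):

```cpp
float rayOrigin[3] = { (float)nx, (float)ny, (float)nz };
float rayDir[3]    = { (float)(fx - nx), (float)(fy - ny), (float)(fz - nz) };

// Normalize the direction to unit length.
float len = sqrtf(rayDir[0]*rayDir[0] + rayDir[1]*rayDir[1] + rayDir[2]*rayDir[2]);
rayDir[0] /= len; rayDir[1] /= len; rayDir[2] /= len;

float center[3] = { 0.0f, 0.0f, 0.0f }; // assuming the Earth is centered at the origin
if (intersectRaySphere(rayOrigin, rayDir, center, earthRadius)) {
    // the mouse cursor is on the sphere
}
```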

##### Share on other sites
Hydrael is correct, but you don't need to unproject at both the near and far planes. You can use the camera position as the ray origin and then use that and the unprojected mouse position (at any depth) to calculate the ray direction.
```
MousePos  = gluUnProject with mouse coordinates at any depth
RayOrigin = CameraPos
RayDir    = normalize(MousePos - CameraPos)
```
This way you only need to call gluUnProject once. It's not a very "heavy" function, though, so you won't notice a speed difference.
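A minimal C++ sketch of this variant (my own, not vogl's code; it assumes a `cameraPos` array that you track yourself, since GLU knows nothing about your camera):

```cpp
GLdouble mouseWorld[3];
gluUnProject(mousex, mousey, 0.0, // any depth in [0,1] gives the same ray direction
             modelview, projection, viewport,
             &mouseWorld[0], &mouseWorld[1], &mouseWorld[2]);

GLdouble rayDir[3] = { mouseWorld[0] - cameraPos[0],
                       mouseWorld[1] - cameraPos[1],
                       mouseWorld[2] - cameraPos[2] };
GLdouble len = sqrt(rayDir[0]*rayDir[0] + rayDir[1]*rayDir[1] + rayDir[2]*rayDir[2]);
rayDir[0] /= len; rayDir[1] /= len; rayDir[2] /= len;
// Pick ray: cameraPos + t * rayDir, t >= 0
```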

##### Share on other sites
As "at any depth" you mean that I can use :
to calculate wz and then I can pass it to gluUnproject?
And what do you mean with "normalize(MousePos - CameraPos)"?

Thanks very much!!!

##### Share on other sites
That's one possibility - but glReadPixels is very imprecise.

Normalizing a vector means: bringing all three components (x,y,z) into a 0-1 region

```cpp
void Normalize(float *v)
{
    float d;

    d = (float)sqrt((v[0]*v[0]) + (v[1]*v[1]) + (v[2]*v[2]));
    if (d == 0)
        return;
    v[0] = v[0] / d;
    v[1] = v[1] / d;
    v[2] = v[2] / d;
}
```

##### Share on other sites
Quote:
Original post by lory
By "at any depth" do you mean that I can use glReadPixels(wx, wy, 1, 1, GL_DEPTH_COMPONENT, GL_FLOAT, &wz); to calculate wz and then pass it to gluUnProject? And what do you mean by "normalize(MousePos - CameraPos)"? Thanks very much!!!
By "any depth" I mean you can use any valid depth value you want, it won't make a difference to the ray direction since the points at the mouse position at any depth are in the same ray. So there's no point in reading the depth value from the framebuffer, just pass any valid depth value as winz. I say "valid depth value" only because I'm not sure if gluUnproject expects it inside the depth range ([0,1] most likely) and I don't feel like checking right now [grin]. I think it technically could work with ANY value (as long as it isn't a depth value that would be behind the camera, otherwise the ray direction would be the opposite of what you want), but you might as well just use 0 (corresponding to the near plane) to be safe.

(MousePos - CameraPos) will give you a vector from the camera's world-space position to the mouse's world-space position.
Quote:
Original post by Hydrael
Normalizing a vector means: bringing all three components (x,y,z) into a 0-1 region
Your normalization function is correct but your description is slightly off. I notice you're from Germany so it's probably just from English not being your native language.

A "normalized vector" is a vector that has unit length (length = 1). You get this by dividing each component in the vector by that vector's magnitude (magnitude == length). This is exactly what Hydrael's function does. For a more mathematical explanation with links to other terms see here.
