Hi, in my application the user can click in the 3D viewer to mark things. Currently I use color picking to determine the clicked 3D position, which means I only get the position of the nearest vertex. But now I have a 3D mesh with large triangles, so the resolution of the mesh is coarse and the picked position can be far away from the real one. My question: how can I determine the "real" 3D position on the mesh, i.e. the 2D click position projected onto the 3D mesh?
Thanks!
Basically you need to find out where on the near plane of your frustum the user clicked, using the mouse coordinates, and create a picking ray from that point going into the scene along the camera's view direction.
If you're using OpenGL, calling gluUnProject() with the third parameter (winZ) first set to 0.0 and then to 1.0, storing both results, gives you a line segment running from your near plane to your far plane, so constructing a pick ray is very easy. I'm pretty sure D3D has something similar for getting such a ray.
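For reference, the math that gluUnProject() performs can be sketched without GLU. In a real program you would fetch the modelview and projection matrices and the viewport from GL and invert their product; here the inverse matrix is simply passed in, and all helper names (`Mat4`, `mul`, `unproject`) are my own, not part of any API:

```cpp
#include <array>
#include <cassert>
#include <cmath>

// Column-major 4x4 matrix, as OpenGL stores matrices.
using Mat4 = std::array<double, 16>;
using Vec4 = std::array<double, 4>;

// Matrix * vector for column-major storage.
Vec4 mul(const Mat4& m, const Vec4& v) {
    Vec4 r{};
    for (int row = 0; row < 4; ++row)
        for (int col = 0; col < 4; ++col)
            r[row] += m[col * 4 + row] * v[col];
    return r;
}

// The math behind gluUnProject: map window coordinates (winX, winY, winZ)
// back to object space. invPM is assumed to be the precomputed inverse of
// (projection * modelview); viewport is {x, y, width, height}.
std::array<double, 3> unproject(double winX, double winY, double winZ,
                                const Mat4& invPM, const int viewport[4]) {
    // Window coordinates -> normalized device coordinates in [-1, 1].
    Vec4 ndc = {
        (winX - viewport[0]) / viewport[2] * 2.0 - 1.0,
        (winY - viewport[1]) / viewport[3] * 2.0 - 1.0,
        winZ * 2.0 - 1.0,
        1.0
    };
    Vec4 obj = mul(invPM, ndc);
    // Perspective divide to get back to 3D.
    return { obj[0] / obj[3], obj[1] / obj[3], obj[2] / obj[3] };
}
```

Calling this with winZ = 0.0 and winZ = 1.0 gives the near- and far-plane points; their difference is the ray direction.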
Now you have to test that ray against your mesh to find out which triangle it intersects and where.
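A standard way to run that ray/mesh test is the Möller–Trumbore ray/triangle intersection. A self-contained sketch (the helper names are my own):

```cpp
#include <array>
#include <cassert>
#include <cmath>
#include <optional>

using Vec3 = std::array<double, 3>;

Vec3 sub(const Vec3& a, const Vec3& b) { return {a[0]-b[0], a[1]-b[1], a[2]-b[2]}; }
double dot(const Vec3& a, const Vec3& b) { return a[0]*b[0] + a[1]*b[1] + a[2]*b[2]; }
Vec3 cross(const Vec3& a, const Vec3& b) {
    return { a[1]*b[2] - a[2]*b[1],
             a[2]*b[0] - a[0]*b[2],
             a[0]*b[1] - a[1]*b[0] };
}

// Möller–Trumbore ray/triangle test: returns the distance t along the ray
// if it hits triangle (v0, v1, v2), otherwise no value.
std::optional<double> intersect(const Vec3& orig, const Vec3& dir,
                                const Vec3& v0, const Vec3& v1, const Vec3& v2) {
    const double eps = 1e-9;
    Vec3 e1 = sub(v1, v0), e2 = sub(v2, v0);
    Vec3 p = cross(dir, e2);
    double det = dot(e1, p);
    if (std::fabs(det) < eps) return std::nullopt;   // ray parallel to triangle
    double inv = 1.0 / det;
    Vec3 tvec = sub(orig, v0);
    double u = dot(tvec, p) * inv;                   // first barycentric coordinate
    if (u < 0.0 || u > 1.0) return std::nullopt;
    Vec3 q = cross(tvec, e1);
    double v = dot(dir, q) * inv;                    // second barycentric coordinate
    if (v < 0.0 || u + v > 1.0) return std::nullopt;
    double t = dot(e2, q) * inv;                     // distance along the ray
    return (t >= 0.0) ? std::optional<double>(t) : std::nullopt;
}
```

Loop over all triangles, keep the hit with the smallest positive t, and origin + t * direction is the picked point on the surface.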
You can do this by casting a ray from where the user clicked on the near clipping plane to where they clicked on the far clipping plane and checking where the ray intersects your model. Setting up the ray in post-perspective space is very easy if you know the screen resolution and the 2D coordinates of the point the user clicked on. Using the inverse projection, camera, and world matrices you can transform the ray to object space and test it against your mesh. Make sure to send the ray from the near plane to the far plane, since you only want to check polygons actually visible to the user.
//Edit: Damned, Edge Damodred was faster^^
Thanks for your help.
Now, I implemented the picking and unfortunately it doesn't work correctly... I mostly used the code from http://nehe.gamedev.net/data/articles/article.asp?article=13 which is quite easy.
And now my problem :) I have a 3D object in my viewer. The user can rotate and translate the object in world space, so I have to apply the inverse transformation to the point resulting from the picking. For this I build a transformation matrix containing the translation and rotation, and from this matrix I compute the inverse (A^-1 -> B). Translation (x, y, z) alone works fine: if I only translate my object, I always get the right position on it. But it doesn't work with rotation. I noticed that small rotations are nearly correct, and the error grows as the rotation gets bigger.
Does anyone have an idea? Thanks!
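One frequent cause of exactly this symptom (translation inverts correctly, rotation error grows with the angle) is inverting the factors in the wrong order: for A = T·R the inverse is A⁻¹ = R⁻¹·T⁻¹, not T⁻¹·R⁻¹. This is only a guess at the bug, and the sketch below uses a simple Z rotation instead of your full matrix, with names of my own invention:

```cpp
#include <array>
#include <cassert>
#include <cmath>

using Vec3 = std::array<double, 3>;

Vec3 add(const Vec3& a, const Vec3& b) { return {a[0]+b[0], a[1]+b[1], a[2]+b[2]}; }
Vec3 sub(const Vec3& a, const Vec3& b) { return {a[0]-b[0], a[1]-b[1], a[2]-b[2]}; }

// Rotation about the Z axis by `ang` radians (stand-in for your rotation matrix).
Vec3 rotZ(const Vec3& p, double ang) {
    double c = std::cos(ang), s = std::sin(ang);
    return { c * p[0] - s * p[1], s * p[0] + c * p[1], p[2] };
}

// Forward transform of the object: rotate first, then translate (p' = T * R * p).
Vec3 forward(const Vec3& p, double ang, const Vec3& t) {
    return add(rotZ(p, ang), t);
}

// Correct inverse: (T*R)^-1 = R^-1 * T^-1, i.e. un-translate FIRST, then un-rotate.
Vec3 inverseCorrect(const Vec3& p, double ang, const Vec3& t) {
    return rotZ(sub(p, t), -ang);
}

// Common bug: applying the individual inverses in the forward order.
Vec3 inverseWrong(const Vec3& p, double ang, const Vec3& t) {
    return sub(rotZ(p, -ang), t);
}
```

With a 90° rotation and a translation of (5, 0, 0), `inverseCorrect` recovers the original point while `inverseWrong` lands far away, and the error of the wrong version shrinks toward zero as the angle does, matching the behavior you describe.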