void MouseRayPicker(){
    //method 1, not sure if working.
    GLint viewport[4];
    GLdouble modelview[16];
    GLdouble projection[16];
    glGetIntegerv(GL_VIEWPORT, viewport);
    glGetDoublev(GL_MODELVIEW_MATRIX, modelview);
    glGetDoublev(GL_PROJECTION_MATRIX, projection);
    // obtain the Z position (not world coordinates but in range 0 - 1)
    GLfloat z_cursor;
    glReadPixels(cursorPos.x, cursorPos.y, 1, 1, GL_DEPTH_COMPONENT, GL_FLOAT, &z_cursor);
    // obtain the world coordinates
    GLdouble x, y, z;
    gluUnProject(cursorPos.x, cursorPos.y, z_cursor, modelview, projection, viewport, &x, &y, &z);
    fprintf(stdout, "Cursor target: %f, %f, %f .\n", x, y, z);
}
I'm using the above for generating a ray for picking objects in world space.
At the moment the numbers look fine when the cursor is precisely in the center of the screen, but as soon as the cursor moves, all the numbers jump to large values that don't make a great deal of sense (600, 429, -560 for example).
Can someone suggest where I'm going wrong?
RayPicking
First of all, you need to flip y (your y is height - y). Second, you need to convert screen coordinates to window (client) coordinates.
If the code below still doesn't work, then you have additional problems elsewhere, e.g. depth writing not enabled while drawing, or wrong perspective calculations.
so:
TPoint mouse;
GetCursorPos(&mouse);         // Windows function; I believe VC++ uses POINT (tagPOINT) or something similar
ScreenToClient(hwnd, &mouse); // also a Windows function; hwnd is the OpenGL window handle
our3dpointinspace = Reproduce_Mouse_coordinates(mouse.x, mouse.y);
t3dpoint __fastcall ModelGroupEditor::Reproduce_Mouse_coordinates(int x, int y)
{
    t3dpoint res;
    GLdouble mprojection[16];
    GLdouble mmodelview[16];
    GLint    mviewport[4];
    glGetDoublev(GL_PROJECTION_MATRIX, mprojection);
    glGetDoublev(GL_MODELVIEW_MATRIX, mmodelview);
    glGetIntegerv(GL_VIEWPORT, mviewport);
    // flip y: window coordinates have their origin at the bottom-left
    int wy = mviewport[3] - y;
    double winX = double(x);
    double winY = double(wy);
    // read the depth value under the cursor (range 0 - 1); glReadPixels with
    // GL_DEPTH_COMPONENT reads from the depth buffer directly, no glReadBuffer needed
    float pk;
    glReadPixels(x, wy, 1, 1, GL_DEPTH_COMPONENT, GL_FLOAT, &pk);
    double winZ = double(pk);
    double nx, ny, nz;
    gluUnProject(winX, winY, winZ, mmodelview, mprojection, mviewport, &nx, &ny, &nz);
    res.x = float(nx);
    res.y = float(ny);
    res.z = float(nz);
    return res;
}
For feedback...
I attempted the code supplied above and it gave similarly bad results... perhaps I'm not writing depth (if I recall correctly, this may be the case due to some code I'm using).
I took the point about the inverted Y, and I've modified my code to call gluUnProject twice, with winZ at 0 and winZ at 1, to generate the pick ray as a line, and stopped using the pixel reader.
Thanks for the pointer.
Hi,
And how did you go with finding a solution?
Here is one example (of many) of what it should look like:
Look at this: http://www.opengl.org/wiki/GluProject_and_gluUnProject_code — if you build your own matrices for this, you can use the same maths without caring that gluUnProject is deprecated.
However, my code works on different PCs, so I assume you are just doing something wrong; note that the ScreenToClient call should be used when the window has borders.
Also, ray picking against a complex scene will dramatically drop the FPS (if you go that route). Anyway, I didn't find any information on whether glReadPixels / glGetDoublev are still supported in newer OpenGL versions.
[quote]However my code works on different PCs so i assume you are just doing it wrong, note that ScreenToClient method should be used when window has borders.[/quote]
I'm sure the code works; it just didn't work in my case, and as you've alluded to, it could be a result of my using different variables or a slightly different setup. To be fair, I copy-pasted your code directly below mine (adjusting some variables) and got the same results as my own code to within about 2%, so it's more than likely something to do with the way I'm calling the rendering functions.
Personally, I've used color picking. As a note, I'm using deferred shading, so it might not work directly in your case. Basically, I render both an albedo and a normal as RGBA. Note that normals don't need the extra alpha channel, so I can store data there: each object in the scene has a unique identifier that it uploads to the GBuffer shader, which renders it as the alpha of the normal channel. Then, after everything is rendered, I use glReadPixels to check the pixel at the center of the screen, take the alpha value, and look for an object with that identifier on the CPU side of the scene.

Besides the glReadPixels call and the search, this is essentially free, because I was already rendering 1 to the alpha channel and I just replaced it with a more meaningful value. It's also a great solution because it works pixel perfect, which is really nice. There's almost no math involved, and I'm sure you could adapt it for a forward renderer: maybe render to a framebuffer with two color attachments, take the ID from the second one, and then draw a fullscreen quad with the first one to the screen. I know that's getting a bit into deferred rendering, but I think it would be worth it in the end just because it's so simple and fairly cheap.
Good luck!
@bluespud, where in your update/render loop do you put the color-picking code: just before the buffer swap, or after? For example, the code I use to determine the end point of the ray has to go before all the transforms, otherwise the end point gets transformed incorrectly. I would imagine the glReadPixels call would need to go after the buffer is swapped (i.e. rendered to screen) to get the right pixel information?
You can actually call it inside the render loop, so it would look like this:
-render the scene
-glReadPixels
-swap the buffers
Because I'm doing this with a GBuffer, I still have the framebuffer bound at that point, which I'm pretty sure is required.