glRenderMode(GL_SELECT) issue

7 comments, last by Xiachunyi 18 years, 7 months ago
I'm using glRenderMode(GL_SELECT) to select objects in my program. However, I noticed a massive performance hit with this method as soon as the polygon count rises above something trivial. Is this a poor method? Am I doing something wrong? Here's a snapshot of my code:
// On a mouse click...
glSelectBuffer(...);
glMatrixMode(GL_PROJECTION);
glPushMatrix();
   glRenderMode(GL_SELECT);
   glLoadIdentity();
   gluPickMatrix(...);
   gluPerspective(...);
   glInitNames();
   glPushName(0);
   glMatrixMode(GL_MODELVIEW);   // switch back before drawing the scene

   // This part is taking a long time...
   RenderPickingScene(...);

   // determine what was clicked on
   hits = glRenderMode(GL_RENDER);
   ProcessPickingHits(hits);

   glMatrixMode(GL_PROJECTION);
glPopMatrix();
glMatrixMode(GL_MODELVIEW);

void RenderPickingScene(...)
{
   glPushMatrix();
      glRotatef(ViewpointRotationX, -1.0f, 0.0f, 0.0f);
      glRotatef(ViewpointRotationY, 0.0f, -1.0f, 0.0f);
      glRotatef(ViewpointRotationZ, 0.0f, 0.0f, -1.0f);

      glTranslatef(-ViewpointPositionX, -ViewpointPositionY, -ViewpointPositionZ);

      glLoadName(...);

      // Draw the object(s)
      glPushMatrix();
         glTranslatef(ObjectPositionX, ObjectPositionY, ObjectPositionZ);

         glRotatef(ObjectRotationZ, 0.0f, 0.0f, 1.0f);
         glRotatef(ObjectRotationY, 0.0f, 1.0f, 0.0f);
         glRotatef(ObjectRotationX, 1.0f, 0.0f, 0.0f);

         glBindBufferARB(GL_ARRAY_BUFFER, VertexName);
         glVertexPointer(3, GL_FLOAT, 0, 0);

         glBindBufferARB(GL_ARRAY_BUFFER, ColorName);
         glColorPointer(4, GL_UNSIGNED_BYTE, 0, 0);

         glBindBufferARB(GL_ARRAY_BUFFER, NormalName);
         glNormalPointer(GL_FLOAT, 0, 0);

         glBindBufferARB(GL_ARRAY_BUFFER, TexCoordName);
         glTexCoordPointer(2, GL_FLOAT, 0, 0);

         // The object is just made up of quads
         glBindBufferARB(GL_ELEMENT_ARRAY_BUFFER, VBOName);
         // Note: the 'end' parameter should be the highest vertex index
         // referenced, which is not necessarily the index count.
         glDrawRangeElements(GL_QUADS, 0, IndexCurrentSize, IndexCurrentSize, GL_UNSIGNED_INT, NULL);
         // ^--- this call takes nearly 1 second

      glPopMatrix();

   glPopMatrix();
}
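The hit-processing routine isn't shown above, so here is a rough sketch of what such a function typically does with the records that glRenderMode(GL_RENDER) leaves in the select buffer. This is an illustrative reconstruction, not the poster's actual code; `nearest_hit_name` is a made-up helper, and plain `unsigned int` stands in for `GLuint` so it compiles without GL headers.

```c
#include <limits.h>

/* Walk the select-buffer records and return the name of the nearest hit.
   Each record is { nameCount, zMin, zMax, name0, name1, ... }, with the
   depth values scaled to the full unsigned-int range. Returns -1 if
   there were no hits. */
static long nearest_hit_name(const unsigned int *buf, int hits)
{
    unsigned int bestZ = UINT_MAX;
    long bestName = -1;
    const unsigned int *p = buf;

    for (int i = 0; i < hits; ++i) {
        unsigned int count = p[0];   /* names on the stack for this hit */
        unsigned int zMin  = p[1];   /* nearest depth touched by the hit */
        /* p[2] is zMax, unused here */
        if (count > 0 && zMin <= bestZ) {
            bestZ = zMin;
            bestName = (long)p[3];   /* take the bottom-of-stack name */
        }
        p += 3 + count;              /* advance to the next record */
    }
    return bestName;
}
```

A caller would pass the buffer handed to glSelectBuffer() and the count returned by glRenderMode(GL_RENDER).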

The call to glDrawRangeElements() takes nearly a second to complete. The object I'm drawing is a cube made up of 600 quads (six faces, each subdivided 10x10). As long as I don't draw the cube in GL_SELECT mode, there's no slowdown. Any ideas? I'm using a Radeon 9800 Pro with the latest drivers. Last time I tried this on my laptop, which has an NVidia video card, I didn't see the problem. I'm ready to try just about any suggestion. [Edited by - Mantear on August 30, 2005 8:05:46 AM]
I came across some information suggesting that GL_SELECT mode might be dropping me back to software rendering for part of the pass. However, that information was a couple of years old. Is there a way to test whether that's happening? Beyond that, I can't find any information on why rendering in selection mode takes so long.
Not to denounce this method or anything, since I've never tried it myself, but it doesn't seem logical to make object picking a GPU task. Obviously it's popular, and it takes more work to implement on the CPU, but the goal is to run your graphics on the GPU as efficiently as possible and use the CPU in parallel for extraneous tasks. Sphere/ray intersections are very fast on the CPU, especially when combined with spatial partitioning, and from there you can test per polygon, if you even want to go to that level.
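To make the CPU-side suggestion concrete, a ray/sphere test of the kind described might look like the sketch below. The names and layout are illustrative, not from the thread.

```c
#include <math.h>

/* Does a ray starting at `o` with unit direction `d` hit a sphere
   centered at `c` with radius `r`? Solves |o + t*d - c|^2 = r^2 and
   checks for a non-negative root. */
static int ray_hits_sphere(const float o[3], const float d[3],
                           const float c[3], float r)
{
    float m[3] = { o[0]-c[0], o[1]-c[1], o[2]-c[2] };
    float b  = m[0]*d[0] + m[1]*d[1] + m[2]*d[2];         /* dot(m, d) */
    float cc = m[0]*m[0] + m[1]*m[1] + m[2]*m[2] - r*r;   /* dot(m, m) - r^2 */

    if (cc > 0.0f && b > 0.0f) return 0;  /* outside and pointing away */
    float disc = b*b - cc;                /* quadratic discriminant */
    return disc >= 0.0f;
}
```

In a picking scheme like the one suggested, you would run this against each object's bounding sphere first, then refine per polygon only for objects that pass.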
I want to be able to get down to selecting individual polygons and points, so I can't rely on intersecting with just a sphere/bounding box. (I may eventually do that to determine which object was selected, then refine it to determine which polygon/point was selected.) I currently only have a single object (a 600-quad cube). Surely a built-in OpenGL facility can't be taking this long.

I've googled around, but I've found very little information regarding this.
Quote: "I want to be able to get down to selecting individual polygons and points, so I can't just rely on intersecting with a sphere/bounding box."

There are also ray->triangle intersection tests, plus ray->point distance tests.
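For the per-polygon refinement, a ray->triangle test can be sketched as below using the Möller-Trumbore algorithm (a standard choice, not necessarily what the reply had in mind; for quads, test each quad as two triangles).

```c
#include <math.h>

/* Returns 1 and writes the hit distance to *t if the ray (origin `o`,
   unit direction `d`) crosses the triangle (v0, v1, v2). */
static int ray_triangle(const float o[3], const float d[3],
                        const float v0[3], const float v1[3],
                        const float v2[3], float *t)
{
    const float EPS = 1e-7f;
    float e1[3], e2[3], p[3], q[3], s[3];
    int i;
    for (i = 0; i < 3; ++i) { e1[i] = v1[i]-v0[i]; e2[i] = v2[i]-v0[i]; }

    /* p = cross(d, e2) */
    p[0] = d[1]*e2[2] - d[2]*e2[1];
    p[1] = d[2]*e2[0] - d[0]*e2[2];
    p[2] = d[0]*e2[1] - d[1]*e2[0];

    float det = e1[0]*p[0] + e1[1]*p[1] + e1[2]*p[2];
    if (fabsf(det) < EPS) return 0;          /* ray parallel to triangle */
    float inv = 1.0f / det;

    for (i = 0; i < 3; ++i) s[i] = o[i] - v0[i];
    float u = (s[0]*p[0] + s[1]*p[1] + s[2]*p[2]) * inv;
    if (u < 0.0f || u > 1.0f) return 0;      /* outside first barycentric */

    /* q = cross(s, e1) */
    q[0] = s[1]*e1[2] - s[2]*e1[1];
    q[1] = s[2]*e1[0] - s[0]*e1[2];
    q[2] = s[0]*e1[1] - s[1]*e1[0];
    float v = (d[0]*q[0] + d[1]*q[1] + d[2]*q[2]) * inv;
    if (v < 0.0f || u + v > 1.0f) return 0;  /* outside the triangle */

    *t = (e2[0]*q[0] + e2[1]*q[1] + e2[2]*q[2]) * inv;
    return *t >= 0.0f;                       /* only hits in front count */
}
```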

Another method is drawing each polygon in a different color and reading back the pixel color under the cursor with glReadPixels.
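The color-picking scheme mentioned above can be sketched as follows. Each pickable polygon is drawn in a unique flat color (with lighting, texturing, blending and dithering off), then glReadPixels fetches the pixel under the cursor and the color is decoded back to an ID. The helpers below just pack a 24-bit ID into RGB bytes and back; the GL calls are sketched in comments because they need a live context, and all names here are illustrative.

```c
/* Pack a 24-bit pick ID into RGB bytes. */
static void id_to_rgb(unsigned int id, unsigned char rgb[3])
{
    rgb[0] = (unsigned char)((id >> 16) & 0xFF);
    rgb[1] = (unsigned char)((id >>  8) & 0xFF);
    rgb[2] = (unsigned char)( id        & 0xFF);
}

/* Recover the pick ID from the RGB bytes read back from the framebuffer. */
static unsigned int rgb_to_id(const unsigned char rgb[3])
{
    return ((unsigned int)rgb[0] << 16) |
           ((unsigned int)rgb[1] <<  8) |
            (unsigned int)rgb[2];
}

/* In the picking pass, roughly:
     glDisable(GL_LIGHTING); glDisable(GL_TEXTURE_2D); glDisable(GL_DITHER);
     for each polygon: id_to_rgb(id, rgb); glColor3ubv(rgb); draw it;
   then after the pass (note the flipped y for window coordinates):
     unsigned char pixel[3];
     glReadPixels(x, viewportHeight - y, 1, 1, GL_RGB, GL_UNSIGNED_BYTE, pixel);
     unsigned int picked = rgb_to_id(pixel);                              */
```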
It looks like I may have to switch to the "pick by color" method if I can't figure this out. However, I just tested it on a laptop with a GeForce4 4200 Go video card, and the slowdown does not occur. It must be an ATI driver issue. The FPS on my desktop with the Radeon 9800 Pro is ~900, but a single click (one picking render pass) drops it to barely over 100. My laptop averages around 150, and if I click as fast as I can, I never drop below 140 (at ~5 clicks a second, that's an extra 5 frames to render plus processing of the clicks, which accounts for the 10 FPS drop).

The method I currently have functional should work. It works on an NVidia card. Why isn't it working on my ATI card? Is there any way to ask someone at ATI about this?

It really bugs me that this should work but isn't. Even if I do switch to another method, I want to know why this didn't work.
Try adding a dummy drawing routine between loading the object's selection ID and drawing it, probably right after the call that loads the ID (glLoadName):

glBegin(GL_LINES);
glEnd();

Some cards seem to decline dramatically and strangely in rendering performance without it.
Xiachunyi,

I tried what you suggested, and voila, MUCH better performance. Thank you!

Why in the world is that the case? Is it a driver bug, or a hardware issue? Are there specific models where this is a known problem?

EDIT: While doing what you suggested greatly improved the performance, it is still sub-par. I increased the number of quads being drawn and ran into the same problem again. Calling glBegin/glEnd reduces the delay, but it does not truly work around the problem. Could this simply be a driver issue?

[Edited by - Mantear on August 30, 2005 8:29:43 PM]
From what I have read (on Game Tutorials, before they turned to the dark side), it seems to be a problem with the old Voodoo cards too, so I am not sure myself.

It's probably the way information is fed back from the graphics card to the computer. My nVidia MX 440se, 5600, and 6600 Go all work fine without the dummy code, so maybe it is ATi-specific. Further testing will have to be done, of course, but that "hack" seems to curb the problem for now.

Possibly someone with greater knowledge of the OpenGL selection routine and its interface to the hardware will have the answer.

