Achilleos

glReadPixels seem to interfere with vsync


Hi, I am currently working on a lens flare effect. For this I want to check whether the light is covered by some geometry. Because I have a lot of objects in the scene, I do not want to raycast through bounding boxes. Instead, a simple approach is to find the screen position of the light source and check whether its depth is greater than the fragment depth stored in the depth buffer. Here is the code:
/**
   * Projects a single point in 3d to the screen and returns true if the point is occluded
   * @param pos The given 3d-point
   * @param screenPos Result of the screen projection 
   * @return true if the point is occluded
   */
  public boolean getScreenPosForSpacePos(Vector3f pos, Vector2f screenPos)
  {
    queryBuffer.rewind();
    core.glu.gluProject(pos.getX(), pos.getY(), pos.getZ(),
        Camera.mvmatrix, Camera.projmatrix, Camera.viewportmatrix,
        queryBuffer);
    
    screenPos.setValues((float)queryBuffer.get(0), (float)queryBuffer.get(1));
    double fragDepth = queryBuffer.get(2);

    // glReadPixels needs a direct NIO buffer; FloatBuffer.allocate() returns a
    // heap buffer, which JOGL may reject or copy behind the scenes.
    FloatBuffer depth = ByteBuffer.allocateDirect(4)
        .order(ByteOrder.nativeOrder()).asFloatBuffer();
    core.gl.glReadPixels((int)screenPos.getX(), (int)screenPos.getY(), 1, 1, GL.GL_DEPTH_COMPONENT, GL.GL_FLOAT, depth);
    depth.rewind();
    
    // The light is occluded if its projected depth lies behind the stored fragment depth.
    return fragDepth > depth.get();
  }

The problem occurs when vsync is enabled. The scene is nearly empty (only a few objects to test the occlusion against) and takes about 0.5 ms per frame. With vsync disabled the framerate is around 1000 FPS, and it does not matter whether the light is visible, occluded, or outside the image area. With vsync enabled, the time needed for drawing rises to 8.3 ms. Buffer swapping is done manually after rendering the whole scene, yet the vsync-related delay shows up at this point in the code, not at the swap. Does anyone have any ideas? Greetings in advance and a happy new year!
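For reference, gluProject boils down to a modelview and projection transform, a perspective divide, and a viewport mapping. A minimal GL-free sketch of that math (matrices are column-major double[16] arrays, matching OpenGL's layout; the class and method names here are mine, not from the post):

```java
public final class Project {
    /**
     * Maps an object-space point to window coordinates, like gluProject.
     * mv and proj are column-major 4x4 matrices; viewport is {x, y, w, h}.
     * Returns {winX, winY, winZ} with winZ in [0,1], the depth-buffer range.
     */
    public static double[] project(double x, double y, double z,
                                   double[] mv, double[] proj, int[] viewport) {
        double[] eye = mul(mv, new double[]{x, y, z, 1.0});   // object -> eye space
        double[] clip = mul(proj, eye);                        // eye -> clip space
        if (clip[3] == 0.0) throw new IllegalArgumentException("w == 0");
        double nx = clip[0] / clip[3];                         // perspective divide
        double ny = clip[1] / clip[3];
        double nz = clip[2] / clip[3];
        return new double[]{
            viewport[0] + viewport[2] * (nx + 1.0) / 2.0,      // viewport mapping
            viewport[1] + viewport[3] * (ny + 1.0) / 2.0,
            (nz + 1.0) / 2.0                                   // NDC z -> [0,1]
        };
    }

    // Column-major 4x4 matrix times column vector.
    private static double[] mul(double[] m, double[] v) {
        double[] r = new double[4];
        for (int i = 0; i < 4; i++)
            r[i] = m[i] * v[0] + m[4 + i] * v[1] + m[8 + i] * v[2] + m[12 + i] * v[3];
        return r;
    }
}
```

The winZ this produces is what gets compared against the value read back from the depth buffer.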

So enabling vsync drops the framerate... I don't see the problem. That's what it is 'supposed' to do.
1000 ms / 8.3 ms ≈ 120 FPS = 120 Hz = your monitor's refresh rate.


edit: I don't see the connection to glReadPixels, but beware: it is notoriously slow (in fact, it is about as slow as GL calls get).

It's not the framerate drop that is the problem here; it's the time needed for drawing.
The code block above is executed within the rendering process, i.e. between clearing the buffer at the start of rendering and calling glFinish() plus the buffer swap. The time spent there rises from nearly nothing to something that really impacts the overall render time.
The framerate drop to 60 Hz is not what I mean.

I have now implemented ARB_occlusion_query to fade the lens flares at the image and object borders. The same problem exists there. The extension description states:

"Often, query object results will be returned asynchronously with
respect to the host processor's operation. As a result, sometimes,
if a result is queried, the host must wait until the result is back."

I guess it is the same "host has to wait" problem that causes the render-time increase with glReadPixels. But I do not understand why the wait is so much shorter when vsync is disabled. All these operations work on the back buffer, and processing in that buffer should not be related to buffer swapping, which is what gets delayed for vsync.
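One common way to avoid the "host has to wait" stall is not to wait at all: keep using last frame's query result and only fetch a new one when the driver reports it as available (ARB_occlusion_query exposes this via GL_QUERY_RESULT_AVAILABLE_ARB). A GL-free sketch of that pattern, with the driver calls abstracted behind suppliers so the caching logic stands alone (all names here are mine):

```java
import java.util.function.BooleanSupplier;
import java.util.function.IntSupplier;

/** Caches the last completed query result instead of blocking on the newest one. */
public class DeferredQueryResult {
    // e.g. wraps glGetQueryObjectivARB(id, GL_QUERY_RESULT_AVAILABLE_ARB, ...)
    private final BooleanSupplier resultAvailable;
    // e.g. wraps glGetQueryObjectivARB(id, GL_QUERY_RESULT_ARB, ...)
    private final IntSupplier fetchResult;
    private int lastResult;

    public DeferredQueryResult(BooleanSupplier resultAvailable,
                               IntSupplier fetchResult, int initial) {
        this.resultAvailable = resultAvailable;
        this.fetchResult = fetchResult;
        this.lastResult = initial;
    }

    /** Non-blocking: returns the freshest completed result without waiting on the GPU. */
    public int latest() {
        if (resultAvailable.getAsBoolean())
            lastResult = fetchResult.getAsInt();
        return lastResult;
    }
}
```

The price is one frame of latency in the occlusion answer, which is usually invisible for a lens-flare fade.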

Indeed, glReadPixels needs to 'wait' until the buffer is 'readable'.
I'm not sure about the exact details of how vsync operates, but simply stated, it lowers the FPS to match the monitor's refresh rate, which directly implies you can only call glReadPixels that many times per second.

I'm sorry, but I still don't see a problem. What happens when you disable glReadPixels? (It gets faster, until it hits the vsync upper limit again, logically.)

The CPU busy-waits, raising CPU usage from 20% to 80%. The result is fan noise and unnecessary CPU load...

Why not do this in a shader instead, which will reduce CPU load significantly? In general, reading data back from the GPU is going to be very slow, no matter how you cut it.

Because I want this effect to work even without shaders, for older hardware.
Also, the shader has to know the depth of the fragments, meaning you have to provide a depth image (which, of course, you will need for a lot of post-production effects anyway).

Also, you shouldn't need to call glFinish().

There is an implicit glFinish() as part of SwapBuffers() if you are doing double buffering.

The whole point of vsync is that it matches the framerate to the refresh rate of your monitor. So if you have a monitor at 60 Hz, you will have a framerate of 60 FPS.

Framerate and render time are two views of the same thing: the time per frame is the reciprocal of the framerate. Of course your rendering time will increase as your framerate decreases (this is simple arithmetic).

I don't think you should be concerned by it.
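The reciprocal relationship above is just milliseconds per frame = 1000 / FPS, which is also where the earlier 1000 / 8.3 ≈ 120 Hz figure came from. A trivial check (helper name is mine):

```java
public final class FrameTime {
    /** Milliseconds per frame for a given framerate in Hz. */
    public static double msPerFrame(double fps) {
        return 1000.0 / fps;
    }
}
```

At 60 FPS that gives about 16.7 ms per frame; at 120 FPS, about 8.3 ms, matching the time the original poster measured.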
