This topic is now archived and is closed to further replies.

OpenGL: Access to the OpenGL depth buffer


Recommended Posts

Hi. Is this scenario possible? I have written a terrain engine that renders to a standard Windows DIB surface, and it has its own depth buffer (an array of floats). Now I want to add some guys and monsters to my engine and render them via OpenGL. Is it possible to tell OpenGL to do its rendering onto my DIB? I want to draw the land with my own rendering engine first and then draw the other stuff, like objects, on top of it. Is it also possible to tell OpenGL to use my depth buffer and draw the stuff tested against it? Thanks

Guest Anonymous Poster
Easy answer: no. OpenGL needs its own render context, its own (hardware-dependent) color and depth buffer formats, and so on.
But you can get access to the OpenGL window, render your own stuff to it, and mix it with OpenGL. The problem is that you can't get direct memory access; you need an offscreen buffer and have to copy back and forth to the OpenGL buffer using glReadPixels() and glDrawPixels(). You can also update the depth buffer this way. But beware: you should use a format that is internally supported by your hardware, otherwise glReadPixels()/glDrawPixels() has to do very costly conversions and your framerate will drop.
You should consider rendering everything, including your landscape, with OpenGL.


Hmmmm. Check this out:


Guest Anonymous Poster
Interesting, but the problem with this code is this little line:

BitBlt(hDCFrontBuffer, 0, 0, winWidth, winHeight, hDC, 0, 0, SRCCOPY)

This is slow. Well, it depends on your exact configuration (resolution, color depth, speed of the blitting operation...), but you will definitely lose performance. And you still haven't got the depth buffer... I think this kind of DIB rendering is a bit kludgy; the words '3D-accelerated rendering' and 'device-independent' don't fit very well together, at least with current hardware...


Yeah, well, he wanted a way to render to his DIB, and that will do it just fine, and pretty fast.

Copying his DIB back to the front buffer is another issue. That code was only meant to show a way to render to the DIB, not to get bogged down in device-copying algorithms. There are much faster means available than BitBlt.

Loading your own depth buffer is not really difficult and is pretty well documented.

But I agree that, generally, anything with the words Device and Independent in the definition of its acronym is destined for snaildom in the scope of a real 3D game. But perhaps he just wants to see whether he can do it and isn't necessarily concerned about the speed issues.

Better him than me. lol


Thanks to all you guys. I'm used to writing my engines to a DIB, because it's easier to start an engine that way and also easier to debug.
I'll convert them to DX or OpenGL after the development phase. And, damn it, I'm a software-rendering kid.

Sometimes it is necessary to do everything via GDI, especially in non-game applications...

Neither OpenGL nor DX, nor any of the accelerators out there, can do voxels, so you have to do them in software. Polygons really suck (Delta Force 3 is poly != Delta Force 2 is voxel).

BitBlt is not that slow, but StretchDIBits is a really slow thing. If you choose a DIB pixel format that is the same depth as your video mode, blitting becomes much faster, because no color-depth conversion is needed to blit your DIB to the screen (done by GDI).
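A sketch of the matching-format idea above, assuming a 32-bit desktop mode. The CreateDIBSection part is Win32-only and shown in comments (bmi/hdc are assumed to exist); the testable part is the DWORD-alignment rule every DIB scanline must obey, which is also why 32 bpp is the padding-free choice:

```c
#include <assert.h>
#include <stdint.h>

/* Every DIB scanline is padded to a DWORD (4-byte) boundary.  For a
 * 24-bpp DIB this padding is why "width * 3" is usually the wrong row
 * size; a 32-bpp DIB needs no padding at all, which is one reason it
 * blits fastest on 32-bit desktop modes. */
static int32_t dib_stride(int32_t width, int32_t bits_per_pixel)
{
    return ((width * bits_per_pixel + 31) / 32) * 4;
}

/* On Windows, the matching surface would be created roughly like this
 * (sketch only):
 *
 *   BITMAPINFO bmi = {0};
 *   bmi.bmiHeader.biSize        = sizeof(BITMAPINFOHEADER);
 *   bmi.bmiHeader.biWidth       = width;
 *   bmi.bmiHeader.biHeight      = -height;  // negative = top-down rows
 *   bmi.bmiHeader.biPlanes      = 1;
 *   bmi.bmiHeader.biBitCount    = 32;       // match a 32-bit desktop
 *   bmi.bmiHeader.biCompression = BI_RGB;
 *   void *pixels = NULL;
 *   HBITMAP dib = CreateDIBSection(hdc, &bmi, DIB_RGB_COLORS,
 *                                  &pixels, NULL, 0);
 *   // BitBlt from a memory DC holding this DIB needs no format
 *   // conversion when biBitCount matches the screen depth.
 */
```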

Well, I got the answer to the first question, but for the second one:

I'll clear GL's depth buffer, draw my landscape on the DIB while also updating GL's depth buffer with z values from my landscape renderer, then draw both the objects and their depth with GL; this way it will render the objects against the depth of the landscape.
glReadPixels seems to be slow, though. Is there any way to get a pointer to the depth buffer?

Guest Anonymous Poster
Hmm, well, glReadPixels is slow, especially on some older 3D cards (Voodoo3 and the like). glDrawPixels is faster (although I wouldn't classify it as lightning fast either...).
Have you tried this: render your GL objects to a standard (non-DIB) GL surface, and your voxel terrain to whatever kind of surface, each with its respective depth buffer. Then copy your software-rendered image (including depth) to the GL buffer with glDrawPixels. I don't know whether that's fast (I don't think so), but it'll work, since the hardware will do the depth compositing.
I'm not aware of any other method to get at the GL depth buffer. Perhaps some exotic DIB format with a depth component? Also, some (very few) 3D boards support textures in RGBAZ format, where Z is a depth value. If you have a (professional) 3D board that supports this kind of texture, you could do your compositing with that; it's very fast.
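For completeness: core GL exposes no pointer to its depth buffer, so the portable route is reading it back into your own array with glReadPixels. A minimal sketch, with the GL calls in comments since they need a current context; the helper (a hypothetical name) handles the one real pitfall, namely that GL images start at the bottom-left while a top-down DIB or software z-buffer starts at the top-left:

```c
#include <assert.h>
#include <stddef.h>

/* Map a top-down (x, y) pixel coordinate to the matching index in a
 * bottom-up GL image of the given size, as returned by glReadPixels. */
static size_t gl_index_from_topdown(int x, int y, int width, int height)
{
    return (size_t)(height - 1 - y) * (size_t)width + (size_t)x;
}

/* The readback itself (sketch only; needs a current GL context):
 *
 *   float *depth = malloc((size_t)width * height * sizeof *depth);
 *   glPixelStorei(GL_PACK_ALIGNMENT, 1);
 *   glReadPixels(0, 0, width, height,
 *                GL_DEPTH_COMPONENT, GL_FLOAT, depth);
 *   // depth[gl_index_from_topdown(x, y, width, height)] is now the
 *   // window-space depth under top-down pixel (x, y).
 */
```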

OK, I know that software rendering is nice and very cool to code, but I think voxels shouldn't be used anymore nowadays. I presume you are using them for a landscape? There are tons of extremely impressive polygonal terrain engines out there, with a quality and complexity that would be absolutely impossible to achieve with voxels. You should perhaps consider doing everything in OpenGL; then you wouldn't have the compositing problem, and you'd get acceleration throughout your terrain.

But well, in the end, it's still a matter of taste.


