Access to OpenGL Depth buffer

Hi
Is this scenario possible? I have written a terrain engine that renders to a standard Windows DIB surface, and it has its own depth buffer (e.g. an array of floats). Now I want to add some guys and monsters to my engine and render them via OpenGL. Is it possible to tell OpenGL to do its rendering on my DIB? I want to draw the land first with my own rendering engine, and then draw the other stuff, like objects, on top of it. Is it also possible to tell OpenGL to use my depth buffer and draw the stuff against it? Thanks
--MFC (The Matrix Foundation Crew)
Easy answer: no. OpenGL needs its own render context, its own (hardware-dependent) color and depth buffer formats, and so on.
But: you can get access to the OpenGL window, render your own stuff to it and mix it with OpenGL. The problem is that you can't get direct memory access; you need an offscreen buffer and have to copy back and forth to the OpenGL buffer using glReadPixels() and glDrawPixels(). You can also update the depth buffer this way. But beware: you should use a format that is internally supported by your hardware, otherwise glRead/DrawPixels() has to do very costly conversions and your framerate will drop.
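Just to illustrate, a minimal sketch of that round trip (untested; it assumes a projection where (0,0) maps to the lower-left pixel, and 'width'/'height' are whatever your window uses):

#include <GL/gl.h>
#include <stdlib.h>

void mix_with_gl(int width, int height)
{
    /* grab the GL color buffer into system memory */
    GLubyte *buf = (GLubyte *)malloc((size_t)width * height * 4);
    glReadPixels(0, 0, width, height, GL_RGBA, GL_UNSIGNED_BYTE, buf);

    /* ...run your own software pass over 'buf' here... */

    /* write the mixed result back */
    glRasterPos2i(0, 0);
    glDrawPixels(width, height, GL_RGBA, GL_UNSIGNED_BYTE, buf);
    free(buf);
}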
You should consider rendering everything, including your landscape, with OpenGL.

-AH
Hmmmm. Check this out:

http://trant.sgi.com/opengl/examples/win32_tutorial/simpledib.c
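The core of it goes roughly like this, from memory (heavily trimmed, no error checking; the real file is worth reading in full). Note that PFD_DRAW_TO_BITMAP normally gets you the generic software OpenGL implementation, not your 3D card:

#include <windows.h>
#include <GL/gl.h>

/* create a DIB section, select it into a memory DC, and give that DC
   an OpenGL pixel format that can draw to a bitmap */
HGLRC create_dib_gl(int winWidth, int winHeight, HDC *outDC)
{
    BITMAPINFO bmi;
    void *bits;
    HDC hDC;
    HBITMAP hBmp;
    PIXELFORMATDESCRIPTOR pfd;
    HGLRC hRC;

    ZeroMemory(&bmi, sizeof(bmi));
    bmi.bmiHeader.biSize        = sizeof(BITMAPINFOHEADER);
    bmi.bmiHeader.biWidth       = winWidth;
    bmi.bmiHeader.biHeight      = winHeight;
    bmi.bmiHeader.biPlanes      = 1;
    bmi.bmiHeader.biBitCount    = 32;
    bmi.bmiHeader.biCompression = BI_RGB;

    hDC  = CreateCompatibleDC(NULL);
    hBmp = CreateDIBSection(hDC, &bmi, DIB_RGB_COLORS, &bits, NULL, 0);
    SelectObject(hDC, hBmp);

    ZeroMemory(&pfd, sizeof(pfd));
    pfd.nSize      = sizeof(pfd);
    pfd.nVersion   = 1;
    pfd.dwFlags    = PFD_DRAW_TO_BITMAP | PFD_SUPPORT_OPENGL | PFD_SUPPORT_GDI;
    pfd.iPixelType = PFD_TYPE_RGBA;
    pfd.cColorBits = 32;
    pfd.cDepthBits = 24;
    SetPixelFormat(hDC, ChoosePixelFormat(hDC, &pfd), &pfd);

    hRC = wglCreateContext(hDC);
    wglMakeCurrent(hDC, hRC);

    *outDC = hDC;   /* GL now renders into the DIB's memory */
    return hRC;
}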


Seeya
Krippy
Krippy2k:
Interesting, but the problem with this code is the little line:

BitBlt(hDCFrontBuffer, 0, 0, winWidth, winHeight, hDC, 0, 0, SRCCOPY)

This is slow. Well, it depends on your exact configuration (resolution, color depth, speed of the blitting operation...), but you will definitely lose performance. And you still haven't got the depth buffer... I think this kind of DIB rendering is a bit kludgy; the words '3D accelerated rendering' and 'device-independent' don't fit very well together, at least with current hardware...

-AH
Yeah, well, he wanted a way to render to his DIB, and that will do it just fine, and do it pretty fast.

Copying his DIB back to the front buffer is another issue. That code was only meant to show a way to render to the DIB and didn't bog down in device-copying algorithms. There are much faster means available than BitBlt.

Loading your own depth buffer is not real difficult and is pretty well documented.
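Something along these lines, sketched from memory (it assumes your z values are already converted to window-space depth in [0..1], and that the raster position lands at the lower-left pixel):

#include <GL/gl.h>

/* overwrite GL's depth buffer with a software depth buffer */
void upload_depth(const GLfloat *zbuf, int width, int height)
{
    glColorMask(GL_FALSE, GL_FALSE, GL_FALSE, GL_FALSE); /* leave color alone */
    glDepthMask(GL_TRUE);
    glDepthFunc(GL_ALWAYS);        /* write the depth unconditionally */
    glRasterPos2i(0, 0);
    glDrawPixels(width, height, GL_DEPTH_COMPONENT, GL_FLOAT, zbuf);
    glDepthFunc(GL_LESS);          /* restore defaults */
    glColorMask(GL_TRUE, GL_TRUE, GL_TRUE, GL_TRUE);
}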

But I agree that, generally, anything with the words 'Device' and 'Independent' in its acronym is destined for snaildome in the scope of a real 3D game. But perhaps he just wants to see if he can do it, and isn't necessarily concerned about the speed issues.

Better him than me. lol

Seeya
Krippy
Hi
Thanks to all you guys. I'm used to writing my engines to a DIB, because it's easier to start an engine that way and also easier to debug.
I'll convert them to DX or OpenGL after the development phase. And, god damn it, I'm a software rendering kid.

Sometimes it is necessary to do everything via GDI, especially in non-game applications...

Neither OpenGL nor DX, nor any of the accelerators out there, can do voxels; you have to do that in software. Polygons really suck (Delta Force 3 is poly != Delta Force 2 is voxel).

BitBlt is not that slow, but StretchDIBits is a really slow thing. If you choose a DIB pixel format that matches your video mode's depth, blitting becomes much faster, because GDI doesn't need to do a color depth conversion to blit your DIB to the screen.
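For example, something like this (just a sketch; in 16-bit modes you may additionally need BI_BITFIELDS masks to match a 565 display exactly):

#include <windows.h>

/* create a DIB whose bit depth matches the current display mode,
   so GDI can blit it without a per-pixel conversion */
HBITMAP create_matching_dib(int width, int height, void **pixels)
{
    HDC screen = GetDC(NULL);
    int bpp = GetDeviceCaps(screen, BITSPIXEL) * GetDeviceCaps(screen, PLANES);
    BITMAPINFO bmi;
    HBITMAP dib;

    ZeroMemory(&bmi, sizeof(bmi));
    bmi.bmiHeader.biSize        = sizeof(BITMAPINFOHEADER);
    bmi.bmiHeader.biWidth       = width;
    bmi.bmiHeader.biHeight      = -height;  /* negative height = top-down rows */
    bmi.bmiHeader.biPlanes      = 1;
    bmi.bmiHeader.biBitCount    = (WORD)bpp;
    bmi.bmiHeader.biCompression = BI_RGB;

    dib = CreateDIBSection(screen, &bmi, DIB_RGB_COLORS, pixels, NULL, 0);
    ReleaseDC(NULL, screen);
    return dib;
}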

Well, I got the answer to the first question, but for the second one:

I'll clear GL's depth buffer, draw my landscape on the DIB while updating GL's depth buffer with z values from my landscape renderer, then draw both the objects and their depth with GL; this way it will render the objects against the depth of the landscape.
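In other words, roughly this order per frame (a sketch only; render_landscape() and draw_objects() stand in for my own code, and upload_depth() would be a glDrawPixels(GL_DEPTH_COMPONENT) routine like the one sketched above):

#include <GL/gl.h>

extern void render_landscape(unsigned char *dib, GLfloat *zbuf); /* software pass */
extern void upload_depth(const GLfloat *zbuf, int w, int h);     /* depth -> GL   */
extern void draw_objects(void);                                  /* GL geometry   */

void frame(unsigned char *dib, GLfloat *zbuf, int w, int h)
{
    glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);

    render_landscape(dib, zbuf);  /* fill landscape color + z in software */

    glRasterPos2i(0, 0);          /* landscape color into GL's color buffer;
                                     DIBs are BGR order, so a swizzle or
                                     GL_BGRA_EXT may be needed */
    glDrawPixels(w, h, GL_RGBA, GL_UNSIGNED_BYTE, dib);

    upload_depth(zbuf, w, h);     /* landscape z into GL's depth buffer */

    draw_objects();               /* now depth-tested against the landscape */
}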
ReadPixels seems to be slow; is there any way to get a pointer to the depth buffer?
--MFC (The Matrix Foundation Crew)
Hmm, well, ReadPixels is slow, especially on some older 3D cards (Voodoo3 and the like). DrawPixels is faster (although I wouldn't classify it as lightning fast either...).
Did you try this: render your GL objects to a standard (non-DIB) GL surface, and your voxel terrain on whatever kind of surface, each with its respective depth buffer. Then copy your software-rendered image (including depth) to the GL buffer with DrawPixels. I don't know if that's fast (I don't think so), but it'll work, since the hardware will do the depth compositing.
I'm not aware of any other method to get at the GL depth buffer. Perhaps some exotic DIB format with a depth component? Or: some (very few) 3D boards support textures in an RGBAZ format, where Z is a depth value. If you have a (professional) 3D board that supports this kind of texture, then you could do your compositing with that; it's very fast.

OK, I know that software rendering is nice and very cool to code, but I think voxels shouldn't be used anymore nowadays. I presume you are using them for a landscape? There are tons of extremely impressive polygonal terrain engines out there, with a quality and complexity that would be absolutely impossible to achieve with voxels. You should perhaps consider doing everything in OpenGL; then you wouldn't have the compositing problem and you'd get acceleration throughout your terrain.

But well, in the end, it's still a matter of taste.

-AH

