Archived

This topic is now archived and is closed to further replies.

Seyedof

OpenGL Access to OpenGL Depth buffer

Recommended Posts

Hi. Is this scenario possible? I have written a terrain engine that renders to a standard Windows DIB surface, and it has its own depth buffer (e.g. an array of floats). Now I want to add some guys and monsters to my engine and render them via OpenGL. Is it possible to tell OpenGL to do its rendering on my DIB? I want to first draw the land with my own rendering engine and then draw the other stuff, like objects, on top of it. Is it also possible to tell OpenGL to use my depth buffer and draw the stuff against it? Thanks

Guest Anonymous Poster
Easy answer: no. OpenGL needs its own render context, its own (hardware-dependent) color and depth buffer formats, and so on.
But: you can get access to the OpenGL window, render your own stuff to it, and mix it with OpenGL. The problem is that you can't get direct memory access; you need an offscreen buffer and have to copy back and forth to the OpenGL buffers using glReadPixels() and glDrawPixels(). You can also update the depth buffer this way. But beware: you should use a format that is internally supported by your hardware, otherwise glReadPixels()/glDrawPixels() has to do very costly conversions and your framerate will drop.
You should consider rendering everything, including your landscape, with OpenGL.
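To make the glDrawPixels() depth-update idea concrete, here is a minimal sketch of packing a software renderer's float depth buffer into the 32-bit integer layout that GL_DEPTH_COMPONENT / GL_UNSIGNED_INT expects. The helper name `pack_depth` and the [0,1] depth convention are my assumptions, not anything from the poster's engine; the GL call itself is left in a comment because it needs a live context.

```c
#include <stdio.h>

/* Hypothetical helper: convert a software renderer's float depth buffer
 * (values assumed in [0,1], 0 = near plane) into 32-bit unsigned integers,
 * the layout glDrawPixels() accepts for GL_DEPTH_COMPONENT / GL_UNSIGNED_INT. */
void pack_depth(const float *src, unsigned int *dst, int count)
{
    for (int i = 0; i < count; ++i) {
        float d = src[i];
        if (d < 0.0f) d = 0.0f;   /* clamp out-of-range values */
        if (d > 1.0f) d = 1.0f;
        /* scale to the full 32-bit range; 0xFFFFFFFF is the far plane */
        dst[i] = (unsigned int)(d * 4294967295.0);
    }
}

/* Intended use, once packed (requires a current GL context):
 *   glDrawPixels(width, height, GL_DEPTH_COMPONENT, GL_UNSIGNED_INT, dst);
 */
```

As the post warns, whether this path is fast depends entirely on the driver accepting that format without conversion.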

-AH

Guest Anonymous Poster
Krippy2k:
Interesting, but the problem with this code is this little line:

BitBlt(hDCFrontBuffer, 0, 0, winWidth, winHeight, hDC, 0, 0, SRCCOPY);

This is slow. Well, it depends on your exact configuration (resolution, color depth, speed of the blitting operation...), but you will definitely lose performance. And you still don't have the depth buffer... I think this kind of DIB rendering is a bit kludgy; the words '3D accelerated rendering' and 'device-independent' don't fit together very well, at least on current hardware...

-AH

Yeah, well, he wanted a way to render to his DIB, and that will do it just fine, and pretty fast too.

Copying his DIB back to the front buffer is another issue. That code was only meant to show a way to render to the DIB, not to dwell on device-copying algorithms. There are much faster means available than BitBlt.

Loading your own depth buffer is not really difficult and is pretty well documented.

But I agree that, generally, anything with the words Device and Independent in the definition of its acronym is destined for snaildom in the scope of a real 3D game. But perhaps he just wants to see if he can do it and isn't necessarily concerned about the speed issues.

Better him than me. lol

Seeya
Krippy

Hi
Thanks to all you guys. I'm used to writing my engines on a DIB, because it's easier to start an engine and also easier to debug.
I'll convert them to DX or OpenGL after the development phase. And, god damn it, I'm a software rendering kid.

Sometimes it is necessary to do everything via GDI, especially in non-game applications...

Neither OpenGL nor DX, and none of the accelerators out there, are capable of doing voxels; you have to do it in software. Polygons really suck (Delta Force 3 is polygonal != Delta Force 2 is voxel).

BitBlt is not that slow, but StretchDIBits is a really slow thing. However, if you choose a DIB pixel format that matches your video mode's depth, blitting becomes much faster, because no color-depth conversion (done by GDI) is needed to blit your DIB to the screen.
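One detail worth knowing when matching the DIB format to the display: GDI requires every DIB scanline to start on a 4-byte (DWORD) boundary, so the row width in bits is rounded up to a multiple of 32. A small sketch (the helper name is mine, not a GDI function) of the stride calculation:

```c
#include <stdio.h>

/* Hypothetical helper: byte stride of one DIB scanline.
 * GDI pads each scanline to a 4-byte (DWORD) boundary, so the row
 * width in bits is rounded up to the next multiple of 32 before
 * converting to bytes. Choosing bits_per_pixel equal to the display
 * mode avoids GDI's color-depth conversion during the blit. */
int dib_stride(int width_px, int bits_per_pixel)
{
    return ((width_px * bits_per_pixel + 31) / 32) * 4;
}
```

For example, a 639-pixel-wide 8-bit DIB has a 640-byte stride, not 639; forgetting the padding is a classic source of skewed DIB output.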

Well, I got the answer to the first question, but for the second one:

I'll clear GL's depth buffer, draw my landscape on the DIB while updating GL's depth buffer with z values from my landscape renderer, then draw both the objects and their depth with GL; this way it will render the objects against the depth of the landscape.
ReadPixels seems to be slow; is there any way to get a pointer to the depth buffer?

Guest Anonymous Poster
Hmm, well, ReadPixels is slow, especially on some older 3D cards (Voodoo3 and the like). DrawPixels is faster (although I wouldn't classify it as lightning fast either...).
Have you tried this: render your GL objects to a standard (non-DIB) GL surface, and your voxel terrain to whatever kind of surface, each with its respective depth buffer. Then copy your software-rendered image (including depth) to the GL buffer with DrawPixels. I don't know if that's fast (I don't think so), but it will work, since the hardware will do the depth compositing.
I'm not aware of any other method to get at the GL depth buffer. Perhaps some exotic DIB format with a depth component? Or: some (very few) 3D boards support textures in RGBAZ format, where Z is a depth value. If you have a (professional) 3D board that supports this kind of texture, you could do your compositing with that; it's very fast.
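What the hardware's depth compositing amounts to, per pixel, can be sketched as a plain-C reference (this is an illustration of the idea, not the GL path; all names are mine):

```c
#include <stdio.h>

/* Software reference for depth compositing: for each pixel, keep the
 * color whose depth value is nearer (smaller), like a GL_LESS depth test.
 * 'dst' holds the terrain's color/depth, 'src' the GL objects' output. */
void depth_composite(unsigned int *dst_color, float *dst_depth,
                     const unsigned int *src_color, const float *src_depth,
                     int count)
{
    for (int i = 0; i < count; ++i) {
        if (src_depth[i] < dst_depth[i]) {   /* source pixel is nearer */
            dst_color[i] = src_color[i];
            dst_depth[i] = src_depth[i];
        }
    }
}
```

Doing this in software over the whole frame is exactly the per-pixel work you'd hope to push onto the card via DrawPixels plus the hardware depth test.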

OK, I know that software rendering is nice and very cool to code, but I think voxels shouldn't be used anymore nowadays. I presume you are using them for a landscape? There are tons of extremely impressive polygonal terrain engines out there, with a quality and complexity that would be absolutely impossible to achieve with voxels. You should perhaps consider doing everything in OpenGL; then you wouldn't have the compositing problem, and you'd get acceleration throughout your terrain.

But well, in the end, it's still a matter of taste.

-AH

