mrbig

OpenGL Real-Time Rendered Objects in Pre-Rendered Scenes?


Hi! As the title says, I want to use OpenGL to render objects in real time and have them interact correctly with a pre-rendered scene, for now only visually. By that I mean giving the resulting hybrid scene a correct sense of depth: the real-time objects should be drawn behind or in front of the pre-rendered objects correctly.

I think a simple way to achieve this effect would be to render the pre-rendered scene (with OpenGL, of course) and keep its Z-buffer. Then, when drawing the hybrid scene, I would initialize the Z-buffer to that of the pre-rendered scene, and only then draw the real-time objects. (The camera settings for both scenes would have to be the same, of course.)

There's just one problem: I have no idea how to do that using OpenGL. I don't know which extensions I'm going to need or how to use them. How do I send the Z-buffer contents from the GPU to system RAM so I can save them to a file? (Now that I think of it, I don't know how to do that for the color buffer either!) And how do I initialize the Z-buffer of an OpenGL scene to whatever values I want?

Any help will be appreciated! Thanks.
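For the read-back part of the question, a minimal sketch of copying both the depth and color buffers to CPU memory with glReadPixels (assuming a current GL context; `width`, `height`, and the file name are hypothetical):

```c
#include <GL/gl.h>
#include <stdio.h>
#include <stdlib.h>

/* Read the depth and color buffers of the current framebuffer back to
 * system RAM after rendering the background scene once. */
void save_buffers(int width, int height)
{
    float *depth = malloc(width * height * sizeof(float));
    unsigned char *color = malloc(width * height * 4);

    glPixelStorei(GL_PACK_ALIGNMENT, 1);   /* no row padding in our arrays */
    glReadPixels(0, 0, width, height,
                 GL_DEPTH_COMPONENT, GL_FLOAT, depth);
    glReadPixels(0, 0, width, height,
                 GL_RGBA, GL_UNSIGNED_BYTE, color);

    FILE *f = fopen("scene.depth", "wb");  /* raw dump; any format works */
    if (f) {
        fwrite(depth, sizeof(float), (size_t)width * height, f);
        fclose(f);
    }
    /* ... write `color` to a file similarly ... */
    free(depth);
    free(color);
}
```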

Sounds like you want nailboards, but I have no idea how you actually implement that in OpenGL.

Alternatively, you could just create a simplified version of the geometry and render that to the Z-buffer.

Look up glDepthMask, GL_DEPTH_TEST, glColorMask.

These will help you do what you want! :)

Basically,

1. Disable color writes, enable depth writes.
2. Render the 3D "collision" mesh.
3. Enable color writes, disable depth writes.
4. Render the 2D background.
5. Enable depth writes.
6. Render characters etc.
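The steps above can be sketched in GL calls like this (a minimal sketch; `draw_collision_mesh`, `draw_background_quad`, and `draw_characters` are hypothetical application functions, and a current GL context is assumed):

```c
glEnable(GL_DEPTH_TEST);

/* 1-2. Depth only: lay down the background's depth from the proxy mesh. */
glColorMask(GL_FALSE, GL_FALSE, GL_FALSE, GL_FALSE);
glDepthMask(GL_TRUE);
draw_collision_mesh();

/* 3-4. Color only: paint the pre-rendered image without touching depth. */
glColorMask(GL_TRUE, GL_TRUE, GL_TRUE, GL_TRUE);
glDepthMask(GL_FALSE);
glDisable(GL_DEPTH_TEST);        /* the background quad should always pass */
draw_background_quad();

/* 5-6. Normal rendering: characters are depth-tested against the mesh. */
glEnable(GL_DEPTH_TEST);
glDepthMask(GL_TRUE);
draw_characters();
```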

Cheers!

What about framebuffer objects?
Is there a way to copy one renderbuffer into another?
If both renderbuffers are in GPU RAM, it should be possible to do that quickly... or am I wrong?
Maybe I could copy the Z-buffer of the pre-rendered scene into a renderbuffer using glDrawPixels, then each frame copy that renderbuffer into another one, which could then be used as the depth buffer for the hybrid scene?
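For reference, the EXT_framebuffer_blit extension allows exactly this kind of GPU-side depth copy; a minimal sketch, assuming the extension is available and that `depth_fbo`, `width`, and `height` are hypothetical application variables:

```c
/* Copy the depth attachment of depth_fbo into the default framebuffer's
 * depth buffer, entirely on the GPU. */
glBindFramebufferEXT(GL_READ_FRAMEBUFFER_EXT, depth_fbo);
glBindFramebufferEXT(GL_DRAW_FRAMEBUFFER_EXT, 0);   /* default framebuffer */
glBlitFramebufferEXT(0, 0, width, height,
                     0, 0, width, height,
                     GL_DEPTH_BUFFER_BIT, GL_NEAREST);
glBindFramebufferEXT(GL_READ_FRAMEBUFFER_EXT, 0);   /* restore binding */
```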

No need for FBOs; the functionality required has been in GL from the dawn of time :)

glDrawPixels and glReadPixels both accept GL_DEPTH_COMPONENT as their format parameter, instructing them to write to and read from the depth buffer, respectively.
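For instance, re-uploading a previously saved depth image at the start of each frame might look like this (a sketch; `depth`, `width`, and `height` are hypothetical, and identity modelview/projection matrices are assumed for the raster position):

```c
/* Restore the pre-rendered scene's depth at the start of a frame. */
glClear(GL_COLOR_BUFFER_BIT);                         /* no depth clear needed */
glColorMask(GL_FALSE, GL_FALSE, GL_FALSE, GL_FALSE);  /* touch depth only */
glDepthMask(GL_TRUE);
glDepthFunc(GL_ALWAYS);                /* write every pixel unconditionally */
glRasterPos2i(-1, -1);                 /* lower-left corner of the viewport */
glDrawPixels(width, height, GL_DEPTH_COMPONENT, GL_FLOAT, depth);
glColorMask(GL_TRUE, GL_TRUE, GL_TRUE, GL_TRUE);
glDepthFunc(GL_LESS);                  /* restore the default depth test */
/* ... then draw the real-time objects with depth testing on ... */
```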

Yes, but using glDrawPixels would require the application to send the Z-buffer to the GPU every frame, which is likely to be slow at high resolutions.
I thought that copying data that is already in GPU RAM should be a lot faster, if it's possible.
Of course, using framebuffer objects means losing compatibility with older hardware, but I doubt my application could run at a reasonable frame rate on such hardware with all the glDrawPixels traffic anyway.


That's true, and the GL 2D pixel-transfer functions tend to be slow on today's consumer-level cards, but the read part shouldn't be a problem since it's only done once. To speed up the writing, you can draw a textured screen-sized quad with a simple fragment shader that writes the texel values into the depth buffer.
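A sketch of the quad approach: a GLSL fragment shader that copies a depth texture into the depth buffer via gl_FragDepth. The texture setup, shader compilation, and quad drawing are assumed; `depthTex` is a hypothetical sampler uniform:

```c
/* Fragment shader source: writes the stored depth for each pixel. */
const char *frag_src =
    "uniform sampler2D depthTex;\n"
    "void main() {\n"
    "    gl_FragDepth = texture2D(depthTex, gl_TexCoord[0].st).r;\n"
    "    gl_FragColor = vec4(0.0);\n"   /* masked off below anyway */
    "}\n";

/* Draw with color writes disabled and an always-passing depth test,
 * so only the depth buffer is touched and every texel is written. */
glColorMask(GL_FALSE, GL_FALSE, GL_FALSE, GL_FALSE);
glDepthFunc(GL_ALWAYS);
/* ... bind the shader, bind depthTex, draw a screen-sized quad ... */
glDepthFunc(GL_LESS);
glColorMask(GL_TRUE, GL_TRUE, GL_TRUE, GL_TRUE);
```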

I'm not sure I'm getting you right...
You mean I should make a texture from the pre-rendered Z-buffer, access it from inside a fragment shader, and use it to modify the depth buffer values?
I guess that means I would have to encode the Z-buffer to make it fit into an ordinary 32-bit texture format. It shouldn't be hard, but it sounds somewhat inconvenient... Isn't there another way? (Preferably one that avoids using shaders...)

No need to modify anything; depth components in textures have been permitted since OpenGL 1.4. See the docs for the format and internalFormat parameters of glTexImage2D. I don't see why you wouldn't want to use shaders; they are, after all, more widely supported than FBOs. There are a couple of recent extensions that permit quite free copying of data between different buffers (for example, straight from an FBO to the depth buffer), but those are even less widely supported than FBOs.
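For instance, uploading the saved Z-buffer directly as a depth texture might look like this (a sketch; `depth`, `width`, and `height` are hypothetical, and a context supporting GL 1.4 / ARB_depth_texture is assumed):

```c
/* Create a depth texture from a float depth array; no encoding into a
 * color format is needed. */
GLuint tex;
glGenTextures(1, &tex);
glBindTexture(GL_TEXTURE_2D, tex);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_NEAREST);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_NEAREST);
glTexImage2D(GL_TEXTURE_2D, 0, GL_DEPTH_COMPONENT24, width, height,
             0, GL_DEPTH_COMPONENT, GL_FLOAT, depth);
```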


[Edited by - Sharlin on July 13, 2007 11:49:01 AM]
