Omaha

OpenGL Surfaces, Textures and RenderTargets


I admit it, I've never done anything really... practical with Direct3D. I'm an OpenGL man, but I want to branch out. I was glancing through a Direct3D book I have earlier today... well, you know when there's really nothing else to do but sit and read... and I read about surfaces, textures and render targets.

I played with DirectDraw back in the day, so I know that a surface is basically just a buffer for storing an image of some sort, and here you have variable formats. Textures, as I understand it: you can acquire a surface that represents the texture, do what you need to it, and then update the texture when you're done with the surface. And then there are render targets, which represent things you can render to. I imagine, then, that surfaces live in system memory, whereas textures and render targets live in video memory.

Is it at all efficient (or even possible; I stopped before getting to that part) to set a render target, do some rendering, get that rendered image as a surface, and then upload it as a texture? I'm sure some sort of functionality that does that is available, but is that the process to use, or should I read on and find something much more efficient?

Render targets are commonly used in that way. Like this:

Set RenderTarget
Render objects onto RenderTarget
Set RenderTarget as a texture input for another shader
Render to a different target

This is an efficient way to do image post-processing effects. The scene is rendered to a RenderTarget, then that RenderTarget is drawn as a 2D quad and some type of effect is applied to it (e.g. blurring, edge detection, color adjustment, etc.).

Quote:
Original post by circlesoft
This is an efficient way to do image post-processing effects. The scene is rendered to a RenderTarget, then that RenderTarget is drawn as a 2D quad and some type of effect is applied to it (e.g. blurring, edge detection, color adjustment, etc.).


So, you only render the scene itself once, and in the second render pass you only render this quad that's directly in front of the camera, right?

Quote:
Original post by MikeyO
Quote:
Original post by circlesoft
This is an efficient way to do image post-processing effects. The scene is rendered to a RenderTarget, then that RenderTarget is drawn as a 2D quad and some type of effect is applied to it (e.g. blurring, edge detection, color adjustment, etc.).


So, you only render the scene itself once, and in the second render pass you only render this quad that's directly in front of the camera, right?


Yup.

To clarify the whole surface/texture/rendertarget thing:

Surface: an image buffer. Nothing fancy. Often in video memory, not system memory (but it can be in either).

Texture: made up of one or more surfaces (for example, the levels of a mip chain). Generally you bind a texture when rendering, while the low-level twiddling is done on its surfaces.

Rendertarget: a surface or a texture that can be rendered to directly. Often created as a texture so it can easily be used as a source for further rendering.

