Surfaces, Textures and RenderTargets

Started by
3 comments, last by DrunkenHyena 19 years, 3 months ago
I admit it, I've never done anything really practical with Direct3D. I'm an OpenGL man, but I want to branch out. I was glancing through this Direct3D book I have earlier today while I was... well, you know when there's really nothing else to do but sit and read... and I read about surfaces, textures and render targets.

I played with DirectDraw back in the day, so I know that a surface is basically just a buffer for storing an image of some sort, and here you have variable formats. Textures, as I understand it, work like this: you can acquire a surface that represents the texture, do what you need to it, and then update the texture when you're done with the surface. And then there are render targets, which represent things you can render to. I imagine, then, that surfaces live in system memory, while textures and render targets live in video memory.

Is it at all efficient (or even possible? I stopped reading before getting to that part) to set a render target, do some rendering, grab that rendered image as a surface, and then upload it as a texture? I'm sure some sort of functionality that does that is available, but would that be the process to use, or should I read on and find something much more efficient?
Render targets are commonly used in that way. Like this:

Set RenderTarget
Render objects onto RenderTarget
Set RenderTarget as a texture input for another shader
Render to a different target

This is an efficient way to do image post-processing effects. The scene is rendered to a render target, then that render target is drawn as a 2D quad with some kind of effect applied to it (e.g. blurring, edge detection, color adjustment, etc.).
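The four steps above can be sketched in Direct3D 9 API calls. This is an illustrative sketch only, not a complete compilable program: it assumes a valid `IDirect3DDevice9* device`, a post-process shader already set up, and it omits all error checking.

```cpp
// Render-to-texture flow (Direct3D 9 sketch, no error checking).
IDirect3DTexture9 *pSceneTex = NULL;
IDirect3DSurface9 *pSceneSurf = NULL, *pBackBuffer = NULL;

// Create a texture that can be used as a render target.
// Render-target textures must live in D3DPOOL_DEFAULT.
device->CreateTexture(512, 512, 1, D3DUSAGE_RENDERTARGET,
                      D3DFMT_A8R8G8B8, D3DPOOL_DEFAULT, &pSceneTex, NULL);
pSceneTex->GetSurfaceLevel(0, &pSceneSurf);

// 1-2. Redirect rendering into the texture's surface and draw the scene.
device->GetRenderTarget(0, &pBackBuffer);   // remember the back buffer
device->SetRenderTarget(0, pSceneSurf);
device->Clear(0, NULL, D3DCLEAR_TARGET | D3DCLEAR_ZBUFFER,
              0, 1.0f, 0);
// ... render the scene as usual ...

// 3-4. Restore the back buffer and feed the texture to the effect.
device->SetRenderTarget(0, pBackBuffer);
device->SetTexture(0, pSceneTex);
// ... draw a full-screen quad running the post-process shader ...

// Release what we grabbed.
pBackBuffer->Release();
pSceneSurf->Release();
pSceneTex->Release();
```

Note that no copy back to system memory ever happens here, which is why this path is fast: the render target stays in video memory and is sampled directly as a texture.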
Dustin Franklin ( circlesoft :: KBase :: Mystic GD :: ApolloNL )
Quote:Original post by circlesoft
This is an efficient way to do image post-processing effects. The scene is rendered to a render target, then that render target is drawn as a 2D quad with some kind of effect applied to it (e.g. blurring, edge detection, color adjustment, etc.).


So, you only render the scene itself once, and in the 2nd render pass you only render this quad that's directly in front of the camera, right?
Quote:Original post by MikeyO
Quote:Original post by circlesoft
This is an efficient way to do image post-processing effects. The scene is rendered to a render target, then that render target is drawn as a 2D quad with some kind of effect applied to it (e.g. blurring, edge detection, color adjustment, etc.).


So, you only render the scene itself once, and in the 2nd render pass you only render this quad that's directly in front of the camera, right?


Yup.
Stay Casual, Ken
Drunken Hyena
To clarify the whole surface/texture/rendertarget thing:

Surface: an image buffer. Nothing fancy. Often in video memory, not system memory (though it can be either).

Texture: made up of 1 or more surfaces (one per mip level). Generally you render with a texture, while low-level twiddling is done on its surfaces.

Rendertarget: a surface or a texture that can be rendered to directly. Often created as a texture so it can easily be used as a source for further rendering.
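To show how those three relate in practice, here's a Direct3D 9 sketch (illustrative only, not a compilable program; assumes a valid `IDirect3DDevice9* device` and skips error checking): the surface is where you lock and poke pixels, while the texture is what you actually bind for rendering.

```cpp
// Texture vs. surface in Direct3D 9 (sketch, no error checking).
IDirect3DTexture9 *pTex = NULL;
IDirect3DSurface9 *pLevel0 = NULL;

// A managed texture; each mip level is a surface underneath.
device->CreateTexture(256, 256, 0, 0, D3DFMT_A8R8G8B8,
                      D3DPOOL_MANAGED, &pTex, NULL);

// Low-level twiddling happens on a surface...
pTex->GetSurfaceLevel(0, &pLevel0);
D3DLOCKED_RECT lr;
pLevel0->LockRect(&lr, NULL, 0);
// ... poke pixels through lr.pBits (row stride is lr.Pitch) ...
pLevel0->UnlockRect();
pLevel0->Release();

// ...but rendering uses the texture itself.
device->SetTexture(0, pTex);
```

The managed pool is what makes the lock cheap here: D3D keeps a system-memory copy and re-uploads it for you. A D3DUSAGE_RENDERTARGET texture, by contrast, must sit in D3DPOOL_DEFAULT and generally can't be locked at all.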


Stay Casual, Ken
Drunken Hyena

This topic is closed to new replies.
