Need a little help understanding the backbuffer/surface relationship


So I have been working with DX9 for a while now, but I am starting to attempt some more manipulation of render targets, etc., to obtain a desired effect.

So as far as I can tell, you never actually render to the back buffer until Present, when the device stretches the source data to the back buffer.
So I am confused about how this source data is held on to, and how you control which render targets' data makes it to the final back buffer.

I guess I am looking for more of a formal understanding. I have never really thought about this until now.

Thank you.

You can render to the back buffer. The back buffer is stretched to the front buffer only if the two are not compatible. If they are compatible, the back buffer simply becomes the front buffer, and the ex-front buffer is put back into the swap chain.

You really got it wrong. You (can) render to the back buffer; each draw call directly changes it (when the GPU processes the call).
Present doesn't fill the back buffer; it (simply put) swaps the back buffer with the front buffer, which makes the old back buffer visible on the screen.
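In D3D9 terms, a minimal frame loop that draws directly into the back buffer looks roughly like this (a sketch only: `device` is assumed to be a valid `IDirect3DDevice9*` from an earlier `CreateDevice` call, and error checks are omitted):

```cpp
// Clear the back buffer and depth buffer for the new frame.
device->Clear(0, NULL, D3DCLEAR_TARGET | D3DCLEAR_ZBUFFER,
              D3DCOLOR_XRGB(0, 0, 0), 1.0f, 0);

device->BeginScene();
// ... SetStreamSource / SetTexture / DrawPrimitive calls go here;
// each draw call writes into the current render target, which by
// default is the back buffer.
device->EndScene();

// Present flips (or copies) the finished back buffer to the front
// buffer; it does not do any of the rendering itself.
device->Present(NULL, NULL, NULL, NULL);
```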

Rendering to a render target doesn't really differ from rendering to the back buffer. Here are a few examples of using render targets.

A typical scenario for a post-process effect could go the following way:

- Render scene to a render target with the same size as the back buffer, as if you were rendering to the back buffer
- Set the back buffer as render target
- Set the render target where you rendered your scene as a texture
- Draw a full screen quad and in the pixel shader read the texture (where you rendered the scene), do some post process effect, write to back buffer
- Present
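The steps above can be sketched in D3D9 as follows (a sketch under assumptions: `device`, `width`, `height`, and the name `sceneTex` are illustrative, and error handling is omitted):

```cpp
// One-time setup: a back-buffer-sized texture we can render into.
IDirect3DTexture9* sceneTex = NULL;
device->CreateTexture(width, height, 1, D3DUSAGE_RENDERTARGET,
                      D3DFMT_A8R8G8B8, D3DPOOL_DEFAULT, &sceneTex, NULL);

// Per frame:
IDirect3DSurface9 *backBuffer = NULL, *sceneSurf = NULL;
device->GetRenderTarget(0, &backBuffer);   // remember the back buffer
sceneTex->GetSurfaceLevel(0, &sceneSurf);

device->SetRenderTarget(0, sceneSurf);     // 1. render scene off-screen
// ... draw the scene exactly as you would to the back buffer ...

device->SetRenderTarget(0, backBuffer);    // 2. back buffer as target again
device->SetTexture(0, sceneTex);           // 3. scene result as input texture
// 4. draw a full-screen quad; the pixel shader samples sceneTex,
//    applies the post-process, and writes into the back buffer.

device->Present(NULL, NULL, NULL, NULL);   // 5. present

sceneSurf->Release();
backBuffer->Release();
```

Note that the same texture cannot be bound as a render target and read as a texture at the same time, which is why the target is switched back to the back buffer before sampling it.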

A typical scenario for deferred rendering/shading could go as follows (heavily simplified, and with some deliberate inaccuracies):

- in the beginning create several back buffer sized render target textures, one for diffuse, one for normals, one for shading parameters, one for per pixel depth (or use z-buffer if possible), and a light accumulation render target

- bind your render targets to RT0,RT1,RT2,(RT3)
- render your objects as before, but in the pixel shader, instead of calculating lighting, output the required data (normals, diffuse texture, shading parameters, depth) to the different render targets.
- unbind rendertargets

- set light accumulation buffer as render target, and set the previous render targets as textures to be read in the pixel shader
- for each light source, draw a full-screen quad. In the pixel shader, reconstruct the view/world position from depth, read the normals and shading parameters, and calculate the lighting value, which you output to the accumulation buffer additively.

- unbind render targets
- set backbuffer as render target
- set light accumulation buffer and diffuse texture as textures

- draw a full-screen quad. In the pixel shader, read the diffuse value and the lighting value and combine them into the final color. Output the final color.

- present
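The G-buffer pass above relies on multiple render targets (MRT). A minimal D3D9 sketch of binding them (the `gbuf*` texture names are assumptions, and you should check `D3DCAPS9::NumSimultaneousRTs` before relying on four targets):

```cpp
// Assumes gbufDiffuse, gbufNormal, gbufParams, gbufDepth were created
// with D3DUSAGE_RENDERTARGET, all the same size as the back buffer.
IDirect3DSurface9 *s0, *s1, *s2, *s3;
gbufDiffuse->GetSurfaceLevel(0, &s0);
gbufNormal->GetSurfaceLevel(0, &s1);
gbufParams->GetSurfaceLevel(0, &s2);
gbufDepth->GetSurfaceLevel(0, &s3);

device->SetRenderTarget(0, s0);   // RT0: diffuse
device->SetRenderTarget(1, s1);   // RT1: normals
device->SetRenderTarget(2, s2);   // RT2: shading parameters
device->SetRenderTarget(3, s3);   // RT3: depth

// ... draw the scene; the pixel shader writes one value per target
// (COLOR0..COLOR3 outputs in HLSL) ...

// Unbind the extra targets before the lighting pass; a render target
// must stay bound at index 0.
device->SetRenderTarget(1, NULL);
device->SetRenderTarget(2, NULL);
device->SetRenderTarget(3, NULL);

s0->Release(); s1->Release(); s2->Release(); s3->Release();
```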


I hope this gives you some ideas on how to work with render targets.

Best regards!

