Need a little help on understanding backbuffer/surface relationship

So I have been working with dx9 for a while now, but I am starting to attempt some more manipulation of render targets etc. to obtain a desired effect.

So as far as I can tell, you never actually render to the back buffer until Present, when the device stretches the source data to the back buffer.
So I am confused about how this source data is held on to, and how you control which render target's data makes it to the final back buffer.

I guess I am looking for more of a formal understanding. I have never really thought about this until now.

Thank you.
You can render directly to the back buffer...I'm not sure where you got the idea that you can't. Most simple D3D applications will just render directly to the backbuffer, and then present that to the display.
You can render to the back buffer. The back buffer is stretched to the front buffer if the back buffer and the front buffer are not compatible. If they are compatible, the back buffer becomes the front buffer, and the old front buffer is put back into the swap chain.
You really got it wrong. You (can) render to the back buffer; each draw call directly changes it (when the GPU processes the call).
"Present" doesn't fill the back buffer; it (simply put) swaps the back buffer with the front buffer, which makes the old back buffer visible on the screen.
Rendering to a render target (or several) doesn't really differ from rendering to the back buffer. Here are a few examples of using render targets.

A typical scenario for some post-process effects could go the following way (a rough code sketch follows the list):

- Render the scene to a render target with the same size as the back buffer, just as if you were rendering to the back buffer
- Set the back buffer as the render target
- Set the render target where you rendered your scene as a texture
- Draw a full-screen quad, and in the pixel shader read the texture (where you rendered the scene), do some post-process effect, and write to the back buffer
- Present
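Here is a rough sketch of those steps in D3D9 code. pDevice, width and height are assumed to come from your own initialisation; DrawScene() and DrawFullScreenQuad() are placeholder names for your own drawing routines, and BeginScene()/EndScene() and error checking are left out for brevity.

```cpp
#include <d3d9.h>

// Placeholder helpers assumed to exist elsewhere in your application.
extern void DrawScene(IDirect3DDevice9* dev);          // normal scene rendering
extern void DrawFullScreenQuad(IDirect3DDevice9* dev); // two triangles covering the screen

void RenderWithPostProcess(IDirect3DDevice9* dev, UINT width, UINT height)
{
    // Back-buffer-sized render target texture (in real code, create this once at startup).
    IDirect3DTexture9* sceneTex  = NULL;
    IDirect3DSurface9* sceneSurf = NULL;
    dev->CreateTexture(width, height, 1, D3DUSAGE_RENDERTARGET,
                       D3DFMT_A8R8G8B8, D3DPOOL_DEFAULT, &sceneTex, NULL);
    sceneTex->GetSurfaceLevel(0, &sceneSurf);

    // Remember the back buffer so it can be restored later.
    IDirect3DSurface9* backBuffer = NULL;
    dev->GetRenderTarget(0, &backBuffer);

    // 1) Render the scene into the texture, exactly as you would to the back buffer.
    dev->SetRenderTarget(0, sceneSurf);
    dev->Clear(0, NULL, D3DCLEAR_TARGET | D3DCLEAR_ZBUFFER, 0, 1.0f, 0);
    DrawScene(dev);

    // 2) Switch back to the back buffer, bind the scene texture, run the effect.
    dev->SetRenderTarget(0, backBuffer);
    dev->SetTexture(0, sceneTex);
    DrawFullScreenQuad(dev);   // pixel shader samples the scene texture, applies the effect

    // 3) Show the result.
    dev->Present(NULL, NULL, NULL, NULL);

    backBuffer->Release();
    sceneSurf->Release();
    sceneTex->Release();
}
```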

A typical scenario for deferred rendering/shading could go as follows (heavily simplified, so there may be some inaccuracies; a condensed code sketch follows the steps):

- in the beginning, create several back-buffer-sized render target textures: one for diffuse, one for normals, one for shading parameters, one for per-pixel depth (or use the z-buffer if possible), and a light accumulation render target

- bind your render targets to RT0, RT1, RT2 (and RT3)
- render your objects as before, but in the pixel shaders, instead of calculating lighting, output the required data (normals, diffuse texture, shading parameters, depth) to the different render targets
- unbind the render targets

- set the light accumulation buffer as the render target, and set the previous render targets as textures to be read in the pixel shader
- for each light source, draw a full-screen quad. In the pixel shader, reconstruct the view/world position from the depth, read the normals and shading parameters, and calculate the lighting value, which you output additively to the accumulation buffer

- unbind the render targets
- set the back buffer as the render target
- set the light accumulation buffer and the diffuse texture as textures

- draw a full-screen quad. In the pixel shader, read the diffuse value and the lighting value, combine them into the final color, and output it

- present
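Here is a very condensed sketch of those deferred steps in D3D9. The GBuffer struct is assumed to hold back-buffer-sized D3DUSAGE_RENDERTARGET textures (and their level-0 surfaces) created once at startup; DrawSceneToGBuffer(), DrawFullScreenQuad() and SetLightConstants() are placeholder names for your own routines, and BeginScene()/EndScene(), caps checks and error handling are omitted.

```cpp
#include <d3d9.h>

// Placeholder helpers assumed to exist elsewhere in your application.
extern void DrawSceneToGBuffer(IDirect3DDevice9* dev);           // pixel shader writes COLOR0..COLOR3
extern void DrawFullScreenQuad(IDirect3DDevice9* dev);           // two triangles covering the screen
extern void SetLightConstants(IDirect3DDevice9* dev, int light); // position/colour of one light

struct GBuffer   // back-buffer-sized D3DUSAGE_RENDERTARGET textures, created at startup
{
    IDirect3DTexture9 *diffuseTex, *normalTex, *paramsTex, *depthTex, *lightAccumTex;
    IDirect3DSurface9 *diffuseRT,  *normalRT,  *paramsRT,  *depthRT,  *lightAccumRT;
};

void RenderDeferredFrame(IDirect3DDevice9* dev, const GBuffer& gb, int lightCount)
{
    // --- Geometry pass: bind the G-buffer to RT0..RT3 and render the scene ---
    dev->SetRenderTarget(0, gb.diffuseRT);
    dev->SetRenderTarget(1, gb.normalRT);
    dev->SetRenderTarget(2, gb.paramsRT);
    dev->SetRenderTarget(3, gb.depthRT);
    dev->Clear(0, NULL, D3DCLEAR_TARGET | D3DCLEAR_ZBUFFER, 0, 1.0f, 0);
    DrawSceneToGBuffer(dev);

    // Unbind the extra targets so their textures can be read in later passes.
    dev->SetRenderTarget(1, NULL);
    dev->SetRenderTarget(2, NULL);
    dev->SetRenderTarget(3, NULL);

    // --- Lighting pass: additively accumulate every light into one buffer ---
    dev->SetRenderTarget(0, gb.lightAccumRT);
    dev->Clear(0, NULL, D3DCLEAR_TARGET, 0, 1.0f, 0);
    dev->SetTexture(0, gb.normalTex);
    dev->SetTexture(1, gb.paramsTex);
    dev->SetTexture(2, gb.depthTex);
    dev->SetRenderState(D3DRS_ALPHABLENDENABLE, TRUE);
    dev->SetRenderState(D3DRS_SRCBLEND,  D3DBLEND_ONE);   // additive blending
    dev->SetRenderState(D3DRS_DESTBLEND, D3DBLEND_ONE);
    for (int i = 0; i < lightCount; ++i)
    {
        SetLightConstants(dev, i);   // parameters of light i
        DrawFullScreenQuad(dev);     // shader reconstructs position from depth, shades the light
    }
    dev->SetRenderState(D3DRS_ALPHABLENDENABLE, FALSE);

    // --- Composite pass: combine diffuse and accumulated light on the back buffer ---
    IDirect3DSurface9* backBuffer = NULL;
    dev->GetBackBuffer(0, 0, D3DBACKBUFFER_TYPE_MONO, &backBuffer);
    dev->SetRenderTarget(0, backBuffer);
    dev->SetTexture(0, gb.diffuseTex);
    dev->SetTexture(1, gb.lightAccumTex);
    dev->SetTexture(2, NULL);
    DrawFullScreenQuad(dev);         // final colour = diffuse * accumulated light (roughly)
    backBuffer->Release();

    dev->Present(NULL, NULL, NULL, NULL);
}
```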


I hope this gives you some ideas on how to work with render targets.

Best regards!

