Need a little help on understanding backbuffer/surface relationship
Members - Reputation: 334
Posted 24 April 2012 - 09:47 PM
So as far as I can tell, you never actually render to the back buffer until Present, when the device stretches the source data to the back buffer.
So I am confused about how this source data is held on to, and how you control which render target's data makes it to the final back buffer.
I guess I am looking for a more formal understanding. I have never really thought about this until now.
Members - Reputation: 1638
Posted 25 April 2012 - 02:25 AM
"Present" doesn't fill the back buffer, it (simply said) swaps back buffer with front buffer, which makes the old back buffer visible on the screen.
Crossbones+ - Reputation: 2890
Posted 25 April 2012 - 05:29 AM
A typical scenario for a post-process effect could go as follows:
- Render scene to a render target with the same size as the back buffer, as if you were rendering to the back buffer
- Set the back buffer as render target
- Set the render target where you rendered your scene as a texture
- Draw a full-screen quad; in the pixel shader, read the texture (where you rendered the scene), apply the post-process effect, and write the result to the back buffer
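The steps above can be sketched in D3D9 like this. It is only an illustration under assumed names (`sceneTex`, `width`, `height` are placeholders), with creation done once at startup and error handling omitted:

```
// Created once, back-buffer sized, as a render target texture.
IDirect3DTexture9* sceneTex  = NULL;
IDirect3DSurface9* sceneSurf = NULL;
IDirect3DSurface9* backBuffer = NULL;
device->CreateTexture(width, height, 1, D3DUSAGE_RENDERTARGET,
                      D3DFMT_A8R8G8B8, D3DPOOL_DEFAULT, &sceneTex, NULL);
sceneTex->GetSurfaceLevel(0, &sceneSurf);
device->GetBackBuffer(0, 0, D3DBACKBUFFER_TYPE_MONO, &backBuffer);

// 1) Render the scene into the off-screen target instead of the back buffer.
device->SetRenderTarget(0, sceneSurf);
// ... normal scene rendering ...

// 2) Switch back to the back buffer and bind the scene as a texture.
device->SetRenderTarget(0, backBuffer);
device->SetTexture(0, sceneTex);

// 3) Draw a full-screen quad; its pixel shader samples stage 0, applies
//    the post-process effect, and writes the result to the back buffer.
```

Note that the same surface cannot be bound as a render target and read as a texture at the same time, which is why the target switch happens before `SetTexture`.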
A typical scenario for deferred rendering/shading could be as follows (heavily simplified):
- At startup, create several back-buffer-sized render target textures: one for diffuse, one for normals, one for shading parameters, one for per-pixel depth (or read the z-buffer directly if possible), plus a light accumulation render target
- Bind your render targets to RT0, RT1, RT2, (RT3)
- Render your objects as before, but in the pixel shaders, instead of calculating lighting, output the required data (normals, diffuse texture, shading parameters, depth) to the different render targets
- Unbind the render targets
- Set the light accumulation buffer as the render target, and set the previous render targets as textures to be read in the pixel shader
- For each light source, draw a full-screen quad. In the pixel shader, reconstruct the view/world position from depth, read the normals and shading parameters, and calculate the lighting value, which you output additively to the accumulation buffer
- Unbind the render targets
- Set the back buffer as the render target
- Set the light accumulation buffer and the diffuse texture as textures
- Draw a full-screen quad. In the pixel shader, read the diffuse value and the lighting value and combine them into the final color. Output the final color.
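The binding sequence for those passes might look like the sketch below in D3D9 (multiple render targets). All surface/texture names are placeholders for back-buffer-sized `D3DUSAGE_RENDERTARGET` resources created at startup, and the actual draw calls and shaders are elided:

```
// --- G-buffer pass: write attributes, no lighting ---
device->SetRenderTarget(0, albedoSurf);
device->SetRenderTarget(1, normalSurf);
device->SetRenderTarget(2, paramsSurf);
device->SetRenderTarget(3, depthSurf);
// ... draw scene; pixel shader writes to COLOR0..COLOR3 ...

// Unbind the extra targets before sampling them (a surface can't be a
// render target and a texture simultaneously). RT0 must stay bound.
device->SetRenderTarget(1, NULL);
device->SetRenderTarget(2, NULL);
device->SetRenderTarget(3, NULL);

// --- Lighting pass: one full-screen quad per light, additive blend ---
device->SetRenderTarget(0, lightAccumSurf);
device->SetTexture(0, normalTex);
device->SetTexture(1, paramsTex);
device->SetTexture(2, depthTex);
device->SetRenderState(D3DRS_ALPHABLENDENABLE, TRUE);
device->SetRenderState(D3DRS_SRCBLEND, D3DBLEND_ONE);
device->SetRenderState(D3DRS_DESTBLEND, D3DBLEND_ONE); // additive
// ... for each light: draw full-screen quad ...

// --- Final combine: back buffer <- diffuse * accumulated light ---
device->SetRenderTarget(0, backBuffer);
device->SetTexture(0, albedoTex);
device->SetTexture(1, lightAccumTex);
// ... draw full-screen quad that multiplies the two and outputs the result ...
```

The key point for the original question: the back buffer is just another render target. You control what ends up in it by what is bound via `SetRenderTarget(0, ...)` when the final quad is drawn; Present then only makes that buffer visible.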
I hope this gives you some idea of how to work with render targets.