How to render the scene to a texture (for a post-process shader pass)?


Hello, I've been struggling with something for more than two weeks now: I want to perform a post-processing operation on my entire scene (from now on, let's say I want to apply a blur filter).

From what I understand, what I have to do is:

  1. Pass n: render the scene to an offscreen texture. (Implementation: a. create a new texture; b. create an RTV on it; c. create an SRV on a new slot (say 4) so the next pass can read from the texture the scene was rendered onto; see the sketch after this list.)
  2. Pass n+1: using the SRV from Pass n, read from the “offscreen scene texture” (bound on slot 4) and use it to perform the blur. For this purpose, bind a texture sampler (also on slot 4), bind a pixel shader that does the blur, and of course bind the SRV. Since I want to render this post-process operation as an overlay over the entire scene, I have to rasterize a quad that stretches over the entire window and render that texture onto it. So I bind a VS with fullscreen geometry and the appropriate VB, IB, input layout, and primitive topology bindings.
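
To make step 1 concrete, here's a minimal sketch of the resource creation (D3D11/C++; `device`, `width`, and `height` stand in for my renderer's actual objects). The key detail is that the texture needs both bind flags:

```cpp
#include <d3d11.h>
#include <wrl/client.h>

// Assumed to already exist in the renderer:
//   ID3D11Device* device;   UINT width, height;   (back-buffer dimensions)
D3D11_TEXTURE2D_DESC texDesc = {};
texDesc.Width            = width;
texDesc.Height           = height;
texDesc.MipLevels        = 1;
texDesc.ArraySize        = 1;
texDesc.Format           = DXGI_FORMAT_B8G8R8A8_UNORM; // match your back buffer
texDesc.SampleDesc.Count = 1;
texDesc.Usage            = D3D11_USAGE_DEFAULT;
// The crucial part: writable as a render target in Pass n AND readable as a
// shader resource in Pass n+1.
texDesc.BindFlags = D3D11_BIND_RENDER_TARGET | D3D11_BIND_SHADER_RESOURCE;

Microsoft::WRL::ComPtr<ID3D11Texture2D>          offscreenTex;
Microsoft::WRL::ComPtr<ID3D11RenderTargetView>   offscreenRTV;
Microsoft::WRL::ComPtr<ID3D11ShaderResourceView> offscreenSRV;
device->CreateTexture2D(&texDesc, nullptr, &offscreenTex);
device->CreateRenderTargetView(offscreenTex.Get(), nullptr, &offscreenRTV);
device->CreateShaderResourceView(offscreenTex.Get(), nullptr, &offscreenSRV);
```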

I am trying to build and maintain a regular rendering system, where each pass requires a set of bindings; once those bindings are bound, you execute the draw call, and each pass is executed in order before the next one.
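
Roughly, the abstraction I have in mind looks like this (a simplified sketch; the real class names differ):

```cpp
#include <d3d11.h>
#include <memory>
#include <vector>

// Simplified sketch of the pass abstraction; real names differ.
struct Bindable
{
    virtual void Bind(ID3D11DeviceContext& ctx) = 0;
    virtual ~Bindable() = default;
};

struct Pass
{
    std::vector<std::shared_ptr<Bindable>> bindables;

    void Execute(ID3D11DeviceContext& ctx, UINT indexCount)
    {
        for (auto& b : bindables)
            b->Bind(ctx);                  // bind everything the pass needs...
        ctx.DrawIndexed(indexCount, 0, 0); // ...then issue the draw call
    }
};
```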

However, I can't get it to work so far.

The output I get on the screen is not the blurred scene, but a single blurred texture. I debugged the shader using the Visual Studio graphics debugger, and this is the texture supplied as input (on slot 4) to the Pass n+1 pixel shader:

(The strange, unrealistic color of the tiles is intentional.)

This texture is bound on a wall in my scene, and I think it somehow gets picked up from there (maybe it's the last texture bound before Pass n), so that texture alone ends up being the one rendered to the screen (but blurred).

This leads me to think that the part I'm getting wrong is creating the offscreen texture; the post-process operation itself seems to be applied properly, whatever it may be. What do you think?

Also (this is more specific to my renderer design), I don't bind a pixel shader for Pass n; I just let the scene be drawn with whatever is bound up to that point. This may sound a bit strange: isn't a PS required to be bound every time we render something and expect a visual result?

So basically, I would kindly like to request: 1. some feedback on my approach so far, and 2. an explanation of how to properly render the scene to a texture.

Help?



Blur is a “downsampling” operation, which means that between successive render passes the input to the current pass has to be a scaled version of the previous pass's output. If the input to every pass is the same size, then you'll end up with what you are seeing.
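
If you take that route, a rough sketch of building a chain of half-resolution targets (assuming a `device` and base `width`/`height`; each level would be the input to the next blur pass) could look like:

```cpp
#include <algorithm>
#include <vector>
#include <d3d11.h>
#include <wrl/client.h>

// Hypothetical downsample chain: each target is half the size of the previous.
std::vector<Microsoft::WRL::ComPtr<ID3D11Texture2D>> chain;
UINT w = width, h = height;
for (int level = 0; level < 4; ++level)
{
    w = std::max(1u, w / 2);
    h = std::max(1u, h / 2);

    D3D11_TEXTURE2D_DESC desc = {};
    desc.Width            = w;
    desc.Height           = h;
    desc.MipLevels        = 1;
    desc.ArraySize        = 1;
    desc.Format           = DXGI_FORMAT_B8G8R8A8_UNORM;
    desc.SampleDesc.Count = 1;
    desc.Usage            = D3D11_USAGE_DEFAULT;
    desc.BindFlags        = D3D11_BIND_RENDER_TARGET | D3D11_BIND_SHADER_RESOURCE;

    Microsoft::WRL::ComPtr<ID3D11Texture2D> tex;
    device->CreateTexture2D(&desc, nullptr, &tex);
    chain.push_back(tex);
}
```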

OK, I made it work.

My own rendering architecture confused me.

I had to make some adjustments to accommodate offscreen rendering.

What you have to do:

  • All rendering is done offscreen, exactly the same way as before (same bindings, etc.), with the only difference being that you now use a render target that points not to the back-buffer texture but to another texture (call it the “offscreen” texture).
  • When you're ready to present to the screen (before you actually call SwapChain->Present()), you have to bind your back buffer as the pipeline's output (so all further rendering is drawn onto it) AND THEN bind your offscreen texture as a pipeline input (so you'll read from it). Yes, the order matters: D3D11 won't let a resource be bound for both reading and writing at once, and it nulls the conflicting binding.
  • Your last shader is used to draw to the back buffer's texture. You can have it do a full-screen operation, e.g. a final post-process pass (blur or whatever), or, most commonly, just a passthrough (i.e. copy from the input texture, aka the offscreen texture, to the output texture, aka the back-buffer texture).
  • Finally, call DrawIndexed() on the fullscreen quad (a sketch of the whole per-frame flow follows this list).
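
Here's a minimal sketch of that flow (`context`, `swapChain`, `backBufferRTV`, `depthStencilView`, `offscreenRTV`, `offscreenSRV`, and `sampler` stand in for your own renderer's objects):

```cpp
// Pass n: render the whole scene into the offscreen texture, not the back buffer.
const float clearColor[4] = { 0.0f, 0.0f, 0.0f, 1.0f };
context->ClearRenderTargetView(offscreenRTV.Get(), clearColor);
context->OMSetRenderTargets(1, offscreenRTV.GetAddressOf(), depthStencilView.Get());
// ... draw the scene exactly as before ...

// Pass n+1: bind the back buffer as output FIRST...
context->OMSetRenderTargets(1, backBufferRTV.GetAddressOf(), nullptr);
// ...THEN bind the offscreen texture as input. D3D11 refuses to have a resource
// bound for reading and writing at the same time and nulls the conflicting
// binding, which is why the order matters.
context->PSSetShaderResources(4, 1, offscreenSRV.GetAddressOf());
context->PSSetSamplers(4, 1, sampler.GetAddressOf());
// ... bind the fullscreen-quad VB/IB/input layout/topology, the fullscreen VS,
// and the blur (or passthrough) PS ...
context->DrawIndexed(6, 0, 0);  // 6 indices: two triangles covering the screen

swapChain->Present(1, 0);

// At the start of the next frame, unbind the SRV (set slot 4 to null) before
// offscreenRTV is bound as output again, to avoid the read/write hazard.
ID3D11ShaderResourceView* nullSRV = nullptr;
context->PSSetShaderResources(4, 1, &nullSRV);
```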


