Render Pass


While browsing some code snippets in this forum, I came across the concept of a render pass. I guess the idea is to make the rendering pipeline a bit more modular and flexible. Does such a render pass consist of one full pipeline invocation (e.g., render pass 1: filling the G-buffer; render pass 2: reading the G-buffer; etc.), or does a render pass consist of all related pipeline invocations (e.g., render pass 1: deferred rendering; render pass 2: sprites; etc.)? Is the setup, in its most basic form:

RenderPass::Preprocess()
RenderPass::Process()     /* iterate scene data: lights, models, sprites, etc. */
RenderPass::Postprocess()
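
For concreteness, a minimal sketch of such an interface (hypothetical; an abstract base class is just an assumption, not taken from any particular engine):

// Hypothetical render pass interface: each pass sets up its own state,
// iterates the scene data it cares about, and cleans up afterwards.
class RenderPass {
public:
    virtual ~RenderPass() = default;

    virtual void Preprocess()  = 0;  // e.g. bind and clear render targets, set pipeline state
    virtual void Process()     = 0;  // iterate scene data: lights, models, sprites, etc.
    virtual void Postprocess() = 0;  // e.g. resolve/unbind targets, generate mips
};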

Currently, bad code smells are becoming prominent in my code: my scene has one big method that does all the culling and rendering.

 

🧙


A render pass usually represents a single stage of the rendering pipeline. I recently built a pass-based rendering pipeline based on the frame graph from DICE and the producer system from Assassin's Creed. Each "pipeline stage" represents a single rendering pass, i.e. depth prepass, G-buffer, shadow map, etc. These pipeline stages declare their inputs and outputs; prior to rendering, I build the dependency list and perform resource transitions/transient resource creation at pipeline stage boundaries.
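
Roughly, the idea looks like this (a simplified sketch with made-up names, not the actual implementation): each pass declares handles for the resources it reads and writes, and the graph orders the passes so that producers run before consumers.

#include <cstddef>
#include <functional>
#include <string>
#include <unordered_set>
#include <vector>

using ResourceHandle = std::size_t;

struct PassDesc {
    std::string name;
    std::vector< ResourceHandle > reads;   // inputs, e.g. the prepass depth texture
    std::vector< ResourceHandle > writes;  // outputs, e.g. the G-buffer targets
    std::function< void() > execute;       // the actual rendering code for the pass
};

// Orders passes so that every pass runs after the passes that produce its inputs.
// Resources imported from outside the graph should be pre-inserted into 'produced'.
std::vector< const PassDesc * > BuildExecutionOrder(const std::vector< PassDesc > &passes,
                                                    std::unordered_set< ResourceHandle > produced = {}) {
    std::vector< const PassDesc * > ordered;
    std::vector< bool > scheduled(passes.size(), false);

    bool progress = true;
    while (progress && ordered.size() < passes.size()) {
        progress = false;
        for (std::size_t i = 0u; i < passes.size(); ++i) {
            if (scheduled[i]) continue;

            bool ready = true;
            for (ResourceHandle r : passes[i].reads) {
                if (produced.count(r) == 0u) { ready = false; break; }
            }
            if (!ready) continue;

            // All inputs are available: schedule this pass and mark its outputs.
            ordered.push_back(&passes[i]);
            for (ResourceHandle r : passes[i].writes) produced.insert(r);
            scheduled[i] = true;
            progress = true;
        }
    }
    return ordered;  // shorter than 'passes' if there is a cycle or a missing input
}

For example, if the depth prepass writes a kSceneDepth handle and the G-buffer pass reads it, the prepass always ends up first in the returned order, regardless of the order in which the passes were declared.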

1 hour ago, AxeGuywithanAxe said:

Prior to rendering, I build the dependency list and perform resource transitions/transient resource creation

What do you mean by this?

🧙

I'll try to explain by example. In my engine I have a depth prepass and a G-buffer stage. The depth prepass is responsible for generating the entire scene depth. The G-buffer pass requires the depth texture generated by the depth prepass. This is a dependency. Prior to running my graph, my engine looks through all of these dependencies and orders the passes; in this instance it would be the depth prepass first, and then the G-buffer pass. Because I am also preparing for DX12 and Vulkan, prior to a pass being rendered, my graph finds all of its non-writable inputs and transitions them to the "readable" state. I also use pooled render targets, so my passes can declare that they need a render target of a specific size, format, etc., and prior to calling the C++ code that renders the pass, my graph allocates the render targets.
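
Again only a rough, API-agnostic sketch (made-up types, no real DX12/Vulkan calls): before a pass executes, the graph transitions its read-only inputs to a readable state and hands out pooled render targets matching the requested description.

#include <cstdint>
#include <deque>
#include <vector>

enum class ResourceState { RenderTarget, ShaderResource, DepthWrite };

struct RenderTargetDesc {
    std::uint32_t width;
    std::uint32_t height;
    std::uint32_t format;   // e.g. a DXGI_FORMAT value
};

struct PooledTarget {
    RenderTargetDesc desc;
    ResourceState    state;
    bool             in_use;
};

class RenderTargetPool {
public:
    // Reuses a free target with a matching description, or creates a new one.
    PooledTarget &Acquire(const RenderTargetDesc &desc) {
        for (PooledTarget &t : m_targets) {
            if (!t.in_use && t.desc.width  == desc.width
                          && t.desc.height == desc.height
                          && t.desc.format == desc.format) {
                t.in_use = true;
                return t;
            }
        }
        m_targets.push_back({ desc, ResourceState::RenderTarget, true });
        return m_targets.back();
    }

    void Release(PooledTarget &target) { target.in_use = false; }

private:
    std::deque< PooledTarget > m_targets;  // deque: references stay valid on push_back
};

// Called by the graph at a pass boundary: every non-writable input of the next
// pass is transitioned to the shader-readable state (a barrier in DX12/Vulkan).
void TransitionInputs(const std::vector< PooledTarget * > &read_only_inputs) {
    for (PooledTarget *input : read_only_inputs) {
        if (input->state != ResourceState::ShaderResource) {
            input->state = ResourceState::ShaderResource;  // record/submit the barrier here
        }
    }
}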

Originally, a pass was a full and complete rendering pass. Games usually rendered an image, then in a second pass might render more lights, then in another pass add bump mapping, lightmapping/shadowing...

Nowadays a pass can be a first rendering from the camera's view into some buffer(s), another one from the light's view (to store the depth textures), another one for ambient occlusion, bloom effects... Except for depth-map generation, most passes are done from the camera's view.

When you render deferred, the full G-buffer is generally filled once, in a single pass.

Doing a lookup in some already generated buffers is generally not considered a render pass on its own; it is something done within a render pass in order to retrieve information at certain pixels.

Currently, I populate a buffer containing all the information (from the current scene) that I need to render a single frame.


vector< const CameraNode * >           m_cameras;
vector< const ModelNode * >            m_opaque_models;
vector< const ModelNode * >            m_transparent_models;
vector< const DirectionalLightNode * > m_directional_lights;
vector< const OmniLightNode * >        m_omni_lights;
vector< const SpotLightNode * >        m_spot_lights;
vector< const SpriteNode * >           m_sprites;
RGBSpectrum                            m_ambient_light;
const SceneFog *                       m_fog;

I only have a forward pass (opaque -> transparent) at the moment (plus a final pass for the sprites). This pass takes the above buffer as input and iterates the content of the buffer itself, which lets me avoid a virtual method call per node. The downside of every pass having to iterate the content of the buffer is a giant code blob that looks largely similar for each pass.
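
Roughly, and with made-up names (SceneBuffer is just a stand-in for the members listed above; the real code differs), the pass looks something like this:

#include <vector>

// Hypothetical stand-ins for the node types in the buffer above.
struct ModelNode {};
struct DirectionalLightNode {};

// A made-up wrapper name for the per-frame buffer members listed above.
struct SceneBuffer {
    std::vector< const ModelNode * >            m_opaque_models;
    std::vector< const ModelNode * >            m_transparent_models;
    std::vector< const DirectionalLightNode * > m_directional_lights;
};

class ForwardPass {
public:
    // The pass iterates the buffer contents itself: no virtual call per node.
    void Render(const SceneBuffer &scene) {
        for (const DirectionalLightNode *light : scene.m_directional_lights)
            BindLight(*light);
        for (const ModelNode *model : scene.m_opaque_models)
            DrawModel(*model);
        for (const ModelNode *model : scene.m_transparent_models)  // back to front
            DrawModel(*model);
    }

private:
    void BindLight(const DirectionalLightNode & /*light*/) { /* set light constants  */ }
    void DrawModel(const ModelNode & /*model*/)            { /* bind material + draw */ }
};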

My camera node contains a settings structure which selects the render mode, the BRDF and some layers. The render modes include: visualize shading normals, normal-map shading normals, diffuse color, diffuse texture, reference texture, etc. The layers include wireframe, bounding boxes, etc. My current implementation tries to avoid having a separate pass class for each render mode and layer (BRDFs only require a separate PS) by hacking the behavior of these extras into the forward pass (setting a different PS, iterating lights or not, using a fixed material, etc.). This approach limits extensibility: I could instead use shaders with their own constant buffer layouts rather than reusing and hacking the existing constant buffer layout.
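
As a made-up illustration of the kind of hack I mean (not the actual code; mode names are placeholders):

// One pass, switching behavior on the camera's render mode instead of
// having a separate pass class per mode.
enum class RenderMode { Shaded, ShadingNormals, DiffuseColor, Wireframe };

struct CameraSettings {
    RenderMode render_mode            = RenderMode::Shaded;
    bool       render_wireframe_layer = false;
    bool       render_aabb_layer      = false;
};

void SetupForwardPassState(const CameraSettings &settings) {
    switch (settings.render_mode) {
    case RenderMode::Shaded:         /* bind BRDF pixel shader, iterate lights  */ break;
    case RenderMode::ShadingNormals: /* bind normal-visualization PS, no lights */ break;
    case RenderMode::DiffuseColor:   /* bind diffuse-only PS, fixed material    */ break;
    case RenderMode::Wireframe:      /* bind solid-color PS, wireframe raster   */ break;
    }
}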

So to sum up: do you eventually need a 1-to-1 mapping between passes and render modes?

🧙

