# Designing a high-level render-pipeline Part 2: Views & passes


Last entry:

https://www.gamedev.net/blog/1930/entry-2262313-designing-a-high-level-render-pipeline-part-1-the-previous-state/

# Thoughts about how to change things:

So despite not having worked on 3d-rendering for a long time, over that time I've not only been able to see what's wrong with the old rendering system (or the lack of one), but I also had a rough plan in the back of my mind for what to change. I just needed to think it through and put it into words, along with a rough overview of the new system. This is what I came up with:

# Requirements for the new system:

First I thought about what I wanted from the new system. Here are some points I already mentioned, as well as some new ones:

- Being able to set up multiple scene-views easily. I just want to be able to say "There is a scene-view widget, you have your own view of the scene, go render it and show it to me".

- Not having to deal with resource management. What I mean by this is not having to worry about creating render-targets/zbuffers myself; the system should allocate them all by itself. This is kind of implied by the point above, but the point I'm making is that I really do not want to be bothered by this in any way, especially when it comes to stuff like ping-ponging rendertargets for effects that read from and write to the same texture at the same time.

- Less setup-overhead. The whole thing with the datafiles was fine at first, but after I integrated the group-system, it became a nightmare. I want to be able to create my render passes, tell them approximately where they belong, and be done with it.

- Easier debugging. The deferred render-execution sometimes caused a lot of issues when debugging, since everything was executed from the same place, making it hard to tell where things originally came from. I still like the idea of not having a "Draw()" call that actually draws, but at the very least, once I've drawn my whole GUI, the rendering process should start right then.

So the system I initially came up with to fulfill all those requirements looks like this:

# View:

As a new top-level system, there is a view.

A view has a specific size, and an output-rendertarget of that size. It internally can have a zbuffer, and multiple other rendertargets as needed.
A view internally owns a unique render-queue.
A view also has a camera and an according cbuffer, though those vary based upon 2d/3d. A view also has a pointer to a scene, that it should show.
A view can be resized, which causes all its internal rendertargets and zbuffers to resize as well (did I mention that resizing the game's render-area was a nightmare with the old system, too?).
A view can be told to Render(blend-factor), which will clear all the rendertargets (that was awkward in the old system as well), trigger the customizable rendering-process (we'll get to that in a minute), and then sort and execute its render-queue.

There is a lot of code associated with all that, but just to give you an idea of how you would implement the 2d/3d-specific stuff, here is the actual 2d-view implementation:

```cpp
void RenderView2D::OnRenderView(double alpha)
{
    const auto cameras = GetCameras();
    const auto& vScreenSize = cameras.first->GetSize();
    const float vScreen[4] = { (float)vScreenSize.x, (float)vScreenSize.y,
                               1.0f / vScreenSize.x, 1.0f / vScreenSize.y };

    // update camera
    const auto rCamera = cameras.first->GetCameraRect();
    const auto rOldCamera = cameras.second->GetCameraRect();
    math::Rectf rInterpolatedScreen;
    if(rCamera != rOldCamera)
        rInterpolatedScreen = rCamera * alpha + rOldCamera * (1.0 - alpha);
    else
        rInterpolatedScreen = rCamera;

    // set 2d camera constants
    auto updateBlock = gfx::getCbufferUpdateBlock(*this);
    updateBlock.SetConstant(0, (const float*)&rInterpolatedScreen, 1);
    updateBlock.SetConstant(1, (const float*)&vScreen, 1);
}

void RenderView2D::OnUpdate(double dt)
{
    // acquire camera from entity world
    ActiveCameraQuery query;
    if(GetWorld()->GetMessages().Query(query) && query.entity.IsValid())
    {
        auto pCamera = query.entity->GetComponent();
        AE_ASSERT(pCamera);
        SetCamera(&pCamera->camera);
    }
    else
    {
        static const gfx::Camera2D camera(math::Vector2i(640, 480), 1.0f, math::Vector2f(0.0f, 0.0f));
        SetCamera(&camera);
    }

    BaseRenderView::OnUpdate(dt);
}
```
Aaand... that's pretty much it, as far as the logic you have to concern yourself with. A 3d-implementation would look quite similar.

Note that views can also be used in different ways - for example, there is a separate view for rendering the UI, which doesn't use a camera like this.

# Passes:

So now we have our view/camera-setup, but what do we actually show? That's where another object comes into play: the RenderPass, or simply Pass.

A pass is always owned by exactly one view, and a view is composed of multiple passes.
A pass can specify which rendertargets it wants to write to and read from, as well as which zbuffers to use.
A pass has its own cbuffer, which it can fill with pass-specific data.
A pass can render one or many primitives, depending on the implementation (there are helper-classes like FullscreenPass, which will only render an effect over the whole rendertarget; other passes render all the entities, ...).
A pass has its own stage, which allows it to insert primitives into the view's render-queue.

So passes compose the view: they determine what is rendered, and how it is shown.

As an example, here is a pass that applies a smooth fade-out via postprocessing:

```cpp
void ScreenFadeRenderPass::Update(double dt)
{
    const float fadeValue = m_pScreenSystem->GetFadeValue();
    SetShaderConstant(0, &fadeValue, 1);
}

render::FullScreenPassDeclaration ae::base2d::ScreenFadeRenderPass::GenerateFullscreenDeclaration(void) const
{
    return {
        L"ScreenFade",
        L"Base2D\\Effects\\ScreenFade",
        { L"scene" },
        { L"sceneProcessed", L"", gfx::TextureFormats::X32, false },
        1,
        false
    };
}
```
That's pretty much it! As mentioned, this uses the FullscreenPass wrapper, so there is little left for it to do. A view will have instances of different passes, and when rendered, it will first tell all those passes to render before executing its queue.

The more interesting part is the declaration. Every pass needs to give some information about what it's doing:

- Its unique name
- An effect that it is using (this is fullscreen-pass-specific, see the use of FullScreenPassDeclaration, instead of RenderPassDeclaration)
- An array of render-targets it wants to read from
- An array of (optional) rendertargets it wants to create
- An array of render-targets it wants to write to (in case the pass creates some targets, this is used)
- The size of the cbuffer
- Whether or not it is using a zbuffer

So as you can see, unfortunately I haven't yet solved the problem of having to create and reference rendertargets manually. I also still have to do the ping-ponging myself (the pass I showed wants to read from "scene" and technically write back to it, so it has to create another render-target to do so).
To dwell on that for a second before the recap: this is a big deal. Not only does each pass have to know whether the pass before it actually wrote to another texture, it also makes removing certain passes, even just for testing, nearly impossible.

# Recap & outlook:

The view/pass-system was already a huge step up. Instead of having render-code inside entity systems (I know that's pretty cringe-worthy; I just really didn't have a better idea at the time), it now resides inside passes, which can be dynamically composed to form a render-view.

As I mentioned, though, there are a few problems left:

- Creating render-targets and actually assigning them to passes has to be done manually.
- Ping-ponging rendertargets is still a huge problem
- Determining the order of those passes hasn't improved much. I started by adding different layers (world, postprocess, ...) that systems could assign themselves to, but beyond that it was "first come, first served" in terms of registration order.

So what did I do to combat this? I can't even tell you how I came up with it anymore, but I had the idea of implementing a visual language akin to the visual scripting I had already implemented. I'll show this in a separate article, though - prepare for some pictures, after all this text :)

Next article:

https://www.gamedev.net/blog/1930/entry-2262315-designing-a-high-level-render-pipeline-part-3-a-visual-interface/
