RobMaddison

Post processing in the rendering pipeline

7 posts in this topic

I'm in tidy up and refactor mode today. I got my font engine working nicely which means I can display timings of various parts of the engine. One thing I wanted to do was to see how fast my post processing engine is and so I switched on my simple Gaussian blur post processing effect.

My rendering pipeline thus far is kinda simple but hopefully fairly scalable. My engine uses entities and components and so when I come to render, I go through each entity (no culling implemented yet) and ask the entity to give me back a vector of 'render tokens' - this list gets added to my overall list of render tokens.

The reason an entity could return more than one render token is that the model could have one or more materials and so it is one render token per mesh subset (i.e. material) - this works quite nicely and means when I sort my render tokens, all my like materials are grouped together for minimal state changes.

Once I have this list of render tokens, I sort them (currently only based on shader type/material) and pass them to the 'RenderFrame' method of my rendering manager. The rendering manager then renders each token and once that is complete, the post processing manager steps in and renders the post processing effect chain.
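The token-sorting scheme described above can be sketched like this; the token layout and bit widths are hypothetical illustrations, not taken from the poster's engine:

```cpp
#include <algorithm>
#include <cassert>
#include <cstdint>
#include <vector>

// Hypothetical render token: one per mesh subset (material), carrying a
// packed sort key so that like shaders/materials end up adjacent after sorting.
struct RenderToken {
    uint64_t sortKey;    // shader id in the high bits, material id below it
    int      meshSubset; // which subset of the entity's mesh to draw
};

inline uint64_t MakeSortKey(uint32_t shaderId, uint32_t materialId) {
    return (uint64_t(shaderId) << 32) | uint64_t(materialId);
}

// Sort so tokens sharing a shader/material are contiguous, minimising
// state changes when the rendering manager walks the list.
inline void SortTokens(std::vector<RenderToken>& tokens) {
    std::sort(tokens.begin(), tokens.end(),
              [](const RenderToken& a, const RenderToken& b) {
                  return a.sortKey < b.sortKey;
              });
}
```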

This works great so far but it's a bit limiting. Because my text is rendered by just creating an entity per line of text with a mesh component (a simple screen-aligned quad) and a render component, it is just treated like any other render token. This is where the limits come in. My Gaussian blur post processing effect blurred everything including my text.

So I guess I should introduce more bits into my sorting value (I've been basing this on the "Order your graphics draw calls around!" article on the realtimecollisiondetection.net site), and by that I mean push the HUD render tokens to sort after the post processing effects are rendered.
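Putting the screen-layer bits at the very top of the sort key is enough to push HUD tokens after the post-processing tokens. A minimal sketch, with layer values and field widths made up for illustration:

```cpp
#include <cassert>
#include <cstdint>

// Hypothetical layer ordering: scene first, then the PP chain, then the HUD.
enum Layer : uint64_t { SceneLayer = 0, PostProcessLayer = 1, HudLayer = 2 };

// The layer occupies the top two bits, so it dominates every other field:
// a HUD token sorts after every scene or post-process token regardless of
// which shader/material it uses.
inline uint64_t MakeLayeredKey(Layer layer, uint32_t shaderId, uint32_t materialId) {
    return (uint64_t(layer)                << 62) |
           (uint64_t(shaderId & 0xFFFF)    << 40) |
            uint64_t(materialId & 0xFFFFFF);
}
```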

This leads me to an interesting design decision... Do I keep my post processing engine as it is, i.e., a self-contained manager that maintains its own full screen quad meshes, shaders, etc, and execute it in the rendering pipeline after some flag that says 'we're done rendering some stuff, do some post processing now, and now render some more stuff'...

Or do I change things drastically and make it more generic by making a post process effect just another entity. This/these could then be slotted into the render pipeline with a sort id bit that puts them in the correct place in the order. I can then have HUD entities with priorities that are after the post processing effect entities.

I think I prefer the second option as it feels more scalable - the only issue I can see is that some post processing effects need inputs from others, e.g. bloom, so I'd need to build that into the rendering engine somehow. Perhaps I should make the back buffer surface available as an input to each render token if it needs it...

Any thoughts or suggestions?

Thanks in advance

I'd personally go for the more drastic change. There are some PP effects that you're going to want to render after text/HUD/GUI/etc. (I personally implement brightness/gamma as a PP effect these days, as it plays nicer with windowed modes and doesn't screw up the user's desktop after a crash), so enabling that seems a good idea.

 

Regarding your last question, what I've done is implement a simple "render target manager" that handles switching of render targets and stores off the current one so that it can be grabbed as input by anything that needs it. It's not as robust or elegant as I'd like, but it suffices for my own purposes.
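A sketch of that idea (all names hypothetical): track the most recently resolved target so a later pass can bind it as input.

```cpp
#include <cassert>
#include <vector>

// Stand-in for a GPU render target.
struct RenderTarget { int id; };

// Minimal render target manager: binds targets, and remembers the last one
// unbound so a following pass can grab it as an input texture.
class RenderTargetManager {
public:
    void Bind(RenderTarget* rt) { bound_.push_back(rt); }

    // Unbind the current target and store it as the "previous output".
    RenderTarget* Unbind() {
        RenderTarget* rt = bound_.back();
        bound_.pop_back();
        lastResolved_ = rt;
        return rt;
    }

    RenderTarget* Current()      const { return bound_.empty() ? nullptr : bound_.back(); }
    RenderTarget* LastResolved() const { return lastResolved_; }

private:
    std::vector<RenderTarget*> bound_;
    RenderTarget* lastResolved_ = nullptr;
};
```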


I handle rendering and post-processing by different "scenes". In this context, the HUD is its own scene (which allows 3D HUDs without worrying about intersection with world objects) and the game world is its own scene.

 

This is not the same as grouping different meshes by different shaders, because a rendered scene output is a flat image that can get layered on top of other renders. Each scene has its own "render path" to take, allowing different processes, like forward rendering with deferred rendering. The render path also has optional PP steps.

 

The way it works is, first, I clear the screen. Then, for each scene, render it through its render path. Each "step" in the path is a self contained rendering class with common properties being it can take meshes, cameras or render targets as inputs. Classes may be reused for different scenes. PP steps usually only take render target inputs, and 3D rendering steps take cameras and meshes. But they all output render targets. At the last step, output the final render target, and continue with the next scene. Each render path shares a pool of render target textures.
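The chaining described above might be sketched like this on the CPU; `Target`, `Step`, and `RunPath` are invented names for illustration, not the actual engine:

```cpp
#include <cassert>
#include <functional>
#include <string>
#include <vector>

// Stand-in for a render target's contents.
struct Target { std::string contents; };

// A step consumes the previous step's output target and produces a new one.
// 3D rendering steps would also take cameras/meshes; PP steps take only
// the input target. Either way, every step outputs a target.
using Step = std::function<Target(const Target&)>;

// Walk a scene's render path: each step feeds the next, and the last
// step's output is the scene's final layer image.
Target RunPath(const std::vector<Step>& path, Target initial) {
    Target current = std::move(initial);
    for (const Step& step : path)
        current = step(current);
    return current;
}
```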

 

These renders are layered back to front, so I would render the HUD last, with its own rendering technique. As the layers have a transparent background, scenes with some PP effects can blend in front of others. For instance, if you wanted to make some kind of "glow" shader for the HUD with blur and light bloom, the glowing edges will be visible on top of the render of the game world.

 

I think it would be overkill to have many separate layers of stuff being rendered (and could hurt performance with many screen-sized quads) but I see it being useful for separating a few scenes as being considered tied to different game components, since the HUD and game world operate under different logic and input rules.

Edited by CC Ricers
Thank you both. I really like the idea of separate renderers for each screen layer and, in fact, I was going to add 2 screen layer bits to my sorting mask but, as you've alluded to CC, I wouldn't be able to add post processing effects to each layer separately (I'm not sure I'd really need this for my game).

I started playing with the idea of a post processing effect being just another entity and, thanks to the beauty of entity/component systems, got it up and running without too much hassle. I have decided to go with a separate render call for each layer, as it makes things much easier and a little more transparent. I will keep the post process effects as entities, though, as that works pretty nicely; they'll just be sorted to the end of the render tokens.

Thanks again.

Incidentally, if I have a shader technique with more than one pass, do I always have to feed the second one with the rendered results of the first? I'd have thought the underlying libraries would know that by default and somehow do it automatically. I use two passes with my Gaussian blur shader, horizontal and vertical, and my application code has to feed the results of the horizontal pass into the vertical one. I guess the underlying system can't always know what we're trying to achieve.
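The two-pass wiring in question can be sketched on the CPU. The [0.25, 0.5, 0.25] kernel and function names below are illustrative, not the poster's actual shader; the point is that the application, not the pass system, feeds the horizontal result into the vertical pass:

```cpp
#include <algorithm>
#include <cassert>
#include <vector>

using Image = std::vector<std::vector<float>>;

// Sample with edge clamping, as a texture sampler would with CLAMP addressing.
static float Sample(const Image& img, int x, int y) {
    const int h = (int)img.size(), w = (int)img[0].size();
    return img[std::clamp(y, 0, h - 1)][std::clamp(x, 0, w - 1)];
}

static Image BlurHorizontal(const Image& src) {
    Image dst = src;
    for (int y = 0; y < (int)src.size(); ++y)
        for (int x = 0; x < (int)src[0].size(); ++x)
            dst[y][x] = 0.25f * Sample(src, x - 1, y)
                      + 0.50f * Sample(src, x,     y)
                      + 0.25f * Sample(src, x + 1, y);
    return dst;
}

static Image BlurVertical(const Image& src) {
    Image dst = src;
    for (int y = 0; y < (int)src.size(); ++y)
        for (int x = 0; x < (int)src[0].size(); ++x)
            dst[y][x] = 0.25f * Sample(src, x, y - 1)
                      + 0.50f * Sample(src, x, y)
                      + 0.25f * Sample(src, x, y + 1);
    return dst;
}

// Application code explicitly chains the passes -- exactly the wiring the
// underlying effect framework can't infer on its own.
static Image GaussianBlur(const Image& src) {
    return BlurVertical(BlurHorizontal(src));
}
```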
Thanks L Spiro, that's almost exactly what I already have - it's always good to know you're doing something right... ;)

Thank you both. I really like the idea of separate renderers for each screen layer and, in fact, I was going to add 2 screen layer bits to my sorting mask but, as you've alluded to CC, I wouldn't be able to add post processing effects to each layer separately (I'm not sure I'd really need this for my game).

 

That's not exactly correct. You would be able to add different effects to each layer, or different combinations of effects. Each layer just won't affect the others' rendering pipelines.

 

Take this example where [SR] is a scene renderer and each [PP] is a post-process effect. They chain together (as a list or a directed tree) to perform the steps in order and produce the final output.

 

layer 1

[SR1]-->[PP1]-->[PP2]--> buffer

layer 2

[SR1]-->[SR2]--> buffer

 

In layer 1, a scene is rendered in just one step, and two post-process effects are added to that render. In layer 2, one scene is rendered, then a second scene (or it could be the same scene using a different shader), and SR2 depends on a render target created by SR1.


That's not exactly correct. You would be able to add different effects to each layer, or different combinations of effects. Each layer just will not have an effect on each others' rendering pipeline.

 

Sorry, that's not quite what I meant, but thanks for taking the time to clarify. What I meant was that if I were to naively add screen layers into my render sort mask, their bits would be near the high bit and the post process bit would be further down, meaning it would sort like this:

 

screen layer 1

render token 1

render token 2

render token 3 (bright pass)

render token 4 (blur - h & v)

render token 5 (composite)

screen layer 2

render token 6

render token 7

render token 8 (some nice PP effect on the HUD)

etc

 

So if this is just one long list, the second post processing set (render token 8) would post process over the first screen layer when it's not supposed to. I could do it this way, but have each screen layer render to its own surface and then alpha blend the layers together afterwards. I thought full screen alpha blending was expensive, though. Chances are I won't need post processing effects on my HUD layer, but it's something to think about if I do.

