Deferred Rendering: How to go about using specialized shaders?

Started by
6 comments, last by hannesp 7 years, 4 months ago

This sounds stupid, but I'm in the process of revising my current deferred renderer to better suit my game's needs. Currently, the geometry is kitbashed, so it can easily live in GPU memory indefinitely. However, in the end I'll use a lot of textures, and the biggest performance impact appears to come from how often I need to rebind textures during the G-buffer pass, even with sorting.

I decided to try a different approach. Instead of using a diffuse pass, a normal pass, and what have you...

I'd instead write UVs and material IDs to the buffer, similar to what is done for bindless texturing in DirectX 12 and Vulkan.
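A minimal CPU-side sketch of what that texel could look like (the channel widths and layout here are my own illustration, assuming something like an RGB16_UINT target; they are not a fixed standard):

```python
def pack_gbuffer_texel(u, v, material_id):
    """Quantize UVs to 16 bits each and keep the material ID in a third
    16-bit channel, as if writing to a 16-bit-per-channel integer target."""
    qu = min(max(int(u * 65535.0 + 0.5), 0), 65535)
    qv = min(max(int(v * 65535.0 + 0.5), 0), 65535)
    return (qu, qv, material_id & 0xFFFF)

def unpack_gbuffer_texel(texel):
    """Recover the UVs (with ~1/65535 quantization error) and the material ID."""
    qu, qv, mid = texel
    return qu / 65535.0, qv / 65535.0, mid
```

The shading pass then uses the material ID to pick a texture/shading path and the UVs to sample it, instead of reading pre-shaded albedo/normal targets.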

But the problem I'm trying to wrap my head around is: how do you go about using specialized shaders?

For example, with a normal deferred pipeline, you can just render your junk per object. If each object has a unique shader, all is fine... just output the corresponding results to the correct buffers.

Say you've got a random floating obelisk with runes that have a pulsing glow. Or perhaps you have an object with light strips that move along a trim.

But with this technique... I don't see a way to do this correctly.

You can have a massive uber shader and branching... which seems like a bad idea.
You could run as many passes as you have materials... but that would get costly.

You could have a limited set of materials... but then you lose flexibility.

Kinda wish that you had the ability to dynamically swap shaders during execution at this point.


What most engines do is take the tiled deferred lighting approach, or methods like it. After rendering all of your geometry to the G-buffer, run an "apply pass". This pass reads the material ID from the G-buffer and, depending on that ID, runs the correct shading function. http://www.frostbite.com/wp-content/uploads/2014/11/course_notes_moving_frostbite_to_pbr.pdf goes into how Frostbite takes this approach, and engines like Unreal's and Naughty Dog's do the same. http://advances.realtimerendering.com/s2016/ has information on Naughty Dog's Uncharted 4 rendering pipeline.
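A rough CPU-side sketch of that apply-pass idea. The tile size, shader table, and the classify/apply split are illustrative assumptions on my part, not the actual Frostbite or Naughty Dog implementation:

```python
TILE = 8  # tile edge in pixels; real implementations often use 8x8 or 16x16

def classify_tiles(material_ids):
    """Build a table: material id -> list of tile origins containing that id."""
    h, w = len(material_ids), len(material_ids[0])
    table = {}
    for ty in range(0, h, TILE):
        for tx in range(0, w, TILE):
            present = {material_ids[y][x]
                       for y in range(ty, min(ty + TILE, h))
                       for x in range(tx, min(tx + TILE, w))}
            for mid in present:
                table.setdefault(mid, []).append((tx, ty))
    return table

def apply_pass(material_ids, shaders):
    """Run each material's shading function only over the tiles that need it."""
    h, w = len(material_ids), len(material_ids[0])
    out = [[None] * w for _ in range(h)]
    for mid, tiles in classify_tiles(material_ids).items():
        shade = shaders[mid]
        for tx, ty in tiles:
            for y in range(ty, min(ty + TILE, h)):
                for x in range(tx, min(tx + TILE, w)):
                    if material_ids[y][x] == mid:  # mixed tiles: mask per pixel
                        out[y][x] = shade(x, y)
    return out
```

On a GPU, each material's dispatch would walk only its own tile list (e.g. via indirect dispatch over the table), so tiles containing a single material never pay for branches they don't take.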

The Naughty Dog method is pretty f*cking clever.

I didn't think about actually saving tiles of pixels into a look-up table. That might be just how I go about it, then.

So... theoretically, the cost goes from scaling with the number of objects in your scene... to scaling with how dense your pixel space is.

However, I don't think I quite fully understand what they did. It seems like they were trying to avoid an Ubershader... but might have still ended up with one...

I'll first try to see if I can cheat my way through by keeping a table of tiles that need to be lit by a specific shader, then go through each shader needed to light the scene.

Well if you are taking a "MaterialID" approach, you either have the Uber Shader approach, which is the easiest... or doing tile binning based on the most relevant shading model in the region. In all honesty, I would just go with the UberShader approach; realistically speaking you aren't going to have that many shading models in a single game. You may have a default surface, then skin, fur, etc... And if you do have many shading models, i.e. more than 5, you might as well skip the material ID approach and go with what Unity does, which is standard deferred.

You can also just use forward rendering for any 'special' materials - quite a few games do that.

Well if you are taking a "MaterialID" approach, you either have the Uber Shader approach, which is the easiest... or doing tile binning based on the most relevant shading model in the region. In all honesty, I would just go with the UberShader approach; realistically speaking you aren't going to have that many shading models in a single game. You may have a default surface, then skin, fur, etc... And if you do have many shading models, i.e. more than 5, you might as well skip the material ID approach and go with what Unity does, which is standard deferred.

For example, UE4 already has support for 5+ shading models, all of which you could conceivably run in the right setting. Saving time by not running all of them can become critical. This is pretty much why Naughty Dog came up with what they did to begin with.

You can also just use forward rendering for any 'special' materials - quite a few games do that.

I'm not sure if this is less of a pain than a clever binning/LUT scheme like Naughty Dog's, though.

Or alternatively, if you're really approaching the project with a long-term view, ditch forward and deferred and go for a visibility buffer. This is basically something further along the lines of what you're already thinking about, but you can get a pretty much arbitrary number of custom material types. E.g.: https://community.eidosmontreal.com/blog/next-gen-dawn-engine
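A toy illustration of the visibility-buffer idea. The names and flat data layout below are mine; the real technique rasterizes packed draw/triangle IDs per pixel and reconstructs vertex attributes with barycentrics in the shading pass:

```python
# Pass 1 writes only an (instance_id, triangle_id) pair per pixel.
# Pass 2 looks up the instance's material and fetches the geometry data
# itself, so any number of material types can be shaded without
# fattening the G-buffer.

def shade_pixel(vis_sample, instances, materials):
    """Resolve one visibility-buffer sample into a shaded result."""
    instance_id, triangle_id = vis_sample
    inst = instances[instance_id]
    material = materials[inst["material_id"]]
    # A real shader would fetch the triangle's vertices here and
    # interpolate UVs/normals via barycentrics; we just dispatch.
    return material["shade"](inst, triangle_id)

instances = [
    {"material_id": 0},
    {"material_id": 1},
]
materials = [
    {"shade": lambda inst, tri: f"default:{tri}"},
    {"shade": lambda inst, tri: f"skin:{tri}"},
]
```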


Yeah, I was just giving a random number for a switch; in reality, the Uber Shader approach is the correct approach in all instances, unless you have a shader that for some reason can't be emulated in an apply pass.

Why isn't anybody here talking about using subroutines (OpenGL; I don't know if there's an equivalent in other APIs)? I think this approach could easily expand to per-object shaders: more flexible than material types, easier to implement than the Naughty Dog approach. http://www.openglsuperbible.com/2013/10/16/the-road-to-one-million-draws/#return-note-403-1
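For anyone unfamiliar: GL subroutines let you swap one shader function for another per draw without relinking the program (via glGetSubroutineIndex / glUniformSubroutinesuiv). A plain-Python analogy of that dispatch, with everything here being illustration rather than real GL code:

```python
# Each "subroutine uniform" is a slot holding one of several compatible
# implementations, selected per draw call by index, much like
# glUniformSubroutinesuiv updates the active subroutine uniforms.

def surface_metal(pixel):
    return ("metal", pixel)

def surface_cloth(pixel):
    return ("cloth", pixel)

SUBROUTINES = [surface_metal, surface_cloth]  # indices, as from glGetSubroutineIndex
active = {"surface_fn": 0}  # subroutine uniform -> currently selected index

def draw(pixel, selected_index):
    """Select the subroutine for this draw, then shade; no shader relink."""
    active["surface_fn"] = selected_index
    return SUBROUTINES[active["surface_fn"]](pixel)
```

The appeal for per-object shaders is exactly this: the program object stays bound, and only a small index changes between draws.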

Would be nice if anyone has experience to share.

This topic is closed to new replies.
