Is Clustered Forward Shading worth implementing?


Old topic!
Guest, the last post of this topic is over 60 days old and at this point you may not reply in this topic. If you wish to continue this conversation start a new topic.

  • You cannot reply to this topic
46 replies to this topic

#1 mrheisenberg - Members - Reputation: 356 - 3 Likes

Posted 08 January 2013 - 11:56 PM

I'm referring to this: http://www.cse.chalmers.se/~uffe/clustered_shading_preprint.pdf (there is also a video available). The performance of this technique seems to scale very well to huge numbers of lights, but at lower light counts it performs a little worse than the simpler tiled culling method. The thing is: has there ever been a case where you need 30 thousand lights in a scene? Also, won't it get bottlenecked by generating shadow maps for all the lights? (In the YouTube video the lights just pass through the bridge and under it.) Unfortunately I couldn't test its performance, because for some reason the provided demo won't start up (even though I support OpenGL 3 and higher), and I've never done GLSL, so it might take time to get it working.




#2 Hodgman - Moderators - Reputation: 31984 - 4 Likes

Posted 09 January 2013 - 12:11 AM

I wouldn't implement it unless you're planning on using the types of scenes where it's shown to perform really well.

But clustered-deferred seems to perform better than clustered-forward for scenes with tens of thousands of lights.

 

The thing is - has there ever been a case where you will need 30 thousand lights in a scene?

There is a global-illumination technique where you use reflective shadow maps to generate thousands of virtual point lights from every regular light source, which could easily create 30K lights in a scene.

 

Plus,won't it get bottlenecked by generating shadow maps for all the lights

If you need to generate shadow-maps en masse, you could use imperfect shadow maps to render thousands of shadow-maps simultaneously.


Edited by Hodgman, 09 January 2013 - 12:13 AM.


#3 zeGouky - Members - Reputation: 216 - 2 Likes

Posted 09 January 2013 - 04:31 AM

One thing I like about clustered is that it becomes cheaper to handle transparent objects. With tiled deferred you have to build two light lists, one that uses the depth buffer for the light culling and one without, so you can have a massive overhead on the transparent pass. With clustered, one culling pass is enough.

 

But again, that depends on the light count and the scene (clustered is also a bit heavier in terms of memory, if I'm correct).

 

Also, at SIGGRAPH Asia there was a presentation about a 2.5D culling technique that you can find here: https://sites.google.com/site/takahiroharada/



#4 Chris_F - Members - Reputation: 2467 - 0 Likes

Posted 09 January 2013 - 06:10 AM

Lots of lights + forward shading = sign me up.



#5 mrheisenberg - Members - Reputation: 356 - 2 Likes

Posted 09 January 2013 - 07:27 AM

Here's the demo: http://www.cse.chalmers.se/~olaolss/get_file.php?filename=clustered_forward_demo.zip
If you don't trust the direct link, you can get it from the publication page: http://www.cse.chalmers.se/~olaolss/main_frame.php?contents=publication&id=tiled_clustered_forward_talk



#6 Krypt0n - Crossbones+ - Reputation: 2685 - 3 Likes

Posted 09 January 2013 - 11:09 AM

Deferred shading is really unhandy when it comes to anti-aliasing, and lighting transparent objects is not solved in that approach.

Forward shading is the way to go; I expect the next-generation consoles to go back to it. I use a similar approach in my phone engines: I have a view-space-aligned 3D grid (texture) with a 'count' and 'offset' value per voxel, which I use to index into a texture containing the light sources that affect that voxel. The grid creation is done every frame on the CPU. I don't have 30k lights, but I run with anti-aliasing, I use the same shader for solid and transparent objects, and it's very convenient to use; I can even sample the texture in the vertex shader to light particles cheaply.

 

One problem you still have is applying shadows/projectors. It's solvable by having an atlas and storing more data per light source (projection matrix, offsets, extents, etc.), but it adds quite a lot of overhead.



#7 Matias Goldberg - Crossbones+ - Reputation: 3723 - 4 Likes

Posted 09 January 2013 - 12:01 PM

Forward+ is the new rave.

 

It allows MSAA, transparency, and multiple BRDFs, and most applications end up being faster than tile-based deferred. The only caveat is that if you're vertex-shader bound (or CPU bound), that extra early-Z pass will hurt you. You can avoid it, but then you'll have to limit the amount of lights in the scene because you can't depth-cull them (though at least you can still cull them per tile). You'll also have to evaluate whether stream-out is viable to reuse processed vertices and save CPU & vertex-shader work (at the cost of memory & bandwidth).

 

Note that Forward+ (aka Clustered Forward, Light Indexed Deferred) is a very new topic and there's a lot of research coming up this year.

 

Must reads:

Light Indexed Deferred Rendering, Matt Pettineo, 2012
http://mynameismjp.wordpress.com/2012/03/31/light-indexed-deferred-rendering/

A 2.5D Culling for Forward+, Takahiro Harada (AMD), 2012

https://sites.google.com/site/takahiroharada/storage/2012SA_2.5DCulling.pdf?attredirects=0

Clustered Deferred and Forward Shading, Olsson, Billeter, Assarsson, 2012
http://www.cse.chalmers.se/~olaolss/main_frame.php?contents=publication&id=clustered_shading



#8 mrheisenberg - Members - Reputation: 356 - 2 Likes

Posted 09 January 2013 - 12:57 PM

The Z-prepass worries me. Does that mean I have to do the tessellation twice as well? (Tessellation already hits my FPS big time.)


Edited by mrheisenberg, 09 January 2013 - 01:00 PM.


#9 Krypt0n - Crossbones+ - Reputation: 2685 - 3 Likes

Posted 09 January 2013 - 02:13 PM

You can also try to sort front-to-back instead; if you are vertex bound, that might give you better results. Another approach is to use occluder objects: you get 90% of the culling you'd get from a Z-prepass, yet without the cost.

But tessellated geometry has another problem: with AA enabled you cover a lot of pixels only partially, which increases the pixel-shader cost a lot. Something like POM (parallax occlusion mapping) might scale much better.

#10 mrheisenberg - Members - Reputation: 356 - 3 Likes

Posted 09 January 2013 - 02:34 PM

How much VRAM do your G-buffers usually take up? GPU-Z tells me that with 8x MSAA mine takes around 350 MB just for position, color, normal, and specular buffers.



#11 Frenetic Pony - Members - Reputation: 1407 - 0 Likes

Posted 09 January 2013 - 03:32 PM

Deferred shading is really unhandy when it comes to anti-aliasing, and lighting transparent objects is not solved in that approach.

Forward shading is the way to go; I expect the next-generation consoles to go back to it. I use a similar approach in my phone engines: I have a view-space-aligned 3D grid (texture) with a 'count' and 'offset' value per voxel, which I use to index into a texture containing the light sources that affect that voxel. The grid creation is done every frame on the CPU. I don't have 30k lights, but I run with anti-aliasing, I use the same shader for solid and transparent objects, and it's very convenient to use; I can even sample the texture in the vertex shader to light particles cheaply.

One problem you still have is applying shadows/projectors. It's solvable by having an atlas and storing more data per light source (projection matrix, offsets, extents, etc.), but it adds quite a lot of overhead.

 

Many have solved transparency with deferred, Epic and Avalanche among them. Anti-aliasing is also doable. Multiple BRDFs are handled straightforwardly in deferred. You also have direct access to all those buffers should you need anything, and you don't have to worry about processing pixels you can't see. Most modern hardware, including the 4th-gen iPad and Tegra 4 from what I've heard, has enough bandwidth and memory to get some sort of deferred done, though if you're doing thousands and thousands of lights, mobile probably isn't your target platform anyway.

 

I'd rather make sure there's no unnecessary shading going on. Of course you can't do 8x MSAA with deferred, at least not cheaply, but you can do something like SMAA, which looks just as good and is cheaper in any case. I suppose it all depends on what you'd like to be doing. If you've got the time for it, and are on the right platform (new consoles, high-end PC), then I don't see any reason not to go deferred. If you don't have the time to solve all those problems, or some things I'm probably not even thinking of, then forward might be your solution. But calling out all the old problems with deferred isn't relevant, as they've been solved for the most part.



#12 Matias Goldberg - Crossbones+ - Reputation: 3723 - 0 Likes

Posted 09 January 2013 - 06:25 PM

Many have solved transparency with deferred, Epic and Avalanche among them. Anti-aliasing is also doable. Multiple BRDFs are handled straightforwardly in deferred. You also have direct access to all those buffers should you need anything, and you don't have to worry about processing pixels you can't see. Most modern hardware, including the 4th-gen iPad and Tegra 4 from what I've heard, has enough bandwidth and memory to get some sort of deferred done, though if you're doing thousands and thousands of lights, mobile probably isn't your target platform anyway.

I don't remember Avalanche using deferred shading in its titles. Which titles use it?

 

Handling transparency... that's a nice way of saying "solved". Switching to forward is not a "solution", and neither are light-accumulation approaches; those are workarounds. Anti-aliasing is doable, but at a gigantic cost. I'm talking about MSAA and CSAA (SSAA is always expensive), not about FXAA & Co., which are cheap tricks.

As for multiple BRDFs, they're not straightforward in deferred. You pay an extra cost in the MRT to store a material ID, and then you either use branching in your code and pray for high branch coherency (a low-frequency image) to get the best BRDFs (Cook-Torrance, Oren-Nayar, Phong, Blinn-Phong, Strauss, etc.) at decent speed, or resort to texture-array approaches (which produce very interesting/creative results that I love, but aren't optimal for those seeking photorealism).

 

So, no, I wouldn't call the old deferred problems as "solved".



#13 Hodgman - Moderators - Reputation: 31984 - 2 Likes

Posted 09 January 2013 - 07:49 PM

I don't see any reason not to go deferred

Forward vs Deferred arguments are silly and useless out of context, because different games are better suited to different pipelines. There is no one-pipeline-to-rule-them-all, and as a side-rant: any engine that lists "deferred shading" on its feature list is missing the point (an engine should give you the tools to build different pipelines, and a deferred rendering pipe should be in the engine samples/examples, not the core).

 

There are still many games shipping today that use "traditional forward" rendering, and almost every game is a hybrid, where some calculations are deferred and others aren't.
Choosing where to put calculations in your graphics pipeline is an optimization problem, which means it's unsolvable except in the context of your particular data.

 

e.g. on my last game, we calculated shadow data in screen-space for some objects (Deferred Shadow Maps), and also used deferred decals, then forward rendered everything, then calculated shadow data in screen-space for some other objects, then applied these 2nd shadow results to the forward-rendered lighting data to get the final lighting buffer.

That's not traditional forward or deferred rendering. Vanilla doesn't work for most games.

 

Note that Forward+ (aka Clustered Forward, Light Indexed Deferred) is a very new topic and there's a lot of research coming up this year.

The original version (light-indexed deferred) has actually been around for 5 years or so, and is even very easy to implement on DX9! However, DX11 has made these kinds of forward renderers easier and more efficient to implement, with fewer restrictions, so the idea is making a big comeback.


Edited by Hodgman, 09 January 2013 - 08:01 PM.


#14 Krypt0n - Crossbones+ - Reputation: 2685 - 1 Like

Posted 14 January 2013 - 07:47 AM

The reason a lot of games went deferred is that it's not possible on current consoles to go forward: dynamic branching etc. would just kill you, and you don't really get the benefits, as most games are not rendering at insane AA resolutions. That might change next gen; those consoles will probably be very much like PCs, where you don't worry about branching but you do want to support high AA resolutions without paying the cost of shading every sub-sample.

 

So the question of whether you go deferred or forward also depends very much on what your hardware has to offer (besides the question of what you're trying to achieve).



#15 Hodgman - Moderators - Reputation: 31984 - 5 Likes

Posted 14 January 2013 - 07:51 AM

the reason a lot of games went deferred is that it's not possible on current consoles to go forward.

Many current-gen console games are forward, and forward has stuck around because it's very hard to go deferred on current-gen consoles... The amount of bandwidth required kills you. Even 16-bit HDR (64bpp) is a huge burden on these consoles.



#16 Krypt0n - Crossbones+ - Reputation: 2685 - 1 Like

Posted 14 January 2013 - 03:49 PM

the reason a lot of games went deferred is that it's not possible on current consoles to go forward.

Many current-gen console games are forward, and forward has stuck around because it's very hard to go deferred on current-gen consoles... The amount of bandwidth required kills you. Even 16-bit HDR (64bpp) is a huge burden on these consoles.

The more advanced games get, the more likely they are to go deferred; the reason is that it's not possible to get that amount of light-surface interaction with forward rendering in a fast way. As you said, deferred would seem more demanding, yet it's the only way to go if you want flexibility.



#17 phantom - Moderators - Reputation: 7593 - 2 Likes

Posted 14 January 2013 - 05:17 PM

Not really; deferred might have solved some problems with regard to lights, but it brought with it a whole host of others: memory bandwidth, AA issues, problems integrating different BRDFs, transparency, and other issues that required various hoops to be jumped through.

Going forward, hybrid solutions are likely to become the norm, such as AMD's Leo demo, which mixes deferred aspects with a forward rendering pass for the real geometry rendering and can get around pretty much all of those problems (while bringing its own compromises).

The point is: all rendering has trade-offs, and you'll find plenty of "advanced" engines using various rendering methods. Hell, the last game I worked on was all forward-lit using baked lighting and SH light probes, because that was the only way we were going to hit 60fps on the consoles.

Edit: also, a good and advanced engine won't force you down one rendering path; it will let the game code decide (the engine powering the aforementioned game can support deferred as well as forward, at least...).

Edited by phantom, 14 January 2013 - 05:41 PM.


#18 Hodgman - Moderators - Reputation: 31984 - 2 Likes

Posted 14 January 2013 - 05:59 PM

The more advanced games get, the more likely they are to go deferred; the reason is that it's not possible to get that amount of light-surface interaction with forward rendering in a fast way. As you said, deferred would seem more demanding, yet it's the only way to go if you want flexibility.

 
What's 'advanced' mean? Huge numbers of dynamic lights? You can do just as many lights with forward as long as you've got a decent way of solving the classic issue of determining which objects are affected by which lights. Actually, the whole point of tiled-deferred was that it was trying to reduce lighting bandwidth back down to what we had with forward rendering, while keeping the "which light for which object" calculations in screen-space on the GPU.
 
If your environment is static, then you can bake all the lighting (and probes) and it'll be a ton faster than any other approach!
Most console games are still using static, baked lighting for most of the scene, which reduces the need for huge dynamic light counts.
 
Another issue with deferred is that it's very hard to do at full 720p on the 360. The 360 only has 10MiB of EDRAM, where your frame-buffers have to live. Let's say you optimize your G-buffer layout so you've got hardware depth/stencil and two 8888 targets -- that's 3 targets * 4 bytes per pixel * 1280*720, or ~10.5MiB -- that's over the limit and won't fit.

n.b. these numbers are the same as depth/stencil + FP16_16_16_16, which also makes forward rendering or deferred light accumulation difficult in HDR...

Sure, Crysis, Battlefield 3 and Killzone are deferred, but there's probably many more games that use forward rendering, even "AAA" games, like Gears of War (and most other Unreal games), L4D2 (and other Source games), God of War, etc... Then there's the games that have gone deferred-lighting (LPP) as a half-way choice, such as GTA4 (or many rockstar games), Space Marine, etc...
 
Regarding materials, forward is unarguably more flexible -- each object can have unique BRDFs, unique lighting models, and any number of lights. It's just inefficient if you've got lots of small objects (due to shader swapping overhead and bad quad efficiency), or lots of big objects (due to the "which light for which object" calculations being done per-object).
Actually, you mentioned dynamic branches before, but forward rendering doesn't need any; all branches can be resolved at compile time. On the other hand, implementing multiple BRDFs in a deferred renderer requires some form of branching (or look-up tables, which are just as bad).
 
Also, tiled-deferred and tiled-forward are implementable on current-gen hardware (even DX9 PC if you're careful), so there's no reason we won't see them soon.

As usual, there's no single objectively better pipeline; different games have different requirements, which are more efficiently met with one pipeline or another...


Edited by Hodgman, 14 January 2013 - 09:12 PM.


#19 RobMaddison - Members - Reputation: 778 - 1 Like

Posted 15 January 2013 - 01:11 AM

A little off-topic but still on topic: does anyone have any links to good tutorials on deferred vs forward rendering? I've read a fair bit about the details of deferred, but would rather get a good grounding before looking into it further. I couldn't find any decent sites explaining 'why deferred' beyond 'you can have more lights'.

Apologies for borrowing this thread quickly...

#20 Krypt0n - Crossbones+ - Reputation: 2685 - 1 Like

Posted 15 January 2013 - 05:23 AM

Not really; deferred might have solved some problems with regard to lights, but it brought with it a whole host of others: memory bandwidth, AA issues, problems integrating different BRDFs, transparency, and other issues that required various hoops to be jumped through.

Exactly. One would think: no MSAA (for shading), no solution for alpha blending, problems getting different BRDFs running, high memory storage and bandwidth costs; why on earth would anyone do that?

Simply because current-gen console hardware does not offer another way to create the worlds that players, designers and artists expect, where you have tons of dynamic lights and even particles light the close-by geometry.





