Revival of Forward Rendering?

Yeah, given that ALU power continues to increase while bandwidth doesn't, the removal of that constraint is a very nice factor for LI.

I tried to spin the app up in AMD's GPU PerfStudio last night, but it became a crash fest :( I wanted to see what kind of utilisation the various stages were getting, ALU- and bandwidth-wise.
How easy would it be to implement transparency in an LI renderer? The z-prepass suffers from the same issue as a geometry pass in a deferred shading renderer.
It would be the same as a normal forward lighting system; render transparent objects back to front. You'd just get early rejection for objects which are behind the depth laid down in the z-prepass.

The other option is an order-independent transparency system, but those still seem pretty heavy on the hardware.

[quote]It would be the same as a normal forward lighting system; render transparent objects back to front. You'd just get early rejection for objects which are behind the depth laid down in the z-prepass.[/quote]


Just note that the restriction applies to lights as well, so when you build the grid you can only reject lights entirely behind the scene (i.e. only use the max depth; a sketch of this test follows below). Obviously one could elaborate on this with a min depth buffer, but before you know it we'll have implemented depth peeling :)

Otherwise I think the fact that you can reuse the entire pipeline, including the shader functions that access the grid, is one of the really strong features of the tiled deferred/forward combo. It's easy to do tiled deferred for opaque objects and then add tiled forward for transparents, if that is what works. Being able to move between tiled deferred and forward shading so easily has got to be good for compatibility/scaling/adapting to platforms.
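
For illustration, a minimal sketch of that max-depth-only rejection test, in CUDA-flavoured code; the function and parameter names are hypothetical, and it assumes view-space depth grows with distance:

[code]
// Hypothetical sketch: with transparent geometry in the tile there is no
// usable min depth, so a light survives culling unless its entire sphere
// of influence lies behind the farthest depth in the tile.
__device__ bool lightMayAffectTile(float tileMaxViewZ, // tile's max view-space depth
                                   float lightViewZ,   // light centre, view space
                                   float lightRadius)
{
    return (lightViewZ - lightRadius) <= tileMaxViewZ;
}
[/code]

With an opaque-only min depth you could additionally reject lights entirely in front of the nearest surface, which is exactly the elaboration (and the slippery slope towards depth peeling) mentioned above.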

But then you lose the benefit of easily being able to use different BRDFs, unless you resort to storing a material ID in your G-buffer and branching in your lighting shader.

Btw, one thing I noticed that may affect your results is that you make use of atomics to reduce the min/max depth. Shared memory atomics on NVIDIA hardware serialize on conflicts, so using them to perform a reduction this way is less efficient than just using a single thread in the CTA to do the work (at least then you don't have to run the conflict-detection steps involved). So this step gets a lot faster with a SIMD parallel reduction, which is fairly straightforward. I don't have time to dig out a good link, sorry, so I'll just post a CUDA variant I've got handy, for 32 threads (a warp), but it scales up with appropriate barrier syncs. sdata is a pointer to a 32-element shared memory buffer (is that local memory in compute shader lingo? Anyway, the on-chip variety).
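
The snippet itself didn't survive into this archive; a reconstruction of the kind of warp-wide shared-memory reduction being described (names hypothetical, not the poster's original code) might look like:

[code]
// Sketch of a 32-thread (one warp) parallel min reduction over a
// 32-element shared-memory buffer; each step halves the active threads.
// 'volatile' plus warp-synchronous execution stood in for barriers on
// GPUs of this era; beyond 32 threads you need __syncthreads() between
// steps (and post-Volta hardware wants __syncwarp() even within a warp).
__device__ float warpReduceMinDepth(volatile float* sdata, unsigned int tid)
{
    for (unsigned int s = 16; s > 0; s >>= 1)
    {
        if (tid < s)
            sdata[tid] = fminf(sdata[tid], sdata[tid + s]);
    }
    return sdata[0]; // the minimum of all 32 values
}
[/code]

The max-depth reduction is the same shape with fmaxf swapped in.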


Yeah, most of the code for the light/tile intersection was copied straight from Andrew Lauritzen's sample. He didn't want to do a reduction because of the extra shared memory pressure that it would add (which makes sense, considering he was already using quite a bit of shared memory for the light list + list of MSAA pixels), but it might be worth it if you're just outputting a light list for forward rendering. I wouldn't expect very big gains since the light/tile intersection tends to be a small portion of the frame time, but it could definitely be an improvement. Maybe if I have some more free time soon (not likely) I'll give it a shot. I would also definitely like to play around with more optimized intersection tests, especially for spotlights. Everybody always just does point lights in their demos. :P

[quote]He didn't want to do a reduction because of the extra shared memory pressure that it would add (which makes sense, considering he was already using quite a bit of shared memory for the light list + list of MSAA pixels), but it might be worth it if you're just outputting a light list for forward rendering.[/quote]

In my implementation I always build the grid in a separate pass. It's a fairly trivial amount of extra bandwidth, you remove the shared memory limitations, and it's inherently more flexible. I implemented Lauritzen's single-kernel version too, more or less a straight port but with a parallel depth reduction (which was significant, at least on a GTX 280); it did not perform as well, though it was only fairly marginally slower.
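
As a rough picture of that separate-pass idea, here is a hypothetical CUDA kernel, one thread block per screen tile; the names, the fixed-size list layout, and the omission of the tile's side-plane tests are all simplifications for illustration, not the poster's code:

[code]
#define TILE_SIZE       16
#define MAX_LIGHTS_TILE 64

struct PointLight { float3 posView; float radius; };

__global__ void buildLightGrid(const float* viewZ, int width, int height,
                               const PointLight* lights, int numLights,
                               int* tileLightCount, int* tileLightList)
{
    __shared__ int sMaxZBits; // view-space depth as ordered int bits
    __shared__ int sCount;

    int px   = blockIdx.x * TILE_SIZE + threadIdx.x;
    int py   = blockIdx.y * TILE_SIZE + threadIdx.y;
    int tid  = threadIdx.y * TILE_SIZE + threadIdx.x;
    int tile = blockIdx.y * gridDim.x + blockIdx.x;

    if (tid == 0) { sMaxZBits = 0; sCount = 0; }
    __syncthreads();

    // Tile max depth via shared atomics for brevity (reinterpreting
    // non-negative floats as ints preserves ordering); the parallel
    // reduction discussed above is the faster route.
    if (px < width && py < height)
        atomicMax(&sMaxZBits, __float_as_int(viewZ[py * width + px]));
    __syncthreads();

    float tileMaxZ = __int_as_float(sMaxZBits);

    // Each thread tests a strided subset of the lights; only the
    // behind-max-depth rejection is shown here.
    for (int i = tid; i < numLights; i += TILE_SIZE * TILE_SIZE)
    {
        if (lights[i].posView.z - lights[i].radius <= tileMaxZ)
        {
            int slot = atomicAdd(&sCount, 1);
            if (slot < MAX_LIGHTS_TILE)
                tileLightList[tile * MAX_LIGHTS_TILE + slot] = i;
        }
    }
    __syncthreads();

    if (tid == 0)
        tileLightCount[tile] = min(sCount, MAX_LIGHTS_TILE);
}
[/code]

A later forward or deferred shading pass then just reads each tile's count and list, which is what makes the two approaches so easy to mix.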


[quote]I wouldn't expect very big gains since the light/tile intersection tends to be a small portion of the frame time, but it could definitely be an improvement.[/quote]

Well, since you are brute-forcing it (lights vs tiles) you just need to ramp the light count up and voilà, it'll become an issue sooner or later. This is also highly (light-)overdraw dependent, so I think the portion of frame time can vary quite a bit. Sorry to say, I can't run your demo, because I've only got access to a Windows XP machine at the moment, so I can't offer any comments based on how your setup looks.


[quote]Everybody always just does point lights in their demos. :P[/quote]

Yes, guilty as charged... damn those paper deadlines :)

[quote]But then you lose the benefit of easily being able to use different BRDFs, unless you resort to storing a material ID in your G-buffer and branching in your lighting shader.[/quote]


Like I said, why would we particularly care with the upcoming consoles? I, personally, would rather throw ultra-high-poly meshes, render individual leaves, and use a hundred shadow-mapped lights with ultra-high radii than store however many BRDFs someone might go crazy with. As far as I'm concerned, the former will require less art asset creation AND make level designers and modelers happier.

[quote]
[quote name='Chris_F' timestamp='1333359199' post='4927417']But then you lose the benefit of easily being able to use different BRDFs...[/quote]
I, personally, would rather throw ultra-high-poly meshes, render individual leaves, and use a hundred shadow-mapped lights with ultra-high radii than store however many BRDFs someone might go crazy with.
[/quote]

The problem is that they still won't look like leaves, though. Even super-dumb N dot E hacks do an absolutely insane amount to 'sell' the substance of a virtual material. Compare to how fake everything looks when you remove that (even just a stupid change in lighting!) from the equation. (hurp durp that was a punne, or a play on words)
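
For concreteness, one flavour of that "super-dumb N dot E" trick is a simple rim term; this sketch and its rimPower parameter are purely illustrative, not anything from the thread:

[code]
// Hypothetical rim-lighting hack: brighten grazing angles so the
// silhouette reads as a material rather than a flat cut-out.
// n = unit surface normal, e = unit vector from surface point to eye.
__device__ float rimTerm(float3 n, float3 e, float rimPower)
{
    float ndote = n.x * e.x + n.y * e.y + n.z * e.z; // cos of view angle
    float rim   = 1.0f - fmaxf(ndote, 0.0f);         // ~1 at silhouettes
    return powf(rim, rimPower);                      // tighten the falloff
}
[/code]

Even just adding something like this to an otherwise flat diffuse response gives a surface a view-dependent reaction, which is most of what "sells" the material.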

Lack of anisotropic shading on hair will also drag the overall perceived realism down in dramatic, screaming fashion.

In short, I think an extensive library of believable materials is more important than most would think.

[quote]In short, I think an extensive library of believable materials is more important than most would think.[/quote]


And I think many artists would agree... of course, at that point you would have to stop them going crazy with materials and causing a draw-call explosion, but with a decent stick you can train that out of them ;)
