

Member Since 14 Mar 2012
Last Active Apr 13 2015 01:40 AM (currently offline)

Posts I've Made

In Topic: Difference Tiled and Clustered shading

13 December 2013 - 02:59 PM

One problem is storing the light indices, since the array is not always the same size, and sizing for the theoretical maximum would take a prohibitive amount of memory.

So a worst case must be used, but if we say we allow 1024 point lights and 1024 spot lights, is a pool of 4*1024*1024 a good choice?


The way we implemented it is to allocate a 'reasonable' buffer and then to grow it when (if) needed.
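For illustration, here is a minimal CPU-side sketch of that grow-on-demand strategy (all names are hypothetical; a GPU implementation would reallocate the actual index buffer rather than a std::vector):

```cpp
#include <cstddef>
#include <cstdint>
#include <vector>

// Hypothetical grow-on-demand pool for per-tile/cluster light indices.
// Start with a 'reasonable' size and double it whenever a frame overflows,
// instead of allocating for the theoretical worst case up front.
struct LightIndexPool {
    std::vector<uint32_t> indices;
    size_t used = 0;

    explicit LightIndexPool(size_t initial) : indices(initial) {}

    // Returns the offset of a block of 'count' indices, growing if needed.
    size_t allocate(size_t count) {
        if (used + count > indices.size()) {
            size_t newSize = indices.size();
            while (used + count > newSize) newSize *= 2;
            indices.resize(newSize); // on the GPU: reallocate the buffer here
        }
        size_t offset = used;
        used += count;
        return offset;
    }

    void reset() { used = 0; } // call at the start of each frame
};
```

The doubling keeps reallocations rare, so after a few frames the buffer settles at whatever the scene actually needs.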

I think Emil covered how they deal with this in our talk from SIGGRAPH this year:

'Practical Clustered Deferred and Forward Shading'


This talk should provide a few insights into both the general gist of the algorithm and the practical implementation at Avalanche.


Hope it helps.


In Topic: Is Clustered Forward Shading worth implementing?

20 January 2013 - 02:14 PM

Note that Forward+ (aka Clustered Forward, Light Indexed Deferred) is a very new topic and there's a lot of research coming up this year.

Now, just because I'd hate for this to turn into another deferred lighting / shading terminology kerfuffle:


Tiled Forward <=> Forward+: these use 2D tiling (same as Tiled Deferred), with an (optional) pre-z pass plus a separate geometry pass for shading.

Light Indexed Deferred: builds the lists per pixel, which can be viewed as a 1x1 tile; it is then really the same as Tiled Forward. The practical difference is pretty big, though...

Clustered Forward: performs the tiling in 3D (or higher); otherwise as above.

Tiled/Clustered Deferred Shading: tile like their forward counterparts, but start with a G-buffer pass and end with a deferred shading pass.
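To make the 2D vs. 3D distinction concrete, here is a hypothetical cluster-key computation; the tileX/tileY part alone corresponds to 2D tiling, while the extra depth slice is what makes it 'clustered'. The exponential slicing formula is a common choice for illustration, not something prescribed by any particular paper:

```cpp
#include <cmath>
#include <cstdint>

// Hypothetical cluster key: map a fragment at pixel (px, py) with view-space
// depth viewZ to a flat cluster index. Dropping the 'slice' term gives plain
// 2D tiling; adding it gives 3D clustering. Depth slices are exponential so
// clusters stay roughly cube-shaped in view space.
uint32_t clusterKey(float px, float py, float viewZ,
                    float zNear, float zFar,
                    int tileSize, int tilesX, int tilesY, int numSlices)
{
    int tileX = (int)(px / tileSize);
    int tileY = (int)(py / tileSize);
    // slice = numSlices * log(z / near) / log(far / near)
    int slice = (int)(numSlices * std::log(viewZ / zNear)
                                / std::log(zFar / zNear));
    return (uint32_t)(tileX + tilesX * tileY + tilesX * tilesY * slice);
}
```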


Hope this clears up, and/or prevents, some confusion.

In Topic: Revival of Forward Rending?

02 April 2012 - 05:51 PM

He didn't want to do a reduction because of the extra shared memory pressure that it would add (which makes sense, considering he was already using quite a bit of shared memory for the light list + list of MSAA pixels), but it might be worth it if you're just outputting a light list for forward rendering.

In my implementation I always build the grid in a separate pass. It costs a fairly trivial amount of extra bandwidth, removes the shared memory limitations, and is inherently more flexible. I implemented Lauritzen's single-kernel version too, more or less a straight port but with a parallel depth reduction (which was significant, at least on a GTX 280); it did not perform as well, but was only marginally slower.

I wouldn't expect very big gains since the light/tile intersection tends to be a small portion of the frame time, but it could definitely be an improvement.

Well, since you are brute forcing (lights vs tiles), you just need to ramp the light count up and, voila, it'll become an issue sooner or later. This is also highly dependent on (light) overdraw, so I think the portion of frame time can vary quite a bit. Sorry to say, I can't run your demo, because I've only got access to a Windows XP machine at the moment, so I can't offer any comments based on how your setup looks.
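As a sketch of why ramping the light count up must eventually hurt: the brute-force assignment is a straight double loop, O(lights x tiles). This toy version (hypothetical names) tests only depth-range overlap; a real implementation would also test the tile's side planes:

```cpp
#include <cstddef>
#include <vector>

struct Light { float minZ, maxZ; }; // light's depth extent
struct Tile  { float minZ, maxZ; }; // tile's min/max depth from the reduction

// Brute-force light/tile assignment: every light is tested against every
// tile, so the cost grows as O(numLights * numTiles). The per-pair test
// here is just a 1D depth-range overlap, for illustration only.
std::vector<std::vector<int>> assignLights(const std::vector<Tile>& tiles,
                                           const std::vector<Light>& lights)
{
    std::vector<std::vector<int>> lists(tiles.size());
    for (size_t t = 0; t < tiles.size(); ++t)
        for (size_t l = 0; l < lights.size(); ++l)
            if (lights[l].minZ <= tiles[t].maxZ &&
                lights[l].maxZ >= tiles[t].minZ)
                lists[t].push_back((int)l);
    return lists;
}
```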

Everybody always just does point lights in their demos.

Yes, guilty as charged... damn those paper deadlines :)

In Topic: Revival of Forward Rending?

02 April 2012 - 03:20 AM

It would be the same as a normal forward lighting system; render transparent objects back to front. You'd just get early rejection for objects which are behind the laid-down z-pass.

Just note that the restriction applies to lights as well: when you build the grid you can only reject lights that are entirely behind the scene (i.e. only use the max depth). Obviously one could elaborate on this with a min depth buffer, but before you know it we'll have implemented depth peeling :)

Otherwise, I think the fact that you can reuse the entire pipeline, including the shader functions that access the grid, is one of the really strong features of the tiled deferred/forward combo. It is easy to do tiled deferred for opaque objects and then add tiled forward for transparents, if that is what works. Moving between tiled deferred and forward shading is very easy, and that has got to be good for compatibility/scaling/adapting to platforms.

In Topic: Revival of Forward Rending?

02 April 2012 - 01:55 AM

...If you didn't do this you could build a list of lights just using the near/far planes of the camera, but I would suspect that the larger light lists + lack of good early z cull would cause performance to go right down the drain.

I did look at that in my paper 'Tiled Shading', which someone posted a link to above. The short answer is that, no, it indeed does not end too well.

On the other hand, I imagine it could be a useful technique simply for managing lights in an environment without too many lights in any given location and with limited views (e.g. an RTS camera or so), since there the limited depth span already makes the depth range optimization less effective.

I've got an OpenGL demo too, which builds the grids entirely on the CPU (so it's not very high performance; it's just there to demo the techniques).

Btw, one thing I noticed that could affect your results is that you use atomics to reduce the min/max depth. Shared memory atomics on NVIDIA hardware serialize on conflicts, so using them to perform a reduction this way is less efficient than just using a single thread in the CTA to do the work (at least then you don't have to run the conflict detection steps involved). This step therefore gets a lot faster with a SIMD parallel reduction, which is fairly straightforward. I don't have time to dig out a good link, sorry, so I'll just post a CUDA variant I've got handy. It is written for 32 threads (a warp) but scales up with appropriate barrier syncs; sdata is a pointer to a 32-element shared memory buffer (is that local memory in compute shader lingo? Anyway, the on-chip variety).

__device__ uint32_t warpReduce(uint32_t data, uint32_t index, volatile uint32_t *sdata)
{
  // Each of the 32 threads in the warp writes its value to shared memory.
  unsigned int tid = index;
  sdata[tid] = data;
  // Tree reduction within one warp; lock-step execution (on this hardware)
  // makes explicit syncs unnecessary. Swap += for min/max to reduce depth
  // bounds instead of a sum.
  if (tid < 16)
  {
    sdata[tid] += sdata[tid + 16];
    sdata[tid] += sdata[tid +  8];
    sdata[tid] += sdata[tid +  4];
    sdata[tid] += sdata[tid +  2];
    sdata[tid] += sdata[tid +  1];
  }
  return sdata[0];
}

The same goes for the list building, where a prefix sum could be used; here it'd depend on the rate of collisions. Anyway, I'm thinking this might be a difference between NVIDIA and AMD (where I don't have a clue how atomics are implemented).
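For reference, the counts-then-offsets idea behind such a prefix sum looks like this on the CPU (a hypothetical sketch): each tile first counts its lights, then an exclusive prefix sum over the counts yields each tile's write offset into one packed index list, with no atomic appends needed.

```cpp
#include <cstddef>
#include <vector>

// Exclusive prefix sum over per-tile light counts. offsets[i] is where
// tile i starts writing its light indices in one packed list; the total
// (sum of all counts) is the packed list's required size.
std::vector<size_t> exclusivePrefixSum(const std::vector<size_t>& counts)
{
    std::vector<size_t> offsets(counts.size());
    size_t running = 0;
    for (size_t i = 0; i < counts.size(); ++i) {
        offsets[i] = running;
        running += counts[i];
    }
    return offsets;
}
```

On the GPU the same scan is done in parallel (e.g. a work-efficient Blelloch scan), which is what removes the atomic contention from the append step.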

As a side note, it's much more efficient to work out the screen space bounds of each light before running the per-tile checks; it saves constructing identical planes for tens of tiles, etc.
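A hypothetical sketch of that precomputation: derive a conservative screen-space rectangle per light once, then only visit the tiles it overlaps. The projection here is deliberately simple and conservative (pad the projected center by the projected radius), not an exact sphere bound:

```cpp
#include <algorithm>
#include <cmath>

struct Rect { int x0, y0, x1, y1; }; // inclusive pixel bounds

// Conservative screen bounds for a view-space point light at (cx, cy, cz)
// with radius r, camera looking down -z. focalX/focalY are the projection
// scales in pixels. Computing this once per light avoids redoing the same
// work for every tile the light touches.
Rect lightScreenBounds(float cx, float cy, float cz, float r,
                       float focalX, float focalY,
                       int width, int height)
{
    float z = -cz; // distance in front of the camera
    float sx = width  * 0.5f + cx / z * focalX;
    float sy = height * 0.5f + cy / z * focalY;
    // Use the light's nearest depth so the padded radius is conservative.
    float rx = r / std::max(z - r, 0.001f) * focalX;
    float ry = r / std::max(z - r, 0.001f) * focalY;
    Rect b;
    b.x0 = std::max(0, (int)std::floor(sx - rx));
    b.y0 = std::max(0, (int)std::floor(sy - ry));
    b.x1 = std::min(width  - 1, (int)std::ceil(sx + rx));
    b.y1 = std::min(height - 1, (int)std::ceil(sy + ry));
    return b;
}
```

The per-tile loop then only iterates tiles inside the rectangle, and any finer per-tile test (planes, depth range) runs on that much smaller set.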

Anyway, fun to see some activity on this topic! And I'm surprised at the good results for tiled forward.