
#1 Hodgman

Posted 29 November 2012 - 05:44 PM

Thanks for the replies, I remember using that WEKA software back in university!

@jwezorek, I ended up deciding it was a premature optimisation and basically limiting 'clusters' to the size of a cell.

In terms of the renderer this was going to be used with: the CPU collects a list of all lights that overlap each tile in screen space, and these lists are broken down into groups of 8 (or fewer) light IDs per tile, per pass. The GPU then takes each pass's IDs (up to 8) and checks whether those lights actually affect its tile (this time using the tile's min/max depth, which the CPU didn't know), and outputs a compacted list of IDs with the non-visible lights removed. When lighting each pass, the tile is discarded if its compacted list is empty; otherwise it loops through the (possibly shortened) list and does the deferred shading logic for up to 8 lights at once.
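If it helps to picture it, here's a rough single-tile sketch of the two phases in C++. All the type names, the sphere/rect overlap test, and the depth test are just mine for illustration (the post doesn't give them), and in the real renderer the second half runs on the GPU, not the CPU:

```cpp
#include <algorithm>
#include <cstdint>
#include <vector>

// Hypothetical types for illustration -- not from the original post.
struct Light    { float x, y, radius, minZ, maxZ; };
struct TileRect { float x0, y0, x1, y1; };

static const size_t kLightsPerPass = 8;

// CPU side: collect the lights whose screen-space bounds overlap this tile,
// then break that list into groups of up to 8 light IDs, one group per pass.
std::vector<std::vector<uint32_t>> BinLightsIntoPasses(
    const std::vector<Light>& lights, const TileRect& tile)
{
    std::vector<uint32_t> tileList;
    for (uint32_t id = 0; id < lights.size(); ++id)
    {
        const Light& l = lights[id];
        bool overlaps = l.x + l.radius >= tile.x0 && l.x - l.radius <= tile.x1 &&
                        l.y + l.radius >= tile.y0 && l.y - l.radius <= tile.y1;
        if (overlaps)
            tileList.push_back(id);
    }
    std::vector<std::vector<uint32_t>> passes;
    for (size_t i = 0; i < tileList.size(); i += kLightsPerPass)
    {
        size_t end = std::min(tileList.size(), i + kLightsPerPass);
        passes.emplace_back(tileList.begin() + i, tileList.begin() + end);
    }
    return passes;
}

// GPU side (sketched here on the CPU): using the tile's min/max depth, which
// the CPU didn't know, reject lights that can't touch the tile and compact
// the survivors to the front. An empty result lets the shading pass discard
// the tile entirely; otherwise it shades the (possibly shortened) list.
std::vector<uint32_t> CullAndCompact(const std::vector<uint32_t>& passIds,
                                     const std::vector<Light>& lights,
                                     float tileMinZ, float tileMaxZ)
{
    std::vector<uint32_t> compacted;
    for (uint32_t id : passIds)
        if (lights[id].maxZ >= tileMinZ && lights[id].minZ <= tileMaxZ)
            compacted.push_back(id);  // non-visible lights removed
    return compacted;
}
```

The point of the compaction step is that the min/max depth test is much tighter than the CPU's 2D screen-space test, so many passes end up empty and get skipped before any shading work happens.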
