
Frenetic Pony

Member Since 30 Oct 2011

Posts I've Made

In Topic: Occlusion Culling - Combine OctTree with CHC algorithm?

17 December 2014 - 09:19 PM

Since the game (or whatever it is) sounds highly dynamic, hand-placed occlusion planes don't seem like an option. Besides, avoiding developer overhead is always a good idea; manually setting up occlusion is just more work.


Depth occlusion, on the other hand, is a very useful tool, and is used by plenty of shipping engines. There are also more complex options; DICE was helpful enough to publish this: http://www.slideshare.net/DICEStudio/culling-the-battlefield-data-oriented-design-in-practice and their requirements sound similar to yours in some ways.
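To make the depth-occlusion idea concrete, here's a minimal software sketch: rasterize (or downsample) your occluders into a coarse depth buffer, then test each occludee's screen rect against it. All names, the resolution, and the bigger-is-farther depth convention are my assumptions, not anything from the DICE talk:

```cpp
#include <algorithm>
#include <cassert>
#include <vector>

// Coarse software depth buffer. Convention here: larger depth = farther
// (a reversed-Z setup would flip the comparison).
struct DepthBuffer {
    int width, height;
    std::vector<float> depth; // width * height occluder depths

    // An occludee is hidden only if, over its whole screen rect, the stored
    // occluder depth is nearer than the occludee's nearest point (nearestZ).
    bool isOccluded(int x0, int y0, int x1, int y1, float nearestZ) const {
        x0 = std::max(x0, 0); y0 = std::max(y0, 0);
        x1 = std::min(x1, width - 1); y1 = std::min(y1, height - 1);
        for (int y = y0; y <= y1; ++y)
            for (int x = x0; x <= x1; ++x)
                if (depth[y * width + x] >= nearestZ)
                    return false; // occluder is farther here: may be visible
        return true;
    }
};
```

Real implementations (like the one in the DICE talk) vectorize this and use conservative min/max depths per tile, but the visibility test itself is the same comparison.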


As for culling point light shadow maps, draw calls are often the bottleneck there, along with polygon count. "Efficient Virtual Shadow Maps for Many Lights" extends clustered light culling, the current hotness for realtime lighting optimization anyway, to culling for shadow maps: http://www.cse.chalmers.se/~uffe/ClusteredWithShadows.pdf Unfortunately the extension has overhead of its own, so you'd have to consider just how many shadow maps you want active at once before adopting it.
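Before anything as elaborate as the clustered approach, the cheapest first cut on those draw calls is simply rejecting casters whose bounds can't intersect the light's range at all. A sketch (struct and function names are hypothetical):

```cpp
#include <cassert>

// Bounding sphere: center plus radius.
struct Sphere { float x, y, z, r; };

// A caster can only affect a point light's shadow map if its bounding
// sphere overlaps the light's sphere of influence.
bool castsInto(const Sphere& caster, const Sphere& lightBounds) {
    float dx = caster.x - lightBounds.x;
    float dy = caster.y - lightBounds.y;
    float dz = caster.z - lightBounds.z;
    float sum = caster.r + lightBounds.r;
    return dx * dx + dy * dy + dz * dz <= sum * sum;
}
```

For cube shadow maps you'd follow this with a per-face frustum test, but this one comparison already removes most of the scene for small lights.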

In Topic: Radiance Question

14 December 2014 - 01:33 AM

Radiance is "the flux of radiation emitted per unit solid angle in a given direction by a unit area of a source."


A source's emitted photons don't change with distance. What falls off is the flux arriving per unit area at the receiver, which is called irradiance. Assuming a point source, so it's nice and easy, it's the familiar inverse square: E = I/D^2.
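A trivial sketch of that falloff (function name and units are mine): intensity stays fixed at the source, while the irradiance you receive drops with the square of distance, so doubling the distance quarters it.

```cpp
#include <cassert>

// Irradiance E from an isotropic point source of intensity I at distance d:
// E = I / d^2. The source emits the same either way; only the flux per
// unit area at the receiver falls off.
double irradiance(double intensity, double distance) {
    return intensity / (distance * distance);
}
```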


Be careful with terms like radiance and irradiance. Inverse square is still correct, but the terms often get tangled up, which confuses a lot of people. Radiance and irradiance aren't exactly self-evident terms, and then there's radiosity and so on, and different fields use different terms for the same thing. It would be nice if there were a single set of agreed-upon, self-evident terms, but there isn't.

In Topic: automatic light probes positioning

10 November 2014 - 05:50 PM

The initial voxel cone tracing work required realtime generation of sparse voxel octrees, so take a look at that material for (hopefully) much faster octree generation: https://www.google.com/webhp?sourceid=chrome-instant&ion=1&espv=2&es_th=1&ie=UTF-8#q=sparse%20voxel%20octree


Sparse octrees come close to suiting light probe placement, though not perfectly. Some hacks would probably get you there, or at least give you an idea for speeding up AMD's technique. You could also LOD light probes out to the larger voxel sizes as the player gets farther away, a kind of cascaded detail for light probes. You'd probably have to be careful about how you blend in new, smaller-voxel/higher-detail probes, though, to avoid light popping.
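One way the cascaded idea could look: pick an octree level from camera distance, and carry a fractional blend toward the next coarser level so fine probes fade in rather than pop. Everything here (the log2 scheme, the `baseDistance` tuning constant, the names) is a hypothetical sketch, not AMD's technique:

```cpp
#include <algorithm>
#include <cassert>
#include <cmath>

// Probe LOD selection: coarser levels = bigger voxels, used farther away.
struct ProbeLod {
    int level;   // octree level to sample probes from
    float blend; // 0 = fully this level, rising toward 1 = fade to level+1
};

// Level doubles in voxel size each step, so distance maps to level via log2.
// 'baseDistance' is the (assumed) range covered by the finest level 0.
ProbeLod selectProbeLod(float distance, float baseDistance) {
    float t = std::log2(std::max(distance / baseDistance, 1.0f));
    ProbeLod lod;
    lod.level = static_cast<int>(t);
    lod.blend = t - static_cast<float>(lod.level);
    return lod;
}
```

The `blend` value is what prevents the popping mentioned above: interpolate between the two adjacent levels' probes with it instead of switching hard.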

In Topic: Rendering clothes

07 November 2014 - 03:41 AM

I would go with your first idea and split your mesh up into manageable chunks:


Then not only are you not drawing your player twice, you're also ensuring no skin comes through the clothing. I'd imagine your base player mesh would be naked; then you can cater for someone wearing jeans with no top, if your application requires that kind of thing.

This is what I'm planning to do for my character


Variations on this are how pretty much everyone does it, so yes, I'd recommend it.
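The common variation can be sketched as a coverage mask: each clothing item records which body chunks it fully covers, and covered chunks are simply never drawn, so skin can't poke through. The chunk layout and names below are a made-up example:

```cpp
#include <cassert>
#include <cstdint>

// Body mesh split into chunks, one bit each (hypothetical layout).
enum BodyChunk : uint32_t {
    Torso     = 1u << 0,
    UpperArms = 1u << 1,
    Legs      = 1u << 2,
    Feet      = 1u << 3,
};

// Draw only the body chunks no equipped clothing item covers.
uint32_t visibleChunks(uint32_t allChunks, uint32_t coveredByClothing) {
    return allChunks & ~coveredByClothing;
}
```

Equipping a shirt would OR `Torso | UpperArms` into the covered mask, jeans would OR in `Legs`, and the renderer just skips the masked-out chunks.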

In Topic: Help with GPU Pro 5 Hi-Z Screen Space Reflections

21 September 2014 - 04:39 PM

I bought the book thinking I would get access to the source for this, damn it. Great work going on in here. Suggestion: since this technique is having to be reverse engineered anyway, why not think about crossing it with ideas from Deep G-Buffers for fast screen-space GI and AO: http://graphics.cs.williams.edu/papers/DeepGBuffer14/ . Morgan's paper is brilliant, and he's also provided base code for the system. Screen-space cone-traced reflections mixed with Deep G-Buffers would be a superb step toward near-offline results (as near as we can get right now, anyway).


Guys, take a look. Cheers


Eh, that's a lot of extra scene-complexity-dependent work for not a lot of benefit. SSR already only looks good if you have a specific art direction, are blending it with cubemaps or other pre-computed reflections, or only use it for specific, controlled materials like a rough, partially reflective flat floor or puddles. Otherwise you get extreme temporal instability and it just ends up looking weird.


Great work on sussing out the details of this, though! I'd been eyeing the book just for this article, so it's good to know both that it wasn't terribly complete and that there's now this thread with a very well-documented journey of work and code to look at anyway!