
Digitalfragment

Member Since 29 Aug 2002
Offline Last Active Jul 20 2016 05:15 PM

Posts I've Made

In Topic: Data-oriented scene graph?

12 May 2016 - 12:27 AM

 

One of the issues I ran into while building a scenegraph like that was that the pure index-based approach works great for static graphs such as a skeletal mesh, but becomes a headache for dynamic graphs :(

 

I ended up building a graph structure on top of the ID/indirection container Sean suggested here (http://gamedev.stackexchange.com/questions/33888/what-is-the-most-efficient-container-to-store-dynamic-game-objects-in), but perhaps there's a better way to do it.

 

 

Something I've done before is that each object (typically static in its own design) is its own data-oriented array, but that array could be parented to any particular index in another object's data-oriented array. This allowed the transform code to be parallelized per object, based on a very simple dependency model.

 

I took it a step further so that each bone in the array could be parented to a different bone in the parent object's array, but limited to having one parent array. That kept the dependency model simple, but with enough flexibility to do, for example, skinning-only transforms by having every bone in a skeleton parented to the matching bone in the animated skeleton.

 

EDIT: This also allowed each of the flat arrays to be inserted into an acceleration structure under a single spatial hash entry, instead of having to test each node within it.
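Something like this is roughly what I mean - the names, matrix layout and container choices are just placeholders for illustration, not from any particular engine:

```cpp
#include <cstdint>
#include <vector>

// 4x4 row-major matrix, just enough to show the idea.
struct Mat4 { float m[16]; };

Mat4 Mul(const Mat4& a, const Mat4& b)
{
    Mat4 r{};
    for (int row = 0; row < 4; ++row)
        for (int col = 0; col < 4; ++col)
            for (int k = 0; k < 4; ++k)
                r.m[row * 4 + col] += a.m[row * 4 + k] * b.m[k * 4 + col];
    return r;
}

// Each object owns one flat array; every element can optionally be parented to
// an index in a single other object's array (at most one parent array).
struct TransformArray
{
    std::vector<Mat4>     local;        // authored/animated local transforms
    std::vector<Mat4>     world;        // resolved world transforms
    std::vector<int32_t>  parentIndex;  // index into parentArray->world, or -1
    const TransformArray* parentArray = nullptr;
};

// Resolving an array only reads its (already resolved) parent array, so arrays
// can be processed in dependency order, and independent arrays in parallel.
void Resolve(TransformArray& a)
{
    a.world.resize(a.local.size());
    for (size_t i = 0; i < a.local.size(); ++i)
    {
        const int32_t p = (i < a.parentIndex.size()) ? a.parentIndex[i] : -1;
        if (a.parentArray && p >= 0)
            a.world[i] = Mul(a.parentArray->world[p], a.local[i]);
        else
            a.world[i] = a.local[i];
    }
}
```

The skinning-only case falls out of this: the skinned array's parentIndex is just the identity mapping into the animated skeleton's array.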


In Topic: Scalable Ambient Obscurance and Other algorithms

11 May 2016 - 12:14 AM

BTW, how do you store the result of the baking calculation? An object's texture UVs can't be reused, since the object can appear at different locations with different lighting conditions, and some texels can be used for several surfaces.
On the other hand, flattening polygons onto a 2D plane is tough due to discontinuities at polygon edges, and you have to optimize texture space while keeping surface areas equivalent...

Typically with a separate UV set for lightmap UVs (to support a different texel density, and also to avoid issues where UV regions are reused for texturing), and then a scale/bias to look the individual object up within a larger atlas, passed in as either a per-instance attribute or a shader constant.
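A minimal sketch of that scale/bias lookup, written as plain C++ for clarity (the names are placeholders; in practice this math lives in the vertex or pixel shader):

```cpp
struct Float2 { float x, y; };

// Per-instance region of the lightmap atlas, passed as an instance attribute or
// a shader constant; scale/bias are in normalized atlas UV space.
struct LightmapRegion
{
    Float2 scale;   // regionSize   / atlasSize
    Float2 bias;    // regionOffset / atlasSize
};

// Maps an object's lightmap UV into the shared atlas.
Float2 LightmapToAtlasUV(Float2 uv, const LightmapRegion& r)
{
    return { uv.x * r.scale.x + r.bias.x,
             uv.y * r.scale.y + r.bias.y };
}
```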


In Topic: Struggling with Cascaded Shadow Mapping

29 March 2016 - 04:09 PM


Not sure if that's entirely how it's supposed to work, but at least it looks sensible.


That's pretty much it. There's a little bit of work in tuning it to get rid of artifacts between cascades, stabilization, etc., but otherwise the technique is rather simple, as you described. If you have framerate issues from recalculating every cascade every frame, you can always stagger updates of the medium/far cascades too.
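By staggering I mean something along these lines - the exact frame intervals are an arbitrary choice for illustration, not a recommendation:

```cpp
#include <cstdint>

// Near cascade refreshes every frame; farther cascades refresh less often.
bool ShouldUpdateCascade(int cascadeIndex, uint64_t frameIndex)
{
    switch (cascadeIndex)
    {
    case 0:  return true;                    // near cascade: every frame
    case 1:  return (frameIndex % 2) == 0;   // medium cascade: every other frame
    default: return (frameIndex % 4) == 0;   // far cascades: every fourth frame
    }
}
```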

 

From a tuning point of view it really depends on the type of scene you are rendering; for example, distances aren't a good metric if your game design suddenly calls for a very tight FOV for zooming (as I discovered the hard way when our artists were given complete control over cutscenes).

Depending on how large your scene is, it might be worthwhile to use a different shadowing technique for the furthest detail and then composite that in with the cascaded shadows. For example, calculating ESM for a mountainous terrain while using CSM with PCF for all of the objects on the terrain.
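The composite itself can be as simple as this sketch, assuming both techniques output a visibility factor in [0,1] (those assumptions are mine, not from any particular implementation):

```cpp
#include <algorithm>

// Keep whichever technique says the pixel is more shadowed.
float CompositeShadow(float esmTerrainVisibility, float csmObjectVisibility)
{
    return std::min(esmTerrainVisibility, csmObjectVisibility);
}
```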


In Topic: Struggling with Cascaded Shadow Mapping

28 March 2016 - 09:56 PM

Wouldn't that just be the visual side effect caused by the dimensions of the camera frustum changing relative to the shadow direction?
If you are facing toward/away from the light, the frustum will yield a smaller min/max. If you are facing perpendicular to it, it will be much larger.
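For clarity, the min/max I'm referring to is just the light-space bounds of the camera frustum corners, something like this sketch (types and names are placeholders):

```cpp
#include <algorithm>
#include <cfloat>

struct Vec3 { float x, y, z; };
inline float Dot(const Vec3& a, const Vec3& b) { return a.x*b.x + a.y*b.y + a.z*b.z; }

// lightRight/lightUp/lightDir form an orthonormal basis for light space.
// The extents you get back depend entirely on how the camera frustum is
// oriented relative to that basis.
void LightSpaceBounds(const Vec3 (&cornersWorld)[8],
                      const Vec3& lightRight, const Vec3& lightUp, const Vec3& lightDir,
                      Vec3& outMin, Vec3& outMax)
{
    outMin = {  FLT_MAX,  FLT_MAX,  FLT_MAX };
    outMax = { -FLT_MAX, -FLT_MAX, -FLT_MAX };
    for (const Vec3& c : cornersWorld)
    {
        const Vec3 p = { Dot(c, lightRight), Dot(c, lightUp), Dot(c, lightDir) };
        outMin = { std::min(outMin.x, p.x), std::min(outMin.y, p.y), std::min(outMin.z, p.z) };
        outMax = { std::max(outMax.x, p.x), std::max(outMax.y, p.y), std::max(outMax.z, p.z) };
    }
}
```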
 


In Topic: Struggling with Cascaded Shadow Mapping

28 March 2016 - 06:49 PM

The point behind cascaded shadows is to have multiple cascades, but you are on the right track with your math. The next bit is to split the camera frustum along its z-depth. This yields a smaller frustum closer to the camera and a larger frustum further away. Render out a shadow map for each cascade, and when sampling, take the sample from the first cascade that the pixel falls inside of.
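A rough sketch of the splitting and cascade selection - this uses the common blend of linear and logarithmic splits, and the blend factor/cascade count are arbitrary choices on my part:

```cpp
#include <cmath>
#include <vector>

// splits[i]..splits[i+1] bounds cascade i along the view-space z axis.
std::vector<float> ComputeSplitDistances(float nearZ, float farZ,
                                         int cascadeCount, float blend = 0.75f)
{
    std::vector<float> splits(cascadeCount + 1);
    for (int i = 0; i <= cascadeCount; ++i)
    {
        const float t   = static_cast<float>(i) / cascadeCount;
        const float lin = nearZ + (farZ - nearZ) * t;
        const float log = nearZ * std::pow(farZ / nearZ, t);
        splits[i] = lin + (log - lin) * blend;
    }
    return splits;
}

// When shading, sample from the first cascade whose depth range contains the pixel.
int SelectCascade(float viewDepth, const std::vector<float>& splits)
{
    for (int i = 0; i + 1 < static_cast<int>(splits.size()); ++i)
        if (viewDepth <= splits[i + 1])
            return i;
    return static_cast<int>(splits.size()) - 2;
}
```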

There was a presentation by Crytek on Ryse that's worth checking out.

Edit:

Regarding the shimmering, you want to round the mins/maxes for stabilization based on the resolution of the shadow map - basically, only ever move the shadow projection in multiples of the size of a shadow-map texel.
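Something like this per axis of the light-space bounds, as a hedged sketch (parameter names are mine):

```cpp
#include <cmath>

// Snap one axis of the light-space min/max to whole shadow-map texels so the
// orthographic projection only ever moves in texel-sized steps.
void SnapToTexelGrid(float& minV, float& maxV, int shadowMapResolution)
{
    const float extent    = maxV - minV;                   // keep the size fixed
    const float texelSize = extent / shadowMapResolution;  // world units per texel
    minV = std::floor(minV / texelSize) * texelSize;
    maxV = minV + extent;
}
```

Call it for the X and Y bounds every frame before building the cascade's projection matrix.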

