
Member Since 29 Aug 2002
Offline Last Active Yesterday, 04:32 PM

Posts I've Made

In Topic: Shader Performances

04 September 2016 - 05:24 PM

I cannot modify the models automatically without having the artists take a look. And doing it for a massive amount of data is out of our budget.

That's what a build pipeline is for - you modify the models when they are imported, without changing the source assets. It's not really any different from calculating the UVs in the shader, as long as those UVs are calculated in a way that doesn't change based on other factors in the game.
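For what it's worth, a minimal sketch of what that import-time bake could look like (the Mesh/Vertex types, the OnMeshImported hook, and the UV math below are all made up for illustration, not from any particular engine):

#include <vector>

struct Vertex { float position[3]; float uv[2]; };
struct Mesh   { std::vector<Vertex> vertices; };

// Stand-in for whatever per-vertex UV math the shader was doing;
// here just a planar projection as a placeholder.
static void ComputeBakedUV(const float position[3], float outUV[2])
{
    outUV[0] = position[0];
    outUV[1] = position[2];
}

// Hypothetical build-pipeline hook: bake the UVs once when the asset is
// imported, so the runtime shader only does a plain texture fetch. The
// source asset on disk is never modified.
void OnMeshImported(Mesh& mesh)
{
    for (Vertex& v : mesh.vertices)
        ComputeBakedUV(v.position, v.uv);
}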

You mentioned Unity, so I'll point out that if you are doing mobile games then multiple texture samples get expensive very quickly, depending on the target devices. Older Android devices can be especially bad for this. Older mobile devices also struggle if you calculate the texture coordinates in the pixel shader. Blending is also a major bottleneck on mobile. If you are targeting PC though, these points don't apply, as the hardware is very different.


It's pretty important that you take the time to construct performance tests, even if you hand-write them yourself.

In Topic: Problem with water reflection

04 September 2016 - 05:17 PM

You're not supposed to render the water with the mirrored matrix. Instead, you render the reflected scene (i.e. everything above the water plane) using the mirrored matrix into a texture, then use that texture on the water (using screen-space UVs).
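As a rough illustration (assuming a y-up world with the water plane at a given height; the matrix layout and function name are mine, not from the engine in question), the mirrored matrix is just a reflection about the water plane that you fold into the view matrix when rendering the reflection texture:

#include <array>

using Mat4 = std::array<float, 16>; // row-major, applied to column vectors

// Reflection about the horizontal plane y = waterHeight. Render the scene
// above the water with view * MakeWaterReflection(h) into a texture; the
// water shader then samples that texture with screen-space UVs.
Mat4 MakeWaterReflection(float waterHeight)
{
    return {
        1.0f,  0.0f, 0.0f, 0.0f,
        0.0f, -1.0f, 0.0f, 2.0f * waterHeight,
        0.0f,  0.0f, 1.0f, 0.0f,
        0.0f,  0.0f, 0.0f, 1.0f,
    };
}

Note that the reflection flips triangle winding, so the cull mode needs to be flipped (or culling disabled) for that pass, and geometry below the water plane should be clipped out.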


*edit: typo

In Topic: Data-oriented scene graph?

12 May 2016 - 12:27 AM


One of the issues I ran into while building a scenegraph like that was that the pure index-based approach works great for static graphs such as a skeleton mesh, but becomes a headache for dynamic graphs :(


I ended up building a graph structure on top of the ID/indirection container Sean suggested here (http://gamedev.stackexchange.com/questions/33888/what-is-the-most-efficient-container-to-store-dynamic-game-objects-in), but perhaps there's a better way to do it.



Something I've done before is that each object (typically static in its own design) is its own data-oriented array, but that array can be parented to any particular index in another object's data-oriented array. This allowed the transform code to be parallelized by object, based on a very simple dependency model.


I took it a step further so that each bone in the array could be parented to a different bone in the parent object's array, but limited to having one parent array. That kept the dependency model simple, but with enough flexibility to do, for example, skinning-only transforms by having every bone in a skeleton parented to the matching bone in the animated skeleton.


EDIT: This also allowed for each of the flat arrays to be inserted into an acceleration structure under a single spatial hash, instead of having to test each node within it.
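A rough sketch of that layout, under my own naming (Transform, ObjectTransforms, etc. are illustrative, not the actual code): each object keeps a flat bone array with within-array parent indices, plus an optional link to one bone array on one parent object.

#include <cstddef>
#include <cstdint>
#include <vector>

struct Transform { float m[16]; }; // placeholder 4x4 matrix

static Transform Multiply(const Transform& a, const Transform& b)
{
    Transform r{};
    for (int row = 0; row < 4; ++row)
        for (int col = 0; col < 4; ++col)
            for (int k = 0; k < 4; ++k)
                r.m[row * 4 + col] += a.m[row * 4 + k] * b.m[k * 4 + col];
    return r;
}

// One flat, data-oriented array per object. Within the array, parents are
// stored before their children so a single forward pass resolves the graph.
struct ObjectTransforms
{
    std::vector<Transform> local;
    std::vector<Transform> world;
    std::vector<int32_t>   parentBone;       // index into this object's own arrays, -1 for none

    // At most one parent object; each bone may attach to a different bone
    // inside that parent's array (e.g. for skinning-only transforms).
    ObjectTransforms*      parentObject = nullptr;
    std::vector<int32_t>   parentObjectBone; // per-bone index into parentObject
};

// Updating an object only reads its single parent object's already-updated
// world array, which keeps the dependency model simple enough to run
// independent objects in parallel.
void UpdateWorldTransforms(ObjectTransforms& obj)
{
    for (std::size_t i = 0; i < obj.local.size(); ++i)
    {
        const Transform* parent = nullptr;
        if (obj.parentBone[i] >= 0)
            parent = &obj.world[obj.parentBone[i]];
        else if (obj.parentObject && obj.parentObjectBone[i] >= 0)
            parent = &obj.parentObject->world[obj.parentObjectBone[i]];

        obj.world[i] = parent ? Multiply(*parent, obj.local[i]) : obj.local[i];
    }
}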

In Topic: Scalable Ambient Obscurance and Other algorithms

11 May 2016 - 12:14 AM

BTW, how do you store the result of the baking calculation? The object's texture UVs can't be reused, since an object can appear at different locations with different lighting conditions, and some texels can be used for several surfaces.
On the other hand, flattening polygons into a 2D plane is tough due to discontinuities at polygon edges, and you have to optimize texture space while keeping surface sizes equivalent...

Typically with a separate UV set for lightmap UVs (to support a different texel density, and also to avoid issues where UV regions are reused for texturing), plus a scale/bias to look the individual object up within a larger atlas, passed in as either a per-instance attribute or a shader constant.
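For illustration, the scale/bias is just a remap from the object's [0,1] lightmap UV space into its rectangle of the atlas (the names below are made up; the atlas packer and the shader side are assumed):

#include <cstdint>

// Hypothetical per-object output of the atlas packer: the pixel rectangle
// this object's lightmap occupies inside the big atlas.
struct AtlasRect { uint32_t x, y, width, height; };

struct ScaleBias { float scaleU, scaleV, biasU, biasV; };

// Passed per instance (or as a shader constant); the shader then computes
//   atlasUV = lightmapUV * scale + bias
// where lightmapUV is the object's dedicated lightmap UV set in [0,1].
ScaleBias MakeLightmapScaleBias(const AtlasRect& rect, uint32_t atlasSize)
{
    const float inv = 1.0f / float(atlasSize);
    return { rect.width * inv, rect.height * inv, rect.x * inv, rect.y * inv };
}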

In Topic: Struggling with Cascaded Shadow Mapping

29 March 2016 - 04:09 PM

Not sure if that's entirely how it's supposed to work, but at least it looks sensible.

That's pretty much it. There's a little bit of work in tuning it to get rid of artifacts between cascades, stabilization, etc., but otherwise the technique is rather simple, as you described. If you have framerate issues with recalculating every cascade every frame, you can always stagger updates of the medium/far cascades too.
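One possible staggering scheme (just a sketch of the idea, not from the thread): refresh the near cascade every frame and round-robin the rest.

#include <cstdint>

// Cascade 0 (near) updates every frame, cascade 1 every other frame, and
// the far cascades share the remaining frames.
bool ShouldUpdateCascade(uint32_t cascadeIndex, uint64_t frameIndex)
{
    switch (cascadeIndex)
    {
        case 0:  return true;                   // near: every frame
        case 1:  return (frameIndex & 1) == 0;  // medium: every 2nd frame
        case 2:  return (frameIndex & 3) == 1;  // far: every 4th frame
        default: return (frameIndex & 3) == 3;  // furthest: every 4th frame
    }
}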


From a tuning point of view, it really is dependent on the type of scene you are rendering. For example, distances aren't a good metric if your game design suddenly calls for a very tight FOV for zooming (as I discovered the hard way when our artists were given complete control over cutscenes).

Depending on how large your scene is, it might be worthwhile to use a different shadowing technique for the furthest detail, then composite that in with the cascade shadows. For example, calculating an ESM for a mountainous terrain, while using CSM with PCF for all of the objects on the terrain.
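As a sketch of that composite (using the standard exponential shadow map test; the function names and the min-based blend are my own choice of how to combine the two results): whichever technique reports the receiver as more shadowed wins.

#include <algorithm>
#include <cmath>

// Standard ESM visibility: the terrain shadow map stores exp(c * occluderDepth)
// (which can be pre-filtered); the receiver compares against its own depth.
float EsmVisibility(float storedExpOccluderDepth, float receiverDepth, float c)
{
    return std::min(1.0f, storedExpOccluderDepth * std::exp(-c * receiverDepth));
}

// Composite: CSM w/ PCF covers the objects, ESM covers the distant terrain,
// and the darker of the two is used so both sets of casters contribute.
float CompositeShadow(float csmPcfVisibility, float esmVisibility)
{
    return std::min(csmPcfVisibility, esmVisibility);
}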