
Blending local-global envmaps


In the past I developed a reflection probe/environment map system with a PBR pipeline in mind. It has support for global and local environment maps. The local probes only affect the scene inside their boundaries, which are essentially OBBs. Several local probes can be placed inside the scene, and they are blended on top of each other where they intersect. In a deferred renderer they are rendered as boxes, and they sample from the G-buffer at the same time as the lights are rendered (a sketch of the parallax-corrected lookup they perform follows at the end of this post). If there are only local probes, there can be areas where reflection information is missing, which is undesirable with PBR rendering, as metals will not receive any color. In those areas we should fall back to something, but here comes my question: fall back to what? I experimented with some solutions but found them not really appealing:

  1. Fall back to the sky color: This could work in outdoor areas, but indoors it will just break hard.
  2. Fall back to the probe closest to the camera: With some blending it could work so that it avoids "popping", but far-away reflections will also change with the camera position, which can be very distracting.
  3. Fall back to the probe closest to the pixel's world position: This has several problems:
    1. How do we determine per pixel which probe is closest?
    2. We should really retrieve the 3 closest probes and blend them.
    3. But that means 3 cubemap samples, distance computations and blending, maybe even in the object rendering shader?
    4. Maybe take the 2 probes closest to the camera and blend between those? This produces a straight blending line between their regions of influence, but will result in popping when a probe that wasn't among the two closest suddenly becomes one of them.
  4. Fall back to the closest cubemap per object? Seems fine on static objects, but this can also break easily.

Does anyone have other solutions that they use? I would like to have a general solution to this problem.
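
For context, this is roughly the per-pixel work a local probe costs: a parallax-corrected ("box projected") lookup against the probe's OBB. A minimal HLSL sketch, with illustrative resource and parameter names rather than my actual engine code:

    // Parallax-corrected (box-projected) sampling of one local probe.
    // worldToProbe maps the probe's OBB to the unit box [-1,1]^3.
    TextureCube  localProbe  : register(t0);
    SamplerState linearClamp : register(s0);

    float3 SampleLocalProbe(float3 posWS, float3 reflWS,
                            float4x4 worldToProbe, float3 probeCapturePosWS)
    {
        // Ray into probe space. Note: dirPS is NOT normalized; keeping the
        // scale means the ray parameter t is shared between probe space and
        // world space.
        float3 posPS = mul(worldToProbe, float4(posWS, 1)).xyz;
        float3 dirPS = mul((float3x3)worldToProbe, reflWS);

        // Exit point of the ray against the unit box: furthest slab hit per
        // axis, then the nearest of those three is where the ray leaves the box.
        float3 tFar = max((1.0 - posPS) / dirPS, (-1.0 - posPS) / dirPS);
        float  t = min(tFar.x, min(tFar.y, tFar.z));

        // Parallax correction: look up in the direction from the capture
        // position to the point where the reflection ray hits the box walls.
        float3 hitWS = posWS + reflWS * t;
        return localProbe.Sample(linearClamp, hitWS - probeCapturePosWS).rgb;
    }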


Can you, instead, ensure each piece of surface is affected by at least one probe? This way you could solve the problem offline by extending probe volumes.

3.2: I think to do this in a robust manner, you need the 4 closest probes from a Voronoi tetrahedralization. But this approach could also replace your current OBB approach completely, so it's not just a 'fallback'.

Perhaps build a very low resolution volume grid of pointers to existing probes?
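
To illustrate the blending the tetrahedralization would give (a hypothetical sketch, assuming the tetrahedralization, e.g. Delaunay over the probe positions, is already built and the containing tetrahedron of the shaded point has been found): the four corner probes are simply weighted by the barycentric coordinates of the point.

    // Barycentric weights of point p inside tetrahedron (a, b, c, d).
    // The weights sum to 1, and all lie in [0,1] iff p is inside.
    float SignedVolume(float3 a, float3 b, float3 c, float3 d)
    {
        return dot(b - a, cross(c - a, d - a)); // 6x the volume; factor cancels
    }

    float4 TetrahedronWeights(float3 p, float3 a, float3 b, float3 c, float3 d)
    {
        float v = SignedVolume(a, b, c, d);
        return float4(SignedVolume(p, b, c, d),   // weight of probe at a
                      SignedVolume(a, p, c, d),   // weight of probe at b
                      SignedVolume(a, b, p, d),   // weight of probe at c
                      SignedVolume(a, b, c, p)) / v;
    }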

 

39 minutes ago, JoeJ said:

Can you, instead, ensure each piece of surface is affected by at least one probe? This way you could solve the problem offline by extending probe volumes.

3.2: I think to do this in a robust manner, you need the 4 closest probes from a Voronoi tetrahedralization. But this approach could also replace your current OBB approach completely, so it's not just a 'fallback'.

Perhaps build a very low resolution volume grid of pointers to existing probes?

 

Thank you for the ideas. I wanted to avoid using many local probes to fill every surface, because they are heavier to compute than global probes (they need a ray trace against the OBB in the pixel shader). The parallax effect they bring, which is the whole reason I use local probes, is hardly visible on rough surfaces, so those surfaces would do just as well with only a global probe.

I also want to avoid offline methods; I want something fully real-time (this is for a hobby project). But a grid of probe pointers sounds like a neat idea. I just implemented a new system for the probes, storing them inside a TextureCubeArray, so indexing would be easy even in a forward+ object shader (a sketch of such a lookup follows below). A problem with a regular grid is that the local probes are projected as boxes, so the box sides would be visible in the reflections and would be distracting if the probes don't fit the room. I'll be toying with this idea though.
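
A rough sketch of how that lookup could work in the forward+ object shader, assuming a small Texture3D of probe indices covering the scene bounds (all names here are hypothetical):

    TextureCubeArray probes      : register(t0); // all probes, one slice each
    Texture3D<uint>  probeGrid   : register(t1); // low-res grid of probe indices
    SamplerState     linearClamp : register(s0);

    cbuffer SceneCB : register(b0)
    {
        float3 gridOriginWS;  // world-space min corner of the grid
        float3 gridInvExtent; // 1 / world-space size of the grid
    };

    float3 SampleProbeViaGrid(float3 posWS, float3 reflWS, float mip)
    {
        // Map the world position to a grid cell and fetch the probe index.
        float3 uvw = saturate((posWS - gridOriginWS) * gridInvExtent);
        uint3 dim;
        probeGrid.GetDimensions(dim.x, dim.y, dim.z);
        uint probeIndex = probeGrid.Load(int4(int3(uvw * float3(dim - 1)), 0));

        // One fetch from the cube array; the grid already answered "which probe".
        return probes.SampleLevel(linearClamp, float4(reflWS, probeIndex), mip).rgb;
    }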


Yeah as above you can make it into an art/content problem :)

In development builds, render any pixel not covered by a probe in flashing pink so that content creators can see the error. 

You can make the OBB/parallax correction feature optional, to allow "localised global" probes. You might bathe a whole building in a non-parallax probe, then add a few parallax probes to important rooms only. 

Going in other directions though, you can fall back to other data sets besides probes. If you have lightmaps, you can fall back to them (I've done runtime lightmap baking on a PS3/360 game once :) ), or to AO bakes tinted with ambient colours. We've often defined ambience on a spherical domain with three colours, up/side/down, weighted by sat(n.y), 1-abs(n.y) and sat(-n.y), where n = world normal and y = up. You could define these in particular regions the same way that you define your probes currently, for cases where an artist wants to fix the global sky leaking in but doesn't want to add the cost of another runtime probe.
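
As a sketch, that three-colour ambient term is just (HLSL; the colour inputs would come from the per-region data):

    // Up/side/down ambient on a spherical domain. The three weights sum to 1
    // for any unit normal n, so no energy is gained or lost.
    float3 AmbientUpSideDown(float3 n, float3 up, float3 side, float3 down)
    {
        return up   * saturate( n.y)       // 1 straight up, 0 at the horizon
             + side * (1.0 - abs(n.y))     // 1 at the horizon
             + down * saturate(-n.y);      // 1 straight down
    }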

14 hours ago, turanszkij said:

I also want to avoid offline methods; I want something fully real-time

But the probe positions are still static data, I guess? What limitations would you expect from an offline method?

You could use a voxelization of the scene, flag the voxels inside probe OBBs, and use the remaining unlit voxels to extend the closest probe's OBB. You already have voxelization, and it could be real-time or progressively updated if really needed.

18 hours ago, knarkowicz said:

In your specific case I would go with a very simple solution:

1. A set of global probes; the nearest one covers the entire scene.

2. On top of that, blend your local probes.

I mentioned that approach; what I dislike about it is that far-away objects will have very wrong reflections, and also that the entire scene gets re-lit when the closest envmap changes. But I will probably go with this one, as it can be implemented with no hard popping when a new envmap becomes the closest.

12 hours ago, JoeJ said:

But the probe positions are still static data, I guess? What limitations would you expect from an offline method?

You could use a voxelization of the scene, flag the voxels inside probe OBBs, and use the remaining unlit voxels to extend the closest probe's OBB. You already have voxelization, and it could be real-time or progressively updated if really needed.

The probe locations are mostly static, but they can be grabbed and moved in the editor and refreshed instantly. The voxelization is an interesting idea, though I would rather go the Remedy way then, which is to place a bunch of probes in relevant spots automatically with the help of a voxel grid. I will think about your idea and maybe try to implement it, as it sounds like an easy extension to the voxel GI I played around with recently.
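
If I try it, the first step could be a compute shader along these lines, flagging every voxel covered by at least one probe OBB (just a sketch; the buffer layout and names are assumptions):

    struct ProbeOBB { float4x4 worldToProbe; }; // maps the OBB to [-1,1]^3

    StructuredBuffer<ProbeOBB> probeOBBs : register(t0);
    RWTexture3D<uint>          coverage  : register(u0); // 1 = covered by a probe

    cbuffer VoxelCB : register(b0)
    {
        float3 voxelOriginWS; // world-space min corner of the voxel grid
        float  voxelSizeWS;   // edge length of one voxel
        uint   probeCount;
    };

    [numthreads(4, 4, 4)]
    void FlagCoveredVoxels(uint3 id : SV_DispatchThreadID)
    {
        float3 centerWS = voxelOriginWS + (id + 0.5) * voxelSizeWS;

        uint covered = 0;
        for (uint i = 0; i < probeCount; ++i)
        {
            // Inside the OBB iff the probe-space position is within the unit box.
            float3 p = mul(probeOBBs[i].worldToProbe, float4(centerWS, 1)).xyz;
            if (all(abs(p) <= 1.0)) { covered = 1; break; }
        }
        coverage[id] = covered;
    }

The remaining uncovered voxels would then drive how to grow the closest probe's OBB, as you describe.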

On 30/01/2018 at 9:06 PM, Hodgman said:

Yeah as above you can make it into an art/content problem :)

No :D (In this case I would be delegating the problem to myself as probably only I will ever use this engine :) )

On 30/01/2018 at 9:06 PM, Hodgman said:

You can make the OBB/parallax correction feature optional, to allow "localised global" probes. You might bathe a whole building in a non-parallax probe, then add a few parallax probes to important rooms only. 

I want to do that; basically I just wondered how to blend between the global probes once "I leave the building". Say that when I exit through the door we want to switch probes: the outside environment will now show the inside envmap for some time, and then the whole scene will blend abruptly. Btw, which game did you bake lightmaps at runtime for?


Remedy also had voxelized pointers indicating which probes are relevant where. Heck, you could go a step further (or does Remedy do this already?) and store an SH probe, with channels pointing towards the relevant probes to blend. It'd be great for windows and the like, where blending in the relevant outdoor probes would help a lot.

You could even make the entire system realtime, or near to it. Infinite Warfare used deferred probe rendering for realtime GI, and Shadow Warrior 2 had procedurally generated levels lit at creation time. I seriously hope those are the right links; I'm on a slow public wifi at the moment, so...

Regardless, a nice trick is to use SH probes with, say, ambient occlusion info or static lighting info, to correct the cubemap lighting. This way you can use cubemaps for both specular and diffuse, and then at least somewhat correct them later.
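
As a sketch of that correction, with a 2-band SH probe storing directional occlusion (the SH layout and names here are illustrative assumptions):

    // sh.x is the DC term, sh.yzw the linear band, stored pre-convolved so
    // that a dot product with (1, d) evaluates the occlusion in direction d.
    float EvalOcclusionSH(float4 sh, float3 d)
    {
        return saturate(sh.x + dot(sh.yzw, d));
    }

    float3 CorrectSpecular(float3 cubemapSpec, float4 occlusionSH, float3 reflDir)
    {
        // Darken reflections in directions the bake found to be occluded.
        return cubemapSpec * EvalOcclusionSH(occlusionSH, reflDir);
    }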

4 hours ago, turanszkij said:

I mentioned that approach; what I dislike about it is that far-away objects will have very wrong reflections, and also that the entire scene gets re-lit when the closest envmap changes. But I will probably go with this one, as it can be implemented with no hard popping when a new envmap becomes the closest.

In my solution the global env maps are separated from the local ones. Global ones should capture mostly the sky and generic features, and be used very sparsely (a few per entire level). This way it's enough to blend just two of those to get perfect transitions, and far-away reflections will look fine. I actually used this system for Shadow Warrior 2, just with a small twist: probes were generated and cached in real time. If you are interested you can check out some slides with notes: “Rendering of Shadow Warrior 2”.

22 hours ago, turanszkij said:

No :D (In this case I would be delegating the problem to myself as probably only I will ever use this engine :) )

You can still make it into a problem of manual labour per scene (a level editor task) instead of an algorithmic challenge :)

Either way you're going to find lighting bugs in the level editor, so the difference is whether that prompts you to go and fix the code, or to massage the lighting data in the editor to hide the bug!

22 hours ago, turanszkij said:

I want to do that; basically I just wondered how to blend between the global probes once "I leave the building". Say that when I exit through the door we want to switch probes: the outside environment will now show the inside envmap for some time, and then the whole scene will blend abruptly. Btw, which game did you bake lightmaps at runtime for?

I was suggesting having one truly global probe, but then using large non-parallax local probes to override it in areas (such as a whole building), and then even smaller local probes to override rooms within the buildings. You'd define a soft falloff at the edge of each local probe region for blending, and the results aren't based on the camera position.
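
A minimal sketch of such a falloff weight, computed purely from the probe-space position so the camera never enters into it (names illustrative):

    // 1 deep inside the probe's box, fading linearly to 0 at the box surface.
    // falloffFraction is the border thickness in probe space, e.g. 0.2.
    float LocalProbeWeight(float3 posWS, float4x4 worldToProbe, float falloffFraction)
    {
        float3 p = abs(mul(worldToProbe, float4(posWS, 1)).xyz); // [0,1] inside
        float edgeDist = 1.0 - max(p.x, max(p.y, p.z));          // 0 at boundary
        return saturate(edgeDist / falloffFraction);
    }

Each local layer is then composited over the one below it with this weight, e.g. result = lerp(buildingProbe, roomProbe, weight).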

I did lightmap baking on Don Bradman Cricket 14 (the PC/PS3/360 edition; not used in the PS4/Xbone edition though). Bakes were budgeted 1ms of GPU time per frame during gameplay, or 30ms per frame on loading screens. A bake took about 3 minutes during gameplay, though we also had a low-quality setting if we needed one quicker. So, not useful for dynamic lights from explosions etc., but perfectly fine for dynamic time of day. Sports games also feature lots of camera cuts (e.g. after a football player is tackled, or a goal is scored, or before a bowler bowls in cricket), so we would wait for one of these cuts before switching out the old lightmap with the newest bake, so the change couldn't be noticed :)

15 hours ago, Hodgman said:

I was suggesting having one truly global probe, but then using large non-parallax local probes to override it in areas (such as a whole building), and then even smaller local probes to override rooms within the buildings. You'd define a soft falloff at the edge of each local probe region for blending, and the results aren't based on the camera position.

I like this. For now I have already decided on a single global envmap, as I can't stand the sight of the whole scene's lighting blending from one envmap to another. Doing local probes with the parallax turned off seems like a very easy solution.
Also, the real-time lightmap baking tech sounds insane, but in a very cool way. :)

On 01/02/2018 at 2:27 AM, knarkowicz said:

In my solution the global env maps are separated from the local ones. Global ones should capture mostly the sky and generic features, and be used very sparsely (a few per entire level). This way it's enough to blend just two of those to get perfect transitions, and far-away reflections will look fine. I actually used this system for Shadow Warrior 2, just with a small twist: probes were generated and cached in real time. If you are interested you can check out some slides with notes: “Rendering of Shadow Warrior 2”.

I just watched your presentation on YouTube, very good stuff! And I have that game, though I never tried it; now I will for sure. :)
So your global probes were not part of your "prefabs" (I mean level pieces)? And did you blend between two globals based on the world-space position of the pixel?

10 hours ago, turanszkij said:

I just watched your presentation on YouTube, very good stuff! And I have that game, though I never tried it; now I will for sure. :)
So your global probes were not part of your "prefabs" (I mean level pieces)? And did you blend between two globals based on the world-space position of the pixel?

Thanks! The global ones weren't attached to prefabs; they were attached to "levels". Usually there were only 1-3 global envs per 250x250m level chunk. I don't remember if we shipped with any blending at all, or just faded the current global env to black, switched, and faded in the new one. Anyway, the idea was to blend globally: just a quick time-based blend, without any kind of world-space position or per-pixel operations.
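
A time-based switch like that is basically just this sketch: the CPU picks the new global env, animates one blend factor over a fraction of a second, and the shader lerps (names are illustrative):

    TextureCube  globalEnvOld : register(t0);
    TextureCube  globalEnvNew : register(t1);
    SamplerState linearClamp  : register(s0);

    cbuffer BlendCB : register(b0)
    {
        float globalEnvBlend; // 0 = old probe, 1 = new; driven by time on the CPU
    };

    float3 SampleGlobalEnv(float3 dir, float mip)
    {
        float3 a = globalEnvOld.SampleLevel(linearClamp, dir, mip).rgb;
        float3 b = globalEnvNew.SampleLevel(linearClamp, dir, mip).rgb;
        return lerp(a, b, globalEnvBlend); // same blend for every pixel
    }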

