
Imperfect Environment Maps

Hodgman


In 22, our lighting environment is dominated by sunlight; however, there are also many small emissive elements everywhere.

A typical scene from '22'

What we want is for all of these bright sunlit metal panels and the many emissive surfaces to be reflected off the vehicles. Because this is a high-speed racing game, we need a technique with minimal performance impact, and at the same time we would like to avoid large baked data sets so that tracks remain easy to edit within the game.

This week we got around to trying a technique presented 10 years ago for generating large numbers of shadow maps extremely quickly: Imperfect Shadow Maps. In 2008, this technique was a bit ahead of its time -- as indicated by its performance data being measured at 640x480 resolution and 15 frames per second! :)
It is specifically a shadowing technique, intended for use in conjunction with a different lighting technique -- Virtual Point Lights.

In 22, we aren't using Virtual Point Lights or Imperfect Shadow Maps! However, in the paper they mention that ISMs can be used to add shadows to environment map lighting... By staying up too late and misreading this section, you could get the idea that you could use the ISM point-cloud rendering ideas to actually generate large numbers of approximate environment maps at low cost... so that's what we implemented :D

Our gameplay code already had access to a point cloud of the track geometry. This data set was generated by simply extracting the vertex positions from the visual mesh of the track - a portion is visualized below:
A portion of the track drawn as a point cloud
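
As a rough illustration, that extraction step is little more than copying positions out of the mesh's vertex buffer. A minimal sketch (with made-up types, not our actual engine code):

```cpp
// Sketch: building the track point cloud from the visual mesh's vertices.
// Float3/TrackPoint are stand-in types; the color is filled in later by the
// screen-space lighting gather described below.
#include <vector>

struct Float3     { float x, y, z; };
struct TrackPoint { Float3 position; Float3 color{0, 0, 0}; };

std::vector<TrackPoint> BuildTrackPointCloud(const std::vector<Float3>& meshVertices)
{
    std::vector<TrackPoint> cloud;
    cloud.reserve(meshVertices.size());
    for (const Float3& v : meshVertices)
        cloud.push_back(TrackPoint{ v });
    return cloud;
}
```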

Next we somehow need to associate lighting values with each of these points... Typically for static environments, you would use a light-baking system for this, which can spend a lot of time path-tracing the scene (or similar) before saving the results into the point cloud. To keep everything dynamic, we've instead taken inspiration from screen-space reflections. With SSR, the images that you're rendering anyway are re-used to provide data for reflection rays; here, we re-use those images to compute lighting values for the points in our point cloud. After the HDR lighting is calculated, the point cloud is frustum culled and each point is projected onto the screen (after a small random offset is applied to it). If the projected point is close in depth to the stored Z-buffer value at that screen pixel, then the lighting value at that pixel is transferred to the point using a moving average. The random offsets and the moving average allow many different pixels near the point to contribute to its color.
An example of the opaque HDR lighting
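
Here's a rough CPU-flavoured sketch of that gather step (in practice it runs on the GPU). The engine hooks declared at the top are placeholders I've invented to keep the sketch self-contained; only the overall flow matters:

```cpp
// Sketch of the per-frame gather from the screen-space lighting buffer into
// the point cloud. The declared "engine hooks" are assumptions, not real APIs.
#include <cmath>
#include <vector>

struct Float3     { float x, y, z; };
struct ScreenPos  { int x, y; float depth; };
struct TrackPoint { Float3 position; Float3 color; };

// --- assumed engine hooks (placeholders) -----------------------------------
bool   WorldToScreen(const Float3& worldPos, ScreenPos& out); // projection + frustum test
float  SampleDepth(int x, int y);                             // Z-buffer read
Float3 SampleLighting(int x, int y);                          // HDR lighting buffer read
Float3 RandomOffset(float radius);                            // random offset within a sphere
// ----------------------------------------------------------------------------

static Float3 Add(const Float3& a, const Float3& b) { return { a.x + b.x, a.y + b.y, a.z + b.z }; }
static Float3 Lerp(const Float3& a, const Float3& b, float t)
{
    return { a.x + (b.x - a.x) * t, a.y + (b.y - a.y) * t, a.z + (b.z - a.z) * t };
}

void GatherLightingIntoPointCloud(std::vector<TrackPoint>& cloud,
                                  float gatherRadius, float depthThreshold, float lerpSpeed)
{
    for (TrackPoint& p : cloud)
    {
        // Small random offset ("Gather Radius") so many nearby pixels contribute over time.
        Float3 jittered = Add(p.position, RandomOffset(gatherRadius));

        ScreenPos s;
        if (!WorldToScreen(jittered, s))
            continue; // frustum culled this frame

        // Only accept the pixel if its stored depth is close to the point's depth.
        if (std::fabs(SampleDepth(s.x, s.y) - s.depth) > depthThreshold)
            continue;

        // Moving average ("Lerp Speed") transfers the pixel's lighting to the point.
        p.color = Lerp(p.color, SampleLighting(s.x, s.y), lerpSpeed);
    }
}
```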

Over many frames, the point cloud eventually gets colored in. If the lighting conditions change, then the point cloud will update as long as it appears on screen. This works well for a racing game, as the camera is typically looking ahead at sections of track that the car is about to drive into, allowing the point cloud for those sections to be updated with fresh data right before the car arrives.

Now, if we take the points that are near a particular vehicle, project them onto a sphere, and then unwrap that sphere to 2D UV coordinates (at the moment we are using a world-space octahedral unwrapping scheme, though spheremaps, hemispheres, etc. are also applicable; using view-space instead of world-space could also help hide seams), then we get an image like the one below. Left is the RGB components; right is alpha, which encodes the solid angle that each point should have covered if we'd actually drawn the points as discs/spheres instead of as points. Nearby points have bright alpha, while distant points have darker alpha.
Point splats: RGB (left) and alpha (right)
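
For reference, the standard octahedral mapping takes a direction (here, a point's position relative to the vehicle) straight to a 2D UV, and a disc's solid angle falls off with the square of its distance. A sketch, where the exact alpha formula is my assumption rather than the game's:

```cpp
// Sketch: octahedral unwrap of a direction to [0,1]^2, plus a rough
// solid-angle-style alpha for the splat. The article only states that alpha
// encodes the disc's solid angle; the formula below is an approximation.
#include <cmath>

struct Float2 { float x, y; };
struct Float3 { float x, y, z; };

static float SignNotZero(float v) { return v >= 0.0f ? 1.0f : -1.0f; }

// Standard octahedral encode: direction -> [0,1]^2 UV.
Float2 OctahedralEncode(Float3 dir)
{
    float invL1 = 1.0f / (std::fabs(dir.x) + std::fabs(dir.y) + std::fabs(dir.z));
    float u = dir.x * invL1;
    float v = dir.y * invL1;
    if (dir.z < 0.0f) // fold the lower hemisphere onto the outer triangles
    {
        float oldU = u;
        u = (1.0f - std::fabs(v))    * SignNotZero(oldU);
        v = (1.0f - std::fabs(oldU)) * SignNotZero(v);
    }
    return { u * 0.5f + 0.5f, v * 0.5f + 0.5f };
}

// Rough solid-angle weight: a disc of radius r at distance d covers about
// pi*r^2/d^2 steradians when d >> r, so nearby points get bright alpha and
// distant points get darker alpha.
float SplatAlpha(float pointRadius, float distance)
{
    float solidAngle = 3.14159265f * pointRadius * pointRadius
                     / (distance * distance + 1e-4f);
    return solidAngle < 1.0f ? solidAngle : 1.0f;
}
```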

We can then feed this data through a blurring filter. In the ISM paper they do a push-pull technique using mipmaps, which I've yet to implement. Currently, this is a separable blur weighted by the alpha channel. After blurring, I wanted to keep track of which pixels initially had valid alpha values, so a sign bit is used for this: pixels that contain data only thanks to blurring store negative alpha values. Below, left is RGB, middle is positive alpha, right is negative alpha:
Pass 1 - horizontal

Pass 2 - vertical

Pass 3 - diagonal

Pass 4 - other diagonal, and alpha mask generation

In the final blurring pass, the alpha channel is converted to an actual/traditional alpha value (based on artist-tweakable parameters), which will be used to blend with the regular lighting probes.
A typical two-axis separable blur creates distinctive box shapes, but repeating the process with a 45° rotation produces hexagonal patterns instead, which are much closer to circular :)
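
Here's a sketch of what one of those alpha-weighted passes might look like (the real thing is a GPU shader, and this skips the final-pass conversion to a traditional alpha value; the sign-bit convention follows the description above, but the exact negative-alpha value stored for blur-only pixels is my invention):

```cpp
// Sketch of one alpha-weighted blur pass over the octahedral splat image.
// Pixels that only receive data via blurring are marked with negative alpha.
#include <cmath>
#include <vector>

struct Texel { float r, g, b, a; };

struct Image
{
    int width = 0, height = 0;
    std::vector<Texel> texels;
    Texel& At(int x, int y) { return texels[y * width + x]; }
};

// Blur along (dx, dy): (1,0) horizontal, (0,1) vertical, (1,1)/(1,-1) diagonals.
Image BlurPass(Image& src, int dx, int dy, int radius)
{
    Image dst = src;
    for (int y = 0; y < src.height; ++y)
    for (int x = 0; x < src.width;  ++x)
    {
        float r = 0, g = 0, b = 0, wSum = 0;
        bool hadValidCentre = src.At(x, y).a > 0.0f;

        for (int i = -radius; i <= radius; ++i)
        {
            int sx = x + i * dx, sy = y + i * dy;
            if (sx < 0 || sy < 0 || sx >= src.width || sy >= src.height)
                continue;
            const Texel& t = src.At(sx, sy);
            float w = std::fabs(t.a); // weight by alpha, ignoring the sign bit
            r += t.r * w; g += t.g * w; b += t.b * w; wSum += w;
        }

        Texel& out = dst.At(x, y);
        if (wSum > 0.0f)
        {
            out.r = r / wSum; out.g = g / wSum; out.b = b / wSum;
            // Keep positive alpha where real data existed; store a negative
            // value where the pixel only has data thanks to the blur.
            out.a = hadValidCentre ? std::fabs(src.At(x, y).a)
                                   : -(wSum / float(2 * radius + 1));
        }
    }
    return dst;
}
```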
The result of this is a very approximate, blobby, kind-of-correct environment map, which can be used for image-based lighting. After this step we calculate a mip-chain using standard IBL practices for roughness-based lookups.

The big question is: how much does it cost? On my home PC with an NVIDIA GTX 780 (not a very modern GPU now!), the in-game profiler showed ~45µs per vehicle to create a probe, and ~215µs to copy the screen-space lighting data to the point cloud.
In-game profiler timings

And how does it look? When teams capture sections of our tracks, emissive elements show that team's color. Below you can see a before/after comparison, where the green team color is now actually reflected on our vehicles :D

Before

After

 

In those screens you can see the quick artist-tweaking GUI on the right side; its parameters are listed below, with a rough sketch of how they fit together after the list. I have to give a shout-out to Omar's Dear ImGui project, which we use to very quickly add these kinds of developer GUIs.
The artist tweaking GUI

  • Point Radius - the size of the virtual discs that the points are drawn as (used to compute the pre-blurring alpha value, dictating the blur radius).
  • Gather Radius - the random offset added to each point (in meters) before it's projected to the screen to try and collect some lighting information.
  • Depth Threshold - how close the projected point needs to be to the current Z-buffer value in order to be able to collect lighting info from that pixel.
  • Lerp Speed - a weight for the moving average.
  • Alpha range - After blurring, scales how softly alpha falls off at the edge of the blurred region.
  • Max Alpha - A global alpha multiplier for these dynamic probes - e.g. 0.75 means that 25% of the normal lighting probes will always be visible.
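
Purely as an illustration, those tweakables might be grouped into something like the struct below, with Max Alpha applied when the blurred probe is composited over the regular probes (names, default values, and the exact blend are my assumptions based only on the descriptions above):

```cpp
// Sketch of the tweakable parameters and the final probe blend.
// All names and default values are invented; the Alpha Range falloff
// shaping happens earlier (in the final blur pass) and isn't shown here.
struct DynamicProbeSettings
{
    float pointRadius    = 0.10f; // virtual disc size -> pre-blur alpha / blur radius
    float gatherRadius   = 0.50f; // random world-space offset (metres) before projection
    float depthThreshold = 0.20f; // max Z-buffer difference accepted when gathering
    float lerpSpeed      = 0.10f; // moving-average weight
    float alphaRange     = 1.00f; // softness of the alpha falloff after blurring
    float maxAlpha       = 0.75f; // 0.75 -> 25% of the normal probe always remains visible
};

struct Float3 { float x, y, z; };

static Float3 Lerp(const Float3& a, const Float3& b, float t)
{
    return { a.x + (b.x - a.x) * t, a.y + (b.y - a.y) * t, a.z + (b.z - a.z) * t };
}

// Per-texel blend of the dynamic probe over the regular (static) lighting probe.
// 'dynamicAlpha' is the mask produced by the final blur pass.
Float3 BlendProbes(const Float3& staticProbe, const Float3& dynamicProbe,
                   float dynamicAlpha, const DynamicProbeSettings& s)
{
    float a = dynamicAlpha * s.maxAlpha; // Max Alpha caps the dynamic contribution
    return Lerp(staticProbe, dynamicProbe, a);
}
```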

 





Recommended Comments

On 8/10/2018 at 8:40 PM, Mussi said:

I love how much this technique fits your game! You can get so creative with non-general solutions :).

Hodgman replied:

Yeah, for something like a first person shooter it wouldn't fit as well, because the point clouds would probably exhibit a lot of light leaking in typical indoor scenes. Our track surfaces are typically full of holes anyway, so a bit of light leaking is actually desirable. I guess you could fight against leaking by aggressively using the push-pull method on the depth buffer, as in the ISM paper. Maybe generate depth maps using the ISM technique first (as a Z-pre-pass) and then generate env-maps using only points that pass the depth test...

I'll try to post another update soon as I refine the technique a bit more, and capture a video of it in motion. As the car races past each point, a blobby reflection passes over the car, which actually really adds to the feeling of speed! :D

Seeing that it's completely dynamic, another enhancement I want to look into is attaching a point cloud to each vehicle as well, so that the vehicles can reflect off each other.


Great article and really inspiring.

I too have been reading some old papers, and I think there are interesting ideas that sometimes get forgotten, or not revisited with modern hardware.

Good stuff.

