Imperfect Environment Maps

Hodgman


In 22, our lighting environment is dominated by sunlight, but there are also many small emissive elements everywhere.

A typical scene from '22'

What we want is for all these bright sunlit metal panels and the many emissive surfaces to be reflected off the vehicles. Because this is a high-speed racing game, we need a technique with minimal performance impact, and at the same time we would like to avoid large baked data sets in order to support easy track editing within the game.

This week we got around to trying a technique presented 10 years ago for generating large numbers of shadow maps extremely quickly: Imperfect Shadow Maps. In 2008, this technique was a bit ahead of its time -- as indicated by the performance data being measured at a 640x480 image resolution at 15 frames per second! :)
ISM is specifically a technique for generating shadows, designed for use in conjunction with a different lighting technique -- Virtual Point Lights.

In 22, we aren't using Virtual Point Lights or Imperfect Shadow Maps! However, in the paper they mention that ISMs can be used to add shadows to environment map lighting... By staying up too late and misreading this section, you could get the idea that you could use the ISM point-cloud rendering ideas to actually generate large numbers of approximate environment maps at low cost... so that's what we implemented :D

Our gameplay code already had access to a point cloud of the track geometry. This data set was generated by simply extracting the vertex positions from the visual mesh of the track - a portion is visualized below:
A portion of the track drawn as a point cloud
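
As a rough illustration, the extraction step could look like the sketch below (C++; the Float3 and SurfelPoint types are invented for these examples and aren't our engine's actual data structures):

#include <vector>

struct Float3 { float x, y, z; };

// One point of the cloud: a track vertex plus the HDR lighting value
// that will be accumulated for it at runtime.
struct SurfelPoint {
    Float3 position;
    Float3 color = {0.0f, 0.0f, 0.0f};
};

std::vector<SurfelPoint> BuildTrackPointCloud(const std::vector<Float3>& trackVertices)
{
    std::vector<SurfelPoint> cloud;
    cloud.reserve(trackVertices.size());
    for (const Float3& v : trackVertices)
        cloud.push_back({v}); // position only; lighting starts black
    return cloud;
}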

Next, we somehow need to associate lighting values with each of these points... Typically for static environments you would use a light-baking system for this, which can spend a lot of time path-tracing the scene (or similar) before saving the results into the point cloud. To keep everything dynamic, we've instead taken inspiration from screen-space reflections: with SSR, the images that you're rendering anyway are re-used to provide data for reflection rays. We are re-using those same images to compute lighting values for the points in our point cloud.

After the HDR lighting is calculated, the point cloud is frustum culled and each point is projected onto the screen (after a small random offset is applied to it). If the projected point is close in depth to the stored Z-buffer value at that screen pixel, then the lighting value at that pixel is transferred to the point cloud using a moving average. The random offsets and the moving average allow many different pixels near the point to contribute to its color.
An example of the opaque HDR lighting
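
Here's a sketch of that gather step, written as a CPU loop for clarity (in the game it would be a GPU pass); the Camera and image types, and the RandomInSphere helper, are stand-ins invented for this example:

#include <cmath>
#include <vector>

// Stand-in interfaces for this sketch (re-using Float3/SurfelPoint from above).
struct Camera {
    // Projects a world position to pixel coordinates and view-space depth;
    // returns false if the point is outside the view frustum.
    bool ProjectToScreen(const Float3& worldPos, int* x, int* y, float* depth) const;
};
struct DepthImage    { float  At(int x, int y) const; };
struct LightingImage { Float3 At(int x, int y) const; };

Float3 RandomInSphere(float radius); // assumed helper: random offset within a sphere

void GatherLightingIntoCloud(std::vector<SurfelPoint>& cloud, const Camera& cam,
                             const LightingImage& hdrLighting, const DepthImage& zBuffer,
                             float gatherRadius, float depthThreshold, float lerpSpeed)
{
    for (SurfelPoint& p : cloud)
    {
        // Jitter the point so that, over many frames, many different
        // nearby pixels get a chance to contribute to it.
        Float3 o = RandomInSphere(gatherRadius);
        Float3 jittered = { p.position.x + o.x, p.position.y + o.y, p.position.z + o.z };

        int x, y;
        float depth;
        if (!cam.ProjectToScreen(jittered, &x, &y, &depth))
            continue; // frustum culled

        // Only transfer lighting if the point is actually visible at this pixel.
        if (std::fabs(depth - zBuffer.At(x, y)) > depthThreshold)
            continue;

        // Moving average: blends this pixel's lighting into the point and
        // lets the stored value track changing lighting conditions.
        Float3 lit = hdrLighting.At(x, y);
        p.color.x += (lit.x - p.color.x) * lerpSpeed;
        p.color.y += (lit.y - p.color.y) * lerpSpeed;
        p.color.z += (lit.z - p.color.z) * lerpSpeed;
    }
}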

Over many frames, the point cloud is eventually colored in. If the lighting conditions change, then the point cloud will update as long as it appears on screen. This works well for a racing game, as the camera is typically looking ahead at sections of track that the car is about to drive into, allowing the point cloud for those sections to be updated with fresh data right before the car arrives.

Now, if we take the points near a particular vehicle, project them onto a sphere, and then unwrap that sphere to 2D UV coordinates, we get an image like the one below. (At the moment we are using a world-space octahedral unwrapping scheme, though sphere maps, hemispheres, etc. are also applicable; using view-space instead of world-space could also help hide seams.) Left is the RGB components; right is alpha, which encodes the solid angle that each point should have covered if we'd actually drawn the points as discs/spheres instead of as points. Nearby points have bright alpha, while distant points have darker alpha.
[Images: probe splat results -- RGB (left) and alpha (right)]
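
The splat itself could be sketched as below. The octahedral encode is the standard mapping; the ProbeImage interface and the exact solid-angle normalization are assumptions made for this example:

#include <algorithm>
#include <cmath>
#include <vector>

// Standard octahedral mapping: unit direction -> UV in [0,1]^2.
void OctahedralEncode(const Float3& d, float* u, float* v) // d must be normalized
{
    float sum = std::fabs(d.x) + std::fabs(d.y) + std::fabs(d.z);
    float ox = d.x / sum, oy = d.y / sum;
    if (d.z < 0.0f) // fold the lower hemisphere over the upper one
    {
        float tx = ox, ty = oy;
        ox = (1.0f - std::fabs(ty)) * (tx >= 0.0f ? 1.0f : -1.0f);
        oy = (1.0f - std::fabs(tx)) * (ty >= 0.0f ? 1.0f : -1.0f);
    }
    *u = ox * 0.5f + 0.5f;
    *v = oy * 0.5f + 0.5f;
}

struct ProbeImage { void Splat(float u, float v, const Float3& rgb, float a); }; // assumed

void SplatCloudToProbe(const std::vector<SurfelPoint>& cloud, const Float3& probeCenter,
                       float pointRadius, ProbeImage& probe)
{
    const float kPi = 3.14159265f;
    for (const SurfelPoint& p : cloud)
    {
        Float3 to = { p.position.x - probeCenter.x,
                      p.position.y - probeCenter.y,
                      p.position.z - probeCenter.z };
        float dist2 = to.x * to.x + to.y * to.y + to.z * to.z;
        if (dist2 < 1e-4f)
            continue; // point is on top of the probe
        float invDist = 1.0f / std::sqrt(dist2);
        Float3 dir = { to.x * invDist, to.y * invDist, to.z * invDist };

        float u, v;
        OctahedralEncode(dir, &u, &v);

        // Alpha approximates the solid angle of a disc of pointRadius at this
        // distance: nearby points get bright alpha, distant points darker alpha.
        float alpha = std::min(1.0f, kPi * pointRadius * pointRadius / dist2);
        probe.Splat(u, v, p.color, alpha);
    }
}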

We can then feed this data through a blurring filter. In the ISM paper they do a push-pull technique using mipmaps, which I've yet to implement; currently this is a separable blur weighted by the alpha channel. After blurring, I wanted to keep track of which pixels initially had valid alpha values, so a sign bit is used for this: pixels that contain data only thanks to blurring store negative alpha values. Below, left is RGB, middle is positive alpha, right is negative alpha:
[Images: Pass 1 - horizontal]

[Images: Pass 2 - vertical]

[Images: Pass 3 - diagonal]

[Images: Pass 4 - other diagonal, and alpha mask generation]

In the final blurring pass, the alpha channel is converted to an actual/traditional alpha value (based on artist-tweakable parameters), which will be used to blend with the regular lighting probes.
A typical two-axis separable blur creates distinctive box shapes, but repeating the process with a 45° rotation produces hexagonal patterns instead, which are much closer to circular :)
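
For illustration, one pass of that alpha-weighted blur could look like the sketch below; the same 1D kernel is run along (1,0), (0,1), (1,1) and (1,-1) across the four passes. The image layout and the exact alpha bookkeeping are assumptions of this example:

#include <algorithm>
#include <cmath>
#include <vector>

struct Pixel { Float3 rgb; float a; };

struct BlurImage // assumed: simple float RGBA image with clamped fetches
{
    int width = 0, height = 0;
    std::vector<Pixel> px;
    const Pixel& At(int x, int y) const
    {
        x = std::max(0, std::min(x, width - 1));
        y = std::max(0, std::min(y, height - 1));
        return px[y * width + x];
    }
};

// One 1D pass of the separable blur, weighted by |alpha|. Direction (dx,dy)
// is (1,0), (0,1), (1,1) or (1,-1) depending on the pass. dst is assumed to
// be allocated at the same size as src; passes ping-pong between two images.
void BlurPass(const BlurImage& src, BlurImage& dst, int dx, int dy, int radius)
{
    for (int y = 0; y < src.height; ++y)
    for (int x = 0; x < src.width;  ++x)
    {
        Float3 rgb = {0.0f, 0.0f, 0.0f};
        float aSum = 0.0f;
        for (int t = -radius; t <= radius; ++t)
        {
            const Pixel& s = src.At(x + t * dx, y + t * dy);
            float w = std::fabs(s.a); // weight by |alpha|; the sign is metadata
            rgb.x += s.rgb.x * w;
            rgb.y += s.rgb.y * w;
            rgb.z += s.rgb.z * w;
            aSum += w;
        }

        Pixel out;
        out.rgb = aSum > 0.0f ? Float3{ rgb.x / aSum, rgb.y / aSum, rgb.z / aSum }
                              : Float3{ 0.0f, 0.0f, 0.0f };

        // Sign bit: positive alpha if this pixel had data before blurring,
        // negative if it only received data from its neighbours.
        bool hadData = src.At(x, y).a > 0.0f;
        float blurredA = aSum / float(2 * radius + 1);
        out.a = hadData ? blurredA : -blurredA;

        dst.px[y * dst.width + x] = out;
    }
}
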
The result of this is a very approximate, blobby, kind-of-correct environment map, which can be used for image-based lighting. After this step we calculate a mip-chain using standard IBL practices for roughness-based lookups.
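
For the lookup, a common IBL convention (not necessarily what our engine does) is a simple linear mapping from roughness to mip level:

// Rough surfaces sample blurrier mips of the prefiltered probe:
// roughness = 0 hits the sharp base level, roughness = 1 the blurriest mip.
float RoughnessToMip(float roughness, int mipCount)
{
    return roughness * float(mipCount - 1);
}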

The big question, though: how much does it cost? On my home PC with an NVIDIA GTX 780 (not a very modern GPU now!), the in-game profiler showed ~45µs per vehicle to create a probe, and ~215µs to copy the screen-space lighting data to the point cloud.
[Image: in-game profiler timings]

And how does it look? When teams capture sections of our tracks, emissive elements show that team's color. Below you can see a before/after comparison, where the green team color is now actually reflected on our vehicles :D

[Image: before -- dynamic probes off]

[Image: after -- dynamic probes on]

In those screens you can see the quick artist-tweaking GUI on the right side. I have to give a shout-out to Omar's Dear ImGui project, which we use to very quickly add these kinds of developer GUIs. The parameters are described below, with a small summary sketch after the list.
[Image: artist tweaking GUI]

  • Point Radius - the size of the virtual discs that the points are drawn as (used to compute the pre-blurring alpha value, dictating the blur radius).
  • Gather Radius - the random offset (in meters) added to each point before it's projected to the screen to try to collect some lighting information.
  • Depth Threshold - how close the projected point needs to be to the current Z-buffer value in order to collect lighting info from that pixel.
  • Lerp Speed - a weight for the moving average.
  • Alpha Range - after blurring, scales how softly alpha falls off at the edge of the blurred region.
  • Max Alpha - a global alpha multiplier for these dynamic probes - e.g. 0.75 means that 25% of the normal lighting probes will always be visible.
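
Gathered into one struct, those tweakables map onto the earlier sketches roughly as follows (the default values here are invented for illustration, not our shipping settings):

struct ImperfectProbeParams
{
    float pointRadius    = 0.5f;  // virtual disc size -> splat alpha / blur radius
    float gatherRadius   = 0.25f; // random offset (meters) for the screen-space gather
    float depthThreshold = 0.1f;  // max Z-buffer mismatch (meters) to accept a pixel
    float lerpSpeed      = 0.05f; // moving-average weight per accepted sample
    float alphaRange     = 1.0f;  // softness of the post-blur alpha falloff
    float maxAlpha       = 0.75f; // cap, so some of the static probes always show through
};

// Final per-pixel blend against the regular probes (pseudocode):
//   a      = saturate(blurredAlpha * params.alphaRange) * params.maxAlpha;
//   output = lerp(staticProbe, dynamicProbe, a);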

 



Recommended Comments

Quote:

Over many frames, the point cloud is eventually colored in. If the lighting conditions change, then the point cloud will update as long as it appears on screen. This works well for a racing game, as the camera is typically looking ahead at sections of track that the car is about to drive into, allowing the point cloud for those sections to be updated with fresh data right before the car arrives.

I love how much this technique fits your game! You can get so creative with non-general solutions :).

On 8/10/2018 at 8:40 PM, Mussi said:

I love how much this technique fits your game! You can get so creative with non-general solutions :).

Yeah, for something like a first-person shooter it wouldn't fit as well, because the point clouds would probably exhibit a lot of light leaking in typical indoor scenes. Our track surfaces are typically full of holes anyway, so a bit of light leaking is actually desirable. I guess you could fight against leaking by aggressively using the push-pull method on the depth buffer, as in the ISM paper -- maybe generate depth maps using the ISM technique first (as a Z-pre-pass) and then generate env-maps using only points that pass the depth test...

I'll try to post another update soon, as I refine the technique a bit more and capture a video of it in motion. As the car races past each point, a blobby reflection passes over the car, which really adds to the feeling of speed! :D

Seeing that it's completely dynamic, another enhancement I want to look into is attaching a point cloud to each vehicle as well, so that the vehicles can reflect off each other.


Great article and really inspiring.

I too have been reading some old papers, and I think there are interesting ideas that sometimes get forgotten, or not revisited with modern hardware.

Good stuff.

