Overview

As many of you know, lighting a wide environment in a mobile game can be quite a challenge. Static approaches, like lightmaps, require far too much memory, and dynamic techniques require too much CPU and GPU power. I will try to describe how I approached the problem in my game Kepler 22 (http://www.nuoxygen.com/public/kepler22-fps.html).

Kepler 22 is an open world game with indoor/outdoor environments, and it poses some unique challenges that are seldom faced in mobile game development:

  • A wide environment (16 square km), impossible to lightmap at a reasonable resolution.
  • Day/night cycles requiring dynamic lighting and shadows.
  • Shadows cast by everything from ultra-big objects, like buildings, to very small ones, like a gun barrel.
  • Indoor omnidirectional lights (you can see up to 7 in the same scene).
  • A first person shooter camera that can get very close to every object and potentially reveal the pixel grain of shadow maps.

All of this is usually solved in any high-end PC game with a cascade of high-resolution shadow maps for the outdoors and shadow cube maps for the indoor lights, all seasoned with PCF or some other method to smooth the pixels. Unfortunately, in this case, I had further constraints dictated by the platform.

The Constraints

  • Avoid multiple render targets.
  • Save shader cycles by drawing all the solid geometry first.
  • Try to avoid blend modes (i.e. avoid multipass techniques).

The last constraint is required by the PowerVR architecture, and I want to explain it better. It relates to a technique that PowerVR uses to save shader cycles. I can't ensure the description is 100% accurate, but this is roughly how it works: if you are not using a blend mode, PowerVR GPUs write the frame buffer without running the fragment shader; they just mark a region of the screen with a triangle ID. When the scene is complete (or when you first draw something using a blend mode), the GPU runs the fragment shaders just once per pixel, selecting the shader and its inputs based on the triangle ID. This process serves the purpose of running the shader exactly once per pixel regardless of the scene's depth complexity (overdraw level), the final goal being to allow for fairly more complex shaders. So the best way to use PowerVR GPUs (as any good iOS game developer should know) is:

  • Draw all the opaque stuff first. Here you can afford quite complex shaders (read: you can use per-pixel lighting).
  • Then draw all the alpha-tested stuff (part of the shader runs to determine the test result).
  • Draw your alpha-blended geometry last. As soon as you start blending, all the pending shaders are evaluated once per pixel, and after this point the GPU operates in a standard way, i.e. you start trading pixel shader clocks for depth complexity. (This suggests keeping simple shaders on particles and translucents!)
  • Avoid render target switches as much as possible (they are expensive!).
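To make the ordering concrete, here is a minimal sketch of a frame organized around these rules. The drawing functions are hypothetical placeholders standing in for an engine's own routines, not code from Kepler 22:

    #include <GLES2/gl2.h>

    // Hypothetical engine hooks, standing in for the real drawing routines.
    void DrawOpaqueObjects();
    void DrawAlphaTestedObjects();
    void DrawTranslucentsSortedFarToNear();

    void RenderFrame()
    {
        // 1. Opaque geometry first: the tile-based GPU defers shading, so
        //    even expensive per-pixel lighting runs only once per pixel.
        glDisable(GL_BLEND);
        DrawOpaqueObjects();

        // 2. Alpha-tested geometry: part of the shader must run to resolve
        //    the discard test, so it is somewhat more expensive.
        DrawAlphaTestedObjects();

        // 3. Blended geometry last: from the first blended draw onward,
        //    shading cost scales with overdraw, so keep these shaders cheap.
        glEnable(GL_BLEND);
        glBlendFunc(GL_SRC_ALPHA, GL_ONE_MINUS_SRC_ALPHA);
        DrawTranslucentsSortedFarToNear();
    }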
I decided to go with shadow volumes because:

  • Unlike lightmaps, they achieve the same quality on static and dynamic objects and pose no additional memory requirements (at least none directly connected to the world size).
  • Unlike shadow maps, they handle near and faraway objects nicely, at any possible size scale.
  • Omnidirectional and directional lights cost the same (shadow maps would require cube maps and multiple depth targets).

Again, this wasn't enough, so I made some trade-offs to unload the GPU:

  • Ignore light coming from other rooms: in this implementation light doesn't propagate through a door to the neighboring rooms.
  • Allow just one per-pixel shadow-casting light per room (the strongest one). The other lights are evaluated per vertex.

These trade-offs allowed me to draw all the lights in a single pass. The traditional shadow volumes render loop:

1. Draw the scene with ambient light only (this also initializes the depth buffer).
2. Enable alpha blending (additive mode).
3. For every light in the scene:
  • clear the stencil;
  • compute and draw the shadow volumes of the light;
  • render the lighting contribution of the light (using the stencil test to leave the shadowed parts dark).
4. Render translucent surfaces, enabling the appropriate alpha blending mode.
5. Apply the post-processing stuff.

then became:

1. Draw all the opaque geometry. For every object, use the appropriate light set. In a single pass the shader takes care of ambient lighting, direct lighting, and fog.
2. Draw all the shadow volumes. Every object casts ONE shadow, from the light which is lighting it. Shadow volumes of walls are not drawn - they would need far too much fill rate.
3. Draw a translucent mask to darken the parts of the screen where shadows are marked in the stencil.
4. Draw the alpha-tested/blended stuff, sorted far to near, using a cheap shader.
5. Post-process.

The new method has many advantages: it requires a single render pass (much lower fill rate) and keeps the dreaded alpha blending off until the very last stages of the process. Unfortunately, it also comes with a good share of drawbacks.
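As an illustration of step 3, here is a rough sketch of how such a darkening pass could be set up in OpenGL ES. The stencil layout (a 5-bit shadow counter in the low bits, preset to a midway value of 16, matching the _stencil_offset mentioned later) and the DrawFullScreenQuad helper are assumptions made for this example; the game additionally modulates the mask by the shadow intensity stored in the alpha channel, which is omitted here:

    #include <GLES2/gl2.h>

    // Hypothetical helper: draws a screen-covering quad of the given color
    // with a trivial shader.
    void DrawFullScreenQuad(float r, float g, float b, float a);

    void DrawShadowMask()
    {
        // Pass only where the 5-bit shadow-volume counter in the low stencil
        // bits was left different from its assumed midway value, i.e. the
        // pixel is inside at least one shadow volume.
        glEnable(GL_STENCIL_TEST);
        glStencilFunc(GL_NOTEQUAL, 0x10, 0x1F);
        glStencilOp(GL_KEEP, GL_KEEP, GL_KEEP);

        // The first blended draw of the frame: multiply the scene color by a
        // constant to fake the loss of the direct lighting contribution.
        glEnable(GL_BLEND);
        glBlendFunc(GL_ZERO, GL_SRC_COLOR);   // dst = dst * src
        DrawFullScreenQuad(0.5f, 0.5f, 0.5f, 1.0f);

        glDisable(GL_BLEND);
        glDisable(GL_STENCIL_TEST);
    }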
New Challenges

  • Preventing the shadow volumes of one light from leaking into the nearby rooms and interfering with the other lights' shadow volumes. The cause of this artifact is that we don't compute a stencil mask for every single light; we use a single stencil mask to mark all the shadows from all the lights at once. The problem could theoretically be solved by clipping the shadow volumes against the room volume, but this would be pretty expensive, especially for shader-computed shadow volumes. Adding the shadows of the walls would be even worse: the whole world would end up shadowed because, while the traditional algorithm lights an object with the sum of all the non-occluded lights, the simplified one shadows a pixel if at least one shadow (from any light) falls on it.
  • Modulating the shadows so that they become fainter as the receiver gets farther from the light (it receives less direct and more ambient light). When the receiver is so distant from the light that the direct contribution in the shader is null and only the ambient light is left, the object should cast no shadow at all.
  • In the presence of fog, modulating the shadows so that they become fainter as the receiver gets farther from the observer, so that when an object fades out in the fog its shadow fades out with it.
  • Preventing the shadows from highlighting the objects' smoothed edges with self-shadowing.
  • Handling objects that span rooms (especially the doors themselves).

I figured out the following solutions.

The Light ID Map

  • Is a screen-space map: it tells which light is affecting each pixel.
  • Solves: shadow volumes interfering with other rooms' shadow volumes.
  • Stored in 3 of the 8 bits of the stencil buffer.
  • Extremely cheap: drawn simultaneously with the solid geometry, it requires no extra vertex or pixel shader executions.

The Shadow Intensity Map

  • Is a screen-space map: it tells how much each pixel is affected by shadow (0 = fully affected, 255 = unaffected).
  • Solves: modulating the shadows; preventing the shadows from revealing the casters' edges.
  • Stored in the alpha channel of the color buffer.
  • Extremely cheap: drawn simultaneously with the solid geometry, it requires no extra vertex or pixel shader executions.

How the Light ID Map is implemented

Step 1 - To be done during the first pass (solid geometry drawing), before drawing an object: the CPU must configure the light ID of the light which is lighting the object about to be drawn. Remember: every object is lit by only a single per-pixel shadow-casting light, so there is a 1:1 relation between objects and (per-pixel) lights. The following code forces the GPU to mark the upper stencil bits of the object's pixels with the light ID.

    void GLESAdapter::WriteLightId(int light_id)
    {
        glEnable(GL_STENCIL_TEST);
        // Always passes; the reference has the 3 msbits set to the light id
        // and the 5 lsbits set to a midway value.
        // (Note: you can't allow the counter to wrap, else it would corrupt
        // the ID, so _stencil_offset must be half the counter range.)
        // The arguments and the two calls below are an assumed reconstruction
        // based on the comment above.
        glStencilFunc(GL_ALWAYS, (light_id << 5) | _stencil_offset, 0xFF);
        // Replace the stencil wherever the object's pixels pass the depth
        // test, tagging the visible pixels with their light's ID.
        glStencilOp(GL_KEEP, GL_KEEP, GL_REPLACE);
        glStencilMask(0xFF);
    }

Article Update Log

22 September 2016: Initial release