About Degi
  1. The problem of shadow leaking is not connected to whether the light is occluded or subtracted (actually multiplied) in a second pass. It is instead due to the fact that the engine doesn't take the walls into account when computing the mask. This is done because it would require a very high fill rate and would cause a noticeable performance decrease. Additionally, even if the engine drew the walls' shadow volumes, doing this for all the lights at once would result in a completely occluded stencil buffer (every pixel is occluded with respect to some light). You should instead alternate, for each light in turn, shadow volume computation and rendering of that light's contribution; this in turn would require separate targets or additive blending, which are a no-no for PowerVR GPUs (because it prevents them from applying all the optimizations that make them ultra-fast compared to the limited energy they use). Of course occluding light is the right way to go and should be followed when the platform is powerful enough. Especially if more than one light affects the same objects, this would make a difference. (I doubt many players would notice the difference; a developer surely would.)
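The "all the lights at once" problem above can be illustrated with a tiny sketch (a hypothetical illustration; the names and numbers are mine, not engine code): with a single combined mask, a pixel occluded with respect to only one light loses the other light's contribution too.

```cpp
#include <cassert>

// Per-light stencil passes: each light's contribution is masked by its
// own shadow test, then accumulated additively.
float ShadePerLight(float lightA, float lightB,
                    bool occludedA, bool occludedB) {
    float c = 0.0f;
    if (!occludedA) c += lightA;   // light A pass
    if (!occludedB) c += lightB;   // light B pass
    return c;
}

// Single combined mask: the pixel is darkened if ANY light marks it
// as shadowed, so light A's contribution is lost as well.
float ShadeCombinedMask(float lightA, float lightB,
                        bool occludedA, bool occludedB) {
    float c = lightA + lightB;
    bool anyShadow = occludedA || occludedB;
    return anyShadow ? 0.0f : c;   // the whole sum is masked out
}
```

With walls casting volumes from every light, nearly every pixel ends up occluded with respect to *some* light, so the combined-mask version darkens almost everything.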
  2. I started developing the engine in 2003, when it was pretty difficult to access the technology in any other way. I did it for fun, and after a few years I realized the engine was mature enough to be used to develop some games (it was quite powerful at the time, having a dynamic lighting system and HDR). So I developed a few games (among them 'Monster Trouble', game of the week on the Apple store in 2011, 'Ikaro Racing' and 'Kepler 22'). Unless you have very special requirements, nowadays developing your own engine is probably not worth the pain, and off-the-shelf solutions should be preferred. That said, I think if you placed a 10-square-mile terrain with 10 buildings, many outposts, hundreds of rooms and lights (all of them casting shadows) and 500 enemies in a single Unity level, you wouldn't get a playable frame rate on mobiles; but I can't know for sure 'til I try - I may be wrong. Based on my experience, the advantage of using a standard engine is that whatever feature you need, you don't need to develop it by yourself: you just need to wait. On the other hand, the con of using a standard engine is that, if you need a new feature, you have to wait :-)
  3. Hi, I don't know if I understand the problem correctly. Isn't it a trade-off between quality and speed? I think you have probably already faced many others. I personally have never done tyre trails, but I suppose I would start with the normal-mapped version, even with a temporary map, just to see how much it impacts the frame rate. I think you will need alpha blending to mix two overlapping tracks, the upper track being opaque/overriding the lower one only where the second car has 'remodeled' the sand shape passing over it, and fading at the border of such parts to avoid abrupt changes in the normal direction and color. Also, you will need to render the tracks in the order in which the cars generated them. If you have Far Cry 3, you can take a look at its map editor (it comes with the game). It has a nice track-editing utility and you can inspect the result closely/experiment to get some inspiration.
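The fade-at-the-border blend suggested above could be sketched like this (a hypothetical sketch; the struct and function names are mine): the newer track is laid over the older one with an alpha that is 1 where the tyre reshaped the sand and falls to 0 at the decal border, and the blended normal is renormalised.

```cpp
#include <cassert>
#include <cmath>

struct Texel { float nx, ny, nz; };  // tangent-space normal

// Blend a newer track texel over an older one with a standard "over"
// blend, then renormalise so the result is still a unit normal.
Texel BlendTrack(const Texel& older, const Texel& newer, float newAlpha) {
    Texel out{
        older.nx * (1.0f - newAlpha) + newer.nx * newAlpha,
        older.ny * (1.0f - newAlpha) + newer.ny * newAlpha,
        older.nz * (1.0f - newAlpha) + newer.nz * newAlpha,
    };
    float len = std::sqrt(out.nx * out.nx + out.ny * out.ny + out.nz * out.nz);
    out.nx /= len; out.ny /= len; out.nz /= len;
    return out;
}
```

Rendering the tracks oldest-first with this blend gives the "newest track wins in the middle, fades at the edge" behaviour described in the post.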
  4. Overview

As many of you know, lighting a wide environment in a mobile game can be quite a challenge. Static approaches, like lightmaps, require far too much memory, and dynamic techniques too much CPU and GPU power. I will try to describe how I approached the problem in my game Kepler 22 (http://www.nuoxygen.com/public/kepler22-fps.html). Kepler 22 is an open-world game with indoor/outdoor environments and poses some unique challenges that are seldom faced in mobile game development:

- A wide environment (16 square km), impossible to light-map at a reasonable resolution.
- Day/night cycles requiring dynamic lighting/shadows.
- Shadows varying from ultra-big objects, like buildings, to very small ones, like a gun barrel.
- Indoor omnidirectional lights (you can see up to 7 in the same scene).
- A first-person-shooter camera that can get very near to every object and potentially reveal the pixel grain of shadow maps.

All of this is usually solved in any high-end PC game with a cascade of high-resolution shadow maps for the outdoors and shadow cube maps for the indoor lights, all seasoned with PCF or some other method to smooth the pixels. Unfortunately, in this case, I had further constraints dictated by the platform.

The Constraints

- Avoid multiple render targets.
- Save shader cycles by drawing all the solid geometry first.
- Try to avoid blend modes (i.e. avoid multipass techniques).

The last constraint is required by the PowerVR architecture and I want to explain it better. It relates to a technique that PowerVR uses to save shader cycles. I can't ensure the description is 100% accurate, but this is roughly how it works: if you are not using a blend mode, PowerVR GPUs write the frame buffer without running the fragment shader; they just mark a region of the screen with a triangle id.
Once the scene is complete (or when you first draw something using a blend mode), the GPU then runs the fragment shaders just once per pixel, selecting the shader and its inputs based on the triangle id. This process serves the purpose of running the shader exactly once per pixel regardless of the scene's depth complexity (overdraw level), the final goal being to allow for fairly more complex shaders. So the best way to use the PowerVR GPUs (as any good iOS game developer should know) is:

- Draw all the opaque stuff first. Here you can use quite complex shaders (read: you can use per-pixel lighting).
- Then draw all the alpha-tested stuff (part of the shader runs to determine the test result).
- Draw your alpha-blended geometry last. As soon as you start doing that, all the pending shaders are evaluated once per pixel, and from then on the GPU operates in a standard way, i.e. after this point you start trading pixel shader clocks for depth complexity. (This suggests keeping the shaders on particles and translucents simple!)
- Avoid render target switches as much as possible (expensive!).

I decided to go with shadow volumes because:

- Unlike light maps, they achieve the same quality on static and dynamic objects and pose no additional memory requirements (at least not in direct connection with the world size).
- Unlike shadow maps, they handle near and faraway objects nicely at any possible size scale.
- Omnidirectional and directional lights cost the same (shadow maps would require cube maps and multiple depth targets).

Again, this wasn't enough, so I made some trade-offs to unload the GPU:

- Ignore light coming from other rooms: in this implementation, light doesn't propagate through the door to the neighboring rooms.
- Just one per-pixel shadow-casting light per room (the strongest one). Other lights are evaluated per vertex.

The trade-offs allowed me to draw all the lights in a single pass.
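The PowerVR-friendly submission order described above can be sketched as a sort over the frame's draw calls (a hypothetical sketch; the types and names are mine, not from the engine): all opaque first, then alpha-tested, then alpha-blended last, with the blended calls ordered far to near.

```cpp
#include <algorithm>
#include <cassert>
#include <vector>

enum Pass { Opaque = 0, AlphaTested = 1, AlphaBlended = 2 };

struct DrawCall {
    Pass  pass;
    float depth;   // distance from the camera
};

// Order draw calls so the deferred-shading hardware path stays active
// for as long as possible: blending only at the very end of the frame.
void SortForPowerVR(std::vector<DrawCall>& calls) {
    std::stable_sort(calls.begin(), calls.end(),
        [](const DrawCall& a, const DrawCall& b) {
            if (a.pass != b.pass) return a.pass < b.pass;
            if (a.pass == AlphaBlended) return a.depth > b.depth; // far to near
            return false;  // order within opaque/tested left to the engine
        });
}
```

A real engine would sort opaque calls by state or material as a secondary key; the point here is only the pass ordering.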
The traditional shadow volumes render loop:

1. Draw the scene with ambient light (this also initializes the depth buffer).
2. Enable alpha blend (additive mode).
3. For every light in the scene: clear the stencil; compute and draw the shadow volumes of the light; render the lighting contribution from the light (using the stencil test to leave shadowed parts dark).
4. Render translucent surfaces, enabling the appropriate alpha blending mode.
5. Apply post-processing stuff.

Then became:

1. Draw all the opaque geometry. For every object use the appropriate light set. In a single pass the shader takes care of ambient lighting, direct lighting and fog.
2. Draw all the shadow volumes. Every object casts ONE shadow, from the light which is lighting it. Shadow volumes of walls are not drawn - they would need by far too much fill rate.
3. Draw a translucent mask to darken the parts of the screen where the shadows are marked in the stencil.
4. Draw alpha-tested/blended stuff sorted far to near, using a cheap shader.
5. Post-process.

This new method has many advantages, because it requires a single render pass (much lower fill rate) and keeps the dreaded alpha blend off until the very last stages of the process. Unfortunately, it also comes with a good share of drawbacks.

New Challenges

Preventing the shadow volumes of one light from leaking into the nearby rooms and interfering with the other lights' shadow volumes. The cause of this artifact is that we don't compute a stencil mask for every single light; we use a single stencil mask to mark all the shadows from all the lights at once. The problem could theoretically be solved by clipping the shadow volumes against the room volume, but this would be pretty expensive, especially for shader-computed shadow volumes.
Adding the shadows of the walls would be even worse: the whole world would be shadowed because, while the traditional algorithm lights an object with the sum of all the non-occluded lights, the simplified one shadows a pixel if at least one shadow (from any light) falls on it.

Other challenges:

- Modulating the shadows so that they are fainter if the receiver is far from the light (it receives less direct and more ambient light). When the receiver is so distant from the light that the direct contribution in the shader is null and only the ambient light is left, the object should cast no shadow at all.
- In the presence of fog, modulating the shadows so that they are fainter if the receiver is far from the observer, and so that when an object fades out in the fog its shadow fades out with it.
- Preventing the shadows from highlighting the objects' smoothed edges with self-shadowing.
- Handling objects that span rooms (especially the doors themselves).

I figured out these solutions.

The Light ID Map

- Is a screen-space map: it tells which light is affecting the pixel.
- Solves: shadow volumes interfering with other rooms' shadow volumes.
- Stored in 3 of the 8 bits of the stencil buffer.
- Extremely cheap: drawn simultaneously with the solid geometry, it doesn't require extra vertex or pixel shader executions.

The Shadow Intensity Map

- Is a screen-space map: it tells how much a pixel is affected by the shadow (0 = fully affected, 255 = unaffected).
- Solves: modulating the shadow; preventing the shadow from revealing the casters' edges.
- Stored in the alpha channel of the color buffer.
- Extremely cheap: drawn simultaneously with the solid geometry, it doesn't require extra vertex or pixel shader executions.

How the Light ID Map is implemented

Step 1 - To be done during the first pass (solid geometry drawing), before drawing an object: the CPU must configure the light id of the light which is lighting the object to be drawn.
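The shadow modulation described above (fade with distance from the light, fade with fog) can be sketched as a single scalar written to the intensity map. This is a hypothetical sketch: the linear falloffs and parameter names are my assumptions, not the engine's actual curves.

```cpp
#include <algorithm>
#include <cassert>

// Returns the shadow-intensity value for a receiver pixel:
// 0 = fully affected by the shadow, 1 = unaffected
// (the engine stores this 0..255 in the alpha channel).
float ShadowIntensity(float lightDist, float lightRange,
                      float viewDist,  float fogEnd) {
    // Direct light fades with distance from the light; once only
    // ambient light remains, the object should cast no shadow at all.
    float direct = std::clamp(1.0f - lightDist / lightRange, 0.0f, 1.0f);
    // Fog: as the receiver fades out, its shadow fades with it.
    float fog = std::clamp(1.0f - viewDist / fogEnd, 0.0f, 1.0f);
    float shadowStrength = direct * fog;
    return 1.0f - shadowStrength;
}
```

The darkening mask pass can then scale its effect by this per-pixel value, so distant or fog-shrouded receivers are not darkened.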
Remember: every object is lit by only a single per-pixel shadow-casting light, so there is a one-to-one relation between objects and (per-pixel) lights. The following code forces the GPU to mark the upper stencil bits of the object's pixels with the light id.

    void GLESAdapter::WriteLightId(int light_id)
    {
        glEnable(GL_STENCIL_TEST);
        // Always passes; the reference has its 3 msbits set to the light id
        // and its 5 lsbits set to a midway value.
        // (Note: you can't allow the counter to wrap, else it would corrupt
        // the id, so _stencil_offset must be half the counter range.)
        glStencilFunc(GL_ALWAYS, (light_id << 5) | _stencil_offset, 0xFF);
        ...

Article Update Log
22 September 2016: Initial release
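The stencil layout used by WriteLightId (3 most significant bits for the light id, 5 least significant bits for the shadow-volume counter, initialised midway so it can count down as well as up) can be made explicit with a few helpers. These are hypothetical helpers of mine, written only to pin down the bit layout described in the article.

```cpp
#include <cassert>

constexpr unsigned kIdShift      = 5;     // light id in the 3 msbits
constexpr unsigned kCounterMask  = 0x1F;  // counter in the 5 lsbits
constexpr unsigned kCounterStart = 16;    // half the 0..31 counter range

// Reference value passed to glStencilFunc for an object lit by lightId.
constexpr unsigned PackStencilRef(unsigned lightId) {
    return (lightId << kIdShift) | kCounterStart;
}

// Recover the light id stored in a stencil value.
constexpr unsigned LightIdOf(unsigned stencilValue) {
    return stencilValue >> kIdShift;
}

// A pixel is in shadow if its volume counter moved away from the
// midway start value during the shadow-volume pass.
constexpr bool InShadow(unsigned stencilValue) {
    return (stencilValue & kCounterMask) != kCounterStart;
}
```

Starting the counter at 16 leaves 15-16 increments/decrements of headroom in either direction before it would wrap into the id bits.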
  5. I suppose this is a bit of a trial-and-error process: you can figure out a strategy, but you never know how good it looks before you have implemented it. The first thing I would try is to have a separate basic weapon animation and shot animation. The basic weapon animation could have different segments for "taking out the weapon", "transitioning all the way from aiming up to aiming down", "reload", etc. When the weapon fires I would override the wrist rotation with data from the shot animation. Also, if you want the animation to be richer and involve the arm and forearm, you could examine what kind of rotation the recoil causes on those bones and apply it manually. In my engine, in pseudocode, this would look about like this, executed each frame:

    Apply_kf_animation(name = up_down_aim, frame = somelinearfunction(aiming_angle), root = character_root);  // resets to the aiming position
    if (time_from_trigger < animation_time) {
        Apply_kf_animation(name = wrist_shot, frame = some_exponentialdecay(time_from_trigger), root = character_wrist);
        Apply_rotation(joint = arm, axis = ?, some_exponentialdecay(time_from_trigger) * RECOIL_ON_ARM);
        Apply_rotation(joint = forearm, axis = ?, some_exponentialdecay(time_from_trigger) * RECOIL_ON_FOREARM);
    }

Again, I don't know if this is good enough, but it is the first thing I would try. Then much depends on what your engine has to offer. If your engine has IK, just apply the movement to the handgun and let it figure out the rest.
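A possible concrete form for the some_exponentialdecay() in the pseudocode above (a hypothetical sketch; the function name and the decay rate are my assumptions, to be tuned per weapon):

```cpp
#include <cassert>
#include <cmath>

// 1.0 at the moment of the shot, decaying smoothly towards 0 so the
// recoil rotation eases back into the base aiming pose.
float RecoilDecay(float timeFromTrigger, float rate = 8.0f) {
    return std::exp(-rate * timeFromTrigger);
}

// Applied each frame, e.g.:
//   armAngle = RecoilDecay(timeFromTrigger) * RECOIL_ON_ARM;
```

A higher rate snaps the weapon back faster; animation_time in the pseudocode would be chosen so the decay is negligible by the time the override is disabled.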