Cypher19

OpenGL Skewbe Mapping details finally presented


Summary

This topic presents skewbe maps, a new way of representing cubemaps. Their benefit is a virtually unlimited level of detail in chosen areas of a cubemap, at the cost of reduced quality elsewhere, where it is not required.

Skewbe Map Algorithm

In the past, for simplicity's sake, the points of view used for a cubemap have typically been aligned with the world's X, Y, and Z axes, each face covering a 90-degree field of view.

[Figure: a standard cubemap, with six views parallel to the world axes.]

In order to focus detail around an arbitrary direction, the faces must not have simple 90-degree fields of view, and the views must not be parallel with the axes of the world, as shown below:

[Figure: a skewed cubemap, with the views rotated toward a focus point and the focus face's field of view narrowed.]

Skewbe mapping is composed of four parts, two in each of two passes. The first pass builds the skewbe map from the standard cubemap origin (e.g. the center of an object for a reflection, or the light source for omnidirectional shadow maps). First, the cubemap is rotated so that one face (X+, arbitrarily chosen) looks at the focus point. Second, the field of view of that face is lowered, creating the skew in the cubemap, hence the name. Throughout the generation, maintaining continuity between face edges is crucial, so that there are no empty spots between faces and no portions of the scene are rendered twice.

The second pass renders the scene from the viewer's perspective and determines the correct lookup vector. Here, the rotation and the skew must be undone in a pixel shader so that the right location on the skewbe map is sampled.

Generating the Skewbe View Matrices

The first step is rotating the cubemap into position, which is done by creating new view matrices. To start, we need to generate the new axes to align the views to: Sx, Sy, and Sz. Sx is the normalized vector from the cube origin to the focus point. Sz is the normalized cross product of Sx and the up vector (0,1,0). Sy is the normalized cross product of Sx and Sz.

The new view matrices all use the cube origin's position as the view location, and each is aimed along one of the axes' directions. The roll of each view also has to be modified in order to retain continuity between faces: the faces aligned along Sz need to use Sy as the up vector, the faces aligned along Sy need to use Sx as the up vector, and the faces aligned along Sx simply use (0,1,0) as their up vector. The matrices required are shown below, where L represents the location of the cubemap origin in world space.

[Figure: the six skewbe view matrices, built from L, Sx, Sy, and Sz.]

Generating the Skewbe Perspective Matrices

Next, we need to skew the cubemap, which requires new perspective matrices for rendering. The faces along Sx are simple: their fields of view, Ff and Fr, are chosen fairly arbitrarily, and each uses a perspective matrix with a uniform field of view. However, because we are using orthogonal axes and unbalanced fields of view for the faces aligned along Sy and Sz, those faces require an off-center perspective matrix. The perspective matrix math required is shown below.

[Figure: the off-center perspective matrix math for the side faces.]
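To make the first pass concrete, here is a minimal C++ sketch of the setup described above. The Vec3 type and its helpers are hypothetical stand-ins for whatever math library is actually used (D3DX, GLM, etc.), and the off-center matrix is just the standard glFrustum-style asymmetric perspective; the actual near-plane extents for the side faces come from the article's perspective-matrix figure, not from this sketch.

    #include <cmath>

    struct Vec3 { float x, y, z; };

    static Vec3 sub(Vec3 a, Vec3 b)   { return { a.x - b.x, a.y - b.y, a.z - b.z }; }
    static Vec3 neg(Vec3 a)           { return { -a.x, -a.y, -a.z }; }
    static Vec3 cross(Vec3 a, Vec3 b) {
        return { a.y * b.z - a.z * b.y,
                 a.z * b.x - a.x * b.z,
                 a.x * b.y - a.y * b.x };
    }
    static Vec3 normalize(Vec3 v) {
        float len = std::sqrt(v.x * v.x + v.y * v.y + v.z * v.z);
        return { v.x / len, v.y / len, v.z / len };
    }

    // L is the cubemap origin, F the focus point.
    void buildSkewBasis(Vec3 L, Vec3 F, Vec3& Sx, Vec3& Sy, Vec3& Sz) {
        Sx = normalize(sub(F, L));            // X+ face looks at the focus point
        Sz = normalize(cross(Sx, {0, 1, 0})); // cross of Sx and the world up vector
        Sy = normalize(cross(Sx, Sz));        // completes the basis
    }

    // Each face is rendered with a look-at view from L along dir, using the
    // up vector that keeps the roll continuous between faces.
    struct FaceView { Vec3 dir, up; };

    void buildSkewbeViews(Vec3 Sx, Vec3 Sy, Vec3 Sz, FaceView out[6]) {
        out[0] = { Sx,      {0, 1, 0} }; // faces along Sx use the world up vector
        out[1] = { neg(Sx), {0, 1, 0} };
        out[2] = { Sy,      Sx };        // faces along Sy use Sx as up
        out[3] = { neg(Sy), Sx };
        out[4] = { Sz,      Sy };        // faces along Sz use Sy as up
        out[5] = { neg(Sz), Sy };
    }

    // Standard asymmetric (off-center) perspective matrix for the side faces,
    // where l, r, b, t are the near-plane extents dictated by Ff and Fr.
    struct Mat4 { float m[4][4]; };

    Mat4 offCenterPerspective(float l, float r, float b, float t, float n, float f) {
        Mat4 M = {};
        M.m[0][0] = 2 * n / (r - l);
        M.m[1][1] = 2 * n / (t - b);
        M.m[0][2] = (r + l) / (r - l);
        M.m[1][2] = (t + b) / (t - b);
        M.m[2][2] = -(f + n) / (f - n);
        M.m[2][3] = -2 * f * n / (f - n);
        M.m[3][2] = -1;
        return M;
    }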
Render the Shadow Map into the Scene

When rendering the main scene, the only difference from regular cubemaps is determining the appropriate shadow map texel to look up for the shadow determination. For a normal cubemap, the vertex-to-light vector is calculated, and in the pixel shader that vector is used directly for the cubemap texture lookup. For skewbe mapping, however, we need to reverse the rotation and the skew that the generated maps contain. First, the lookup vector is rotated by multiplying it with a change-of-basis matrix built from Sx, Sy, and Sz:

[Figure: the change-of-basis matrix.]

Then, in the pixel shader, two extra steps have to be done. To start, the lookup vector l is determined normally. The first modification to l is the following:

[Figure: the first correction applied to l.]

This correctly lines up all of the texels for the X+ and X- faces, as well as all of the edges between faces. The Y and Z faces are still misaligned, however. Before realigning those faces, we must make sure that l actually points at a Y or Z face, by finding its longest component and checking whether it is equal to lx. If the two are not equal, the following calculations are performed:

[Figure: the correction applied to l for the Y and Z faces.]

After obtaining the correct lookup vector, regular cubemap texture lookups can be done.
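As an illustration, here is a small C++ sketch of the two pieces of this lookup that the text fully specifies: the change-of-basis rotation and the test for whether l points at a Y or Z face. In practice this logic runs in the pixel shader; the per-face realignment formulas themselves are the ones given in the figures above and are not reproduced here. Vec3 and dot are the same kind of hypothetical helpers as before.

    #include <algorithm>
    #include <cmath>

    struct Vec3 { float x, y, z; };

    static float dot(Vec3 a, Vec3 b) { return a.x * b.x + a.y * b.y + a.z * b.z; }

    // Express the world-space lookup vector in the skewed cube's frame by
    // multiplying it with the change-of-basis matrix whose rows are Sx, Sy, Sz.
    Vec3 rotateIntoSkewBasis(Vec3 l, Vec3 Sx, Vec3 Sy, Vec3 Sz) {
        return { dot(l, Sx), dot(l, Sy), dot(l, Sz) };
    }

    // The Y and Z faces need further realignment; they are identified by
    // checking whether the longest component of the rotated lookup vector
    // is something other than its x component.
    bool pointsAtYOrZFace(Vec3 l) {
        float ax = std::fabs(l.x), ay = std::fabs(l.y), az = std::fabs(l.z);
        return std::max(ax, std::max(ay, az)) != ax;
    }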
Determining a Focus Point and Field of View

One essential part of the algorithm is choosing the right focus point and fields of view, Ff and Fr, to base everything else on. Because I developed skewbe maps using omnidirectional shadow maps, the formulae and ideas below apply directly to that application; other uses, such as reflections, could use similar means, or something else entirely. I found three practical possibilities for omni shadow maps.

The first is the simplest: the focus point is just the location of the camera, and the field of view is 45 degrees. In almost every case, the result is essentially double the shadow map resolution for a small cost. That is, a skewbe map of resolution 1024x1024 will look as good as a regular shadow map of resolution 2048x2048 in almost all cases.

The second alternative is to base the field of view on the distance of the camera from the light, d, and an arbitrary constant, b. b has to be determined experimentally; smaller values give sharper shadows around the camera and lower quality shadows further away, while larger values do the opposite.

[Figure: the distance-based field-of-view formula using d and b.]

Lastly, the best option is to shoot a ray out from the camera's position, in the direction that the camera is facing, and use the first collision between that ray and any object in the environment as the focus point. With this, we can use the distance from the camera to the collision point, r. It is also possible to control how finely the shadow map texels are distributed in screen space by adding three other variables: Np, the number of onscreen pixels; Rscreen, the resolution of the screen; and Rshadow, the shadow map resolution. The fields of view are then determined as follows:

[Figure: the ray-based field-of-view formulae using r, Np, Rscreen, and Rshadow.]

In general, though, this algorithm is largely case-based, and a formula may not always provide the results that the programmer, artist, or level designer wants. Because the focus points and fields of view are entirely arbitrary, it is also possible to keep them entirely predetermined. In a game level, both could be set by an artist or level designer to get consistent and aesthetically pleasing results in the environment.

A performance gain can be obtained as well, by optimizing skewbe map placement, as shown below:

[Figure: two skewbe map placements; the left one spends an extra face on nearby objects, the right one avoids it.]

In the left image, the lower face requires an extra pass, or at the very least has to render the other two objects an additional time. The right image demonstrates a more optimal distribution, in which that extra pass can be skipped, or the two objects only need to be rendered once each instead of twice. When designing a cinematic sequence, an animator could even decide to have faster, higher quality shadows around a point of interest, such as a character in the scene. He or she would take advantage of skewbe maps by using a lower resolution skewbe map, focusing it on the point of interest, and lowering the field of view until aliasing becomes apparent.

Results

In a highly unoptimized implementation, performance on an ATI X800 Pro with a 3.0 GHz Pentium 4 was as follows. At 800x600, with the pixel shader running on the entire screen: 130 fps without skewbe mapping, 113 fps with it. At 1600x1200: 93 fps without, 65 fps with. The larger dip is partly due to the extra branching that has to be simulated on the SM 2.0b HLSL compile target, and partly due to the lack of optimization.

Here are some example images of the omni shadow mapping using skewbe mapping:

[Images: side-by-side comparison shots, without and with skewbe mapping.]

Future Work

Skewbe maps have several areas of future development. One obvious direction is to develop better formulas for determining focus points and fields of view, with the intent of, say, providing higher shadow quality in problematic situations. Another is to base the view matrices on non-orthogonal axes. The side faces often contain some wasted area that the camera cannot see; by moving the axes for the side faces in certain ways, less area is wasted, and even lower resolution maps can be used. A note to whoever tries this idea: the result is something like a "frustum" map, and, yes, it does work, as it was an earlier solution to my shadow map woes. However, it has issues, such as holes in the map and areas being rendered twice. I doubt those problems can be overcome entirely, but a frustum map is definitely a plausible idea.

Acknowledgements

A special thanks to Andy "Redbeard" Campbell, Paul "Moopy" Malin, and Anthony "Sages" Whitaker Jr. for suggesting the idea of using off-center perspective matrices for the skewbe mapping algorithm. As well, thanks to Dylan "PfhorSlayer" Barrie for doing a test implementation and verifying some of the math in his OpenGL implementation; Sean "Washu" Kent for LaTeX support and for formatting two of the equations; John Carmack for providing such a high goal for me to aim for; the Microsoft DirectX team for providing such an excellent SDK to work with, including the media provided and the D3DX framework; Photobucket for hosting the pictures used; GameDev.net for providing a place to present this; and the community on the IRC channel #graphicsdev on irc.afternet.org for helping me get started with 3D graphics, and sticking with me the whole way.

[Edited by - Cypher19 on September 25, 2005 8:57:33 PM]
