
Skewbe Mapping details finally presented


Summary

This topic presents skewbe maps, a new way of representing cubemaps. Their benefit is a virtually unlimited level of detail in chosen areas of a cubemap, at the cost of reduced quality elsewhere, where it is not required.

Skewbe Map Algorithm

In the past, for simplicity's sake, the points of view used for a cubemap have typically been parallel to the X, Y, and Z axes:

[Image: axis-aligned cubemap views]

In order to focus detail around an arbitrary direction, the faces must not have simple 90° fields of view, and the views must not be parallel with the axes of the world, as shown below:

[Image: skewed cubemap views]

Skewbe mapping is composed of four parts, two in each of two passes. The first pass builds the skewbe map from the standard cubemap origin (e.g. the center of an object for a reflection, or the light source for omni shadow maps). First, the cubemap is rotated so that one face (X+, arbitrarily chosen) looks at the focus point. Second, the field of view on that face is lowered, creating the skew in the cubemap, hence the name. Throughout the generation, maintaining continuity between edges is crucial so that there are no empty spots between faces, and no portions of the scene are rendered twice.

The second pass renders the scene from the viewer's perspective and determines the correct lookup vector. Here the rotation and skew must be undone in a pixel shader so that the right location in the skewbe map is sampled.

Generating the Skewbe View Matrices

The first step is rotating the cubemap into position, which is done by creating new view matrices. To start, we need to generate the new axes to align the views to: Sx, Sy, and Sz. Sx is the normalized vector from the cube origin to the focus point. Sz is the normalized cross product of Sx and the up vector, (0,1,0). Sy is the normalized cross product of Sx and Sz.
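The axis construction just described can be sketched in a few lines. This is illustrative pseudocode in Python rather than anything from the article's HLSL/D3DX implementation; the function and parameter names are my own:

```python
def normalize(v):
    # Scale a 3-vector to unit length.
    l = (v[0] ** 2 + v[1] ** 2 + v[2] ** 2) ** 0.5
    return (v[0] / l, v[1] / l, v[2] / l)

def cross(a, b):
    # Standard right-handed cross product of two 3-vectors.
    return (a[1] * b[2] - a[2] * b[1],
            a[2] * b[0] - a[0] * b[2],
            a[0] * b[1] - a[1] * b[0])

def skewbe_axes(origin, focus, up=(0.0, 1.0, 0.0)):
    # Sx points from the cubemap origin toward the focus point.
    sx = normalize((focus[0] - origin[0],
                    focus[1] - origin[1],
                    focus[2] - origin[2]))
    # Sz is perpendicular to Sx and the world up vector.
    # (Degenerate if the focus point is directly above/below the origin.)
    sz = normalize(cross(sx, up))
    # Sy completes the basis.
    sy = normalize(cross(sx, sz))
    return sx, sy, sz
```

With the origin at (0,0,0) and the focus at (2,0,0), this yields Sx = (1,0,0) and Sz = (0,0,1), with Sy following from the final cross product.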
The new view matrices all use the cube origin's position as the view location, and each is focused along one of the axes' directions. The roll of each view also has to be modified in order to retain continuity between faces: the faces aligned along Sz use Sy as the up vector, the faces aligned along Sy use Sx as the up vector, and the faces aligned along Sx simply use (0,1,0) as their up vector. The matrices required are shown below; L represents the location of the cubemap origin in world space.

[Image: skewbe view matrices]

Generating the Skewbe Perspective Matrices

Next, we need to skew the cubemap, which requires new perspective matrices for the rendering. The faces along Sx are simple: their fields of view, Ff and Fr, are chosen fairly arbitrarily, and they use a perspective matrix with a uniform field of view. However, because we are using orthogonal axes and unbalanced fields of view for the faces aligned along Sy and Sz, those faces need an off-center perspective matrix. The perspective matrix math is shown below.

[Image: off-center perspective matrix math]

Render the Shadow Map into the Scene

When rendering the main scene, the only difference from cubemaps is determining the appropriate shadow map texel to look up for the shadow determination. For a normal cubemap, the vertex-to-light vector is calculated, and in the pixel shader that vector is used for the cubemap texture lookup. For skewbe mapping, however, we need to reverse the rotation and the skew that the generated maps contain. First, the lookup vector is rotated by multiplying it with a change-of-basis matrix:

[Image: change-of-basis matrix]

Then, in the pixel shader, two extra steps have to be done. To start, the lookup vector l is determined normally. The first modification to l is the following:

[Image: first modification to l]

This correctly lines up all of the texels for the X+ and X− faces, as well as all of the edges between faces.
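The article's exact off-center matrix was in a lost image, but off-center (asymmetric-frustum) projection is standard math. The sketch below follows the Direct3D left-handed off-center formula (as documented for D3DXMatrixPerspectiveOffCenterLH); treat it as an assumed equivalent of the article's matrix, not a copy of it:

```python
def perspective_off_center(l, r, b, t, zn, zf):
    # Row-major 4x4, left-handed, depth mapped to [0,1] (D3D convention).
    # l/r/b/t are the frustum extents on the near plane; zn/zf are the
    # near and far clip distances.
    return [
        [2 * zn / (r - l), 0.0,              0.0,                 0.0],
        [0.0,              2 * zn / (t - b), 0.0,                 0.0],
        [(l + r) / (l - r), (t + b) / (b - t), zf / (zf - zn),    1.0],
        [0.0,              0.0,              zn * zf / (zn - zf), 0.0],
    ]
```

With a symmetric frustum (l = -r, b = -t) the skew terms in the third row drop to zero and the matrix reduces to an ordinary uniform-FOV perspective matrix, which is what makes it a superset of the simple Sx-face case.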
The Y and Z faces are still misaligned, however. Before realigning them, we must make sure that l is pointing at one of the Y or Z faces, by finding the longest component of l and checking whether or not it is equal to lx. If the two are not equal, the following calculations are performed:

[Image: Y/Z face realignment calculations]

After obtaining the correct lookup vector, regular cubemap texture lookups can be done.

Determining a Focus Point and Field of View

One essential part of the algorithm is choosing the right focus point and fields of view, Ff and Fr, to base everything else on. Because I developed skewbe maps using omnidirectional shadow maps, these formulae and ideas are immediately applicable to that application. Other uses, such as reflections, could use similar means, or something else entirely.

I found three practical possibilities for omni shadow maps. The first is the simplest: the focus point is just the location of the camera, and the field of view is 45°. In almost every case, the result is essentially double the shadow map resolution for a small cost. That is, a skewbe map of resolution 1024x1024 will look as good as a regular shadow map of resolution 2048x2048 in almost all cases.

The second alternative is to base the field of view on the distance of the camera from the light, d, and an arbitrary constant, b. b has to be determined experimentally, but smaller values of it give sharper shadows around the camera and lower quality shadows further away, and vice versa.

[Image: distance-based field of view formula]

Lastly, the best option is to shoot a ray out from the camera's position, in the direction the camera is facing. The focus point used is the first collision between that ray and any object in the environment. With this, we can use the distance from the camera to the collision point, r.
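The face test described above can be sketched as follows. The realignment math itself was in a lost image, so only the selection test is shown; comparing absolute values is my assumption, since that is how cubemap face selection normally works:

```python
def needs_yz_realignment(l):
    # l is the (already rotated) lookup vector (lx, ly, lz).
    lx, ly, lz = l
    longest = max(abs(lx), abs(ly), abs(lz))
    # The lookup hits an X face exactly when |lx| is the longest
    # component (ties resolving in favor of X here); only the Y and Z
    # faces need the extra realignment step.
    return longest != abs(lx)
```

For example, a vector dominated by its x component skips the realignment, while one dominated by y or z takes the extra path.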
It is also possible to control how fine the shadow map texel distribution is in screen space by adding three other variables: Np, the number of onscreen pixels; Rscreen, the resolution of the screen; and Rshadow, the shadow map resolution. The fields of view are then determined as follows:

[Image: screen-space field of view formulae]

In general, though, this algorithm is for the most part very case-based, and it may not always provide the results that the programmer, artist, or level designer wants. Because the focal points and FOVs are entirely arbitrary, it is also possible to keep the focus point and field of view entirely predetermined. In a game level, both could be set by an artist or level designer to get consistent and aesthetically pleasing results in the environment. A performance gain can be obtained as well, by optimizing skewbe map placement, as shown below:

[Image: skewbe map placement comparison]

In the left image, the lower face requires an extra pass, or at the very least has to render the other two objects an additional time. The right image demonstrates a more optimal distribution, in which that extra pass can be skipped, or the two objects only need to be rendered once each, not twice.

When designing a cinematic sequence, an animator could even decide to have faster and higher quality shadows around a point of interest, such as a character in the scene. He or she would take advantage of skewbe maps by using a lower resolution skewbe map, focusing it on the point of interest, and lowering the field of view until aliasing becomes apparent.

Results

In a highly unoptimized solution, performance on an X800 Pro with a P4 3.0 GHz was as follows. At 800x600, with the pixel shader running on the entire screen: 130 fps without skewbe mapping, 113 fps with it. At 1600x1200: 93 fps without, 65 fps with.
The large dip is partly due to the extra branching that has to be simulated on an SM2.0b HLSL compile target, and to the fact that the implementation is unoptimized. Here are some example images of the omni shadow mapping using skewbe mapping:

[Image: without skewbe mapping] [Image: with skewbe mapping]
[Image: without skewbe mapping] [Image: with skewbe mapping]

Future Work

Skewbe maps have several areas of future development. One obvious direction is to develop better formulas for determining focus points and fields of view, with the intent of, say, providing higher shadow quality in problematic situations. Another is to base the view matrices on non-orthogonal axes. The reason for this is that the side faces often have some wasted area that the camera cannot see; by moving the axes for the side faces in certain ways, less area is wasted, and even lower resolution maps can be used. A note to whoever tries this idea: the result is kind of like a "frustum" map, and, yes, it does work, as it was an earlier solution to my shadow map woes. However, there are issues with it, such as holes in the map and/or areas being double-rendered. I doubt that problem can be overcome, but a frustum map is definitely a plausible idea.

Acknowledgements

A special thanks to Andy "Redbeard" Campbell, Paul "Moopy" Malin, and Anthony "Sages" Whitaker Jr. for suggesting the idea of using off-center perspective matrices for the skewbe mapping algorithm. Thanks as well to Dylan "PfhorSlayer" Barrie for doing a test implementation and verifying some of the math in his OpenGL implementation; Sean "Washu" Kent for LaTeX support and for formatting two of the equations; John Carmack for providing such a high goal for me to aim for; the Microsoft DirectX team for providing such an excellent SDK to work with, including the media provided and the D3DX framework; Photobucket for hosting the pictures used; for providing a place to present this; and the community on the IRC channel #graphicsdev for helping me get started with 3D graphics and sticking with me the whole way.
[Edited by - Cypher19 on September 25, 2005 8:57:33 PM]

