Striving For Graphics API Independence III : Texture Mapping

Published August 15, 2003 by Erik "wazoo" Yuzwa, posted by Myopic Rhino
Get the source code for this article here.


Introduction
Welcome back! We've covered quite a bit of ground so far, and we're about to bite off another chunk of the cross-graphics-API world. In this installment of the series, I was torn between discussing Camera Control or Texture Mapping. Either one would work at this point, so I flipped a coin and Camera Control lost (sorry dude).


What is texture mapping?
I guess it would be remiss to write a tutorial on Texture Mapping without actually explaining what it is.

In case you've been living in a cave for several years now, Texture Mapping is the art (or process) of taking an image and applying it onto a vertex-defined surface (i.e. a side of a cube, a triangle, or even a 3d model). Texture Mapping is best known as a way of representing detail on an object within a three dimensional world without the overhead of several hundred extra polygons. A case in point: think of your favorite FPS game (mine happens to be Unreal Tournament). Enter a level and just look around before you begin fragging everyone; the wall is nothing more than a plain surface with a texture map applied to it, the floor is nothing more than some granite looking artwork, and the sky contains little more than a starfield image.

Even though the majority of graphics programming is done with two dimensional texture images, there can also be one or three dimensional textures mapped onto objects as well.

One-dimensional textures are basically just a single pixel wide and N pixels high (where N is a power of 2). They're used in a variety of interesting ways, such as in cel shading, or in applying a height-based color in terrain mapping.

Three dimensional textures are another story. Also known as "volume textures", 3d textures are represented (naturally) by 3 coordinates, and can be thought of as a collection of thin slices of textures stacked together to create a 3d image map. Wherever the volume map intersects with our game world, the corresponding texture element (or texel) is applied. A few years ago, volume textures were mostly theoretical in application, considered too hideously expensive in performance terms for the arena of game programming. However, that trend seems to be disappearing. With the release of Direct3D8.0, Microsoft supports volume textures in hardware, while on the OpenGL side they were included in the 1.2 specification.

(Note that with Direct3D8.0, the Direct3D team has also added DXVC - DirectX Volume Compression - support. This can greatly help our engine performance with volume textures, as they can be compressed by the DirectX Texture Tool before we shove them into memory.)
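To make the idea a bit more concrete, here's a minimal sketch of creating a volume texture with each API. The D3DX helper and the OpenGL entry point are real, but the filename and image variables (width, height, depth, pVolData) are hypothetical placeholders for whatever your loader returns; note also that on Windows, glTexImage3D has to be fetched through wglGetProcAddress, since the stock headers stop at OpenGL 1.1.

//Direct3D8: let D3DX do the heavy lifting
LPDIRECT3DVOLUMETEXTURE8 lpVolTexture = NULL;
HRESULT hr = D3DXCreateVolumeTextureFromFile(m_lpD3DDevice,
    "data\\textures\\volume.dds", &lpVolTexture); //hypothetical file

//OpenGL 1.2: upload the stack of slices ourselves
GLuint volTex;
glGenTextures(1, &volTex);
glBindTexture(GL_TEXTURE_3D, volTex);
glTexImage3D(GL_TEXTURE_3D, 0, GL_RGB8, width, height, depth,
    0, GL_RGB, GL_UNSIGNED_BYTE, pVolData); //pVolData: raw slice data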

We can do quite a few useful things with Texture Mapping, so it's worth mentioning a few of them below:


MipMapping
Since the dawn of 3d engines, level of detail (or LOD) has always been a necessary consideration when presenting your scene. Because we need to keep engine performance high, it's a waste to have our 3d pipeline process and render a model containing a few hundred vertices when it is too far from our camera to see properly. Likewise, nothing's more painful than having a beautifully crafted castle (from far away) turn all blotchy and pixelated when we move the camera closer.

Enter Mip Mapping. Mip Mapping is the process of generating successively smaller versions of your texture image, so that the pipeline can use the highest resolution image when the object is up close, and a lower resolution image when that object is further away from the viewer.

As with 2d mip maps, we can create volumetric mip maps to support our 3d textures. Keep in mind, though, that if a simple volume texture already takes up a lot of memory, creating a mip map chain for it quietly consumes even more of the memory available to our hardware.
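Building the chain is nearly a one-liner with either API. As a quick sketch, using the same image variables as the loading code in Step 1 below: D3DXCreateTextureFromFile already generates a complete mip chain by default, and on the OpenGL side GLU can downsample and upload every level for us.

//Direct3D8: D3DXCreateTextureFromFile builds a full mip chain
//by default, so no extra work is needed there

//OpenGL: build and upload every mip level in one call
gluBuild2DMipmaps(GL_TEXTURE_2D, pImage->channels, pImage->sizeX,
    pImage->sizeY, textureType, GL_UNSIGNED_BYTE, pImage->data);
//and tell the sampler to actually use the chain
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER,
    GL_LINEAR_MIPMAP_LINEAR);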


MultiTexturing
Although for this tutorial we are using basic texture mapping, we should be aware of the MultiTexturing capability of the 3d hardware available today. Up until recently, most hardware only supported a single texture per pass. This meant that if you wanted to present some blood on the walls for the player, you would have to implement a "two pass" system in software: first set the wall texture and render the vertices, then set the blood texture and render the vertices again.

Because of the newer specifications for both OpenGL and Direct3D, our graphics pipeline can now run our geometry through multiple texture stages in a single pass. Back to our example with the blood on the walls: we can now set our wall texture in one stage, set our blood texture on another stage, and then when we render the vertices of our object, both textures will be blended onto its surface.
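In Direct3D8 terms, a sketch of that blood-on-the-wall setup might look like the following; lpWallTexture and lpBloodTexture are hypothetical textures loaded as in Step 1 below, and your vertex format needs a second set of texture coordinates for stage 1 to sample from.

//stage 0: just output the base wall texture
m_lpD3DDevice->SetTexture(0, lpWallTexture);
m_lpD3DDevice->SetTextureStageState(0, D3DTSS_COLOROP, D3DTOP_SELECTARG1);
m_lpD3DDevice->SetTextureStageState(0, D3DTSS_COLORARG1, D3DTA_TEXTURE);

//stage 1: blend the blood texture with the result of stage 0
m_lpD3DDevice->SetTexture(1, lpBloodTexture);
m_lpD3DDevice->SetTextureStageState(1, D3DTSS_COLOROP, D3DTOP_MODULATE);
m_lpD3DDevice->SetTextureStageState(1, D3DTSS_COLORARG1, D3DTA_TEXTURE);
m_lpD3DDevice->SetTextureStageState(1, D3DTSS_COLORARG2, D3DTA_CURRENT);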


LightMapping
Another common use of Texture Mapping is applying our own precomputed lighting information to each pixel in our scene. Although both Direct3D and OpenGL support dynamic lighting within their pipelines, it doesn't come for free, and the calculations get increasingly complex as the number of lights within the scene increases.

One approach, or workaround, to this problem that works very well for static lighting is to use a texture map to contain the lighting (or alpha) information. The elements of a lightmap are usually known as "lumels", as they refer to the luminosity at the desired coordinate.

Using the multitexturing capabilities of our hardware (as explained above), we can then have our original texture in one stage, along with our lightmap in another stage. The blended result contains the final lighting effect for that surface.
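On the OpenGL side, the same trick goes through the ARB_multitexture extension. A rough sketch, assuming the extension function pointers have already been fetched with wglGetProcAddress, and where wallTex and lightmapTex are hypothetical texture objects:

//unit 0: the base texture
glActiveTextureARB(GL_TEXTURE0_ARB);
glEnable(GL_TEXTURE_2D);
glBindTexture(GL_TEXTURE_2D, wallTex);

//unit 1: modulate the lightmap against the base texture
glActiveTextureARB(GL_TEXTURE1_ARB);
glEnable(GL_TEXTURE_2D);
glBindTexture(GL_TEXTURE_2D, lightmapTex);
glTexEnvi(GL_TEXTURE_ENV, GL_TEXTURE_ENV_MODE, GL_MODULATE);

//each vertex now carries one coordinate pair per unit
glMultiTexCoord2fARB(GL_TEXTURE0_ARB, u, v);
glMultiTexCoord2fARB(GL_TEXTURE1_ARB, lu, lv);
glVertex3f(pos.x, pos.y, pos.z);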


Environment Mapping
Yet another use of Texture Maps is what's known as Environment Mapping. This is a procedure where you take an entire scene and store it in a texture. You can then apply this texture to any object within your scene, giving the impression that the surrounding environment is reflected on the object's surface. There are other types of environment mapping, but by far the two most popular are cubic and spherical.

Cubic Environment Mapping (or cube maps) involves creating 6 textures which represent the surrounding scene as if the object in question were at the center of a cube. In most cases, the environment is mapped onto curved objects (i.e. a silver bullet, a car, etc.), so we don't really need much detail within these images to make them appear as if they're reflecting the environment.

Spherical Environment Mapping (or sphere maps) involves taking a 2d texture image of the full 360 degrees surrounding the object, as if seen through a fish eye lens. One of the downsides to using a sphere map is that its texture is captured from one position in the scene, yet applied to every object in the scene (no matter what its position). This means that objects closer to the camera won't have a proper looking reflection mapped onto them.
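OpenGL can even generate the sphere map coordinates for us through its texture coordinate generation facility (Direct3D8 has a comparable mechanism via the D3DTSS_TEXCOORDINDEX stage state). A minimal sketch, where sphereMapTex is a hypothetical texture holding the fish eye image:

//bind the fish eye image and let the pipeline compute (u,v)
glBindTexture(GL_TEXTURE_2D, sphereMapTex);
glTexGeni(GL_S, GL_TEXTURE_GEN_MODE, GL_SPHERE_MAP);
glTexGeni(GL_T, GL_TEXTURE_GEN_MODE, GL_SPHERE_MAP);
glEnable(GL_TEXTURE_GEN_S);
glEnable(GL_TEXTURE_GEN_T);
//...render the reflective object here...
glDisable(GL_TEXTURE_GEN_S);
glDisable(GL_TEXTURE_GEN_T);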


Theory
Now that we have a rough idea of what texture mapping is, let's try to understand what is happening (in theory). We can split texture mapping into several steps:

  1. Load up our 2d image into a texture object
  2. Divide our 2d image into an evenly spaced grid
  3. Let the pipeline know which image (ie. texture) we're going to use
  4. Define our vertex, specifying our texture coordinate to apply
This might seem a little complicated, so I'll step through it as much as possible.


Step 1: Load our 2d image into a texture object
This step is nothing really complicated or earth shattering. We simply load our image from either a physical file or from memory. It's important to note that although newer 3d hardware is more liberal in terms of "weird" texture image sizes, it's generally a good idea to keep them to NxM, where N and M are each a power of 2 (i.e. 32x32 or 64x128). You might also want to stay within a maximum texture size of 256x256; this limit comes from the Voodoo line of video hardware from 3dfx.
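If you want to be defensive about it, a power-of-two check on a freshly loaded image costs next to nothing. This little helper is hypothetical (it isn't part of the article's library), but it shows the usual bit trick:

//returns true if n is a non-zero power of 2
bool isPowerOfTwo(unsigned int n)
{
    return (n != 0) && ((n & (n - 1)) == 0);
}

//e.g. rescale or reject odd-sized images before uploading them
if(!isPowerOfTwo(pImage->sizeX) || !isPowerOfTwo(pImage->sizeY)){
    //resample the image, or fail the load
}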

Although there are excellent image loading libraries out there (such as DevIL), I chose to keep things simple for the purposes of this tutorial. With Direct3D8 we can use the D3DX library to load the image resource, and for OpenGL I just stuck with an old-fashioned bitmap loading routine using the glaux library.

Our next step is to then put this image data into a TextureObject that we can use from our graphics pipeline.

//This is for Direct3D. This is an EASY thing to do!
LPDIRECT3DTEXTURE8 lpTexture = NULL;
hr = D3DXCreateTextureFromFile(m_lpD3DDevice, strFilename.c_str(), &lpTexture);
//THAT'S IT! We've loaded our image into a texture!

//OpenGL isn't QUITE so simple, but it's not that bad. OpenGL doesn't really
//come with a helper library like D3DX, which is where the image loading code
//comes in. We just need to load in the image, then create the texture within
//OpenGL through several steps
GLuint tex;
TextureImage *pImage = LoadImage(strFilename);
glGenTextures(1, &tex);
glBindTexture(GL_TEXTURE_2D, tex);
//note the mip level (0) and border (0) arguments glTexImage2D expects
glTexImage2D(GL_TEXTURE_2D, 0, pImage->channels, pImage->sizeX,
    pImage->sizeY, 0, textureType, GL_UNSIGNED_BYTE, pImage->data);
//without a mip chain, we also need a filter mode that doesn't expect one
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
//That's about it for OpenGL. There's always some
//tinkering that can be done, but this is basically it.
Step 2: Divide our image into an evenly spaced grid
This step is more for you the programmer than for the 3d engine. Basically, when you load the image into a texture as done in step 1, the pipeline references your texture via a 2d coordinate system. By dividing up your texture into this coordinate system, it is much easier to specify which section of the texture belongs to which vertex on your 3d surface. The common nomenclature of texture mapping is to use U and V to specify the (X,Y) coordinate within the texture. The U,V pair always runs from 0.0 to 1.0, since a texture can be virtually any NxM size.
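For example, if you've packed several frames into one texture as a grid (a hypothetical cols x rows layout), the (u,v) rectangle for any cell falls straight out of the cell indices:

//compute the UV rectangle for cell (col,row) in a cols x rows grid
void cellToUV(int col, int row, int cols, int rows,
              float& u0, float& v0, float& u1, float& v1)
{
    u0 = (float)col / (float)cols;
    v0 = (float)row / (float)rows;
    u1 = (float)(col + 1) / (float)cols;
    v1 = (float)(row + 1) / (float)rows;
}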

Note that another thing to remember about Direct3D and OpenGL is that the (0.0f, 0.0f) U,V corner is different for the two APIs. In OpenGL, (0.0f, 0.0f) is at the lower-left of the texture grid, whereas in Direct3D, (0.0f, 0.0f) is at the upper-left corner of the texture grid.

OpenGL                                        Direct3D

(0.0f, 1.0f)         (1.0f, 1.0f)             (0.0f, 0.0f)         (1.0f, 0.0f)
|--------------------|                        |--------------------|
|                    |                        |                    |
V                    |                        V                    |
|                    |                        |                    |
|                    |                        |                    |
|--------------------|                        |--------------------|
(0.0f, 0.0f)    U    (1.0f, 0.0f)             (0.0f, 1.0f)    U    (1.0f, 1.0f)

Because of this friendly invertedness within the two APIs, I decided that we would stick to the OpenGL texture mapping coordinate system when specifying our texture information. In a future tutorial, you'll see why we went this route.
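If you ever need to convert a coordinate explicitly rather than letting the renderer compensate, the mapping is just a flip of the v component (assuming wrap texture addressing, the -v shortcut used in Step 4 below amounts to the same thing):

//convert an OpenGL-style v coordinate to Direct3D's convention
float vD3D = 1.0f - vOpenGL;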


Step 3: Let the pipeline know which texture to use
Now that we've loaded our texture and know which coordinates of the texture to map to our 3d surface, we need to let the pipeline know which texture to actually use.

One important thing to remember is that whenever you switch to a texture (or vertex buffer for that matter) within the graphics pipeline, you create overhead and lose time while the pipeline makes the switch. Try to minimize this switching as much as possible: when you switch to a new texture, try to render every object in your scene using this texture before moving on to the next one. Select your favorite sorting algorithm, such as qsort, to organize your objects by texture. Because of the way the 3d pipelines handle depth testing, it's also a good idea to sort your objects by depth so that you render your scene from front to back, avoiding work on pixels that would only be overdrawn.
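As a sketch of what that sorting might look like using the STL (the renderObject structure and the m_objects vector are hypothetical; our engine's real scene bookkeeping lives in the downloadable source):

#include <algorithm>
#include <vector>

struct renderObject
{
    int   textureID; //key assigned by our texture manager
    float depth;     //distance from the camera
};

//sort by texture first, then front to back within each texture
bool sortByTexture(const renderObject& a, const renderObject& b)
{
    if(a.textureID != b.textureID)
        return a.textureID < b.textureID;
    return a.depth < b.depth;
}

//...
std::sort(m_objects.begin(), m_objects.end(), sortByTexture);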

Moving on, specifying a texture is a trivial thing to do with either API.

//Direct3D
m_lpD3DDevice->SetTexture(0, lpTexture);

//OpenGL
glBindTexture(GL_TEXTURE_2D, tex);
Step 4: Define our texture vertex
We've pretty much arrived at our last step. Here is where we define our surface within the game world. There's nothing that special we really need to do: we simply let the pipeline know which texture coordinate to apply to which vertex of our surface. The following code might help explain this in better detail.

//Direct3D using our vertex buffer object created in
//the last tutorial
m_pVertices[m_iVerticesInBuffer].vecPos = pos;
m_pVertices[m_iVerticesInBuffer].dwColor = D3DCOLOR_COLORVALUE(r, g, b, a);
m_pVertices[m_iVerticesInBuffer].tu = tex.x;
m_pVertices[m_iVerticesInBuffer].tv = -tex.y;
//because we are using the OpenGL texture mapping coordinate
//(u,v) specification we need to invert our y-axis (or v coordinate)

//OpenGL using our vertex buffer object created in
//the last tutorial
glColor4f(r, g, b, a);
glTexCoord2f(tex.x, tex.y);
glVertex3f(pos.x, pos.y, pos.z);
Bring it on home Wazoo!
Now that we've covered the basic principles of texture mapping with both OpenGL and Direct3D, it's time to put it in our DLL rendering system! For starters, I decided to create a textureManagerInterface object which resides within our rendererInterface object. Because of the slightly differing methods of creating and storing a texture within OpenGL and Direct3D8, I implemented our textureManagerInterface as OGLTextureManager and D3DTextureManager. Within each manager object, I decided to use an STL map container to store a primary key (our identifier for the texture, e.g. LOGO) along with the actual texture data. This way we can dynamically add as many textures to the system as we wish, while hopefully keeping the time it takes the system to find a texture very small. Download the updated library included with this article and be sure to check out the modified methods within the vbInterface and textureManagerInterface objects.
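The exact class layout lives in the downloadable source, but the shape of the idea is roughly this (a simplified sketch, not the shipped interface):

#include <d3d8.h> //for HRESULT and LPDIRECT3DTEXTURE8
#include <map>
#include <string>

class textureManagerInterface
{
public:
    virtual ~textureManagerInterface() {}
    //load the image and file it away under the given key
    virtual HRESULT AddTexture(const std::string& strFilename, int iKey) = 0;
    //make the keyed texture current in the pipeline
    virtual HRESULT setTexture(int iKey) = 0;
};

//the Direct3D flavor simply maps keys to texture pointers
class D3DTextureManager : public textureManagerInterface
{
    std::map<int, LPDIRECT3DTEXTURE8> m_mTextures;
    //...AddTexture/setTexture implemented via D3DX and SetTexture...
};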

//within our winmain.cpp
#define LOGO 1
//...
if(FAILED(hr = m_pRenderer->getTextureInterface()->
        AddTexture("data\\textures\\wazooPresents.bmp", LOGO))){
    return hr;
}
//...
//First set our texture in the graphics pipeline
pInterface->getTextureInterface()->setTexture(LOGO);

//lock down our video memory
if(SUCCEEDED(pInterface->m_pVB->lockVB())){
    //create our lower-left triangle
    pInterface->m_pVB->addTriToVB(D3DXVECTOR3(-1.0f, -1.0f, -10.0f),
        1.0f, 1.0f, 1.0f, 1.0f, D3DXVECTOR2(0.0f, 0.0f));
    pInterface->m_pVB->addTriToVB(D3DXVECTOR3(-1.0f, 1.0f, -10.0f),
        1.0f, 1.0f, 1.0f, 1.0f, D3DXVECTOR2(0.0f, 1.0f));
    pInterface->m_pVB->addTriToVB(D3DXVECTOR3(1.0f, -1.0f, -10.0f),
        1.0f, 1.0f, 1.0f, 1.0f, D3DXVECTOR2(1.0f, 0.0f));

    //create our upper-right triangle
    pInterface->m_pVB->addTriToVB(D3DXVECTOR3(1.0f, -1.0f, -10.0f),
        1.0f, 1.0f, 1.0f, 1.0f, D3DXVECTOR2(1.0f, 0.0f));
    pInterface->m_pVB->addTriToVB(D3DXVECTOR3(-1.0f, 1.0f, -10.0f),
        1.0f, 1.0f, 1.0f, 1.0f, D3DXVECTOR2(0.0f, 1.0f));
    pInterface->m_pVB->addTriToVB(D3DXVECTOR3(1.0f, 1.0f, -10.0f),
        1.0f, 1.0f, 1.0f, 1.0f, D3DXVECTOR2(1.0f, 1.0f));

    //we're done with our surface, so unlock the video memory
    pInterface->m_pVB->unlockVB();
}
Conclusion
Well, that's about it for this tutorial. There's obviously much more to be said about texture mapping, but this is a good enough start to visualize what's happening within our rendering system DLL.

Enjoy those textures!
