Graphics API Independence IV: TGA, BMP & RenderStates!

By Erik "Wazoo" Yuzwa | Published Jan 08 2004 05:15 AM in Graphics Programming and Theory


Get the source code for this article here.


TGA / BMP Image Loading
Today we get a glimpse of the power of our design. We separated our game logic code from our rendering interfaces, which gives us a level of abstraction that is easier to (re)use and easier to maintain. Until now, our engine could only really support BMP image files, as that's all we had implemented within our OpenGL rendering interface DLL.

Today, we'll add TGA support and modify our BMP loading process so that we no longer need to use the (somewhat buggy) glaux library.


TGA Loading
The TGA image format is a popular one, not only because the format allows an alpha channel, but also because there's a lot of source code available on the internet for loading TGA data properly. (Check the References area of this article for some more detailed information, but for now I'll just give a brief overview).
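To make the "brief overview" concrete, here is a minimal sketch of parsing the fixed 18-byte TGA header. The field layout follows the published TGA specification; the struct and function names are my own for illustration and are not part of the article's source code.

```cpp
#include <cstdint>

// Hypothetical representation of the interesting fields of the 18-byte TGA
// header. Field offsets follow the TGA spec; multi-byte values are
// little-endian.
struct TGAHeader {
    uint8_t  idLength;        // length of the optional image ID field
    uint8_t  colorMapType;    // 0 = no palette
    uint8_t  imageType;       // 2 = uncompressed truecolor, 10 = RLE truecolor
    uint16_t width;           // image width in pixels
    uint16_t height;          // image height in pixels
    uint8_t  bitsPerPixel;    // 24 = BGR, 32 = BGRA (alpha channel present)
};

// Parse the fixed header from a raw 18-byte buffer. Reading byte-by-byte
// avoids struct-padding and endianness surprises.
TGAHeader parseTGAHeader(const uint8_t* b) {
    TGAHeader h;
    h.idLength     = b[0];
    h.colorMapType = b[1];
    h.imageType    = b[2];
    // bytes 3..7 hold the color-map specification (skipped here)
    // bytes 8..11 hold the x/y origin (skipped here)
    h.width        = static_cast<uint16_t>(b[12] | (b[13] << 8));
    h.height       = static_cast<uint16_t>(b[14] | (b[15] << 8));
    h.bitsPerPixel = b[16];
    return h;
}
```

A 32-bits-per-pixel image is the case we care about here, since the extra 8 bits are the alpha channel discussed below.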

The alpha channel is most commonly used as a way to display your sprite textures without the background of the image itself. In the days before 3D hardware was widespread, programmers accomplished the same thing using a technique called color keying: you created a sprite image on a background of a color of your choosing, and when you drew the sprite into your scene, you painted only the pixels that were NOT the same color as that background (i.e. your sprite). The same idea applies to our 3D hardware. When we display a texture, we can use the alpha channel to tell the hardware which pixels of the image to draw as the texture, and which to ignore. This is also known as alpha testing.
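The relationship between color keying and the alpha channel can be sketched in a few lines: convert a keyed RGB image into RGBA, writing alpha 0 wherever the key color appears. This is an illustrative stand-alone snippet, not taken from the article's engine.

```cpp
#include <cstdint>
#include <vector>

struct RGB  { uint8_t r, g, b; };
struct RGBA { uint8_t r, g, b, a; };

// Turn a color-keyed RGB image into an RGBA image whose alpha channel marks
// the keyed pixels as transparent -- exactly what hardware alpha testing
// consumes. (Hypothetical helper for illustration.)
std::vector<RGBA> applyColorKey(const std::vector<RGB>& src, RGB key) {
    std::vector<RGBA> out;
    out.reserve(src.size());
    for (const RGB& p : src) {
        bool keyed = (p.r == key.r && p.g == key.g && p.b == key.b);
        // alpha 0 = discarded by the alpha test, 255 = fully opaque
        out.push_back({p.r, p.g, p.b,
                       static_cast<uint8_t>(keyed ? 0 : 255)});
    }
    return out;
}
```

With the alpha channel in place, the hardware's alpha test does at draw time what the old color-key comparison did per pixel in software.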

To add support for the new TGA image format, we only need to worry about the OpenGL rendering interface implementation. Because we chose to use the D3DX helper library for our Direct3D rendering interface implementation, it already has TGA support (along with PNG, PCX and a host of other image file formats). I merely added a section to the OGLTextureManager implementation that does a quick comparison of the texture path sent to the addTexture method: if it ends in .BMP then we have a bitmap, and likewise if it ends in .TGA, then we call our TGA loading code. (Note: while I agree that this is perhaps not the most robust way of determining which image type has been passed, I've decided to leave improving it up to the reader, should they choose to do so.)
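For readers who do want something sturdier than the strstr() check, a small sketch of a case-insensitive suffix test follows. It only matches the end of the path, so a name like "title.tga.bak" is not mistaken for a TGA file. The helper name is hypothetical, not part of the article's code.

```cpp
#include <algorithm>
#include <cctype>
#include <string>

// Compare only the final suffix of 'path' against 'ext', ignoring case.
// Safer than strstr(), which matches the extension anywhere in the path.
bool hasExtension(const std::string& path, const std::string& ext) {
    if (path.size() < ext.size())
        return false;
    // Walk both strings backwards from the end, comparing lowercased chars.
    return std::equal(ext.rbegin(), ext.rend(), path.rbegin(),
                      [](char a, char b) {
                          return std::tolower(static_cast<unsigned char>(a)) ==
                                 std::tolower(static_cast<unsigned char>(b));
                      });
}
```

This also avoids _strupr(), which mutates the buffer in place and is a Microsoft-specific extension.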

I'll also quickly mention that I chose to include some different BMP loader code from the MSDN rather than rely on the (quite buggy) glaux library to load BMP image files within our OpenGL interface implementation.

sImage* pImage = NULL;
char *pTemp = (char *)strFilename.c_str();

if(strstr(_strupr(pTemp), ".TGA") != NULL)
{
    // we have a TGA image! (loadTGA allocates the image object for us)
    pImage = loadTGA(strFilename);
}
else if(strstr(_strupr(pTemp), ".BMP") != NULL)
{
    // we have a BMP image! (loadBMP allocates the image object for us)
    pImage = loadBMP(strFilename);
}

if(NULL == pImage)
    return E_FAIL;

//snip.. the rest of the code is (nearly) the same

So as of this moment, congratulations dear reader! You have not only adjusted the rendering interface code without needing to recompile any modules USING our DLL system, but we can now also load TGA image files! Hooray!


The Graphics Pipeline: State Machine
During each phase of the graphics pipeline, the calculations on the resulting pixel displayed to our device can be modified by the current state of the pipeline itself. The graphics pipelines of both OpenGL and Direct3D are known as state machines. For those who have not yet been introduced to the concept, for our purposes we only need to think of a light switch: a light can be either on or off, but never both. (Even if the bulb flickers on us, it's still pulsing between the on and off states.) The same goes for our graphics pipeline: once we set a state, it remains set until we change it. Okay, okay, please don't feel that I'm treating you, the reader, like a three-year-old. In my teaching, a lot of students have come to me with various problems with their output, only to discover that they had forgotten about a state they set during initialization which threw off their scene.
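The "sticky" nature of pipeline state also enables a classic optimization: if the wrapper remembers what it last set, it can skip redundant driver calls entirely. A toy sketch, with names invented for illustration (this is not the article's engine code):

```cpp
#include <map>

// Minimal model of state-machine semantics: a state, once set, persists
// until changed, so the wrapper can filter out calls that change nothing.
enum class State { Lighting, ZBuffer, AlphaBlend };

class StateCache {
public:
    // Returns true if the underlying API call should actually be issued,
    // false if the pipeline is already in the requested state.
    bool set(State s, bool enabled) {
        auto it = m_current.find(s);
        if (it != m_current.end() && it->second == enabled)
            return false;          // redundant: already in this state
        m_current[s] = enabled;    // record the new sticky state
        return true;               // caller forwards to SetRenderState/glEnable
    }

private:
    std::map<State, bool> m_current;
};
```

The same cache doubles as a debugging aid for exactly the problem described above: you can dump it at any time to see which states are currently set.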

But now we need to focus on a design approach for our graphics states (also known as render states in Direct3D lingo), which is not an easy task to tackle. Our goal is to create an addition to our rendererInterface that gives us the power to modify the states of our renderer.

The implementation I chose to handle this capability was to create a few methods within our rendererInterface definition that give us a bit of flexibility for defining and adding new states to our engine in the future.

//define a set of states to choose from. Note this is just a starting point
enum eState
{
    LIGHTING,      //Enable/Disable our lighting
    ZBUFFER,       //Enable/Disable ZBuffer (or depth buffer)
    ALPHABLENDING, //Enable/Disable alphablending
    SRC_BLEND,     //Set the state of our primary alpha blending parameter
    DEST_BLEND,    //Set the state of our secondary alpha blending parameter
    ALPHATEST      //Enable/Disable alpha testing
};

//define a set of operations valid to perform on the
//above states..again just a starting point
enum eOp
{
    ZERO, ONE,
    SRC, SRC_ALPHA, INV_SRC, INV_SRC_ALPHA,
    DEST, DEST_ALPHA, INV_DEST, INV_DEST_ALPHA,
    ENABLE, DISABLE
};

//Finally define our virtual method prototype
virtual void setState(const eState, const eOp) = 0;

So now that we've defined this method in our base class interface, it's time to implement this virtual method within our Direct3D/OpenGL implementations.

Although it may appear a bit ugly (and it is), I found this was the easiest way to implement the above interface definition. The Direct3D and OpenGL implementations look fairly similar, so I will only cover the Direct3D approach here.

/**
 * My approach for this function is probably not the best way to go,
 * but it was the cleanest I could think of. Although it's probably
 * totally obvious from looking at the code below, my approach was to
 * first determine which state of the graphics pipeline we're modifying,
 * then check the Operator for what to change the state TO. I was
 * experimenting with other approaches, but for educational purposes,
 * I found this one to be better.
 */
void D3DRenderer::setState( const eState STATE, const eOp OP )
{
    //first determine our state to operate on
    switch(STATE)
    {
    //we want to play with the hardware lighting state
    case LIGHTING:
        if(OP == ENABLE)
        {
            //enable our hardware lighting operations
            m_lpD3DDevice->SetRenderState( D3DRS_LIGHTING, TRUE );
        }
        else if(OP == DISABLE)
        {
            //disable our hardware lighting operations
            m_lpD3DDevice->SetRenderState( D3DRS_LIGHTING, FALSE );
        }
        break;

    //we want to play with our Z-buffer (depth buffer)
    case ZBUFFER:
        if(OP == ENABLE)
        {
            //enable our depth buffer
            m_lpD3DDevice->SetRenderState( D3DRS_ZENABLE, D3DZB_TRUE );
        }
        else if(OP == DISABLE)
        {
            //disable our depth buffer
            m_lpD3DDevice->SetRenderState( D3DRS_ZENABLE, D3DZB_FALSE );
        }
        break;

    //we want to play with our blending state and/or operations
    case ALPHABLENDING:
        if(OP == ENABLE)
        {
            //enable our alpha blending state
            m_lpD3DDevice->SetRenderState( D3DRS_ALPHABLENDENABLE, TRUE );
        }
        else if(OP == DISABLE)
        {
            //disable our alpha blending state
            m_lpD3DDevice->SetRenderState( D3DRS_ALPHABLENDENABLE, FALSE );
        }
        break;

    //we want to adjust our primary blending parameter
    case SRC_BLEND:
        if(OP == SRC_ALPHA)
        {
            m_lpD3DDevice->SetRenderState( D3DRS_SRCBLEND, D3DBLEND_SRCALPHA );
        }
        break;

    //we want to adjust our secondary blending parameter
    case DEST_BLEND:
        if(OP == INV_SRC_ALPHA)
        {
            m_lpD3DDevice->SetRenderState( D3DRS_DESTBLEND, D3DBLEND_INVSRCALPHA );
        }
        break;
    }
}

I think you all understand where I'm going with this. We first define a STATE of the pipeline that we wish to modify. Once we pass in our STATE, we go through our list of available states and match up which one we wish to affect. Once decided, we set our new STATE according to the operator (OP) passed in.


The Graphics Pipeline: Buffers
Both OpenGL and Direct3D have some other areas of memory which can be very powerful if we use them properly. In most documentation these are referred to as buffers, and we'll briefly cover them here.

The Z-Buffer (or Depth Buffer) is useful during the HSR (Hidden Surface Removal) phase of the graphics pipeline. Although we'll cover the pipeline in greater detail later, for now we just need to know that at the last stage before a pixel is rendered to our display, the pipeline performs its HSR algorithm to determine whether the pixel is visible or not. One way this is accomplished is with our depth buffer. This buffer stores a depth value for every pixel in the view area, representing the distance between that pixel and the viewpoint. This is useful because if you draw several pixels which appear in front of each other, the depth buffer is updated and, during HSR, all of the pixels hidden by the closest one are thrown away.

The Stencil Buffer is another buffer with some very powerful uses. Like the depth buffer described above, the stencil buffer can determine which pixels we want to keep in our scene and which to throw away. The difference is that it is more flexible than our depth buffer and can accomplish things that are not really possible with depth alone. For example, we could use the stencil buffer to help us with a transparent object contained in an opaque structure (i.e. a window): we can see through the transparent object to the inside/outside of the structure, but we cannot see through the structure's walls. It can also help us greatly with rendering an object's reflection in a mirror or glass surface. On earlier 3D hardware, stencil buffer performance was not very good, so be sure to properly check the hardware to make sure it can handle what you're throwing at it!

Although we can set the state of our graphics pipeline, we also want the opportunity to specify certain comparison functions for our system. We'll need this ability if we wish to flex some of the power of the different hardware buffers available to us via the API. The Stencil Buffer, Z-Buffer (Depth Buffer) and Alpha Buffer are extremely useful and we'll need to be able to modify them.

/**
 * This enumeration defines the different buffers available to us, and which
 * one's pixel comparison function we wish to modify
 */
enum eFunction
{
    ALPHATEST_FUNC, /* modify the alpha-test comparison function */
    ZBUFFER_FUNC,   /* modify the zbuffer comparison function */
    STENCIL_FUNC    /* modify the stencil buffer comparison function */
};

/**
 * This enumeration defines our comparison function to perform on the pixel
 * of the state defined above
 */
enum eCompareOp
{
    NEVER,        /* never pass the test */
    LESS,         /* accept the new pixel if the value < the current pixel */
    EQUAL,        /* accept the new pixel if the value = the current pixel */
    LESSEQUAL,    /* accept the new pixel if the value <= the current pixel */
    GREATER,      /* accept the new pixel if the value > the current pixel */
    NOTEQUAL,     /* accept the new pixel if the value != the current pixel */
    GREATEREQUAL, /* accept the new pixel if the value >= the current pixel */
    ALWAYS        /* always pass the comparison test */
};

/** snip **/
virtual void setFunction(const eFunction, const eCompareOp, DWORD dwRef) = 0;

Again we're defining some enumerations to use for our pixel comparison functions, and again we'll flesh out our approach by showing the Direct3D implementation.

void D3DRenderer::setFunction(const eFunction FUNCTION, const eCompareOp OP, DWORD dwRef)
{
    switch(FUNCTION)
    {
    case ALPHATEST_FUNC:
        switch(OP)
        {
        case NEVER:
            m_lpD3DDevice->SetRenderState( D3DRS_ALPHAFUNC, D3DCMP_NEVER );
            break;
        case ALWAYS:
            m_lpD3DDevice->SetRenderState( D3DRS_ALPHAFUNC, D3DCMP_ALWAYS );
            break;
        case EQUAL:
            m_lpD3DDevice->SetRenderState( D3DRS_ALPHAFUNC, D3DCMP_EQUAL );
            break;
        case GREATEREQUAL:
            m_lpD3DDevice->SetRenderState( D3DRS_ALPHAFUNC, D3DCMP_GREATEREQUAL );
            break;
        }
        m_lpD3DDevice->SetRenderState( D3DRS_ALPHAREF, dwRef );
        break;

    //snip the rest of the function. It's pretty much the same as above
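As a possible alternative to the nested switch, a lookup table can map the engine's comparison enum straight to the API constant. The sketch below repeats the article's eCompareOp enum so it is self-contained, and uses integer stand-ins for the D3DCMP_* constants (in the Direct3D headers these run D3DCMP_NEVER = 1 through D3DCMP_ALWAYS = 8, in the same order as our enum); treat the exact values as an assumption to verify against your SDK headers.

```cpp
// Engine-side comparison functions, repeated from the article's interface.
enum eCompareOp { NEVER, LESS, EQUAL, LESSEQUAL,
                  GREATER, NOTEQUAL, GREATEREQUAL, ALWAYS };

// Table-driven mapping from our enum to the API constant. The numeric
// values below stand in for D3DCMP_NEVER .. D3DCMP_ALWAYS.
unsigned long toD3DCompare(eCompareOp op) {
    static const unsigned long table[] = {
        1, // D3DCMP_NEVER
        2, // D3DCMP_LESS
        3, // D3DCMP_EQUAL
        4, // D3DCMP_LESSEQUAL
        5, // D3DCMP_GREATER
        6, // D3DCMP_NOTEQUAL
        7, // D3DCMP_GREATEREQUAL
        8, // D3DCMP_ALWAYS
    };
    return table[op];   // relies on the enum order matching the table order
}
```

With this helper, setFunction() collapses to a single SetRenderState call per buffer, and adding a new comparison op means adding one table entry rather than a new case.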
Testing our changes
Well now that we've covered our approach to switching the graphics pipeline state, as well as playing around with our buffer comparison routines, let's do some flexin' and use some of this knowledge in our code!

We're not going to do anything that radical today. I'm simply going to demonstrate some of what we've covered in today's tutorial. Obviously it's pretty bare bones, but the power is there to handle more advanced tasks for your scene.


Step One
In our createProgram method we just need to add/update the following code:

//load up a TGA image
hr = pInterface->getTextureInterface()->
         addTexture("data\\textures\\title.tga", TITLE);
if(FAILED(hr)){
    return E_FAIL;
}

pInterface->setState( ZBUFFER, DISABLE );  //disable our depth buffer
pInterface->setState( LIGHTING, DISABLE ); //disable our lighting

//enable our blending calculations
pInterface->setState( ALPHABLENDING, ENABLE );

//set our source parameter to the alpha information of the source pixel
pInterface->setState( SRC_BLEND, SRC_ALPHA );

//set our destination parameter to 1-alpha information of the source pixel
pInterface->setState( DEST_BLEND, INV_SRC_ALPHA );

That's pretty much all we need to do! When we run our example along with the updated interface DLLs, we'll see the following (with our OpenGL DLL):

[Screenshot: the example running with the OpenGL rendering DLL]
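For reference, the SRC_ALPHA / INV_SRC_ALPHA pair we just configured implements the classic blend equation result = src * alpha + dest * (1 - alpha). A per-channel software sketch of the same math (illustrative only, names invented):

```cpp
#include <cstdint>

// Per-channel version of the blend equation configured above:
//   result = src * srcAlpha + dest * (1 - srcAlpha)
// with alpha expressed as 0..255 and the result rounded to nearest.
uint8_t blendChannel(uint8_t src, uint8_t dest, uint8_t srcAlpha) {
    float a = srcAlpha / 255.0f;
    float result = src * a + dest * (1.0f - a);
    return static_cast<uint8_t>(result + 0.5f);
}
```

At alpha 255 the source pixel wins outright, at alpha 0 the destination shows through untouched, and the title texture's transparent background is exactly the alpha-0 case.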


Conclusion
Well, there you have it. We did a lot of work on our engine today and learned a bit about manipulating the state information of the graphics pipeline, as well as how to update and modify the underlying renderer interface codebase without having to touch or recompile our application.

Again, note that this approach to our state machine within the graphics pipeline is merely one way to do things.

Another possible approach is to use bit fields to represent our state(s), which was fleshed out somewhat by gamedev.net reviewer RusselB.

#define ES_LIGHTING      1  // or 0x0001
#define ES_ZBUFFER       2  // or 0x0002
#define ES_ALPHABLENDING 4  // or 0x0004
#define ES_SRC_BLEND     8  // or 0x0008
#define ES_DEST_BLEND    16 // or 0x0010
#define ES_ALPHATEST     32 // or 0x0020

These can then be combined with each other, allowing the engine to accept method calls such as:

setState( ES_LIGHTING | ES_ZBUFFER );     //set our states
disableState( ES_LIGHTING | ES_ZBUFFER ); //disable our states

Any comments, questions or concerns can be sent to Wazoo AT WazooEnterprises death-to-spam DOT com.
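The bit-field variant can be sketched in a few lines: states are OR'd together and tracked in a single mask. This is a hypothetical wrapper illustrating the idea, not RusselB's or the article's implementation.

```cpp
#include <cstdint>

// Bit flags for engine states, as suggested above.
const uint32_t ES_LIGHTING      = 0x0001;
const uint32_t ES_ZBUFFER       = 0x0002;
const uint32_t ES_ALPHABLENDING = 0x0004;

// Minimal engine facade tracking all on/off states in one mask, so several
// states can be toggled in a single call.
struct BitStateEngine {
    uint32_t activeStates = 0;

    void setState(uint32_t flags)     { activeStates |= flags;  }  // turn on
    void disableState(uint32_t flags) { activeStates &= ~flags; }  // turn off
    bool isEnabled(uint32_t flag) const { return (activeStates & flag) != 0; }
};
```

The trade-off versus the enum/eOp design is that bit fields handle on/off states elegantly but need a separate mechanism for multi-valued states like SRC_BLEND.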





