Graphics and GPU Programming

[b]
(Last modification: March 31st, 2002)[/b]

[size="5"][b]Preface[/b][/size]

During this part of the introduction, a very simple program that shows a rotating quad will evolve into a more sophisticated application showing a Bézier patch class with a diffuse and specular reflection model, featuring a point light source. The example applications all build on each other, in the sense that most of the code of the previous example is re-used in the following one. This way the explanation of each example stays focused on its specific advancements.

[size="5"][b]RacorX[/b][/size]

[center][b][attachment=4300:RacorX1.jpg]
Figure 1 - RacorX[/b]
[/center]
RacorX (see attached resource file) displays a quad with an evenly applied green color. This example demonstrates the usage of the common file framework provided with the DirectX 8.1 SDK and shows how to compile vertex shaders with the [i]D3DXAssembleShader()[/i] function.

As with all the upcoming examples, which are based on the Common Files, <Alt>+<Enter> switches between windowed and full-screen mode, <F2> gives you a selection of the usable drivers and <Esc> shuts down the application.

First let's take a look at the files you need to compile the program:

[center] [b][attachment=4301:directoryRacorX1.jpg]
Figure 2- Directory Content[/b]
[/center]
The source file is [i]RacorX.cpp[/i], the resource files are [i]winmain.rc[/i] and [i]resource.h[/i]. The icon file is [i]directx.ico[/i] and the executable is [i]RacorX.exe[/i]. The remaining files are for the use of the Visual C/C++ 6 IDE.

To compile this example, you should link it with the following *.lib files:
[list][*]d3d8.lib[*]d3dx8dt.lib[*]dxguid.lib[*]d3dxof.lib[*]winmm.lib[*]gdi32.lib[*]user32.lib[*]kernel32.lib[*]advapi32.lib[/list] Most of these *.lib files are COM wrappers. The d3dx8dt.lib is the debug version of the Direct3DX static link library.
[indent][bquote][i]The release Direct3DX static link library is called d3dx8.lib. There is also a *.dll version of the debug build called d3dx8d.dll in the system32 directory. It is used by linking to the d3dx8d.lib COM wrapper.[/i][/bquote][/indent] All of these *.lib files have to be included in the <Object/library modules:> entry field. This is located at <Project->Settings>, under the <Link> tab:

[center][b]Figure 3 - Project Settings[/b]
[/center]
The provided Visual C/C++ 6 IDE workspace references the common files in a folder with the same name:

[center] [b][attachment=4303:workspacewithcommo.jpg]
Figure 4 - Workspace[/b]
[/center]

[center][b]Figure 5 - Add Files to Project[/b]
[/center]

[size="3"][b]The Common Files Framework[/b][/size]

The common files framework helps getting up to speed, because:
[list][*]It avoids repeating general Direct3D how-tos, so that this text can focus on the actual topic.[*]It is a common and tested foundation, which helps reduce debugging time.[*]All of the Direct3D samples in the DirectX SDK use it, so the learning time is very short.[*]Its windowed mode makes debugging easier.[*]Self-developed production code could be based on the common files, so knowing them is always a win.[/list] A high-level view of the Common Files shows 14 *.cpp files in

[font="Courier New"][color="#000080"] C:\DXSDK\samples\Multimedia\Common\src[/color][/font]

These files encapsulate the basic functionality you need to start programming a Direct3D application. The most important one, d3dapp.cpp, contains the class CD3DApplication. It provides seven functions that can be overridden and that are used in the main *.cpp file of every project in this introduction:
[list][*]virtual HRESULT OneTimeSceneInit() { return S_OK; }[*]virtual HRESULT InitDeviceObjects() { return S_OK; }[*]virtual HRESULT RestoreDeviceObjects() { return S_OK; }[*]virtual HRESULT DeleteDeviceObjects() { return S_OK; }[*]virtual HRESULT Render() { return S_OK; }[*]virtual HRESULT FrameMove( FLOAT ) { return S_OK; }[*]virtual HRESULT FinalCleanup() { return S_OK; }[/list] All that has to be done to create an application based on this framework code is to create a new project and provide new implementations of these overridable functions in the main source file. This is also shown in all Direct3D examples in the DirectX SDK.

RacorX uses these framework functions in RacorX.cpp. They can be called the public interface of the common files framework.

[center] [b][attachment=4305:commonfilesovervie.jpg]
Figure 6 - Framework Public Interface[/b]
[/center]
At startup, the framework calls the following functions in racorx.cpp, in this order:
[list][*]ConfirmDevice()[*]OneTimeSceneInit()[*]InitDeviceObjects()[*]RestoreDeviceObjects()[/list] Now the application is running. While it is running, the framework calls
[list][*]FrameMove()[*]Render()[/list] in a loop.

If the user resizes the window, the framework will call
[list][*]InvalidateDeviceObjects()[*]RestoreDeviceObjects()[/list] If the user presses F2 or clicks <File>-><Change device> and changes the device by choosing, for example, another resolution or color quality, the framework will call
[list][*]InvalidateDeviceObjects()[*]DeleteDeviceObjects()[*]InitDeviceObjects()[*]RestoreDeviceObjects()[/list] If the user quits the application, the framework will call
[list][*]InvalidateDeviceObjects()[*]DeleteDeviceObjects()[*]FinalCleanup()[/list] There are matching functional pairs. [i]InvalidateDeviceObjects()[/i] destroys what [i]RestoreDeviceObjects()[/i] has built up and [i]DeleteDeviceObjects()[/i] destroys what [i]InitDeviceObjects()[/i] has built up. The [i]FinalCleanup()[/i] function destroys what [i]OneTimeSceneInit()[/i] built up.

The idea is to give every functional pair its own tasks. The [i]OneTimeSceneInit()[/i] / [i]FinalCleanup()[/i] pair is called once at the beginning and the end of the life-cycle of the game. Both are used to load or delete data that is not device-dependent; a good candidate might be geometry data. The target of the [i]InitDeviceObjects()[/i] / [i]DeleteDeviceObjects()[/i] pair is, as the name implies, device-dependent data. Data that has to be re-created when the device changes should be loaded here. The following examples will load, re-create or destroy their vertex buffers, index buffers and textures in these functions.

The [i]InvalidateDeviceObjects()[/i] / [i]RestoreDeviceObjects()[/i] pair has to react to changes of the window size. For example, code that handles the projection matrix might be placed here. Additionally, the following examples will set most of the render states in [i]RestoreDeviceObjects()[/i].

Now back to RacorX. As shown in part 1 of this introduction, the life-cycle of a vertex shader can be tracked step by step:

The supported vertex shader version is checked in [i]ConfirmDevice()[/i] in racorx.cpp:

[indent][code]HRESULT CMyD3DApplication::ConfirmDevice( D3DCAPS8* pCaps,
                                          DWORD dwBehavior,
                                          D3DFORMAT Format )
{
    if( (dwBehavior & D3DCREATE_HARDWARE_VERTEXPROCESSING ) ||
        (dwBehavior & D3DCREATE_MIXED_VERTEXPROCESSING ) )
    {
        if( pCaps->VertexShaderVersion < D3DVS_VERSION(1,1) )
            return E_FAIL;
    }
    return S_OK;
}[/code][/indent] If the framework has already initialized hardware or mixed vertex processing, the vertex shader version is checked. If the framework initialized software vertex processing, the software implementation provided by Intel and AMD jumps in and a check of the hardware capabilities is not needed.

The globally available [i]pCaps[/i] capability data structure is filled by the framework with a call to [i]GetDeviceCaps()[/i]. [i]pCaps->VertexShaderVersion[/i] holds the vertex shader version in a DWORD. The macro [i]D3DVS_VERSION[/i] helps check the version number. For example, support of at least vs.2.0 in hardware would be checked with [i]D3DVS_VERSION(2,0)[/i].

After checking the hardware capabilities for vertex shader support, the vertex shader has to be declared.

Declaring a vertex shader means mapping vertex data to specific vertex shader input registers. The vertex shader declaration must therefore reflect the vertex buffer layout, because the vertex buffer must transport the vertex data in the correct order. The declaration used in this example program is very simple. The vertex shader will get the position data via [i]v0[/i]:

[indent][code]DWORD dwDecl[] =
{
    D3DVSD_STREAM(0),
    D3DVSD_REG(0, D3DVSDT_FLOAT3 ), // D3DVSDE_POSITION,0
    D3DVSD_END()
};[/code][/indent]

The corresponding layout of the vertex buffer looks like this:

[indent][code]// vertex type
struct VERTEX
{
    FLOAT x, y, z; // The untransformed position for the vertex
};

// Declare custom FVF macro.
#define D3DFVF_VERTEX (D3DFVF_XYZ)[/code][/indent] The position values will be stored in the vertex buffer and bound through the [i]SetStreamSource()[/i] function to a device data stream port that feeds the primitive processing functions (this is the Higher-Order Surfaces (HOS) stage, or directly the vertex shader, depending on the usage of HOS; see the Direct3D pipeline in part 1).

We do not use vertex color here, so no color values are declared.

[size="3"][b]Setting the Vertex Shader Constant Registers[/b][/size]

The vertex shader constant registers are filled with a call to [i]SetVertexShaderConstant()[/i]. In this example we set the material color into [i]c8[/i] in [i]RestoreDeviceObjects()[/i]:

[indent][code]// set material color
FLOAT fMaterial[4] = {0,1,0,0};
m_pd3dDevice->SetVertexShaderConstant(8, fMaterial, 1);[/code][/indent] [i]SetVertexShaderConstant()[/i] is declared as:

[indent][code]HRESULT SetVertexShaderConstant(
    DWORD Register,
    CONST void* pConstantData,
    DWORD ConstantCount);[/code][/indent] The first parameter provides the number of the constant register that should be used, in this case [i]8[/i]. The second parameter points to the 128-bit value for that constant register and the third parameter gives you the possibility to fill the following registers as well. A 4x4 matrix can be stored with one [i]SetVertexShaderConstant()[/i] call by providing the number four in [i]ConstantCount[/i]. This is done for the clipping matrix in [i]FrameMove()[/i]:

[indent][code]// set the clip matrix
...
m_pd3dDevice->SetVertexShaderConstant(4, matTemp, 4);[/code][/indent] This way the [i]c4[/i], [i]c5[/i], [i]c6[/i] and [i]c7[/i] registers are used to store the matrix.

The vertex shader that is used by RacorX is very simple:

[indent][code]// reg c4-7 = WorldViewProj matrix
// reg c8 = constant color
// reg v0 = input register
"dp4 oPos.x, v0, c4 //emit projected position \n"\
"dp4 oPos.y, v0, c5 //emit projected position \n"\
"dp4 oPos.z, v0, c6 //emit projected position \n"\
"dp4 oPos.w, v0, c7 //emit projected position \n"\
"mov oD0, c8 //material color = c8 \n";[/code][/indent] It is used inline in a constant char array in RacorX.cpp. This vertex shader follows the vs.1.1 implementation rules. It transforms the position by the concatenated and transposed world-, view- and projection matrix into clip space with the four [i]dp4[/i] instructions, and with [i]mov[/i] it outputs the green material color into [i]oD0[/i].

As shown above, the values of the [i]c4[/i] - [i]c7[/i] constant registers are set in [i]FrameMove()[/i]. These values are calculated by the following code snippet:

[indent][code]// rotates the object about the y-axis
D3DXMatrixRotationY( &m_matWorld, m_fTime * 1.5f );

// set the clip matrix
D3DXMATRIX matTemp;
D3DXMatrixTranspose( &matTemp , &(m_matWorld * m_matView * m_matProj) );
m_pd3dDevice->SetVertexShaderConstant(4, matTemp, 4);[/code][/indent] First the quad is rotated around the y-axis by the [i]D3DXMatrixRotationY()[/i] call, then the concatenated matrix is transposed and stored in the constant registers [i]c4[/i] - [i]c7[/i]. The source of the [i]D3DXMatrixRotationY()[/i] function might look like:

[indent][code]VOID D3DXMatrixRotationY(D3DXMATRIX* mat, FLOAT fRads)
{
    D3DXMatrixIdentity(mat);
    mat->_11 =  cosf(fRads);
    mat->_13 = -sinf(fRads);
    mat->_31 =  sinf(fRads);
    mat->_33 =  cosf(fRads);
}[/code][/indent] The resulting matrix is:
[indent][bquote][font="Courier New"][color="#000080"] cos(fRads)   0   -sin(fRads)   0
 0            1    0            0
 sin(fRads)   0    cos(fRads)   0
 0            0    0            1[/color][/font][/bquote][/indent] So [i]fRads[/i] equals the amount you want to rotate about the y-axis. After filling the matrix this way, we transpose it using [i]D3DXMatrixTranspose()[/i], so that its columns are stored as rows. Why do we have to transpose the matrix?

A 4x4 matrix looks like this:
[indent][bquote][font="Courier New"][color="#000080"]a b c d
e f g h
i j k l
m n o p[/color][/font][/bquote][/indent] The formula for transforming a vector (v0) through the matrix is:

[indent][code]dest.x = (v0.x * a) + (v0.y * e) + (v0.z * i) + (v0.w * m)
dest.y = (v0.x * b) + (v0.y * f) + (v0.z * j) + (v0.w * n)
dest.z = (v0.x * c) + (v0.y * g) + (v0.z * k) + (v0.w * o)
dest.w = (v0.x * d) + (v0.y * h) + (v0.z * l) + (v0.w * p)[/code][/indent]

So each column of the matrix should be multiplied with each component of the vector. Our vertex shader uses four dp4 instructions:

[indent][code]dest.w = (src1.x * src2.x) + (src1.y * src2.y) +
(src1.z * src2.z) + (src1.w * src2.w)
dest.x = dest.y = dest.z = unused[/code][/indent] The [i]dp4[/i] instruction multiplies a row of the matrix with the components of the vector and sums the products. Without transposing we would end up with:

[indent][code]dest.x = (v0.x * a) + (v0.y * b) + (v0.z * c) + (v0.w * d)
dest.y = (v0.x * e) + (v0.y * f) + (v0.z * g) + (v0.w * h)
dest.z = (v0.x * i) + (v0.y * j) + (v0.z * k) + (v0.w * l)
dest.w = (v0.x * m) + (v0.y * n) + (v0.z * o) + (v0.w * p)[/code][/indent] which is wrong. By transposing the matrix it looks like this in constant memory:
[indent][bquote][font="Courier New"][color="#000080"]a e i m
b f j n
c g k o
d h l p[/color][/font][/bquote][/indent] so the 4 [i]dp4[/i] operations would now yield:

[indent][code]dest.x = (v0.x * a) + (v0.y * e) + (v0.z * i) + (v0.w * m)
dest.y = (v0.x * b) + (v0.y * f) + (v0.z * j) + (v0.w * n)
dest.z = (v0.x * c) + (v0.y * g) + (v0.z * k) + (v0.w * o)
dest.w = (v0.x * d) + (v0.y * h) + (v0.z * l) + (v0.w * p)[/code][/indent] or

[indent][code]oPos.x = (v0.x * c4.x) + (v0.y * c4.y) + (v0.z * c4.z) + (v0.w * c4.w)
oPos.y = (v0.x * c5.x) + (v0.y * c5.y) + (v0.z * c5.z) + (v0.w * c5.w)
oPos.z = (v0.x * c6.x) + (v0.y * c6.y) + (v0.z * c6.z) + (v0.w * c6.w)
oPos.w = (v0.x * c7.x) + (v0.y * c7.y) + (v0.z * c7.z) + (v0.w * c7.w)[/code][/indent] which is exactly how the vector transformation should work.

[i]dp4[/i] gets the matrix values via the constant registers [i]c4[/i] - [i]c7[/i] and the vertex position via the input register [i]v0[/i]. Temporary registers are not used in this example. The dot products of the [i]dp4[/i] instructions are written to the [i]oPos[/i] output register, and the value of the constant register [i]c8[/i] is moved into the output register [i]oD0[/i], which is usually used to output diffuse color values.

The vertex shader stored in the char array is compiled with the following code snippet in [i]RestoreDeviceObjects()[/i]:

[indent][code]LPD3DXBUFFER pVS = NULL;
LPD3DXBUFFER pErrors = NULL;

// strVertexShader holds the shader source shown above
HRESULT rc = D3DXAssembleShader( strVertexShader, strlen(strVertexShader),
                                 0 , NULL , &pVS , &pErrors );
if ( FAILED(rc) )
{
OutputDebugString( "Failed to assemble the vertex shader, errors:\n" );
OutputDebugString( (char*)pErrors->GetBufferPointer() );
OutputDebugString( "\n" );
}[/code][/indent] [i]D3DXAssembleShader()[/i] creates a binary version of the shader in a buffer object via the [i]ID3DXBuffer[/i] interface in [i]pVS[/i].

Its declaration looks like this:

[indent][code]HRESULT D3DXAssembleShader(
    LPCVOID pSrcData,
    UINT SrcDataLen,
    DWORD Flags,
    LPD3DXBUFFER* ppConstants,
    LPD3DXBUFFER* ppCompiledShader,
    LPD3DXBUFFER* ppCompilationErrors
);[/code][/indent] The source data is provided in the first parameter and the size of the data in bytes in the second parameter. The compiled shader is returned via [i]ppCompiledShader[/i]. There are two possible flags for the third parameter:

[indent][code]#define D3DXASM_DEBUG 1
#define D3DXASM_SKIPVALIDATION 2[/code][/indent] The first one inserts debug info as comments into the shader and the second one skips validation. The latter flag can be set for a shader that is already known to work.

Via the fourth parameter an [i]ID3DXBuffer[/i] interface can be exported to get a vertex shader declaration fragment of the constants. It is ignored here by setting it to NULL. In case of an error, the error explanation is stored in a buffer object via the [i]ID3DXBuffer[/i] interface in [i]pErrors[/i]. To see the output of [i]OutputDebugString()[/i], the debug process in the Visual C/C++ IDE must be started with <F5>.

The vertex shader is validated and a handle for it is retrieved into [i]m_dwVertexShader[/i] via a call to [i]CreateVertexShader()[/i]. The following lines of code can be found in [i]RestoreDeviceObjects()[/i]:

[indent][code]char szBuffer[128];
rc = m_pd3dDevice->CreateVertexShader( dwDecl, (DWORD*)pVS->GetBufferPointer(),
                                       &m_dwVertexShader, 0 );
if ( FAILED(rc) )
{
OutputDebugString( "Failed to create the vertex shader, errors:\n" );
D3DXGetErrorStringA(rc,szBuffer,sizeof(szBuffer));
OutputDebugString( szBuffer );
OutputDebugString( "\n" );
}[/code][/indent] [i]CreateVertexShader()[/i] gets a pointer to the buffer with the binary version of the vertex shader via [i]pVS->GetBufferPointer()[/i], and the vertex shader declaration via [i]dwDecl[/i], which maps vertex data to specific vertex shader input registers. If an error occurs, [i]D3DXGetErrorStringA()[/i] interprets all Direct3D and Direct3DX HRESULTs and returns an error message in [i]szBuffer[/i].

It is possible to force the usage of software vertex processing with the last parameter by using the D3DUSAGE_SOFTWAREPROCESSING flag. It must be used when the D3DRS_SOFTWAREVERTEXPROCESSING member of the D3DRENDERSTATETYPE enumerated type is TRUE.

The vertex shader is set with:

[indent][code]m_pd3dDevice->SetVertexShader( m_dwVertexShader );[/code][/indent] The only parameter that must be provided is the handle to the vertex shader. During the draw call, the vertex shader is then executed once for every vertex.

Vertex shader resources must be freed with a call to [i]DeleteVertexShader()[/i]:

[indent][code]if ( m_dwVertexShader != 0xffffffff )
{
    m_pd3dDevice->DeleteVertexShader( m_dwVertexShader );
    m_dwVertexShader = 0xffffffff;
}[/code][/indent] This example frees the vertex shader resources in the [i]InvalidateDeviceObjects()[/i] framework function, because this has to happen in case of a change of the window size or the device.

The non-shader specific code of RacorX deals with setting render states and the handling of the vertex and index buffer. A few render states have to be set in [i]RestoreDeviceObjects()[/i]:

[indent][code]// z-buffer enabled
m_pd3dDevice->SetRenderState( D3DRS_ZENABLE, TRUE );

// Turn off D3D lighting, since we are providing our own vertex shader lighting
m_pd3dDevice->SetRenderState( D3DRS_LIGHTING, FALSE );

// Turn off culling, so we see the front and back of the quad
m_pd3dDevice->SetRenderState( D3DRS_CULLMODE, D3DCULL_NONE );[/code][/indent] The first instruction enables the z-buffer (a corresponding flag has to be set in the constructor of the Direct3D framework class, so that the device is created with a z-buffer).

The fixed-function lighting is not needed, so it is switched off with the second statement. To be able to see both sides of the quad, backface culling is switched off with the third statement.

The vertex and index buffer is created in [i]InitDeviceObjects():[/i]

[indent][code]// create and fill the vertex buffer
// Initialize vertices for rendering a quad
VERTEX Vertices[] =
{
// x y z
{ -1.0f,-1.0f, 0.0f, },
{ 1.0f,-1.0f, 0.0f, },
{ 1.0f, 1.0f, 0.0f, },
{ -1.0f, 1.0f, 0.0f, },
};

m_dwSizeofVertices = sizeof (Vertices);

// Create the vertex buffers with four vertices
if( FAILED( m_pd3dDevice->CreateVertexBuffer( 4 * sizeof(VERTEX),
    D3DUSAGE_WRITEONLY, D3DFVF_VERTEX, D3DPOOL_MANAGED, &m_pVB ) ) )
return E_FAIL;

// lock and unlock the vertex buffer to fill it with memcpy
VOID* pVertices;
if( FAILED( m_pVB->Lock( 0, m_dwSizeofVertices, (BYTE**)&pVertices, 0 ) ) )
return E_FAIL;
memcpy( pVertices, Vertices, m_dwSizeofVertices);
m_pVB->Unlock();

// create and fill the index buffer
// indices
WORD wIndices[]={0, 1, 2, 0, 2, 3};

m_wSizeofIndices = sizeof (wIndices);

// create index buffer
if(FAILED (m_pd3dDevice->CreateIndexBuffer(m_wSizeofIndices, 0,
D3DFMT_INDEX16, D3DPOOL_MANAGED, &m_pIB)))
return E_FAIL;

// fill index buffer
VOID *pIndices;
if (FAILED(m_pIB->Lock(0, m_wSizeofIndices, (BYTE **)&pIndices, 0)))
return E_FAIL;
memcpy(pIndices, wIndices, m_wSizeofIndices);
m_pIB->Unlock();[/code][/indent] The four vertices of the quad are stored in a [i]VERTEX[/i] structure, which holds for each vertex three [i]FLOAT[/i] values for the position.

By using the flag [i]D3DFMT_INDEX16[/i] in [i]CreateIndexBuffer()[/i], 16-bit variables are used to store the indices in the [i]wIndices[/i] array, so the maximum number of addressable vertices is 65,536 (64k). Both buffers use a managed memory pool with [i]D3DPOOL_MANAGED[/i], so they will be cached in system memory.
[indent][bquote][i] D3DPOOL_MANAGED resources are read from the system memory which is quite fast and they are written to the system memory and afterwards uploaded to wherever the non-system copy has to go (AGP or VIDEO memory). This upload happens when the resource is unlocked. So there are always two copies of a resource, one in the system and one in the AGP or VIDEO memory. This is a less efficient but bullet-proof way. It works for any class of driver and must be used with unified memory architecture boards. Handling resources with D3DPOOL_DEFAULT is more efficient. In this case the driver will choose the best place for the resource.

Why do we use a vertex buffer at all? The vertex buffer can be stored in the memory of your graphics card or in AGP memory, where it can be accessed very quickly by the 3-D hardware, so a copy between system memory and the graphics card/AGP memory can be avoided. This is important for hardware that accelerates transformation and lighting: without vertex buffers, a lot of bus traffic would happen while transforming and lighting the vertices.

Why do we use an index buffer? You get the maximum performance when you reduce the duplication of vertices transformed and sent across the bus to the rendering device. A non-indexed triangle list, for example, achieves no vertex sharing, so it is the least optimal method, because DrawPrimitive*() is called several times. Using indexed lists or strips reduces the call overhead of the DrawPrimitive*() methods (reducing DrawPrimitive*() calls is also called batching) and, because fewer vertices are sent through the bus, saves memory bandwidth. Indexed strips are more hardware-cache friendly on newer hardware than indexed lists. The performance of index processing operations depends heavily on where the index buffer exists in memory. At the time of this writing, the only graphics cards that support index buffers in hardware are the RADEON 8x00 series.[/i][/bquote]
[/indent] [size="3"][b]Summary[/b][/size]

RacorX shows a simple vertex shader together with its infrastructure. The shader is inlined in racorx.cpp and compiled with [i]D3DXAssembleShader()[/i]. It uses four [i]dp4[/i] instructions for the transformation of the quad and only one material color.

The upcoming examples are built on this example, and only the functional additions will be shown on the next pages.

[size="5"][b]RacorX2[/b][/size]

The main difference between RacorX and RacorX2 (see attached resource file) is the compilation of the vertex shader with NVASM. Whereas the first example compiles the vertex shader with [i]D3DXAssembleShader()[/i] at application start-up, RacorX2 uses a pre-compiled vertex shader.

To add the NVIDIA vertex and pixel shader assembler, you have to do the following steps:
[list][*]Create a directory, for example <C:\NVASM>, and unzip nvasm.exe and the documentation into it[*]Show your Visual C++ IDE the path to this exe with
<Tools->Options->Directories>
and choose from the drop down menu <Show directories for:>
<Executable files>
<C:\NVASM>
Now the dialog box should look like this:
[center][b]
[attachment=4306:OptionsExepath.jpg]
Figure 7 - Integrating NVASM into Visual C/C++[/b][/center][*]Additionally you have to tell the IDE for every vertex shader file that it has to be compiled with NVASM. The easiest way to see how this works is to look into the example RacorX2. Just fire up your Visual C++ IDE by clicking on <RacorX.dsp> in the RacorX2 directory. Click on the <FileView> tab of the Workspace dialog and there on <Shaders> to view the available shader files. A right-click on the file <basic.vsh> should show you a popup. Click on <Settings...>. The project settings of your project might look like:
[center][b]
[attachment=4307:NVASMoptions.jpg]
Figure 8 - Custom Build Options Vertex Shader Files[/b][/center][*]The entry in the entry field called "Commands" is:

nvasm.exe $(InputPath) shaders\$(InputName).vso

For the entry field named "Outputs", I use the input name as the name for the output file with a *.vso extension. The output directory should be the shaders directory. ShaderX author Kenneth Hurley is the author of NVASM; read more in his paper [Hurley].[/list] The output of NVASM in the build window of your Visual C/C++ IDE should look like this:

[center] [b][attachment=4308:IDEoutputNVASM.jpg]
Figure 9 - NVASM Output[/b]
[/center]
The vertex shader is provided in its own ASCII file called basic.vsh. After compilation, a binary object file with the name basic.vso is created in the <shaders> directory:

[center][b]Figure 10 - Directory Content RacorX2[/b]
[/center]

Because the vertex shader is already compiled, its creation has to be done differently than in the previous example. This happens in the [i]InitDeviceObjects()[/i] function:

[indent][code]DWORD dwDecl[] =
{
    D3DVSD_STREAM(0),
    D3DVSD_REG(0, D3DVSDT_FLOAT3 ), // input register v0
    D3DVSD_END()
};

if( FAILED( CreateVSFromCompiledFile( m_pd3dDevice, dwDecl,
                                      "shaders\\basic.vso",
                                      &m_dwVertexShader ) ) )
    return E_FAIL;[/code][/indent] [i]CreateVSFromCompiledFile()[/i] opens and reads the binary vertex shader file and creates a vertex shader. The source of this function can be found at the end of the file racorx.cpp in the RacorX2 directory:

[indent][code]//----------------------------------------------------------------------------
// Name: CreateVSFromBinFile
// Desc: loads a binary *.vso file that was compiled by NVASM
// and creates a vertex shader
//----------------------------------------------------------------------------
HRESULT CMyD3DApplication::CreateVSFromCompiledFile (IDirect3DDevice8* m_pd3dDevice,
DWORD* dwDeclaration,
TCHAR* strVSPath,
DWORD* m_dwVS)
{
char szBuffer[128]; // debug output
DWORD* dwpVS; // pointer to address space of the calling process
HANDLE hFile, hMap; // handle file and handle mapped file
TCHAR tempVSPath[512]; // temporary file path
HRESULT hr; // error

if( FAILED( hr = DXUtil_FindMediaFile( tempVSPath, strVSPath ) ) )
return D3DAPPERR_MEDIANOTFOUND;

hFile = CreateFile(tempVSPath, GENERIC_READ, 0, 0, OPEN_EXISTING,
                   FILE_ATTRIBUTE_NORMAL, 0);

if(hFile != INVALID_HANDLE_VALUE)
{
if(GetFileSize(hFile,0) > 0)
    hMap = CreateFileMapping(hFile, 0, PAGE_READONLY, 0, 0, 0);
else
{
CloseHandle(hFile);
return E_FAIL;
}
}
else
return E_FAIL;

// maps a view of a file into the address space of the calling process
dwpVS = (DWORD *)MapViewOfFile(hMap, FILE_MAP_READ, 0, 0, 0);

hr = m_pd3dDevice->CreateVertexShader( dwDeclaration, dwpVS, m_dwVS, 0 );
if ( FAILED(hr) )
{
OutputDebugString( "Failed to create Vertex Shader, errors:\n" );
D3DXGetErrorStringA(hr,szBuffer,sizeof(szBuffer));
OutputDebugString( szBuffer );
OutputDebugString( "\n" );
return hr;
}

UnmapViewOfFile(dwpVS);
CloseHandle(hMap);
CloseHandle(hFile);

return S_OK;
}[/code][/indent] [i]DXUtil_FindMediaFile()[/i], a helper function located in the framework file dxutil.cpp, returns the path to the already compiled vertex shader file. [i]CreateFile()[/i] opens and reads the existing file:

[indent][code]HANDLE CreateFile(
LPCTSTR lpFileName, // file name
DWORD dwDesiredAccess, // access mode
DWORD dwShareMode, // share mode
LPSECURITY_ATTRIBUTES lpSecurityAttributes, // SD
DWORD dwCreationDisposition, // how to create
DWORD dwFlagsAndAttributes, // file attributes
HANDLE hTemplateFile // handle to template file
);[/code][/indent] Its first parameter is the path to the file. The flag [i]GENERIC_READ[/i] in the second parameter specifies read access to the file. The next two parameters are not used, because file sharing should not happen and the handle should not be inherited by a child process. The fifth parameter is set to [i]OPEN_EXISTING[/i]; this way, the function fails if the file does not exist. Setting the sixth parameter to [i]FILE_ATTRIBUTE_NORMAL[/i] indicates that the file has no other attributes. A template file is not used here, so the last parameter is set to 0. Please consult the Platform SDK help file for more information.

[i]CreateFileMapping()[/i] creates or opens a named or unnamed file-mapping object for the specified file:

[indent][code]HANDLE CreateFileMapping(
HANDLE hFile, // handle to file
LPSECURITY_ATTRIBUTES lpAttributes, // security
DWORD flProtect, // protection
DWORD dwMaximumSizeHigh, // high-order DWORD of size
DWORD dwMaximumSizeLow, // low-order DWORD of size
LPCTSTR lpName // object name
);[/code][/indent] The first parameter is a handle to the file from which to create a mapping object. The file must be opened with an access mode compatible with the protection flags specified by the [i]flProtect[/i] parameter. We have opened the file in [i]CreateFile()[/i] with [i]GENERIC_READ[/i], therefore we use [i]PAGE_READONLY[/i] here. Other features of [i]CreateFileMapping()[/i] are not needed, therefore we set the rest of the parameters to 0.

The [i]MapViewOfFile()[/i] function maps a view of a file into the address space of the calling process:

[indent][code]LPVOID MapViewOfFile(
HANDLE hFileMappingObject, // handle to file-mapping object
DWORD dwDesiredAccess, // access mode
DWORD dwFileOffsetHigh, // high-order DWORD of offset
DWORD dwFileOffsetLow, // low-order DWORD of offset
SIZE_T dwNumberOfBytesToMap // number of bytes to map
);[/code][/indent] This function only gets the handle to the file-mapping object from [i]CreateFileMapping()[/i] and in the second parameter the access mode [i]FILE_MAP_READ[/i]. The access mode parameter specifies the type of access to the file view and, therefore, the protection of the pages mapped by the file. More features are not needed, therefore the rest of the parameters are set to 0.

[i]CreateVertexShader()[/i] is used to create and validate a vertex shader. It takes the vertex shader declaration (which maps vertex buffer streams to different vertex input registers) in its first parameter as a pointer and returns the shader handle in the third parameter. The second parameter gets the vertex shader instructions of the binary code pre-compiled by a vertex shader assembler. With the fourth parameter you can force software vertex processing with [i]D3DUSAGE_SOFTWAREPROCESSING[/i].

As in the previous example [i]OutputDebugString()[/i] shows the complete error message in the output debug window of the Visual C/C++ IDE and [i]D3DXGetErrorStringA()[/i] interprets all Direct3D and Direct3DX HRESULTS and returns an error message in [i]szBuffer[/i].

[size="3"][b]Summary[/b][/size]

This example showed the integration of NVASM to pre-compile a vertex shader and how to open and read a binary vertex shader file.

[size="5"][b]RacorX3[/b][/size]

The main improvement of RacorX3 (see attached resource file) over RacorX2 is the addition of a per-vertex diffuse reflection model in the vertex shader. This is one of the simplest lighting calculations, which outputs the color based on the dot product of the vertex normal with the light vector.

RacorX3 uses a light positioned at (0.0, 0.0, 1.0) and a green color.

[center] [b][attachment=4310:RacorX3.jpg]
Figure 11 - RacorX3[/b]
[/center]
As usual we are tracking the life-cycle of the vertex shader.

The vertex shader declaration has to map vertex data to specific vertex shader input registers. In addition to the previous examples, we need to map a normal vector to the input register [i]v3[/i]:

[indent][code]DWORD dwDecl[] =
{
    D3DVSD_STREAM(0),
    D3DVSD_REG(0, D3DVSDT_FLOAT3 ), // position in input register v0
    D3DVSD_REG(3, D3DVSDT_FLOAT3 ), // normal in input register v3
    D3DVSD_END()
};[/code][/indent] The corresponding layout of the vertex buffer looks like this:

[indent][code]struct VERTICES
{
FLOAT x, y, z; // The untransformed position for the vertex
FLOAT nx, ny, nz; // the normal
};

// Declare custom FVF macro.
#define D3DFVF_VERTEX (D3DFVF_XYZ|D3DFVF_NORMAL)[/code][/indent] Each vertex consists of three position floating point values and three normal floating point values in the vertex buffer. The vertex shader gets the position and normal values from the vertex buffer via [i]v0[/i] and [i]v3[/i].

[size="3"][b]Setting the Vertex Shader Constant Registers[/b][/size]

The vertex shader constants are set in [i]FrameMove()[/i] and [i]RestoreDeviceObjects()[/i]. This example uses a more elegant way to handle the constant registers: the file const.h, which is included in racorx.cpp and diffuse.vsh, gives the constant registers easier-to-remember names:

[indent][code]#define CLIP_MATRIX 0
#define CLIP_MATRIX_1 1
#define CLIP_MATRIX_2 2
#define CLIP_MATRIX_3 3

#define INVERSE_WORLD_MATRIX 4
#define INVERSE_WORLD_MATRIX_1 5
#define INVERSE_WORLD_MATRIX_2 6

#define LIGHT_POSITION 11

#define DIFFUSE_COLOR 14
#define LIGHT_COLOR 15[/code][/indent] In [i]FrameMove()[/i] the clip matrix and the inverse world matrix are set into the constant registers:

[indent][code]HRESULT CMyD3DApplication::FrameMove()
{
// rotates the object about the y-axis
D3DXMatrixRotationY( &m_matWorld, m_fTime * 1.5f );

// set the clip matrix
m_pd3dDevice->SetVertexShaderConstant(CLIP_MATRIX,
                  &(m_matWorld * m_matView * m_matProj), 4);

// set the inverse world matrix
D3DXMATRIX matWorldInverse;
D3DXMatrixInverse(&matWorldInverse, NULL, &m_matWorld);
m_pd3dDevice->SetVertexShaderConstant(INVERSE_WORLD_MATRIX,
                  &matWorldInverse, 3);

return S_OK;
}[/code][/indent] In contrast to the previous examples, the concatenated world-, view- and projection matrix, which is used to rotate the quad, is not transposed here. This is because the matrix will be transposed in the vertex shader as shown below.

To transform the normal, an inverse 4x3 matrix is sent to the vertex shader via [i]c4[/i] - [i]c6[/i].

The vertex shader is a little bit more complex than the one used in the previous examples:

[indent][code]; per-vertex diffuse lighting

#include "const.h"

vs.1.1
; transpose and transform to clip space
mul r0, v0.x, c[CLIP_MATRIX]
mad r0, v0.y, c[CLIP_MATRIX_1], r0
mad r0, v0.z, c[CLIP_MATRIX_2], r0
add oPos, r0, c[CLIP_MATRIX_3]

; transform normal
dp3 r1.x, v3, c[INVERSE_WORLD_MATRIX]
dp3 r1.y, v3, c[INVERSE_WORLD_MATRIX_1]
dp3 r1.z, v3, c[INVERSE_WORLD_MATRIX_2]

; renormalize it
dp3 r1.w, r1, r1
rsq r1.w, r1.w
mul r1, r1, r1.w

; N dot L
; we need L vector towards the light, thus negate sign
dp3 r0, r1, -c[LIGHT_POSITION]

mul r0, r0, c[LIGHT_COLOR] ; modulate against light color
mul oD0, r0, c[DIFFUSE_COLOR] ; modulate against material[/code][/indent] The [i]mul[/i], [i]mad[/i] and [i]add[/i] instructions transform the position to clip space using the untransposed matrix provided in [i]c0[/i] - [i]c3[/i]. As such they are nearly functionally equivalent to transposing the matrix on the CPU and using the four [i]dp4[/i] instructions shown in the previous examples. There are two caveats to bear in mind: the complex matrix instructions like [i]m4x4[/i] might be faster in software emulation mode, and [i]v0.w[/i] is not used here; [i]oPos.w[/i] is automatically filled with 1. These instructions save the CPU cycles otherwise used for transposing.
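The equivalence of the two approaches can be checked with a small standalone C++ sketch (the type and function names here are illustrative, not part of the example's source): the mul/mad/add sequence sums the matrix rows scaled by the vector components with w fixed to 1, which produces the same result as four dp4-style dot products against the transposed matrix.

```cpp
#include <array>
#include <cstddef>

using Vec4 = std::array<float, 4>;
using Mat4 = std::array<Vec4, 4>; // m[row][col], rows mirror c0..c3

// Builds the transposed matrix, as the previous examples did on the CPU.
Mat4 transpose(const Mat4& m) {
    Mat4 t{};
    for (std::size_t i = 0; i < 4; ++i)
        for (std::size_t j = 0; j < 4; ++j)
            t[i][j] = m[j][i];
    return t;
}

// dp4-style transform: four dot products against the rows of an
// already transposed matrix (the previous examples' approach).
Vec4 transformDp4(const Mat4& transposed, const Vec4& v) {
    Vec4 out{};
    for (std::size_t i = 0; i < 4; ++i)
        for (std::size_t j = 0; j < 4; ++j)
            out[i] += transposed[i][j] * v[j];
    return out;
}

// mul/mad/add-style transform: scale whole rows of the untransposed
// matrix by the vector components and sum them, with w fixed to 1 --
// exactly what the shader's mul/mad/mad/add sequence computes.
Vec4 transformMulMadAdd(const Mat4& m, const Vec4& v) {
    Vec4 out{};
    for (std::size_t j = 0; j < 4; ++j) {
        float s = (j < 3) ? v[j] : 1.0f; // v0.w is not used; w = 1
        for (std::size_t i = 0; i < 4; ++i)
            out[i] += s * m[j][i];
    }
    return out;
}
```

Both functions perform the same multiplications and additions in the same order per output component, so the results match exactly.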

The normals are transformed in the following three [i]dp3[/i] instructions and then renormalized with the [i]dp3[/i], [i]rsq[/i] and [i]mul[/i] instructions.

You can think of a normal transform in the following way: Normal vectors (unlike position vectors) are simply directions in space, and as such they should not get squished in magnitude, and translation doesn't change their direction. They should simply be rotated in some fashion to reflect the change in orientation of the surface. This change in orientation is a result of rotating and squishing the object, but not moving it. The information for rotating a normal can be extracted from the 4x4 transformation matrix by transposing its inverse. A more math-related explanation is given in [Haines/Möller][Turkowski].

So the bullet-proof way to transform normals is to use the transpose of the inverse of the matrix that is used to transform the object. If the matrix used to transform the object is called M, then we must use the matrix N, below, to transform the normals of this object.
[indent][bquote][font="Courier New"][color="#000080"]N = transpose( inverse(M) )[/color][/font][i]

The normal can be transformed with the transformation matrix (usually the world matrix), that is used to transform the object in the following cases:[/i]
[list][*][i]Matrix formed from rotations (orthogonal matrix), because the inverse of an orthogonal matrix is its transpose[/i][*][i]Matrix formed from rotations and translation (rigid-body transforms), because translations do not affect vector direction[/i][*][i]Matrix formed from rotations and translation and uniform scalings, because such scalings affect only the length of the transformed normal, not its direction. A uniform scaling is simply a matrix which uniformly increases or decreases the object's size, vs. a non-uniform scaling, which can stretch or squeeze an object. If uniform scalings are used, then the normals do have to be renormalized.[/i][/list][i] Therefore using the world matrix would be sufficient in this example.[/i][/bquote] [/indent] That's exactly what the source does. The inverse world matrix is delivered to the vertex shader via [i]c4[/i] - [i]c6[/i]. The [i]dp3[/i] instruction handles the matrix in a similar way to [i]dp4[/i].
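The effect of the inverse-transpose rule can be illustrated with a short standalone C++ sketch (names are hypothetical; a simple non-uniform scale matrix is used so the inverse stays trivial): transforming a normal with the object's matrix M breaks its perpendicularity to a surface tangent, while transpose(inverse(M)) preserves it.

```cpp
#include <array>

using Vec3 = std::array<float, 3>;

// Plain dot product of two 3D vectors.
float dot(const Vec3& a, const Vec3& b) {
    return a[0]*b[0] + a[1]*b[1] + a[2]*b[2];
}

// Transform by a diagonal (scale) matrix diag(sx, sy, sz). For a
// diagonal matrix the inverse is just the reciprocal of each entry and
// transposition changes nothing, which keeps this sketch short.
Vec3 mulDiag(const Vec3& s, const Vec3& v) {
    return {s[0]*v[0], s[1]*v[1], s[2]*v[2]};
}
```

With a non-uniform scale diag(2, 1, 1), a tangent (1, -1, 0) and its normal (1, 1, 0) stop being perpendicular if both are transformed by M, but stay perpendicular when the normal is transformed by the inverse diag(0.5, 1, 1) instead.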

When a matrix is multiplied with a vector, each column of the matrix has to be multiplied with each component of the vector. [i]dp3[/i] and [i]dp4[/i] are only capable of multiplying each row of the matrix with each component of the vector. In the case of the position data, the matrix is therefore transposed to get the correct results.

In the case of the normals, no transposition is done, so [i]dp3[/i] calculates the dot product by multiplying the rows of the matrix with the components of the vector. This is the same as using the transposed matrix.

The normal is re-normalized with the [i]dp3[/i], [i]rsq[/i] and [i]mul[/i] instructions. Re-normalizing a vector means scaling it to a length of 1. That's necessary because we need a unit vector to calculate the diffuse lighting effect.

To calculate a unit vector, divide the vector by its magnitude or length. The magnitude of a vector is calculated by using the Pythagorean theorem:

x[sup]2[/sup] + y[sup]2[/sup] + z[sup]2[/sup] = m[sup]2[/sup]

The length of the vector is retrieved by

||A|| = sqrt(x[sup]2[/sup] + y[sup]2[/sup] + z[sup]2[/sup])

The magnitude of a vector has a special symbol in mathematics: a capital letter between two vertical bars, ||A||. So dividing the vector by its magnitude is:

UnitVector = Vector / sqrt(x[sup]2[/sup] + y[sup]2[/sup] + z[sup]2[/sup])

The lines of code in the vertex shader that handle the calculation of the unit vector look like this:

[indent][code]; renormalize it
dp3 r1.w, r1, r1 ; (src1.x * src2.x) + (src1.y * src2.y) + (src1.z * src2.z)
rsq r1.w, r1.w ; if (v != 0 && v != 1.0) v = (float)(1.0f / sqrt(v))
mul r1, r1, r1.w ; r1 * r1.w[/code][/indent] [i]dp3[/i] squares the x, y and z components of the temporary register r1, adds them and returns the result in r1.w. [i]rsq[/i] calculates the reciprocal square root of r1.w and stores the result in r1.w. [i]mul[/i] multiplies all components of r1 with r1.w. Afterwards, the value in r1.w is not used anymore in the vertex shader.

The underlying calculation of these three instructions can be represented by the following formula, which is equivalent to the formula given above:

UnitVector = Vector * 1/sqrt(x[sup]2[/sup] + y[sup]2[/sup] + z[sup]2[/sup])
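For clarity, here is the same dp3/rsq/mul sequence written as a small standalone C++ function (a sketch for illustration, not part of the example's source):

```cpp
#include <array>
#include <cmath>

using Vec3 = std::array<float, 3>;

// Mirrors the shader's renormalization: square and sum the components
// (dp3), take the reciprocal square root (rsq), then scale every
// component by that factor (mul).
Vec3 normalize(const Vec3& v) {
    float lenSq  = v[0]*v[0] + v[1]*v[1] + v[2]*v[2]; // dp3 r1.w, r1, r1
    float invLen = 1.0f / std::sqrt(lenSq);           // rsq r1.w, r1.w
    return {v[0]*invLen, v[1]*invLen, v[2]*invLen};   // mul r1, r1, r1.w
}
```

For example, normalizing (3, 0, 4), whose length is 5, yields (0.6, 0, 0.8), a vector of length 1.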

Lighting is calculated with the following three instructions:

[indent][code]dp3 r0, r1, -c[LIGHT_POSITION]
mul r0, r0, c[LIGHT_COLOR] ; modulate against light color
mul oD0, r0, c[DIFFUSE_COLOR] ; modulate against diffuse color[/code][/indent] Nowadays the lighting models used in current games are not based on much physical theory. Game programmers use approximations that try to simulate the way photons are reflected from objects in a rough but efficient manner.

One usually differentiates between different kinds of light sources and different reflection models. The common light sources are called directional light, point light and spotlight. The most common reflection models are ambient, diffuse and specular lighting.

This example uses a directional light source with an ambient and a diffuse reflection model.

[b]Directional Light[/b]

RacorX3 uses a light source at an infinite distance. This simulates the long distance the light beams have to travel from the sun, so we treat these light beams as being parallel. This kind of light source is called a directional light source.

[b]Diffuse Reflection[/b]

Whereas ambient light is considered to be uniform from any direction, diffuse light simulates the illumination of an object by a particular light source. Therefore, with the diffuse lighting model you are able to see that light falls onto the surface of an object from a particular direction.

It is based on the assumption that light is reflected equally well in all directions, so the appearance of the reflection does not depend on the position of the observer. The intensity of the light reflected in any direction depends only on how much light falls onto the surface.

If the surface of the object is facing the light source, which means it is perpendicular to the direction of the light, the density of the incident light is highest. If the surface faces the light source at an angle smaller than 90 degrees, the density is proportionally smaller.

The diffuse reflection model is based on a law of physics called Lambert's Law, which states that for ideally diffuse (totally matte) surfaces, the reflected light is determined by the cosine of the angle between the surface normal N and the light vector L.

[center] [b][attachment=4311:diffuselight.jpg]
Figure 12 - Diffuse Lighting[/b]
[/center]
The left figure shows a geometric interpretation of Lambert's Law (see also [RTR]). The middle figure shows the light rays hitting the surface perpendicularly, a distance d apart. The intensity of the light is related to this distance: it decreases as d becomes greater. This is shown in the right figure, where the light rays make an angle θ with the normal of the plane. This illustrates that the same amount of light that passes through one side of a right-angle triangle is reflected from the region of the surface corresponding to the triangle's hypotenuse. Due to the relationships that hold in a right-angle triangle, the length of the hypotenuse is d/cos θ times the length of the considered side. Thus you can deduce that if the intensity of the incident light is Idirected, the amount of light reflected from a unit surface is Idirected cos θ. Adjusting this with a coefficient that describes the reflection properties of the matter leads to the following equation (see also [Savchenko]):

Ireflected = Cdiffuse * Idirected cos θ

This equation demonstrates that the reflection is at its peak for surfaces that are perpendicular to the direction of the light and diminishes as the angle of incidence grows, because the cosine becomes smaller. If the angle between the normal and the light vector grows beyond 90 degrees, the surface faces away from the light and you obtain a negative intensity of the reflected light, which will be clamped by the output registers.

In an implementation of this model, you have to find a way to compute cos θ. By definition the dot or scalar product of the light and normal vector can be expressed as

N dot L = ||N|| ||L|| cos θ

where ||N|| and ||L|| are the lengths of the vectors. If both vectors have unit length, you can compute cos θ as the scalar or dot product of the light and normal vector. Thus the expression is

Ireflected = Cdiffuse * Idirected(N dot L)
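As a plain C++ sketch (hypothetical helper names, not part of the example's source), the clamped diffuse term for unit-length N and L looks like this:

```cpp
#include <algorithm>
#include <array>
#include <cmath>

using Vec3 = std::array<float, 3>;

// Plain dot product of two 3D vectors.
float dot(const Vec3& a, const Vec3& b) {
    return a[0]*b[0] + a[1]*b[1] + a[2]*b[2];
}

// Lambertian diffuse intensity for unit-length N and L; negative values
// (surface facing away from the light) are clamped to zero, just as the
// shader's output registers clamp negative colors.
float diffuse(const Vec3& n, const Vec3& l) {
    return std::max(0.0f, dot(n, l));
}
```

A surface facing the light head-on yields intensity 1, a surface facing away yields 0, and a light at 45 degrees yields cos 45° ≈ 0.707.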

So (N dot L) is the same as the cosine of the angle between N and L; therefore, as the angle decreases, the resulting diffuse value gets higher. This is exactly what the [i]dp3[/i] instruction and the first [i]mul[/i] instruction do. Here is the source with the relevant part of const.h:

[indent][code]#define LIGHT_POSITION 11
#define DIFFUSE_COLOR 14
#define LIGHT_COLOR 15

-----
dp3 r0, r1, -c[LIGHT_POSITION]

mul r0, r0, c[LIGHT_COLOR] ; modulate against light color
mul oD0, r0, c[DIFFUSE_COLOR] ; modulate against material[/code][/indent] So the vertex shader registers are involved in the following way:

[indent][code]r0 = (r1 dot -c11) * c14[/code][/indent] Additionally, this example modulates against the blue light color in [i]c15[/i]:

[indent][code]r0 = (c15 * (r1 dot -c11)) * c14[/code][/indent] [size="3"][b]Summary[/b][/size]

RacorX3 shows the usage of an include file to give the constants names that are easier to remember. It shows how to normalize vectors, and it only scratches the surface of the problem of transforming normals, but demonstrates a bullet-proof method to do it.

The example introduces an optimization technique that eliminates the need to transpose the clip space matrix with the help of the CPU, and it shows the usage of a simple diffuse reflection model that lights the quad on a per-vertex basis.

[size="5"][b]RacorX4[/b][/size]

RacorX4 (see attached resource file) adds a few features compared to RacorX3. First of all, this example no longer uses a plain quad; instead it uses a Bézier patch class that shows a curved surface with a texture attached to it. To simulate light reflections, a combined diffuse and specular reflection model is used.

[center] [b][attachment=4312:RacorX4.jpg]
Figure 13 - RacorX4[/b]
[/center]
RacorX4 uses the trackball class provided with the Common Files framework to rotate and move the object. You can choose different specular colors with the [lessthan]C> key and zoom in and out with the mouse wheel.

Compared to RacorX3, the texture coordinates are additionally mapped to input register [i]v7[/i]:

[indent][code]DWORD dwDecl[] =
{
D3DVSD_STREAM(0),
D3DVSD_REG(0, D3DVSDT_FLOAT3 ), // input register v0
D3DVSD_REG(3, D3DVSDT_FLOAT3 ), // normal in input register v3
D3DVSD_REG(7, D3DVSDT_FLOAT2), // tex coordinates
D3DVSD_END()
};[/code][/indent] The corresponding layout of the vertex buffer in BPatch.h looks like:

[indent][code]struct VERTICES {
D3DXVECTOR3 vPosition;
D3DXVECTOR3 vNormal;
D3DXVECTOR2 uv;
};

// Declare custom FVF macro.
#define D3DFVF_VERTEX (D3DFVF_XYZ|D3DFVF_NORMAL|D3DFVF_TEX1)[/code][/indent] The third flag used in the custom FVF macro indicates the usage of one texture coordinate pair. This macro is provided to [i]CreateVertexBuffer()[/i]. The vertex shader gets the position values from the vertex buffer via [i]v0[/i], the normal values via [i]v3[/i] and the two texture coordinates via [i]v7[/i].

The vertex shader constants are set in [i]FrameMove()[/i] and [i]RestoreDeviceObjects()[/i]. The file const.h holds the following defines:

[indent][code]#define CLIP_MATRIX 0
#define CLIP_MATRIX_1 1
#define CLIP_MATRIX_2 2
#define CLIP_MATRIX_3 3

#define INVERSE_WORLD_MATRIX 8
#define INVERSE_WORLD_MATRIX_1 9
#define INVERSE_WORLD_MATRIX_2 10

#define LIGHT_VECTOR 11
#define EYE_VECTOR 12
#define SPEC_POWER 13
#define SPEC_COLOR 14[/code][/indent] In [i]FrameMove()[/i] the clip matrix, the inverse world matrix, an eye vector and a specular color are set:

[indent][code]// set the clip matrix
m_pd3dDevice->SetVertexShaderConstant(CLIP_MATRIX,
                  &(m_matWorld * m_matView * m_matProj), 4);

// set the world inverse matrix
D3DXMATRIX matWorldInverse;
D3DXMatrixInverse(&matWorldInverse, NULL, &m_matWorld);
m_pd3dDevice->SetVertexShaderConstant(INVERSE_WORLD_MATRIX,
                  &matWorldInverse, 3);

// stuff for specular lighting
// set eye vector E (member name assumed: the eye point used to build the view matrix)
m_pd3dDevice->SetVertexShaderConstant(EYE_VECTOR, &m_vEyePt, 1);

// specular color
if(m_bKey['C'])
{
m_bKey['C']=0;
++m_dwCurrentColor;
if(m_dwCurrentColor >= 3)
m_dwCurrentColor=0;
}
m_pd3dDevice->SetVertexShaderConstant(SPEC_COLOR,
                  &m_vLightColor[m_dwCurrentColor], 1);[/code][/indent] As in the previous example, the concatenated world-, view- and projection matrix is set into [i]c0[/i] - [i]c3[/i] to get transposed in the vertex shader, and the inverse 4x3 world matrix is sent to the vertex shader to transform the normal.

The eye vector ([i]EYE_VECTOR[/i]) that is used to build up the view matrix is stored in constant register [i]c12[/i]. As shown below, this vector is helpful for the specular reflection model used in the upcoming examples.

The user can pick one of the following specular colors with the [lessthan]C> key:

[indent][code]m_vLightColor[0]=D3DXVECTOR4(0.3f,0.1f,0.1f,1.0f);
m_vLightColor[1]=D3DXVECTOR4(0.1f,0.5f,0.1f,1.0f);
m_vLightColor[2]=D3DXVECTOR4(0.0f,0.1f,0.4f,1.0f);[/code][/indent] In [i]RestoreDeviceObjects()[/i] the specular power, the light vector and the diffuse color are set:

[indent][code]// specular power
// (values here are illustrative; the shader reads c[SPEC_POWER].y)
D3DXVECTOR4 vSpecPower(0.0f, 10.0f, 25.0f, 50.0f);
m_pd3dDevice->SetVertexShaderConstant(SPEC_POWER, &vSpecPower, 1);

// light direction
D3DXVECTOR3 vLight(0,0,1);
m_pd3dDevice->SetVertexShaderConstant(LIGHT_VECTOR, &vLight, 1);

D3DXCOLOR matDiffuse(0.9f, 0.9f, 0.9f, 1.0f);
m_pd3dDevice->SetVertexShaderConstant(DIFFUSE_COLOR, &matDiffuse, 1);[/code][/indent] As in the previous examples, the light is positioned at (0.0, 0.0, 1.0). There are four specular power values, one of which is used in the vertex shader.
[indent][i][bquote]To optimize the usage of SetVertexShaderConstant(), the specular power could be packed into the fourth component of the light vector, which uses only three of its components.[/bquote][/i][/indent] [size="3"][b]The Vertex Shader[/b][/size]

The vertex shader handles a combined diffuse and specular reflection model:

[indent][code]; diffuse and specular vertex lighting

#include "const.h"

vs.1.1
; transpose and transform to clip space
mul r0, v0.x, c[CLIP_MATRIX]
mad r0, v0.y, c[CLIP_MATRIX_1], r0
mad r0, v0.z, c[CLIP_MATRIX_2], r0
add oPos, r0, c[CLIP_MATRIX_3]

; output texture coords
mov oT0, v7

; transform normal
dp3 r1.x, v3, c[INVERSE_WORLD_MATRIX]
dp3 r1.y, v3, c[INVERSE_WORLD_MATRIX_1]
dp3 r1.z, v3, c[INVERSE_WORLD_MATRIX_2]

; renormalize it
dp3 r1.w, r1, r1
rsq r1.w, r1.w
mul r1, r1, r1.w

; light vector L
; we need L towards the light, thus negate sign
mov r5, -c[LIGHT_VECTOR]

; N dot L
dp3 r0.x, r1, r5

; compute normalized half vector H = L + V
add r2, c[EYE_VECTOR], r5 ; L + V

; renormalize H
dp3 r2.w, r2, r2
rsq r2.w, r2.w
mul r2, r2, r2.w

; N dot H
dp3 r0.y, r1, r2

; compute specular and clamp values (lit)
; r0.x - N dot L
; r0.y - N dot H
; r0.w - specular power n
mov r0.w, c[SPEC_POWER].y ; n must be between -128.0 and 128.0
lit r4, r0

; diffuse color * diffuse intensity(r4.y)
mul oD0, c[DIFFUSE_COLOR], r4.y

; specular color * specular intensity(r4.z)
mul oD1, c[SPEC_COLOR], r4.z[/code][/indent] Compared to the vertex shader in the previous example program, this shader maps the texture coordinates to texture stage 0 with the following instruction:

[indent][code]; output texture coords
mov oT0, v7[/code][/indent] The corresponding texture stage states set for the shading of the pixels in the multitexturing unit will be shown below in the section named "Non-Shader specific code". The instructions that transform the normal and calculate the diffuse reflection were already discussed along with the previous example.

The real new functionality in this shader is the calculation of the specular reflection, which happens in the code lines starting with the [i]add[/i] instruction.

[b]Specular Reflection[/b]

Compared to the diffuse reflection model, in the specular reflection model the appearance of the reflection depends on the position of the viewer. When the viewing direction coincides, or nearly coincides, with the direction of specular reflection, a bright highlight is observed. This simulates the reflection of a light source by a smooth, shiny and polished surface.

To describe reflection from shiny surfaces, an approximation is commonly used which is called the Phong illumination model (not to be confused with Phong shading), named after its creator Bui Tuong Phong [Foley]. According to this model, a specular highlight is seen when the viewer is close to the direction of reflection. The intensity of light falls off sharply when the viewer moves away from the direction of the specular reflection.

[center] [b][attachment=4313:specularreflection.jpg]
Figure 14 - Vectors for Specular Reflection[/b]
[/center]
A model describing this effect has to be aware of at least the location of the light source L, the location of the viewer V, and the orientation of the surface N. Additionally, a vector R that describes the direction of the reflection might be useful. The halfway vector H, which halves the angle between L and V, will be introduced below.

The original Phong formula approximates the falloff of the intensity. It looks like this:

kspecular cos[sup]n[/sup](β)

where kspecular is a scalar coefficient showing the percentage of the incident light reflected, and β describes the angle between R and V. The exponent [i]n[/i] characterizes the shiny properties of the surface and ranges from one to infinity. Objects that are matte require a small exponent, since they produce a large, dim, specular highlight with a gentle falloff. Shiny surfaces should have a sharp highlight that is modeled by a very large exponent, making the intensity falloff very steep.

Together with the diffuse reflection model shown above, the Phong illumination model can be expressed in the following way:

Ireflected = Idirected((N dot L) + kspecular cos[sup]n[/sup](β))

cos[sup]n[/sup](β) can be replaced by using the dot or scalar product of the unit vectors R and V:

Ireflected = Idirected((N dot L) + kspecular (R dot V)[sup]n[/sup])

This is the generally accepted Phong reflection equation. As the angle between V and R decreases, the specularity rises.

Because it is expensive to calculate the reflection vector R (mirror of light incidence around the surface normal), James F. Blinn [Blinn] introduced a way to do this using an imaginary vector H, which is defined as halfway between L and V. H is therefore:

H = (L + V) / 2

When H coincides with N, the direction of the reflection coincides with the viewing direction V and a specular highlight is observed. So the original formula

(R dot V)[sup]n[/sup]

is expanded to

(N dot ((L + V) / 2))[sup]n[/sup]

or

(N dot H)[sup]n[/sup]

The complete Blinn-Phong model formula looks like:

Ireflected = Idirected((N dot L) + kspecular (N dot H)[sup]n[/sup])
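The complete Blinn-Phong term can be sketched in standalone C++ (illustrative names, assuming unit-length N, L and V as in the shader; since H is renormalized anyway, the division by two can be dropped):

```cpp
#include <algorithm>
#include <array>
#include <cmath>

using Vec3 = std::array<float, 3>;

// Plain dot product of two 3D vectors.
float dot(const Vec3& a, const Vec3& b) {
    return a[0]*b[0] + a[1]*b[1] + a[2]*b[2];
}

// Scale a vector to unit length.
Vec3 normalize(const Vec3& v) {
    float invLen = 1.0f / std::sqrt(dot(v, v));
    return {v[0]*invLen, v[1]*invLen, v[2]*invLen};
}

// Blinn-Phong intensity for unit vectors N, L, V: the diffuse term
// N dot L plus a specular term (N dot H)^n built from the half vector
// H = L + V (renormalized). kSpecular and power are material parameters;
// negative dot products are clamped, as the lit instruction does.
float blinnPhong(const Vec3& n, const Vec3& l, const Vec3& v,
                 float kSpecular, float power) {
    Vec3 h = normalize({l[0] + v[0], l[1] + v[1], l[2] + v[2]});
    float nDotL = std::max(0.0f, dot(n, l));
    float nDotH = std::max(0.0f, dot(n, h));
    return nDotL + kSpecular * std::pow(nDotH, power);
}
```

With light and viewer both head-on, both dot products are 1 and the intensity is 1 + kSpecular; with the light grazing the surface, the diffuse term vanishes and the specular term becomes tiny for a large exponent.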

Now back to the relevant code piece that calculates the specular reflection:

[indent][code]; compute normalized half vector H = L + V
add r2, c[EYE_VECTOR], r5 ; L + V

; renormalize H
dp3 r2.w, r2, r2
rsq r2.w, r2.w
mul r2, r2, r2.w

; N dot H
dp3 r0.y, r1, r2

; compute specular and clamp values (lit)
mov r0.w, c[SPEC_POWER].y
lit r4, r0[/code][/indent]