Dbowler92

Member Since 08 Sep 2012
Offline Last Active May 25 2014 04:17 PM
-----

Topics I've Started

Creating an SRV to an ID3D11Buffer resource.

10 October 2013 - 07:59 PM

Hi all.

 

Buffer Creation:

    //Create LLSEB
    D3D11_BUFFER_DESC llsebBD;
    llsebBD.BindFlags = D3D11_BIND_UNORDERED_ACCESS | D3D11_BIND_SHADER_RESOURCE;
    llsebBD.ByteWidth = sizeof(UINT) * totalTiles * 2; //2 values per tile (start and end)
    llsebBD.CPUAccessFlags = 0;
    llsebBD.MiscFlags = 0;
    llsebBD.StructureByteStride = 0;
    llsebBD.Usage = D3D11_USAGE_DEFAULT;
    //Create buffer - no initial data again.
    HR(d3dDevice->CreateBuffer(&llsebBD, 0, &lightListStartEndBuffer));

UAV Creation:

	//UAV
	D3D11_UNORDERED_ACCESS_VIEW_DESC llsebUAVDesc;
	llsebUAVDesc.Format = DXGI_FORMAT_R32G32_UINT;
	llsebUAVDesc.Buffer.FirstElement = 0;
	llsebUAVDesc.Buffer.Flags = 0;
	llsebUAVDesc.Buffer.NumElements = totalTiles;
	llsebUAVDesc.ViewDimension = D3D11_UAV_DIMENSION_BUFFER;
	HR(d3dDevice->CreateUnorderedAccessView(lightListStartEndBuffer, &llsebUAVDesc, 
		&lightListStartEndBufferUAV));

SRV Creation:

	D3D11_SHADER_RESOURCE_VIEW_DESC llsebSRVDesc;
	//2 32 bit UINTs per entry (xy)
	llsebSRVDesc.Format = DXGI_FORMAT_R32G32_UINT;
	llsebSRVDesc.Buffer.FirstElement = 0;
	llsebSRVDesc.Buffer.ElementOffset = 0;
	llsebSRVDesc.Buffer.NumElements = totalTiles;
	llsebSRVDesc.Buffer.ElementWidth = sizeof(UINT) * 2;
	llsebSRVDesc.ViewDimension = D3D11_SRV_DIMENSION_BUFFER;
	HR(d3dDevice->CreateShaderResourceView(lightListStartEndBuffer, &llsebSRVDesc, 
		&lightListStartEndBufferSRV));
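
For context, the views above are consumed in HLSL along these lines (a sketch only; the register assignments and variable names here are assumptions, not my actual shader code):

	//Written by the compute shader via the UAV (DXGI_FORMAT_R32G32_UINT -> uint2)
	RWBuffer<uint2> LightListStartEndBufferRW : register(u0);
	//Read by the pixel shader via the SRV
	Buffer<uint2> LightListStartEndBufferRO : register(t0);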

The idea here is to create a Light List Start End Buffer (llseb) for a tile based forward renderer. In my solution, I have a buffer of 32 bit uints (2 per tile). Creating the buffer works fine, as does the UAV (writes in the compute shader seem to work perfectly fine). It is, however, with the SRV that I have problems (reading the buffer in a pixel shader in order to visualise how many lights affect a tile). Specifically, the following line:

llsebSRVDesc.Buffer.ElementWidth = sizeof(UINT) * 2;

With this configuration, it produces the following output when we want to visualise this buffer (note that I have faked the output from the compute shader - I'm also having some issues with lights not passing the frustum/sphere intersection test, but I'll be working on that later).

 

[Image: NotWorking_zps5f093e3a.png (incorrect output)]

 

Now, this clearly isn't what we want. :/

 

However, if we change the line in question to:

    llsebSRVDesc.Buffer.ElementWidth = totalTiles;

We get the following output indicating that we are reading the buffer correctly:
 

[Image: Working_zps55bbe41e.png (correct output)]

 

 

This seems correct. But I am seriously at a loss as to why this works. According to MSDN:

 

ElementWidth

Type: UINT

The width of each element (in bytes). This can be determined from the format stored in the shader-resource-view description.

 

Which seems to indicate that I was right in my first example (or, at least, sort of correct). Or am I misunderstanding this completely?
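
For reference, d3d11.h declares the D3D11_BUFFER_SRV fields as pairs of anonymous unions (paraphrased below). If I am reading it correctly, ElementWidth shares storage with NumElements, so my assignment of sizeof(UINT) * 2 would be overwriting the element count set just above it:

	//Paraphrased from d3d11.h: each pair of fields is a union, so the later
	//write to ElementWidth lands in the same memory as NumElements.
	typedef struct D3D11_BUFFER_SRV
	{
	    union
	    {
	        UINT FirstElement;
	        UINT ElementOffset;
	    };
	    union
	    {
	        UINT NumElements;
	        UINT ElementWidth;
	    };
	} D3D11_BUFFER_SRV;

Under that reading, the "working" version works simply because totalTiles is the last value written into the shared NumElements/ElementWidth slot.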

 

Many thanks.

 

Dan


Light Culling on the GPU for a Tile Based Forward Renderer

04 October 2013 - 06:14 AM

Hi,

 

I originally posted this in the graphics programming section of the forum, but it probably also applies to this portion of the forum as well.

 

 

Hi,

 

I'm currently in the process of developing a tile based forward renderer (like Forward+) for a university project, and this week have begun work on the light culling stage utilising the compute shader. I am a little inexperienced with regard to the compute shader and parallel programming, but I do know that you should avoid dynamic branching as much as possible.

 

The following code is what I have so far. In the application code (not shown), I launch enough thread groups to cover the entire screen, with each thread group containing n,n,1 threads (n is 8 in my example, although you can change this). Thus, a thread per pixel.
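
Roughly like this on the application side (a sketch only; rtWidth, rtHeight and the context pointer name are assumptions):

    //One thread group per tile; one thread per pixel within a tile.
    UINT tilesX = (UINT)ceilf(rtWidth  / (float)TILE_PIXEL_RESOLUTION);
    UINT tilesY = (UINT)ceilf(rtHeight / (float)TILE_PIXEL_RESOLUTION);
    d3dImmediateContext->Dispatch(tilesX, tilesY, 1);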

 

The idea is for each thread in a thread group to sample the depth buffer (MSAA depth buffers supported) and store the value in shared memory, then loop through all these depth values and work out which is the largest. (I am supporting transparency with the trivial solution of having the minimum depth as 0. This was suggested in GPU Pro 4 as a potential solution for the time being. I have an idea which uses 2 depth buffers to better support transparency, but for now we will stick with what I've got.)

 

However, in order to do this, I have had to add an if statement. This if statement checks the group thread ID to ensure that only the first thread in every thread group executes the code - or at least, that was the idea (the line to look for is "if (groupThreadID.x == 0 && groupThreadID.y == 0 && groupThreadID.z == 0)"):
 

//Num threads per thread group. One thread per pixel. This is a 2D thread group. Shared
//memory will be used (shared between threads in the same thread group) to cache the
//depth value from the depth buffer. For this pass, we have one thread group per tile
//and a thread per pixel in the tile.
[numthreads (TILE_PIXEL_RESOLUTION, TILE_PIXEL_RESOLUTION, 1)]
void CSMain(
    in int3 groupID            : SV_GroupID,           //Uniquely identifies each thread group
    in int3 groupThreadID      : SV_GroupThreadID,     //Uniquely identifies a thread inside a thread group.
    in int3 dispatchThreadID   : SV_DispatchThreadID,  //Uniquely identifies a thread relative to ALL threads generated in a Dispatch() call
    uniform bool useMSAA)                              //MSAA Enabled? Sample MSAA DEPTH Buffer
{
    //Stage 1 - We sample the depth buffer and work out what the maximum Z value is for every tile.
    //This is done by looping through all the depth values of the pixels that share the same
    //tile and comparing them.
    //
    //We then write this data to the MaxZTileBuffer RWBuffer (Optional). This data is handy
    //for stage 2 where we can cull more lights based on this maximum depth value.
    
    //Load value to sample the depth buffer.
    int3 sampleVal = int3( (dispatchThreadID.x), (dispatchThreadID.y), 0);
    
    //This is the sampled depth value from the depth buffer for this given thread.
    //If msaa is used (Ie, MSAA enabled depth buffer), this will represent the average
    //of the 4 samples.
    float sampledDepth = 0.0f;

    //Sample MSAA buffer if MSAA is enabled
    [flatten]
    if (useMSAA)
    {
        //Sample the buffer (4 times)
        float s0 = ZPrePassDepthBufferMSAA.Load(sampleVal.xy, 0).r;
        float s1 = ZPrePassDepthBufferMSAA.Load(sampleVal.xy, 1).r;
        float s2 = ZPrePassDepthBufferMSAA.Load(sampleVal.xy, 2).r;
        float s3 = ZPrePassDepthBufferMSAA.Load(sampleVal.xy, 3).r;
        
        //Average out.
        sampledDepth = (s0 + s1 + s2 + s3) / 4.0f;
    }
    //Sample standard buffer  
    else
        sampledDepth = ZPrePassDepthBuffer.Load(sampleVal).r;

    //Write to the (thread group) shared memory and wait for threads to complete their work.
    depthCache[groupThreadID.x][groupThreadID.y] = sampledDepth;
    GroupMemoryBarrierWithGroupSync();

    //Only one thread in the thread group should perform this check and then the
    //write to our MaxTileZBuffer.
    if (groupThreadID.x == 0 && groupThreadID.y == 0 && groupThreadID.z == 0)
    {
        //Loop through the shared pool (essentially a 2D array) and work out what the maximum
        //value is for this thread group (Tile).
        //Store the maximum value in the following floating point variable - Init to 0.0f.
        float maxDepthVal = 0.0f;
        //Unroll - i and j are known at compile time - the compiler will happily
        //do this for us, but just in case.
        [unroll]
        for (int i = 0; i < TILE_PIXEL_RESOLUTION; i++)
        {  
            for (int j = 0; j < TILE_PIXEL_RESOLUTION; j++)
            {
                //Extract value from the depth cache.
                float depthToTest = depthCache[i][j];
                //Test and update if larger than the already stored value.
                if (depthToTest > maxDepthVal)
                    maxDepthVal = depthToTest;
            }//End for j
        }//End for i

        //Write to Max Z Tile Buffer for use in the second pass - Only one thread in a thread
        //group should do this.
        //
        //Note: we can turn this feature off. Buffer writes are expensive, and this one is
        //not strictly required - it is only needed if we want to visualise the tiles' max
        //depth values - so a #define is used to enable/disable the buffer write.
#ifdef SHOULD_WRITE_TO_MAX_Z_TILE_BUFFER
        int tilesX = ceil( (rtWidth  / (float)TILE_PIXEL_RESOLUTION) );
        int maxZTileIndex = groupID.x + (groupID.y * tilesX);
        MaxZTileBuffer[maxZTileIndex] = maxDepthVal;  
#endif

        //Stage 2 - In this stage, we will build our LLIB (Light List Index Buffer - essentially
        //a list which indexes in to the Light List Buffer and tells us which lights affect
        //a given tile) and our LLSEB (Light List Start End Buffer - a list which indexes
        //in to the LLIB).

    }//End if(...)
}//End CSMain()

Now, my limited understanding of dynamic branching in shaders suggests this may not be a good move - each thread will execute the code within the code block and then decide whether the result should be kept or discarded later (in order to preserve parallelism?). Not ideal, particularly when I am going to do >3000 sphere/frustum intersections in stage 2.

 

Or, since all but one thread in a thread group will not actually execute the code, does the hardware actually do a pretty good job of handling this system? (63 of the 64 threads skip it in our example.)
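
For comparison, one branch-light alternative I have seen suggested (a sketch only, not my current code) is to let every thread fold its depth into a groupshared uint with InterlockedMax. This relies on the assumption that depth values are non-negative, so their IEEE bit patterns order correctly when compared as uints:

    //Assumed addition alongside the existing depthCache declaration:
    groupshared uint maxDepthBits;

    //Inside CSMain(), replacing the single-threaded loop:
    if (groupThreadID.x == 0 && groupThreadID.y == 0)
        maxDepthBits = 0;
    GroupMemoryBarrierWithGroupSync();

    //Every thread participates, so there is no large divergent block.
    InterlockedMax(maxDepthBits, asuint(sampledDepth));
    GroupMemoryBarrierWithGroupSync();

    //All threads in the group now see the tile's maximum depth.
    float maxDepthVal = asfloat(maxDepthBits);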

 

(My test GPU is a 650M (the laptop I work on at uni) or a 570 (at home - I will be upgrading to a 770/680 in the near future). I am led to believe that on modern GPUs dynamic branching is less of a concern, although I don't really understand why.)

 

Many thanks,

 

Dan.



Undergraduate thesis ideas in 3D graphics.

29 August 2013 - 12:10 PM

Hi all,

 

*tl;dr is at the bottom – sorry for the long one!

 

I'm about to begin my third and final year at uni (studying Games Programming) and it's certainly getting close to the time I need to be thinking of a thesis for my final year individual project. (400 hours which includes a major piece of software (60%), 10k word report (30%) and 1 hour long demo/viva (10%))

 

My degree, sadly, doesn't do all that much in the way of the programmable pipeline - actually, it does absolutely nothing! So I have spent the past 3/4 months studying and developing a rendering demo application using Direct3D11 via Frank D. Luna's book "Direct3D11: A Shader Approach" (an excellent resource).

 

That is now complete and I'm looking to get started on the actual project as soon as physically possible (The code, anyway). But, I'm a little out of ideas on what would make a reasonable undergrad project.

 

I've had discussions with one of my lecturers (it's worth noting he's a physics kind of guy and not a 3D graphics one, but as far as I know no one at the uni is a graphics person, so I'm stuck with that - which doesn't bother me... I like learning about this kind of stuff!) and proposed an idea: the original idea was to develop a forward renderer, a classic deferred renderer and a light pre-pass renderer, then compare the performance, limitations, etc. of the three renderers in a series of different scenes (e.g. low poly + high light count, high poly + low light count, etc.). He liked the idea - it did need some refining, however.

 

I was told that the whole idea of a thesis is to pick a subject, explore it, and come up with a project based on an academic survey (looking at existing material and then developing an idea based on, for example, known issues that could be fixed - I have found very little data around the web that actually compares these systems, just general statements). This I did, and I came up with a slightly better idea (in my opinion): given that this is a software engineering project, I should be focusing more on producing an (almost) complete system rather than producing basic systems and comparing them.

 

The current idea is to develop an (almost) complete deferred renderer, in several stages:

 

Stage 1: Develop a (classic) deferred renderer from scratch (and a basic framework to make it semi-user friendly, although I don't fancy spending too much time on this - e.g. abstracting ID3D11Buffers is a waste of time). This would be a bog-standard system with a larger G-Buffer (if you have read "Practical Rendering and Computation with Direct3D11", it's pretty much going to be the exact same system: 4 RTVs, full screen quads for the lighting pass). I've begun this stage anyway, as I feel it would be interesting to get a deferred renderer done even if it doesn't form the basis of a thesis.

 

Stage 2: Optimise the G-Buffer pass (reducing the size of the G-Buffer).

 

Stage 3: Optimise the lighting pass (reducing the number of pixel shader invocations).

 

Stage 4: Enhancements - solving the multiple material, AA and transparency issues (for this one I may simply use a forward renderer unless I find a very good resource on a good and simple system).

 

Stage 5: Other enhancements - I'll try and get a lovely original scene made which uses features from my forward rendering demo, ported into the deferred system (displacement mapping, character animation, maybe SSAO, etc.). This proves that my deferred system isn't limited to texture mapping, interpolated colours and normal mapping.

 

I'll speak to my lecturer tomorrow and suggest the idea to him. As this has come from the academic research (i.e. there's a lot of talk about deferred rendering systems and their cons), it should (*fingers crossed*) be well received.

 

I feel the above idea is a decent one given that I have had to learn Direct3D11 by myself (which itself took ~200 of the 400 hours)... but it's not too far off what people are doing in second year at Teesside University (a single module). That said, they are taught Direct3D10, taught deferred rendering systems, and given a framework.

 

So my two questions are: 1) Do you think the above system is a good project idea for an undergrad thesis? Any amendments? Comments? And 2) do you have any other suggestions for a thesis which relates to graphics programming in some way? (Personally, I would like to stay away from a pure GPGPU programming thesis, but I'm open to suggestions.)

 

tl;dr

 

  1. Just starting third year – looking for ideas/suggestions for a graphics programming thesis.

  2. Current idea is to develop a deferred renderer (and basic framework) in 4/5 stages: basic implementation, optimisation (G-Buffer pass and lighting pass), and enhancements (solving the multiple material, AA and transparency issues in some way).

  3. Is this a good idea, bearing in mind I have had to learn Direct3D11 by myself (not taught at uni – and thus the ~200 hours spent learning it should be credited towards the 400 hours allocated to the project)? Would it be respected by games companies looking to employ me as a junior/intern graphics programmer?

  4. Suggestions/comments welcomed on the deferred system.

  5. Suggestions for something else related to graphics programming welcomed.

 

Many thanks,

 

Dan


Changing Screen Resolution and Windowed<->Full Screen Toggle

09 September 2012 - 12:13 AM

Hi,

Long time admirer, first time poster.

I'm currently in the process of making my own 2D DirectX9 / Direct3D game engine, based on the one in the Beginning Game Programming book. So far so good, but I have hit a bit of a brick wall with this one. Despite my googling, I can't seem to find much on these topics, despite the fact that an awful lot of games allow the player the option to A) toggle full screen mode (Alt+Enter) and B) change the screen resolution.

A)

I understand how to easily set up my game to initially run in full screen mode or windowed mode by changing the D3DPRESENT_PARAMETERS and a few settings in the CreateWindow function... But what I can't seem to do is give the user the option to press Alt+Enter in game to change to full screen.

I have read that I need to change the D3DPRESENT_PARAMETERS and call Reset() via the Direct3D device object... But this blew up my game and I'm not sure why.

To be honest, I may just force the player to use full screen mode for my game anyway... but it would be nice to have this option implemented within the game engine for future projects.
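
(For what it's worth, my understanding so far is that everything in D3DPOOL_DEFAULT - including a cached back buffer pointer and the ID3DXSprite - has to be released or put into a lost state before Reset(), or Reset() fails. A sketch of what I think the sequence should look like, using the globals from the GameMain code at the bottom of this post:)

	//Sketch: toggle full screen. Assumes the globals from GameMain below.
	bool ToggleFullScreen(HWND window, bool fullScreen)
	{
	    //Release/suspend everything living in D3DPOOL_DEFAULT first.
	    if (backbuffer) { backbuffer->Release(); backbuffer = NULL; }
	    if (spriteHandler) spriteHandler->OnLostDevice();

	    D3DPRESENT_PARAMETERS d3dpp;
	    ZeroMemory(&d3dpp, sizeof(d3dpp));
	    d3dpp.Windowed = !fullScreen;
	    d3dpp.SwapEffect = D3DSWAPEFFECT_DISCARD;
	    d3dpp.BackBufferFormat = D3DFMT_X8R8G8B8;
	    d3dpp.BackBufferCount = 1;
	    d3dpp.BackBufferWidth = SCREENW;
	    d3dpp.BackBufferHeight = SCREENH;
	    d3dpp.hDeviceWindow = window;

	    if (FAILED(d3ddev->Reset(&d3dpp)))
	        return false;

	    //Re-acquire default-pool resources after a successful Reset().
	    if (spriteHandler) spriteHandler->OnResetDevice();
	    d3ddev->GetBackBuffer(0, 0, D3DBACKBUFFER_TYPE_MONO, &backbuffer);
	    return true;
	}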

B)

This is the feature I really think I need to implement - and yet, I can't seem to find anything out there to help.

Currently I have a global const int for each of my screen height and width. These are then used when creating the window, and also in the D3DPRESENT_PARAMETERS when creating the back buffer.

One other thing that has bugged me regarding resolution changes - how do you guys handle rendering? For example, say my window is of size 1024 by 768. Let's say I render my texture (with the transformation matrix) at point 1024, 768 (centred) - this obviously renders right at the bottom right of the screen... but let's say the user changes the resolution settings to 1920 by 1080. That image is no longer rendered at the bottom right of the screen.

How would you guys suggest solving this problem? Do you change your rendering code so that you render at a percentage along X and down Y? E.g. you render the same texture at (100%, 100%), and this would happily render the image at the bottom right of the screen no matter how the user has set his/her settings. Of course, this brings about another issue - the texture would still remain the same size in pixels as before the resolution change, so you would end up with blank spaces in between textures (I plan to use tile maps)...

Or should I draw to a back buffer that is 1920 by 1080 and scale the surface down when rendering on screen?
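
(The first idea would presumably look something like this with ID3DXSprite - a sketch only, with a fixed 1024x768 "virtual" space and the currentScreenW/currentScreenH names as assumptions:)

	//Position everything in a fixed 1024x768 virtual space, then scale the
	//sprite batch up or down to whatever the real back buffer size is.
	D3DXMATRIX scaleToScreen;
	D3DXMatrixScaling(&scaleToScreen,
	    (float)currentScreenW / 1024.0f,
	    (float)currentScreenH / 768.0f,
	    1.0f);
	spriteHandler->SetTransform(&scaleToScreen);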

Anyway... thanks for any help!

EDIT1: I'm also looking for some good open source DX9 Direct3D game engines to have a look at - after all, this is the best way to learn a new technology. I've been looking around the forum and have seen some references to engines developed and available on the forum - but I can't seem to find any :(

EDIT2: This is GameMain - it handles DirectX init and the game loop.

[source lang="cpp"]#include "GameMain.h"#include <iostream>//Direct3D variablesLPDIRECT3D9 d3d = NULL;LPDIRECT3DDEVICE9 d3ddev = NULL;LPDIRECT3DSURFACE9 backbuffer = NULL;LPD3DXSPRITE spriteHandler = NULL;const string APPTITLE = "Game Engine - Windowed";const int SCREENW = 1024;const int SCREENH = 768;//Game Loop#define MAXIMUM_FRAME_RATE 60#define MINIMUM_FRAME_RATE 15#define UPDATE_INTERVAL (1.0 / MAXIMUM_FRAME_RATE)#define MAX_CYCLES_PER_FRAME (MAXIMUM_FRAME_RATE / MINIMUM_FRAME_RATE)//DX9 rendering engine set upbool DirectXRenderingEngineInit(HWND window){ //initialize Direct3D d3d = Direct3DCreate9(D3D_SDK_VERSION); if (!d3d) return false; //set Direct3D presentation parameters D3DPRESENT_PARAMETERS d3dpp; ZeroMemory(&d3dpp, sizeof(d3dpp)); d3dpp.Windowed = true; d3dpp.SwapEffect = D3DSWAPEFFECT_DISCARD; d3dpp.BackBufferFormat = D3DFMT_X8R8G8B8; d3dpp.BackBufferCount = 1; d3dpp.BackBufferWidth = SCREENW; d3dpp.BackBufferHeight = SCREENH; d3dpp.hDeviceWindow = window; //create Direct3D device d3d->CreateDevice( D3DADAPTER_DEFAULT, D3DDEVTYPE_HAL, window, D3DCREATE_SOFTWARE_VERTEXPROCESSING, &d3dpp, &d3ddev); if (!d3ddev) return false; //get a pointer to the back buffer surface d3ddev->GetBackBuffer(0, 0, D3DBACKBUFFER_TYPE_MONO, &backbuffer); //Clear to black d3ddev->Clear(0, NULL, D3DCLEAR_TARGET, D3DCOLOR_XRGB(0,0,0), 1.0f, 0); //Init the sprite handler D3DXCreateSprite(d3ddev, &spriteHandler); return true;}void GameRun(HWND window){ //Game Loop will be handled here //It is a Time based, fixed interval system //Based off the system located here: //sacredsoftware.net/tutorials/Animation/TimeBasedAnimation.xhtml static double lastFrameTime = 0.0; static double cyclesLeftOver = 0.0; double currentTime; double updateIterations; //Get instance of the sharedInputManager - Basically init it here static InputManager* sharedInputManager = InputManager::GetInstance(window); //Get instance of the sharedGameController static GameController* sharedGameController = GameController::GetInstance(); //Set initial scene if this is the first run through. static bool hasInitialSceneBeenSet = false; if (!hasInitialSceneBeenSet) { sharedGameController->SetInitialScene(); hasInitialSceneBeenSet = true; } //GAME LOOP LARGE_INTEGER queryLargeInt; QueryPerformanceCounter(&queryLargeInt); currentTime = (double)queryLargeInt.QuadPart; updateIterations = ((currentTime - lastFrameTime) + cyclesLeftOver); if (updateIterations > (MAX_CYCLES_PER_FRAME * UPDATE_INTERVAL)) updateIterations = (MAX_CYCLES_PER_FRAME * UPDATE_INTERVAL); while (updateIterations > UPDATE_INTERVAL) { updateIterations -= UPDATE_INTERVAL; //Ask current scene to update logic sharedGameController->UpdateCurrentSceneWithDelta(UPDATE_INTERVAL/MAX_CYCLES_PER_FRAME); } cyclesLeftOver = updateIterations; lastFrameTime = currentTime; //Render current scene sharedGameController->RenderCurrentScene();}void GameEnd(){ //Clean up DX if (spriteHandler) spriteHandler->Release(); if (d3ddev) d3ddev->Release(); if (d3d) d3d->Release();}[/source]
