
BenS1

Member Since 22 Mar 2006

Topics I've Started

How to Separate the Rendering Code from the Game Code

14 October 2014 - 07:38 AM

I've been writing games as a hobby for about 25 years now, and in my day job I'm a Lead C++ developer writing realtime, performance-critical, highly multithreaded code. However, there's one thing I've always struggled with, and that's how to separate the rendering code from the rest of the game code. Lots of games books suggest that you should do it, but they don't really explain how.

 

For example, let's imagine you have a Command & Conquer-style RTS game. Internally you might have classes representing the terrain, objects on the terrain, units (player, remote and/or AI) and bullets/rockets/lasers etc.

 

These classes contain all the information relating to the object. For example, for a player unit you might have the unit's position, speed, health, waypoint list etc. You may also have rendering data too, such as models\meshes, textures etc.

 

In this simple model (which has worked fine for me for many years) you can have standard methods on the objects such as Update() and Draw(). It all works great.
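To make that concrete, here's roughly the shape I mean (all of the types are illustrative stand-ins, not from any real engine or API):

    // Coupled style: each game object both simulates itself and draws itself.
    struct Vector3 { float x = 0, y = 0, z = 0; };
    struct Mesh {};
    struct Texture {};

    // Stand-in for a DirectX/OpenGL wrapper.
    struct Renderer
    {
        void DrawMesh(const Mesh&, const Texture&, const Vector3&) {}
    };

    class Unit
    {
    public:
        void Update(float dt)                // pure game logic
        {
            position.x += velocity.x * dt;
            position.y += velocity.y * dt;
            position.z += velocity.z * dt;
        }

        void Draw(Renderer& renderer) const  // rendering welded to the object
        {
            renderer.DrawMesh(mesh, texture, position);
        }

    private:
        Vector3 position;
        Vector3 velocity;
        float   health = 100.0f;
        Mesh    mesh;       // API-specific resources live on the game object too
        Texture texture;
    };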

 

The problem is, what if I want to port my DirectX game to another platform that uses, say, OpenGL? My rendering code is so embedded in the game code that it would be a major job to port it.

 

The books seem to suggest that you should have separate classes for your game objects and your rendering objects. So, for example, you might have a Unit class that supports an Update method, and a separate UnitRenderer class that supports a Draw method. Whilst this makes sense and would indeed make porting much easier, the problem is that the UnitRenderer would need to know so much about the internals of the Unit class that Unit would be forced to expose methods, attributes and data that we otherwise wouldn't want exposed. In other words, we might have a nice clean interface on the Unit class that encapsulates the internal implementation and is sufficient for the rest of the game, but the need to expose internal information to the UnitRenderer means that the nice clean Unit class becomes a mess, leaking its implementation details to anything that can access it.
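As far as I can tell, the books mean something like the sketch below. One way (and I stress this is just one way I can think of, not the method) to limit what the renderer sees is for the unit to hand out a small read-only snapshot; every name here is made up for illustration:

    struct Vector3 { float x = 0, y = 0, z = 0; };

    struct UnitRenderState           // the renderer's entire view of a unit
    {
        Vector3 position;
        int     modelId;             // which mesh/texture set to draw with
        float   animationTime;
    };

    class Unit
    {
    public:
        void Update(float dt) { /* game logic only, no rendering types */ }

        UnitRenderState GetRenderState() const
        {
            return { position, modelId, animationTime };
        }

    private:
        Vector3 position;
        Vector3 velocity;
        float   health = 100.0f;     // never visible to the rendering side
        int     modelId = 0;
        float   animationTime = 0.0f;
    };

    // Lives entirely on the rendering side; a DirectX version and an OpenGL
    // version could both consume the same UnitRenderState.
    class UnitRenderer
    {
    public:
        void Draw(const UnitRenderState& state) { /* API-specific drawing */ }
    };

That keeps Unit's interface clean, but it means copying a snapshot out for every unit, every frame.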

 

Plus, of course, there are performance implications of doing this.

 

How do people typically separate their game code from their rendering code?

 

Thanks in advance

Ben


Building Assimp in Visual Studio

04 July 2014 - 04:17 PM

Hi,

 

Looking at the Assimp website, it suggests that Assimp builds for several versions of Visual Studio, up to and including VS2012 (VS11). However, when I download the full install I don't see any solution files or instructions on how to build it.
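The download does seem to ship a top-level CMakeLists.txt, so I'm guessing the intended route is something like this (untested sketch on my part; the folder name and generator string are assumptions):

    REM Generate the Visual Studio solution from the unpacked source folder:
    cd assimp-3.0
    cmake -G "Visual Studio 11" .
    REM CMake should write a solution (Assimp.sln or similar) plus project
    REM files, which can then be opened in the IDE or built directly:
    msbuild Assimp.sln /p:Configuration=Release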

 

Has anyone built it from the Visual Studio IDE? If so, please could you share your instructions on how you did it?

 

Thanks in advance

Ben

 


Strange Text Rendering Crash In Release Mode Only

03 January 2013 - 05:45 AM

I've got a strange crash that only occurs in a release build, which makes it very difficult to debug.

 

If I try single-stepping through the code, the current instruction jumps around all over the place (presumably the code is being reordered as an optimization), and when the crash does occur the debugger puts the current instruction somewhere completely illogical. I don't think this is a genuine stack corruption issue; I think the debugger just isn't very good at working with release builds.

 

Anyway, getting to the point...

 

The code in question is text rendering code (originally written by MJP) that I've adapted to work with DirectXMath, and sure enough this is where the problem seems to lie.

 

In the text rendering class I have this structure defined:

    struct SpriteDrawData
    {
        XMMATRIX Transform;
        XMFLOAT4 Color; 
        XMFLOAT4 DrawRect;   
    };    

 

Then I have this member that uses this struct:

	SpriteDrawData				m_TextDrawData[constMaxBatchSize];

 

(Where constMaxBatchSize == 1000)

 

Then in the actual code I do this for each character:

 

	m_TextDrawData[currentDraw].Transform = XMMatrixMultiply(transform, XMMatrixTranslation(x_offset, y_offset, 0.0f));
	m_TextDrawData[currentDraw].Color = color;
	m_TextDrawData[currentDraw].DrawRect.x = desc.X;
	m_TextDrawData[currentDraw].DrawRect.y = desc.Y;
	m_TextDrawData[currentDraw].DrawRect.z = desc.Width;
	m_TextDrawData[currentDraw].DrawRect.w = desc.Height;            
	currentDraw++;

 

However, the first line causes a crash.

If I comment out the first line then it doesn't crash.

 

I've even tried changing it to this:

	m_TextDrawData[currentDraw].Transform =	XMMatrixIdentity();
	m_TextDrawData[currentDraw].Color = color;
	m_TextDrawData[currentDraw].DrawRect.x = desc.X;
	m_TextDrawData[currentDraw].DrawRect.y = desc.Y;
	m_TextDrawData[currentDraw].DrawRect.z = desc.Width;
	m_TextDrawData[currentDraw].DrawRect.w = desc.Height;            
	currentDraw++;

 

But it still crashes (even when currentDraw == 0, so it's not that currentDraw is overflowing).

 

I thought it might be an alignment problem, so I changed the structure definition to:

    __declspec(align(32)) struct SpriteDrawData
    {
        XMMATRIX Transform;
        XMFLOAT4 Color; 
        XMFLOAT4 DrawRect;   
    }; 

 

But that didn't help, and I don't think it should be necessary anyway, as XMMATRIX is already defined with 16-byte alignment.
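One cheap diagnostic I can think of (purely illustrative) is to check the array's actual address at runtime just before the stores, since a statically requested alignment only helps if every allocation of the owning object actually honours it, and operator new isn't required to honour __declspec(align):

    #include <cassert>
    #include <cstdint>

    // Just before the Transform write: verify the batch array really lands
    // on a 16-byte boundary, since XMMATRIX stores use aligned SSE moves.
    assert(reinterpret_cast<std::uintptr_t>(&m_TextDrawData[0]) % 16 == 0);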

 

It's really strange. It doesn't happen in a debug build, and if I comment out the code then the release build is fine too, but obviously I don't see any text.

 

Any help would be appreciated.

 

Thanks

Ben


Loading Meshes\Models in DirectX 11 using VS2012 and Win8 SDK

01 January 2013 - 02:00 PM

Firstly, Happy New Year all! :)

 

So, now that the DirectX SDK is part of the Windows SDK, and the Windows SDK doesn't include the Effects Framework, is there any support at all for loading models\meshes when using the latest version?

 

It seems strange that Visual Studio 2012 can now load and display model files directly in the IDE, yet DirectX itself doesn't seem to offer any support for using those files.

 

I know that you can still get the Effects Framework to work with the latest version of the SDK, but I'd rather not use a deprecated library (plus I wasn't really a fan of the Effects Framework anyway, but that's irrelevant). So, do I just have to dig out the file specifications for the model formats I want to support and write my own code to load them, or are there existing libraries out there that I could use?

 

Thanks in advance

Ben

 

 


Emulating an HLSL Sample Operation on the CPU

16 December 2012 - 03:01 PM

Hi

In my game I'm generating terrain in realtime on the GPU using an fBm noise function. This works fine.

Now I need to be able to work out the height of a given position on the landscape on the CPU side, so that I can do things such as position objects on the surface of the terrain. This means I need to port the GPU code over to the CPU.

So far I've come across a couple of interesting things when running the HLSL code in the Visual Studio 2012 debugger and then doing the same on the CPU...

Firstly, I ran the code on the GPU, and for a given pixel the debugger showed:
PosW = x = 1214.034000000, z = -1214.034000000

When I tried putting these same values in on the CPU side, I found that the values were actually:
PosW = x = 1214.03406, z = -1214.03406

i.e. the CPU couldn't represent the GPU float values exactly. Is this to be expected? Don't they both conform to the same IEEE standard for a float? I noticed this in several places.
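For what it's worth, this little standalone check (illustrative) suggests 1214.034 just isn't exactly representable as a 32-bit float, so the difference may only be in how many digits each debugger displays:

    #include <cstdio>

    int main()
    {
        // 1214.034 has no exact binary representation; printing more
        // significant digits shows the nearest float that gets stored.
        float f = 1214.034f;
        std::printf("%.9g\n", f);    // prints 1214.03406
        return 0;
    }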

The second problem, and the one I'm struggling with, is how to emulate the HLSL Sample function on the CPU.

Here is what I have in HLSL:
const float mipLevel = 0;
float4 n;
n.x = g_NoiseTexture.SampleLevel(SamplerRepeatPoint, i, mipLevel).x;
n.y = g_NoiseTexture.SampleLevel(SamplerRepeatPoint, i, mipLevel, int2(1,0)).x;
n.z = g_NoiseTexture.SampleLevel(SamplerRepeatPoint, i, mipLevel, int2(0,1)).x;
n.w = g_NoiseTexture.SampleLevel(SamplerRepeatPoint, i, mipLevel, int2(1,1)).x;
(Where g_NoiseTexture is a 256x256 grayscale texture. I think the sampler name is self-explanatory.)

And I've tried to emulate this on the CPU like this:
float nx, ny, nz, nw;
nx = m_NoiseData[(int)(iy)  % 256][(int)(ix) % 256] / 256.0f;
ny = m_NoiseData[(int)(iy)  % 256][(int)(ix + 1.0f) % 256] / 256.0f;
nz = m_NoiseData[(int)(iy + 1.0f)  % 256][(int)(ix) % 256] / 256.0f;
nw = m_NoiseData[(int)(iy + 1.0f)  % 256][(int)(ix + 1.0f) % 256] / 256.0f;
(Where m_NoiseData is defined as "unsigned char m_NoiseData[256][256]" and contains the same data as the g_NoiseTexture)

The problem is that I'm getting completely different results for n.x vs nx, n.y vs ny, etc.

I've even tried to compensate for pixel centres by adding 0.5f to each coordinate, like this:

float nx, ny, nz, nw;
nx = m_NoiseData[(int)(iy + 0.5f)  % 256][(int)(ix + 0.5f) % 256] / 256.0f;
ny = m_NoiseData[(int)(iy + 0.5f)  % 256][(int)(ix + 1.5f) % 256] / 256.0f;
nz = m_NoiseData[(int)(iy + 1.5f)  % 256][(int)(ix + 0.5f) % 256] / 256.0f;
nw = m_NoiseData[(int)(iy + 1.5f)  % 256][(int)(ix + 1.5f) % 256] / 256.0f;
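For reference, this is how I currently understand point sampling with wrap addressing, written as a standalone sketch (the helper name is made up, and I'm assuming u/v are the normalized coordinates the shader passes as i):

    #include <cmath>

    // Sketch of SampleLevel with point filtering and wrap addressing for a
    // 256x256 8-bit texture.
    float SamplePointWrap(const unsigned char tex[256][256],
                          float u, float v, int offsetX = 0, int offsetY = 0)
    {
        // Point sampling picks the texel whose footprint contains the
        // coordinate: scale to texel space, floor, then wrap into 0..255.
        // Masking with 255 also handles negative coordinates, unlike %.
        int x = (static_cast<int>(std::floor(u * 256.0f)) + offsetX) & 255;
        int y = (static_cast<int>(std::floor(v * 256.0f)) + offsetY) & 255;

        // An 8-bit UNORM texel maps 0..255 onto 0..1 by dividing by 255,
        // not 256.
        return tex[y][x] / 255.0f;
    }

If that understanding is right, then the divisor (255 vs 256) and floor-vs-truncation for negative coordinates might account for part of the difference.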

Any ideas?

Any help much appreciated.

Kind Regards
Ben
