

3D stereoscopic rendering

Hi everyone!

Despite my studies, I always try to find time to write articles about my work. Today, let's talk about stereo 3D. The question is: can I turn my game Theolith into a stereoscopic 3D game?

First, for people who don't really know how stereoscopic rendering works: to see a 3D photo, movie, or video game, you need two images, one for the left eye and one for the right. The first problem is how to deliver the correct image to each eye. The oldest solution is anaglyph: remove the red component of the right image, remove the green and blue components of the left image, and add the two together. With red-cyan anaglyph glasses, each eye then sees only its own image. The drawback of this method is a loss of color, because each image has to give up some of its channels. In recent years the technology has evolved, and we can now get the correct image to each eye without losing color data. The highest-quality approach is to display the left and right images alternately, synchronized with shutter glasses that alternately mask each eye.
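The red-cyan channel mixing described above fits in a few lines. Here is a minimal sketch in plain C++ (an illustration of the principle, not my engine's code; the pixel type and names are mine):

```cpp
#include <cassert>

struct Pixel { unsigned char r, g, b; };

// Combine the left and right views into one red-cyan anaglyph pixel:
// the left image keeps only its red channel, the right image keeps
// only its green and blue channels, and the two are added together.
Pixel anaglyph(Pixel left, Pixel right)
{
    Pixel out;
    out.r = left.r;   // seen through the red filter (left eye)
    out.g = right.g;  // seen through the cyan filter (right eye)
    out.b = right.b;
    return out;
}
```

Run over every pixel of the two rendered images, this produces the single image you view through the glasses; the color loss is visible in the code itself, since three of the six input channels are simply discarded.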

Ok, now about my game. I only have old red-cyan anaglyph glasses, so I will program for that solution, but the idea is the same as with the latest technologies: I need to produce two correct images. In a non-stereo 3D game, you have one camera defined by its position and its look-at point. In a stereo 3D game, you need two cameras (one per eye). Their positions are easy to find: just shift the non-stereo camera slightly to the left and to the right. The real problem is choosing the look-at (or focus) point.

Imagine that you want to render this scene:

Here is the configuration with one camera: as in all third-person games, the camera looks at the hero.

So my first idea with two cameras was to make both look at the hero:

But here is the problem: the farther away an object is, the more it is shifted between the two eyes, and the harder it is for the brain to fuse the two images. Here, if our eyes focus on the robot, the 3D looks fine, but if they focus on the mountains, it's ugly.

The problem is that we don't know where the player's eyes will look, so I concluded that if you want a correct 3D image, the only way is to focus on the furthest object:

Here is the 3D image. Despite the poor quality of this JPEG, if you have red-cyan anaglyph glasses you can see that the 3D isn't bad.
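To make the camera setup concrete, here is a minimal sketch in plain C++ (no graphics API; the tiny vector type and all names are mine, and whether "right" points left or right depends on your engine's handedness convention):

```cpp
#include <cmath>

struct Vec3 { float x, y, z; };

static Vec3 sub(Vec3 a, Vec3 b) { return { a.x - b.x, a.y - b.y, a.z - b.z }; }
static Vec3 add(Vec3 a, Vec3 b) { return { a.x + b.x, a.y + b.y, a.z + b.z }; }
static Vec3 mul(Vec3 a, float s) { return { a.x * s, a.y * s, a.z * s }; }
static Vec3 cross(Vec3 a, Vec3 b) {
    return { a.y*b.z - a.z*b.y, a.z*b.x - a.x*b.z, a.x*b.y - a.y*b.x };
}
static Vec3 normalize(Vec3 v) {
    float len = std::sqrt(v.x*v.x + v.y*v.y + v.z*v.z);
    return { v.x / len, v.y / len, v.z / len };
}

struct StereoRig { Vec3 leftEye, rightEye, focus; };

// Build the two stereo cameras from the mono camera: shift each eye
// half the eye separation along the camera's sideways vector, and aim
// both at a single shared focus point (per the article: the furthest
// visible object, so that distant geometry fuses correctly).
StereoRig makeStereoRig(Vec3 eye, Vec3 focus, Vec3 up, float eyeSeparation)
{
    Vec3 forward = normalize(sub(focus, eye));
    Vec3 side = normalize(cross(up, forward)); // sign depends on your engine's handedness
    Vec3 offset = mul(side, eyeSeparation * 0.5f);
    return { sub(eye, offset), add(eye, offset), focus };
}
```

Render the scene once from `leftEye` and once from `rightEye`, both looking at `focus`, then combine the two images with whatever stereo output method you use (anaglyph, shutter glasses, etc.).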

About the first solution: if you really want to force the player's eyes to focus on the robot rather than on the mountains, one way could be to implement depth of field. It would render something like this (this rough image is just a mock-up made in an image editor):

Ok! That's enough for today! The main message of this article is the difficulty of choosing the right focus point. Be aware of this problem if you intend to start stereoscopic programming. I'm still a beginner in this topic, so if you spot any mistakes or have any remarks or suggestions, I'd be happy to hear them!
In a future article, I intend to cover the stereo programming itself: how to configure shaders to blend the left and right images. But this blending isn't the most interesting part, because it's becoming obsolete: more and more people have electronic 3D glasses, so they don't need a blend, just the left and right cameras displayed alternately.

Have a great day!




Very soft pathfinding algorithm

A very lightweight ("soft") pathfinding algorithm by Richard Geslot, used in Theolith.

When we hear "pathfinding", we often immediately think "expensive algorithm". My aim here is to present a very lightweight pathfinding algorithm.

The algorithm must start with these hypotheses:

Point B wants to go to Point A (1)
Between A and B, there is no obstacle (2)

After the algorithm starts, only (1) is needed.

Hypothesis (2) may seem restrictive, but imagine any RPG: you are the hero (Point A) and there is an enemy NPC (Point B). B is aggressive: once he knows you are there, he comes after you.
But for B to know you are there, (2) must be true. Why? If he saw you, then there is no wall between you, so (2) holds. The other case is that you cast a spell on him; but if (2) isn't true, you can't cast the spell, because there is a wall between you and your target.
To conclude: if in your game a fight can only begin when (2) is true, don't use an expensive A*; look at this algorithm, it's probably sufficient for your game.

The algorithm
I called it "little thumb" (like the fairy-tale character and his pebbles) because it's exactly the same idea: whenever Point A is x meters from the last "stone", it drops a new one. All stone positions are kept in an array.
Let's see the algorithm with images:

1) At the beginning, B doesn't know A is there. A approaches B. B notices A, either because he saw him or because A cast a spell on him. (1) and (2) hold, so the algorithm starts.

2) Is the segment [AB] free (no collision)? Yes. B follows the vector from B to A.

3) A is very fast and B very slow. While B was walking along that vector, A moved behind the wall and stopped. So the segment [AB] is no longer free.

4) Since [AB] isn't free, B needs a new vector to follow. No problem: B tests [B, Stone1]: no, there is an obstacle; the same with Stone2... until Stone6. [B, Stone6] is free, so that becomes the new vector B follows. When B reaches Stone6, B tries again: [BA], [B, Stone1]... This time Stone2 works. And when B is on Stone2, it can finally go directly to A.
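The four steps above can be sketched like this (plain C++; the segment-collision test `segmentFree` is assumed to be supplied by the game, as discussed below, and all names are illustrative):

```cpp
#include <cstddef>
#include <deque>
#include <functional>

struct Point { float x, y; };

// "Little thumb" pathfinding sketch. The target (A) drops a stone
// every `stoneSpacing` meters; only the last `maxStones` are kept.
// The chaser (B) walks toward the first visible waypoint: the target
// itself if the segment [AB] is free, otherwise the newest stone it
// can see, then the next older one, and so on.
class LittleThumbTrail {
public:
    LittleThumbTrail(float stoneSpacing, std::size_t maxStones)
        : spacing_(stoneSpacing), maxStones_(maxStones) {}

    // Called every update with the target's current position.
    void update(Point a) {
        if (stones_.empty() || dist2(stones_.back(), a) >= spacing_ * spacing_) {
            stones_.push_back(a);
            if (stones_.size() > maxStones_) stones_.pop_front(); // forget the oldest
        }
    }

    // Returns false only if no waypoint is reachable -- which should
    // not happen if hypothesis (2) held when the chase began.
    bool nextWaypoint(Point b, Point a,
                      const std::function<bool(Point, Point)>& segmentFree,
                      Point& out) const {
        if (segmentFree(b, a)) { out = a; return true; } // go straight to A
        for (auto it = stones_.rbegin(); it != stones_.rend(); ++it) // newest first
            if (segmentFree(b, *it)) { out = *it; return true; }
        return false;
    }

private:
    static float dist2(Point p, Point q) {
        float dx = p.x - q.x, dy = p.y - q.y;
        return dx * dx + dy * dy;
    }
    float spacing_;
    std::size_t maxStones_;
    std::deque<Point> stones_;
};
```

Each update, B calls `nextWaypoint` (or only when it reaches its current waypoint, as in the walkthrough) and walks toward the returned point; since every stone lay on A's actual path, following them always leads around the obstacle.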

Discussion about the algorithm

The "little thumb" drops a stone every X meters. The smaller X is, the larger the array of stone positions becomes. Of course, that array should not keep every stone since the beginning of the game; make sure it only keeps the last Y. If B traverses the entire array without finding a single stone it can reach unobstructed, there is no possible way to reach A. But you can treat that as an error, because, as we saw above, (2) must hold at the start for this algorithm to apply.
Be careful: in this example, when A goes behind the wall, A stops. If A keeps walking, stone X becomes stone X+1 as new stones are dropped and old ones are forgotten.
B isn't obliged to reach a stone before recomputing its vector; we could recompute at every update of B.
The difficulty is knowing whether a segment is free of collisions. But if you use this algorithm, you should already have that implemented, because you needed it to know whether the enemy "sees" the target (no wall between them), or whether a character can cast a spell on his target (again, meaning nothing is between them). Segment-versus-shape collision isn't the subject of this article; briefly, in my case, I wrap my shapes in simple boxes made of 12 triangles (in three dimensions) and use the D3DXIntersectTri function from DirectX.
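If you don't use DirectX, here is a portable sketch of the segment test against an axis-aligned box (the classic slab method, shown in 2D for brevity; this is an alternative to the triangle-based test above, not the code I use):

```cpp
#include <algorithm>

// Returns true if the segment [p0,p1] intersects the axis-aligned
// box [bmin,bmax]. Slab method: clip the segment's parameter range
// [0,1] against each axis slab; the segment hits the box iff the
// range stays non-empty. Adding a third axis is one more loop pass.
bool segmentHitsBox(float p0x, float p0y, float p1x, float p1y,
                    float bminx, float bminy, float bmaxx, float bmaxy)
{
    const float o[2]  = { p0x, p0y };               // segment origin
    const float d[2]  = { p1x - p0x, p1y - p0y };   // segment direction
    const float lo[2] = { bminx, bminy };
    const float hi[2] = { bmaxx, bmaxy };
    float tmin = 0.0f, tmax = 1.0f;
    for (int i = 0; i < 2; ++i) {
        if (d[i] == 0.0f) {
            // Parallel to this slab: must already be inside it.
            if (o[i] < lo[i] || o[i] > hi[i]) return false;
        } else {
            float t1 = (lo[i] - o[i]) / d[i];
            float t2 = (hi[i] - o[i]) / d[i];
            if (t1 > t2) std::swap(t1, t2);
            tmin = std::max(tmin, t1);
            tmax = std::min(tmax, t2);
            if (tmin > tmax) return false; // slabs don't overlap on the segment
        }
    }
    return true;
}
```

For the algorithm above, a segment is "free" when it hits none of the boxes wrapping your obstacles.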




DirectX 9 to DirectX 10

Currently I'm porting my Theolith 3D engine from DirectX 9 to DirectX 10. It's neither easy nor fast: I realized that the whole architecture of my application has to change.

For the moment I'm working on the game's interface, so I'm only dealing with my 2D (sprite) engine.

Here are some of the things I had to change.

About DXUT, there are not a lot of changes:
* of course, OnD3D9...() is now named OnD3D10...()
* I noticed a small change: LostDevice(...) and ResetDevice(...) are now named SwapChainReleasing(...) and SwapChainResized(...)
* the caps have disappeared, because a GPU that supports DX10 is supposed to support all DX10 functionality (the resource limits are fixed by the API)
* the Surface type has disappeared

Now, about the pipeline: forget what you know :)
The old fixed-function pieces have disappeared.

Now it's time to think "shader pipeline".
The shader pipeline is now the core of my DX10 3D engine. Even to render a simple sprite, you have to write a shader and understand the pipeline.
The DX10 pipeline stages are described in the DirectX SDK documentation.

For example, with my sprite engine I don't need depth, stencil, or culling.
In DirectX 9, it's something like this:

pd3dDevice->SetRenderState(D3DRS_ZENABLE, FALSE);
pd3dDevice->SetRenderState(D3DRS_STENCILENABLE, FALSE);
pd3dDevice->SetRenderState( D3DRS_CULLMODE, D3DCULL_NONE);

Now, in DirectX 10, you have to know that depth and stencil are managed by the Output-Merger stage and that culling is managed by the Rasterizer stage.
So it's something like this:

/////////AT THE BEGINNING////////

//create a Rasterizer state
D3D10_RASTERIZER_DESC rasterizerState;
rasterizerState.CullMode = D3D10_CULL_NONE;
rasterizerState.FillMode = D3D10_FILL_SOLID;
rasterizerState.FrontCounterClockwise = true;
rasterizerState.DepthBias = 0;
rasterizerState.DepthBiasClamp = 0.0f;
rasterizerState.SlopeScaledDepthBias = 0.0f;
rasterizerState.DepthClipEnable = false;
rasterizerState.ScissorEnable = false;
rasterizerState.MultisampleEnable = true;
rasterizerState.AntialiasedLineEnable = false;

ID3D10RasterizerState* pRState = NULL;
if( FAILED( pd3dDevice->CreateRasterizerState( &rasterizerState, &pRState ) ) )
    return; // creation failed

//create a Depth Stencil State
D3D10_DEPTH_STENCIL_DESC dsDesc;

// Depth test parameters
dsDesc.DepthEnable = false;
dsDesc.DepthWriteMask = D3D10_DEPTH_WRITE_MASK_ALL;
dsDesc.DepthFunc = D3D10_COMPARISON_LESS;

// Stencil test parameters
dsDesc.StencilEnable = false;
dsDesc.StencilReadMask = 0xFF;
dsDesc.StencilWriteMask = 0xFF;

// Stencil operations if pixel is front-facing
dsDesc.FrontFace.StencilFailOp = D3D10_STENCIL_OP_KEEP;
dsDesc.FrontFace.StencilDepthFailOp = D3D10_STENCIL_OP_INCR;
dsDesc.FrontFace.StencilPassOp = D3D10_STENCIL_OP_KEEP;
dsDesc.FrontFace.StencilFunc = D3D10_COMPARISON_ALWAYS;

// Stencil operations if pixel is back-facing
dsDesc.BackFace.StencilFailOp = D3D10_STENCIL_OP_KEEP;
dsDesc.BackFace.StencilDepthFailOp = D3D10_STENCIL_OP_DECR;
dsDesc.BackFace.StencilPassOp = D3D10_STENCIL_OP_KEEP;
dsDesc.BackFace.StencilFunc = D3D10_COMPARISON_ALWAYS;

ID3D10DepthStencilState* pDSState = NULL;

// Create depth stencil state
if( FAILED( pd3dDevice->CreateDepthStencilState( &dsDesc, &pDSState ) ) )
    return; // creation failed

/////////BEFORE RENDERING////////

pd3dDevice->RSSetState(pRState); // don't forget to bind the rasterizer state created above
pd3dDevice->OMSetDepthStencilState(pDSState, 1); // the second argument is the stencil reference value compared against the stencil buffer during the stencil test

/////////AT THE END////////

if ( pRState ) { pRState->Release(); pRState = NULL; }
if ( pDSState ) { pDSState->Release(); pDSState = NULL; }

DX10 is heavier, but in my opinion it's much clearer than DX9, because in DX9 you have only 3 or 4 functions to change all the pipeline states, and it's very messy. Seriously, even after years of DX9, I still feel sick when I see that.
In DX10 you have to deal with more functions, but each function has a precise job, so it's easier to understand.
Moreover, each function belongs to one stage, so you have seven families of functions.

For example, before rendering, you deal with:
pd3dDevice->IASet...() to set the Input-Assembler Stage
pd3dDevice->VSSet...() to set the Vertex-Shader Stage
pd3dDevice->GSSet...() to set the Geometry-Shader Stage
pd3dDevice->SOSet...() to set the Stream-Output Stage
pd3dDevice->RSSet...() to set the Rasterizer Stage
pd3dDevice->PSSet...() to set the Pixel-Shader Stage
pd3dDevice->OMSet...() to set the Output-Merger Stage

To conclude this article, I'm very happy to be learning DirectX 10; I think I will understand the GPU much better than with DirectX 9. DX9 may seem lighter and easier, but that hides a very messy side. In DirectX 10 you are obliged to write more code, so you are obliged to understand more. A lot of outdated parts of Direct3D have been removed, and that's a good thing.


Thanks for reading




Working on the sun/sky

This week I worked on the sun and sky in Theolith. For now, the sun is just a single point light computed by my new deferred shading engine.




Trying to deferred...


Currently, I'm trying deferred shading in my game Theolith:

For the moment, the final result doesn't work yet... but I have the feeling I'm close to succeeding :)

The main resource I'm using is the article "Deferred Shading Tutorial" by Fabio Policarpo and Francisco Fonseca. It's a very helpful tutorial.
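For readers new to the technique, here is the core idea of deferred shading reduced to a CPU sketch (illustrative only, not my engine's code): a geometry pass fills a G-buffer with per-pixel attributes, then a separate lighting pass reads the G-buffer back, so lighting cost depends on pixels times lights rather than on scene complexity.

```cpp
#include <cmath>
#include <vector>

// One G-buffer texel: what the geometry pass would write per pixel.
struct GPixel {
    float px, py, pz;   // world-space position
    float nx, ny, nz;   // world-space normal (unit length)
    float albedo;       // grayscale albedo, for brevity
};

struct PointLight { float x, y, z; float intensity; };

// Lighting pass for one pixel: Lambert diffuse accumulated over all
// lights, with inverse-square falloff. Note it touches only G-buffer
// data -- the original geometry is never needed again.
float shade(const GPixel& g, const std::vector<PointLight>& lights)
{
    float out = 0.0f;
    for (const PointLight& l : lights) {
        float lx = l.x - g.px, ly = l.y - g.py, lz = l.z - g.pz;
        float dist = std::sqrt(lx*lx + ly*ly + lz*lz);
        float ndotl = (g.nx*lx + g.ny*ly + g.nz*lz) / dist; // cos of incidence angle
        if (ndotl > 0.0f)
            out += g.albedo * l.intensity * ndotl / (dist * dist);
    }
    return out;
}
```

In a real engine the G-buffer lives in render targets and `shade` is a pixel shader over a full-screen quad (or light volumes), but the data flow is exactly this.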




Shaders in Starcraft 2

Hello! Today I would like to present the different render passes that Starcraft 2 uses to produce the final image.

Actually, this is an extract from a wiki page that I started today:
http://code.google.com/p/libm3/wiki/Shaders. The aim of this wiki page is to collect all the shader equations that go into rendering Starcraft 2 models.

I think this is interesting research, not just for knowledge of Starcraft 2 but for deferred rendering in general. So if you have knowledge to share, your help is welcome!

Here are the different render steps for the Barracks:


The wireframe isn't the most interesting, but it's important to see it.

full bright diffuse only

It seems to be the exact diffuse texture.

diffuse only

diffuse lighting only

emissive only

I think this effect is computed from the emissive texture. It's not exactly the same because, for example, the texture doesn't contain the halo information.

lighting only

normal only

These are the vertex normals. Remember that in Starcraft 2, Z (the blue channel) is the up vector.

normal map only

It's the normal texture.

problem visualization

It's beautiful... but I have no idea what it is...

shadows only

specular lighting only

specular only

It doesn't really look like the specular texture...

UV mapping


Final combination of all effects!

If you are interested in this research, you will find more information on the wiki page linked above.

Have a nice day (or night) !




First entry

Hello, GameDev.net community! I decided to start this journal in order to talk about my various programming projects. This first post is dedicated to my main work: my game Theolith. I'm happy to present my latest success: importing M3 files (the Starcraft 2 model file format) into Theolith. And here is a Colossus!:

And now, if you are interested in this work, I advise you to read my article about rendering M3 models with C++ & DirectX 9.


