RichardGe

Member
  • Content Count

    112
  • Joined

  • Last visited

Community Reputation

236 Neutral

About RichardGe

  • Rank
    Member

Personal Information

  • Role
    Programmer
  • Interests
    Programming
  1. RichardGe

    algorithm challenge

    Wow, very interesting. I quickly read the two articles, and I think you pointed out exactly what I was searching for. I'll try to find more time later to take a closer look and implement something.
  2. RichardGe

    algorithm challenge

    @alvaro, even if we don't care about memory, if N and K are very big it could take years to fill the C[N][K] array. But for moderate N and K it seems to work. @frob, I had no knowledge of the Fisher-Yates shuffle, but it looks like it uses several random numbers... However, you could answer that since I said "the Random function can manage numbers of any size you want", we can extract several small random numbers from one big one. Concerning your algorithm, I assume it works correctly even if some indices are equal in the first step, so I think it would work too, with a very big initial random number used to generate the list of indices. In all cases, it seems we need something big to initialize the algorithm. What I don't know is whether we can do it with only one initial random number R between 1 and binomial(N,K), which is the number of possibilities, and then, with a smart algorithm, obtain the Rth combination (without iterating through all combinations).
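    The "Rth combination" idea discussed here is known as unranking in the combinatorial number system. Below is a minimal C++ sketch of it, under the assumption that N and K are small enough for the binomials to fit in 64 bits (for the huge N and K of the challenge you would swap in a big-integer type); binom and unrank are my own illustrative names, not code from the thread.

```cpp
#include <cstdint>
#include <vector>

// Binomial coefficient C(n, k); assumes the result fits in 64 bits.
static uint64_t binom(uint64_t n, uint64_t k) {
    if (k > n) return 0;
    if (k > n - k) k = n - k;
    uint64_t c = 1;
    for (uint64_t i = 0; i < k; ++i)
        c = c * (n - i) / (i + 1);  // division is exact at every step
    return c;
}

// Map a single rank r in [0, C(n,k)) to the r-th k-combination of
// {1, ..., n} in lexicographic order, without enumerating them all.
std::vector<uint64_t> unrank(uint64_t r, uint64_t n, uint64_t k) {
    std::vector<uint64_t> pick;
    uint64_t x = 1;
    while (k > 0) {
        uint64_t c = binom(n - x, k - 1);  // combinations whose smallest element is x
        if (r < c) { pick.push_back(x); --k; }
        else       { r -= c; }
        ++x;
    }
    return pick;
}
```

With one big random R in [0, binomial(N,K)), unrank(R, N, K) then gives a uniformly distributed pick using a single Random call.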
  3. RichardGe

    algorithm challenge

    No, it came out of my own head.
  4. RichardGe

    algorithm challenge

    Hi, here's a little challenge I was thinking about... You have a lotto grid of N numbers, and you have to pick K different numbers; order doesn't matter. My challenge: write an algorithm that generates one random pick combination. Example with N=6 and K=3: one pick could be (1,3,6). Constraints:
    - you can call the Random function only once (this Random function can handle numbers of any size you want)
    - N and K are very big, meaning you can't iterate over all the possibilities
    - all pick combinations must have the same chance of being generated
    On my side, I think I have an idea for a solution, but I'm not happy with it because it would involve some complex looping. I assume the starting point is to generate a random number between 1 and binomial(N,K)...
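    For comparison, here is the obvious baseline that the constraints rule out: a partial Fisher-Yates shuffle. It draws K distinct numbers uniformly, but it calls the RNG K times and materializes the whole grid, so it violates both the single-Random-call and the "very big N" constraints. A minimal C++ sketch; randomPick is my own illustrative name.

```cpp
#include <random>
#include <vector>

// Draw k distinct numbers from {1, ..., n}, uniformly over all
// combinations, via a partial Fisher-Yates shuffle. Calls the RNG
// k times and allocates O(n) memory (the baseline, not the answer
// to the challenge).
std::vector<int> randomPick(int n, int k, std::mt19937& rng) {
    std::vector<int> grid(n);
    for (int i = 0; i < n; ++i) grid[i] = i + 1;
    for (int i = 0; i < k; ++i) {
        std::uniform_int_distribution<int> d(i, n - 1);
        std::swap(grid[i], grid[d(rng)]);  // move a random remaining number into slot i
    }
    return std::vector<int>(grid.begin(), grid.begin() + k);
}
```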
  5. RichardGe

    binary alpha

    Hodgman, thanks for the fast solution. C0lumbo, thanks for the interesting details.
  6. RichardGe

    binary alpha

    Hi,

    In order to avoid having to sort the objects that only need a simple binary alpha mask, I'm forcing the alpha transparency to be either 0 or 1. To do that, I have added these lines to all my pixel shaders:

    // force alpha to 0 or 1
    if ( finalColor.w > 0.5f ) { finalColor.w = 1.0f; } else { finalColor.w = 0.0f; }

    This doesn't seem very elegant. I'm wondering if there is another way to do it, like a feature implemented in DirectX? Thank you!
  7. RichardGe

    A Verlet based approach for 2D game physics

    Very good article! For a long time I've been searching for how to develop a physics engine like the one in Angry Birds, and I think this algorithm is a good way to do that. Moreover, I'm sure this approach could also be used for 3D game physics.
  8. RichardGe

    Welcome To The New Gamedev.Net!

    Love it!
  9. RichardGe

    3D stereoscopic rendering

    Interesting, I didn't know this method of interleaving the left and right eye views.
  10. RichardGe

    3D stereoscopic rendering

    Hi everyone! Despite my studies, I always try to find the time to write articles about my work. Today, let's talk about stereo 3D. The question is: could I turn my game Theolith into a stereo 3D game?

    First, for people who don't really know how stereo 3D rendering works: to see a 3D photo, movie, or video game, you need two images, one for the left eye and one for the right eye. The first problem is how to deliver the correct image to the correct eye. The solution that has been used for a very long time is to remove the red component of the right image, remove the green and blue components of the left image, and add the two images together. With anaglyph glasses, each eye then sees only its own image. The drawback of this method is a loss of color, because we must remove color components from each image. In recent years, technology has evolved and we can now get the correct image to the correct eye without losing color data. The best-quality approach is to alternately display the right and left images, synchronized with glasses that alternately mask the left and right eye.

    Now, about my game. I only have old cyan-red anaglyph glasses, so I will program for that solution, but the idea is the same as with the latest technologies: I need to produce two correct images. In a non-stereo 3D game, you have one camera defined by its position and its look-at point. In a stereo 3D game, you need two cameras (left and right eyes). Their positions are easy to find: you just slightly shift the non-stereo camera to the left and to the right. The problem is finding the look-at (or focus) point. Imagine you want to render this scene. Here is the configuration with one camera: as in all third-person games, the camera looks at the hero.

    So my first idea with two cameras was to look at the hero. But here is the problem: the farther away an object is, the more it is shifted between the two eyes, and the harder it becomes for the brain to fuse the two images. Here, if our eyes focus on the robot, the 3D looks fine, but if they focus on the mountains, it's ugly. Since we don't know where the player's eyes will be looking, I concluded that, if you want a correct 3D image, the only way is to focus on the furthest object. Here is the resulting 3D image. Despite the poor quality of this JPEG, if you have cyan-red anaglyph glasses you can see that the 3D isn't so bad.

    About the first solution: if you really want to force the player's eyes to focus on the robot and not on the mountains, one way could be to implement a depth of field. It would render something like this (this rough image is just an idea made with an image editor).

    OK, that's enough for today! The main message of this article is to understand how difficult it is to choose the right focus point. Be aware of this problem if you intend to start stereoscopic 3D programming. For the moment, I'm a beginner in this topic, so if you see any mistakes or have any remarks or suggestions, I would be happy to hear them! In a further article, I intend to talk about stereo programming: how to configure shaders to blend the left and right images. But this blending isn't the most interesting part, because it's becoming deprecated: more and more people will have electronic 3D glasses, which don't need a blend, just the right and left cameras displayed alternately. Have a great day!
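    The camera setup described above can be sketched like this: shift the mono camera half an eye separation to each side along its "right" axis, and keep both eyes focused on the same far point. This is my own illustrative C++ (Vec3, makeStereoCameras, and the left-handed axis convention are assumptions, not code from Theolith).

```cpp
#include <cmath>

struct Vec3 { float x, y, z; };

static Vec3 sub(Vec3 a, Vec3 b) { return {a.x - b.x, a.y - b.y, a.z - b.z}; }
static Vec3 add(Vec3 a, Vec3 b) { return {a.x + b.x, a.y + b.y, a.z + b.z}; }
static Vec3 mul(Vec3 a, float s) { return {a.x * s, a.y * s, a.z * s}; }
static Vec3 cross(Vec3 a, Vec3 b) {
    return {a.y * b.z - a.z * b.y, a.z * b.x - a.x * b.z, a.x * b.y - a.y * b.x};
}
static Vec3 normalize(Vec3 v) {
    float len = std::sqrt(v.x * v.x + v.y * v.y + v.z * v.z);
    return mul(v, 1.0f / len);
}

// Derive left/right eye positions from a mono camera: shift each eye by
// half the eye separation along the camera's "right" axis. Both eyes
// keep the same look-at point (the far focus), as the article suggests,
// so distant objects fuse comfortably.
void makeStereoCameras(Vec3 monoEye, Vec3 focus, Vec3 up, float eyeSeparation,
                       Vec3* leftEye, Vec3* rightEye) {
    Vec3 forward = normalize(sub(focus, monoEye));
    Vec3 right   = normalize(cross(up, forward));  // left-handed convention
    Vec3 half    = mul(right, eyeSeparation * 0.5f);
    *leftEye  = sub(monoEye, half);
    *rightEye = add(monoEye, half);
}
```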
  11. RichardGe

    Very soft pathfinding algorithm

    @Matias Goldberg: Absolutely, this pathfinding algorithm is simple and lightweight but can quickly reach its limits. However, I think it can apply to many games... For example, it would not surprise me if World of Warcraft used this kind of algorithm...
  12. RichardGe

    Very soft pathfinding algorithm

    by Richard Geslot, algorithm used in Theolith

    Introduction

    When we hear "pathfinding", we often immediately think "expensive algorithm". My aim is to present a very soft pathfinding algorithm. The algorithm must start with these hypotheses: Point B wants to go to Point A (1), and between A and B there is no obstacle (2). After the algorithm starts, only (1) is needed. Hypothesis (2) may seem restrictive, but imagine any RPG: you are the hero (Point A) and there is an enemy NPC (Point B). B is aggressive: when he knows you are here, he comes to you. But for him to know you are here, (2) must be true. Why? Either he saw you, so there is no wall between you and (2) holds; or you cast a spell at him, and you can't cast your spell if there is a wall between you and your target. To conclude: if in your game a fight can only begin when (2) is true, don't use an expensive A*; look at this algorithm, it's probably sufficient for your game.

    The algorithm

    I called it "little thumb" because it's exactly the same idea: whenever it is X meters away from the previous "stone", Point A drops a new "stone". All stone positions are kept in an array. Let's walk through the algorithm with images: 1) At the beginning, B doesn't know that A is here. A approaches B. B learns that A is here because he saw him or because A cast a spell on B. (1) and (2) hold, and the algorithm starts. 2) Is the segment [AB] free (no collision)? Yes, so B follows the BA vector. 3) A is very fast and B very slow. While B was walking along the BA vector, A moved behind the wall and stopped, so the segment [AB] is no longer free. 4) The segment [AB] isn't free, so B needs a new vector to follow. No problem: B tests [B, Stone1]: no, there is an obstacle; the same with Stone2... until Stone6. [B, Stone6] is free, so B,Stone6 is the new vector followed by B. When B reaches Stone6, B tries again: [BA], [B, Stone1]...
    This time Stone2 works, and once B is on Stone2 it can go directly to A.

    Discussion about the algorithm

    The "little thumb" drops a "stone" every X meters. The smaller X is, the larger the array of stone positions becomes. This array certainly won't store every stone since the beginning of the game; make sure it keeps only the last Y of them. If B has traversed the entire stone array without finding any stone that can be reached without obstruction, there is no possible way to reach A. You could treat that as an error, because as we saw above, (2) must hold at the start for this algorithm to apply. Be careful: in this example, when A goes behind the wall, A stops. If A keeps walking, Stone X becomes Stone X+1. Also, B is not obliged to reach a stone before recomputing its vector; we could recompute at each update of B. The difficult part is knowing whether a segment collides with anything. But if you use this algorithm, you should already have implemented that, because you needed to know whether the enemy "sees" the target (no wall between them), or whether the character can cast his spell at his target (again, nothing between them). Segment-shape collision isn't the subject of this article; briefly, in my case, I wrap my shapes in simple boxes of 12 triangles (in 3D) and use the D3DXIntersectTri function from DirectX.
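    The chase logic above can be sketched in C++ like this. LittleThumb, segmentFree, and the container choices are my own illustrative names and assumptions; the segment-vs-world query is whatever collision test your engine already has, as the article notes.

```cpp
#include <cstddef>
#include <deque>
#include <functional>

struct Vec2 { float x, y; };

// "Little thumb" chase sketch: the target drops a breadcrumb ("stone")
// every few meters; the chaser walks toward the most recent stone it
// can reach in a straight line, or straight to the target if possible.
struct LittleThumb {
    std::deque<Vec2> stones;     // most recent stone at the front
    std::size_t maxStones = 64;  // keep only the last Y stones

    void dropStone(Vec2 targetPos) {
        stones.push_front(targetPos);
        if (stones.size() > maxStones) stones.pop_back();
    }

    // Returns true and sets *goal if the chaser has somewhere to walk;
    // false means no stone is reachable (hypothesis (2) was violated).
    bool nextGoal(Vec2 chaserPos, Vec2 targetPos,
                  const std::function<bool(Vec2, Vec2)>& segmentFree,
                  Vec2* goal) const {
        if (segmentFree(chaserPos, targetPos)) { *goal = targetPos; return true; }
        for (const Vec2& s : stones)           // newest stone first
            if (segmentFree(chaserPos, s)) { *goal = s; return true; }
        return false;
    }
};
```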
  13. RichardGe

    DirectX 9 to DirectX 10

    Currently I'm moving my Theolith 3D engine from DirectX 9 to DirectX 10. It's neither easy nor fast work: I realized that the whole architecture of my application has to change. For the moment I'm working on the game's interface, so I'm only dealing with my 2D (sprite) engine. Here are some things I had to change.

    About DXUT, there are not a lot of changes:
    * of course, OnD3D9...() is now named OnD3D10...()
    * I noticed a small change: LostDevice(...) and ResetDevice(...) are now named SwapChainReleasing(...) and SwapChainResized(...)
    * the CAPS have disappeared, because if a GPU supports DX10 it is supposed to support all the DX10 functionality; for example, here are the resource limits
    * the Surface type has disappeared

    Now, about the pipeline: forget what you know :) pd3dDevice->SetRenderState, pd3dDevice->SetSamplerState, pd3dDevice->SetTextureStageState... have disappeared. Now it's time to think "shader pipeline". The shader pipeline is now the core of my DX10 3D engine. Even if you only need to render a simple sprite, you have to write a shader and understand the pipeline. The DX10 pipeline is here. For example, with my sprite engine I don't need depth, stencil, or culling.

    In DirectX 9, it's something like this:

    pd3dDevice->SetRenderState(D3DRS_ZENABLE, FALSE);
    pd3dDevice->SetRenderState(D3DRS_STENCILENABLE, FALSE);
    pd3dDevice->SetRenderState(D3DRS_CULLMODE, D3DCULL_NONE);

    In DirectX 10, you have to know that depth and stencil are managed by the Output-Merger stage and that culling is managed by the Rasterizer stage, so it's something like this:

    /////////AT THE BEGINNING////////
    // create a rasterizer state
    D3D10_RASTERIZER_DESC rasterizerState;
    rasterizerState.CullMode = D3D10_CULL_NONE;
    rasterizerState.FillMode = D3D10_FILL_SOLID;
    rasterizerState.FrontCounterClockwise = true;
    rasterizerState.DepthBias = 0;
    rasterizerState.DepthBiasClamp = 0.0f;
    rasterizerState.SlopeScaledDepthBias = 0.0f;
    rasterizerState.DepthClipEnable = false;
    rasterizerState.ScissorEnable = false;
    rasterizerState.MultisampleEnable = true;
    rasterizerState.AntialiasedLineEnable = false;
    ID3D10RasterizerState* pRState;
    if( FAILED( pd3dDevice->CreateRasterizerState( &rasterizerState, &pRState ) ) )
    {
        ERROR_MESSAGE("CreateRasterizerState");
    }

    // create a depth-stencil state
    D3D10_DEPTH_STENCIL_DESC dsDesc;
    // depth test parameters
    dsDesc.DepthEnable = false;
    dsDesc.DepthWriteMask = D3D10_DEPTH_WRITE_MASK_ALL;
    dsDesc.DepthFunc = D3D10_COMPARISON_LESS;
    // stencil test parameters
    dsDesc.StencilEnable = false;
    dsDesc.StencilReadMask = 0xFF;
    dsDesc.StencilWriteMask = 0xFF;
    // stencil operations if pixel is front-facing
    dsDesc.FrontFace.StencilFailOp = D3D10_STENCIL_OP_KEEP;
    dsDesc.FrontFace.StencilDepthFailOp = D3D10_STENCIL_OP_INCR;
    dsDesc.FrontFace.StencilPassOp = D3D10_STENCIL_OP_KEEP;
    dsDesc.FrontFace.StencilFunc = D3D10_COMPARISON_ALWAYS;
    // stencil operations if pixel is back-facing
    dsDesc.BackFace.StencilFailOp = D3D10_STENCIL_OP_KEEP;
    dsDesc.BackFace.StencilDepthFailOp = D3D10_STENCIL_OP_DECR;
    dsDesc.BackFace.StencilPassOp = D3D10_STENCIL_OP_KEEP;
    dsDesc.BackFace.StencilFunc = D3D10_COMPARISON_ALWAYS;
    ID3D10DepthStencilState* pDSState;
    // create the depth-stencil state
    if( FAILED( pd3dDevice->CreateDepthStencilState( &dsDesc, &pDSState ) ) )
    {
        ERROR_MESSAGE("CreateDepthStencilState");
    }

    /////////BEFORE RENDERING////////
    pd3dDevice->RSSetState(pRState);
    pd3dDevice->OMSetDepthStencilState(pDSState, 1); // I don't understand what the second argument means...

    /////////AT THE END////////
    if ( pRState )  { pRState->Release();  pRState = NULL; }
    if ( pDSState ) { pDSState->Release(); pDSState = NULL; }

    DX10 is heavier, but in my opinion it's much clearer than DX9, because in DX9 you have only 3 or 4 functions to change all the pipeline states and it's very disorderly. Seriously, even after many years of DX9, I still feel sick when I see that. In DX10 you have to deal with more functions, but each function has a precise job and it's more understandable. Moreover, each function belongs to one stage, so you have seven families of functions. For example, before rendering, you deal with:
    pd3dDevice->IASet...() to set the Input-Assembler stage
    pd3dDevice->VSSet...() to set the Vertex-Shader stage
    pd3dDevice->GSSet...() to set the Geometry-Shader stage
    pd3dDevice->SOSet...() to set the Stream-Output stage
    pd3dDevice->RSSet...() to set the Rasterizer stage
    pd3dDevice->PSSet...() to set the Pixel-Shader stage
    pd3dDevice->OMSet...() to set the Output-Merger stage

    To conclude this article, I'm very happy to be learning DirectX 10; I think I will understand the GPU much better than with DirectX 9. DX9 may seem lighter and easier, but that hides a very disorderly side. In DirectX 10 you are obliged to write more code, so you are obliged to understand more. A lot of outdated parts of Direct3D have been removed, and that's a good thing.

    To learn more:
    http://wiki.gamedev.net/index.php/D3DBook:Quick_Start_For_Direct3D_9_Developer
    http://msdn.microsoft.com/en-us/library/bb205123(v=VS.85).aspx
    http://msdn.microsoft.com/en-us/library/ee416396(v=VS.85).aspx

    Thanks for reading