# George109

1. ## Games Are a Whole New Form of Storytelling

Great article. Although I disagree that BioShock Infinite was a good example of storytelling: it was a sequence of scripted in-game cutscenes (even if the player views them from a first-person perspective, they are still cutscenes) with breaks of dull arena-style gameplay.

3. ## Glicko 2 rating deviation during periods of inactivity

Thanks for the reply, you are right. Silly mistake: I should have first scaled the deviation before applying the formula and then rescaled it, as the document says. The correct value for the new deviation is 124.646889, which seems more reasonable.

However, this means it takes 10 years (120 rating periods, assuming one rating period is one month) for the deviation to increase from 50 to 124.646889. What if I would like to keep the same rating period (one month) but have the deviation increase more rapidly? I think Glicko-1 had some constant that could adjust this, but I cannot find a similar constant for Glicko-2. There has to be some way; otherwise the algorithm would be quite constraining.
4. ## Glicko 2 rating deviation during periods of inactivity

Hello,

I am trying to use the Glicko-2 rating system in my game. I have implemented it using the formulas described in the document. However, I am not sure I understand correctly how the rating deviation φ of a player is increased after periods of inactivity. The document mentions that when a player does not compete during a "rating period", the deviation φ is updated using this formula:

φ' = sqrt(φ² + σ²)

(I could not embed the formula as an image; it seems that image extensions have been disabled.)

where σ is the rating volatility.

But using that formula, φ seems to increase at a very slow rate. For example, if a "rating period" is one month and a player has φ = 50 and σ = 0.06 (the recommended value for volatility), then after 10 years (120 "rating periods") of inactivity φ' = 50.004189. Is that the expected behavior, or am I missing something? Is there any way to adjust how "quickly" φ increases during player inactivity?

Let's hope there is someone on the forums who has used this algorithm. Thanks in advance!
5. ## .X file template bug on 64 bit.

I reported it a few minutes ago. I'll let you know if someone replies to me.
6. ## A* Path Finding and Dungeon Generation

> Original post by Sneftel: First of all, A* finds the shortest path from a single start node to any of one or more goal nodes. In other words, you can't (efficiently) use it to determine the best point on each of the rooms to use to place the corridor, only to find the best point on the second room, given a point on the first room.

Finding the shortest path from multiple start nodes to multiple goal nodes is also possible. You push all start nodes into the open list, just as you would with a single start node. You can also prioritize them by initializing the costFromStart to a different value for each start node.

> Original post by Skateblind: The starting and finishing points are randomly picked on a wall, so there is no problem with that.

You can just pass all the points to A* as start and goal nodes, and A* will find the shortest path.
7. ## A* Path Finding and Dungeon Generation

I think that A* is fully appropriate for environments like yours. I suggest you use it.
8. ## .X file template bug on 64 bit.

I tried adding both to be sure: an instance of the template and the template declaration at the start of the file. Something like this:

```
template AnimTicksPerSecond {
    <9E415A43-7BA6-4a73-8743-B73D47E88476>
    DWORD AnimTicksPerSecond;
}

AnimTicksPerSecond {
    24;
}
```
9. ## .X file template bug on 64 bit.

I don't think they will bother with it. It is not a very serious issue, and it's in DirectX 9.
10. ## .X file template bug on 64 bit.

I am programming an application for loading skinned models from .X files, using Visual Studio 2005, Windows Vista 64-bit and DirectX 9. Because my animations play too fast, I am using the AnimTicksPerSecond template to scale the speed. The description of this template in the March 2008 SDK is:

```
AnimTicksPerSecond

Describes an animation scale factor.

template AnimTicksPerSecond {
    <9E415A43-7BA6-4a73-8743-B73D47E88476>
    DWORD AnimTicksPerSecond;
}

Where:
AnimTicksPerSecond - A scale factor to convert animation time stamps to
global time. Its value divides the animation time by either 30, 60, 24 or 25.
```

So I placed the following in my .X files to slow down the animation:

```
AnimTicksPerSecond {
    24;
}
```

When I compile the application for 32-bit machines, the animation is scaled and plays fine. When I compile for 64-bit machines, the animation still plays too fast, as if DirectX ignores the existence of the AnimTicksPerSecond template I placed in my .X files. So I installed the SkinnedMesh sample from the DirectX SDK and tried it with one of my .X files: same results as with my application. The animation played fine in the 32-bit executable but too fast in the 64-bit executable. Any ideas what's going wrong?

UPDATE: I also tested it with the tiny.x model from the DirectX SDK samples. I added the AnimTicksPerSecond template to the tiny.x file to scale the animation speed. In the 32-bit executable of the SkinnedMesh sample it worked; the 64-bit executable ignored the template.
11. ## Normal transformation in vertex shaders

Ok, I understand now: a non-uniform scale affects the direction of the normal. That's right, the normal should be rescaled when the scale is not uniform.

> Original post by deffer: (*) Why would you do that in the first place? How does the vertex shader look with that tactic used?

Every object in the scene keeps its world transformation as a translation Vector3, a scale Vector3 and a rotation Matrix3x3 (this is done for faster updating of the objects). Before rendering an object, I combine them (translation, rotation and scale) into a 4x4 matrix and pass it to the shader. So there is nothing special in the shader code; it's similar to the code I wrote in the first post.
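The composition described above can be sketched roughly like this (assuming a row-vector convention as in D3D, so world = scale, then rotation, then translation; the struct and function names are mine, not from the original code):

```cpp
#include <cstring>

struct Mat4 { float m[4][4]; };

// Combine a scale Vector3, a rotation Matrix3x3 and a translation Vector3
// into one 4x4 world matrix (row vectors: v' = v * S * R * T). Applying
// the scale first means each rotation row is simply multiplied by the
// corresponding scale factor.
Mat4 composeWorld(const float scale[3],
                  const float rot[3][3],
                  const float trans[3])
{
    Mat4 w;
    std::memset(&w, 0, sizeof(w));
    for (int i = 0; i < 3; ++i)
        for (int j = 0; j < 3; ++j)
            w.m[i][j] = scale[i] * rot[i][j];  // scaled rotation block
    w.m[3][0] = trans[0];                      // translation in row 3
    w.m[3][1] = trans[1];
    w.m[3][2] = trans[2];
    w.m[3][3] = 1.0f;
    return w;
}
```

Note that this world matrix still carries the scale, which is why the normal ends up needing the normalize() discussed in the earlier posts.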
12. ## Normal transformation in vertex shaders

Thanks for the reply, deffer. Actually, I keep scale, translation and rotation in separate variables: rotation in a 3x3 matrix and scale in a Vector3. So I have no problem such as extracting the rotation from a world matrix. What troubles me is that the time needed to pass the extra matrix to the shader may be greater than the time gained from the fewer instructions (no normalize() call). I have looked at some professional game shaders released for modding, and they always prefer to transform the normal with the world matrix and then normalize it. There must be some reason for doing that.
13. ## Normal transformation in vertex shaders

Hi, I see in many shaders that vertex normals are transformed with the world transformation (including scale) and then normalized to regain unit length. The code is usually like this:

```hlsl
matrix4x4 m_worldViewProj;
matrix4x4 m_world;

VS_OUTPUT vs_main(VS_INPUT In)
{
    VS_OUTPUT Out = (VS_OUTPUT)0;
    Out.Pos = mul(In.Pos, m_worldViewProj);
    float3 world_normal = normalize(mul(In.Norm, (float3x3)m_world));
    // etc...
}
```

Wouldn't it be better to pass a 3x3 rotation matrix (without the scale, only the world rotation) and transform the normal without normalizing it? Or is the speed gained from the fewer instructions (no normalize() call) lost on the extra parameter pass (the extra 3x3 matrix for the rotation)?

```hlsl
matrix4x4 m_worldViewProj;
matrix4x4 m_world;
matrix3x3 m_worldRotation;

VS_OUTPUT vs_main(VS_INPUT In)
{
    VS_OUTPUT Out = (VS_OUTPUT)0;
    Out.Pos = mul(In.Pos, m_worldViewProj);
    float3 world_normal = mul(In.Norm, m_worldRotation);
    // etc...
}
```
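A quick numeric check of the trade-off in question (a self-contained sketch; the matrices stand in for the shader's world and rotation matrices): transforming a unit normal by a scaled world matrix changes its length, so normalize() is required, while a pure rotation preserves unit length.

```cpp
#include <cmath>

// Length of a 3-vector after multiplying by a 3x3 matrix
// (row-vector convention, as in HLSL's mul(vector, matrix)).
float transformedLength(const float v[3], const float m[3][3])
{
    float r[3] = { 0.0f, 0.0f, 0.0f };
    for (int j = 0; j < 3; ++j)
        for (int i = 0; i < 3; ++i)
            r[j] += v[i] * m[i][j];
    return std::sqrt(r[0]*r[0] + r[1]*r[1] + r[2]*r[2]);
}
```

For a 90-degree rotation about Z, the length of (1, 0, 0) stays 1; for a uniform scale of 2 it becomes 2, which is why the first shader variant needs the normalize() call.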
14. ## Retrieving triangles from a vertex buffer. Help needed.

That seems to work, neneboricua. Thanks for the replies!
15. ## Retrieving triangles from a vertex buffer. Help needed.

I'm using the index buffer to get the vertices of the triangles from the vertex buffer, and then I create the triangles from those vertices. The triangles are stored as a D3DPT_TRIANGLELIST in the buffers. I suspect that there is something wrong with the way I'm retrieving the vertices from the vertex buffer. Is there any other way to retrieve the positions of the vertices from a vertex buffer?

```cpp
// Get the vertex declaration of the mesh.
D3DVERTEXELEMENT9 vertexDeclaration[MAX_FVF_DECL_SIZE];
pMesh->GetDeclaration( vertexDeclaration );

// Find the offset of the position element in the declaration.
DWORD offset = 0;
int i = 0;
while( vertexDeclaration[i].Type != D3DDECLTYPE_UNUSED )
{
    if( vertexDeclaration[i].Usage == D3DDECLUSAGE_POSITION )
    {
        offset = vertexDeclaration[i].Offset;
        break;
    }
    i++;
}

// Lock the vertex and index buffers of the mesh.
void* pData = 0;
WORD* pIndexData = 0;
pMesh->LockVertexBuffer( D3DLOCK_READONLY, (void**)&pData );
pMesh->LockIndexBuffer( D3DLOCK_READONLY, (void**)&pIndexData );

// Scan the triangles.
D3DXVECTOR3 vertexPosition[3];
Triangle3 tri;
for( DWORD i = 0, iTriangle = 0;
     iTriangle < pMeshContainer->MeshData.pMesh->GetNumFaces();
     i += 3, iTriangle++ )
{
    // Get the positions.
    memcpy( &vertexPosition[0], &((D3DVERTEXELEMENT9*)pData)[pIndexData[i]]   + offset, sizeof(D3DXVECTOR3) );
    memcpy( &vertexPosition[1], &((D3DVERTEXELEMENT9*)pData)[pIndexData[i+1]] + offset, sizeof(D3DXVECTOR3) );
    memcpy( &vertexPosition[2], &((D3DVERTEXELEMENT9*)pData)[pIndexData[i+2]] + offset, sizeof(D3DXVECTOR3) );

    // Build the triangle.
    tri.origin() = vertexPosition[0];
    tri.edge0()  = vertexPosition[1] - vertexPosition[0];
    tri.edge1()  = vertexPosition[2] - vertexPosition[0];

    // Store it in the triangle list.
    memcpy( pTriangleList[iTriangle].triangle, &tri, sizeof(Triangle3) );
}

pMesh->UnlockVertexBuffer();
pMesh->UnlockIndexBuffer();
```
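The suspicious part is likely the pointer arithmetic: the code indexes the vertex buffer as an array of D3DVERTEXELEMENT9, but a locked vertex buffer is raw bytes laid out with the mesh's vertex stride. A hedged, self-contained sketch of the usual byte-stride indexing (the Vertex struct below is a stand-in for an actual D3D vertex layout; in D3D code the stride would come from something like ID3DXMesh::GetNumBytesPerVertex()):

```cpp
#include <cstring>

// The k-th vertex's position starts at base + k * stride + offset bytes,
// where `stride` is the size of one whole vertex and `offset` is the
// position element's offset taken from the vertex declaration.
void readPosition(const void* vertexBuffer, unsigned index,
                  unsigned stride, unsigned offset, float out[3])
{
    const unsigned char* base =
        static_cast<const unsigned char*>(vertexBuffer);
    std::memcpy(out, base + index * stride + offset, 3 * sizeof(float));
}
```

The key point is that the indexing and the offset are both in bytes, not in units of some element struct.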