dhanji

Member
  • Content Count: 184
  • Joined
  • Last visited

Community Reputation: 192 Neutral

About dhanji

  • Rank: Member
  1. Hi, I am trying to split my loaded mesh into two. If a triangle lies above a certain plane I add it to index buffer A, and if below, to B. Obviously some triangles may be present in both. My question is: when using ID3DXMesh, how would I render just A or B? I am currently copying A or B into the mesh's index buffer with a Lock operation, but this seems slower than it needs to be. I was thinking it should be possible to just SetStreamSource to the mesh's vertex buffer and SetIndices to A or B; the problem is that I then don't know which texture to set for which triangles. Is there an easy way to separate the vertices into texture subsets? I know the mesh already stores this grouping in its attribute buffer. How would I go about extracting that information, so I can use DrawIndexedPrimitive instead of ID3DXMesh::DrawSubset? Thanks in advance.
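     A minimal sketch (not from the original post) of reading the attribute table and drawing each subset with DrawIndexedPrimitive, assuming the mesh has been attribute-sorted (e.g. via OptimizeInplace with D3DXMESHOPT_ATTRSORT); the textures array indexed by AttribId is a hypothetical placeholder:

     #include <d3dx9.h>
     #include <vector>

     void DrawBySubset(IDirect3DDevice9* device, ID3DXMesh* mesh,
                       const std::vector<IDirect3DTexture9*>& textures)
     {
         // Query how many attribute ranges the mesh has, then fetch them.
         DWORD rangeCount = 0;
         mesh->GetAttributeTable(NULL, &rangeCount);
         if (rangeCount == 0)
             return;
         std::vector<D3DXATTRIBUTERANGE> table(rangeCount);
         mesh->GetAttributeTable(&table[0], &rangeCount);

         // Bind the mesh's own vertex and index buffers directly.
         IDirect3DVertexBuffer9* vb = NULL;
         IDirect3DIndexBuffer9*  ib = NULL;
         mesh->GetVertexBuffer(&vb);
         mesh->GetIndexBuffer(&ib);
         device->SetFVF(mesh->GetFVF());
         device->SetStreamSource(0, vb, 0, D3DXGetFVFVertexSize(mesh->GetFVF()));
         device->SetIndices(ib);

         // Each range is one texture subset: set its texture and draw only its faces.
         for (DWORD s = 0; s < rangeCount; ++s)
         {
             device->SetTexture(0, textures[table[s].AttribId]);
             device->DrawIndexedPrimitive(D3DPT_TRIANGLELIST, 0,
                                          table[s].VertexStart, table[s].VertexCount,
                                          table[s].FaceStart * 3, table[s].FaceCount);
         }

         vb->Release();
         ib->Release();
     }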
  2. Didn't know if this belonged here or in the physics forum, so I posted in both (please delete this one if it doesn't belong, thanks). I am having trouble getting my loaded .X mesh set as the terrain mesh in Tokamak. I follow the steps in the Adam Dawes tutorial carefully and substitute the vertices and triangles from my loaded mesh. I even test everything to make sure it is correct (printing the vertex/face data to a log) before calling SetTerrainMesh(), but when I call this method the app crashes (like a null-pointer exception). What could be the reason for this? Does Tokamak not like closed terrain meshes? The SDK docs are woefully unhelpful. Any help would be appreciated, thanks. The following is the code I use to fill in the triangle and index data:

     IDirect3DIndexBuffer9* iBuffer;
     mesh->GetIndexBuffer(&iBuffer);

     DWORD numVertices = mesh->GetNumVertices();
     DWORD numFaces = mesh->GetNumFaces();

     _terrainVertices = new neV3[numVertices];
     _terrainTriangles = new neTriangle[numFaces];
     _terrain.vertexCount = numVertices;
     _terrain.triangleCount = numFaces;

     // read in vertex data from the loaded scene mesh
     for (DWORD i = 0; i < numVertices; i++)
     {
         D3DXVECTOR3* vTex = &mesh->vertices[i].position;
         _terrainVertices[i].Set(vTex->x, vTex->y, vTex->z);
     }
     _terrain.vertices = _terrainVertices;

     // read in triangle data (index buffer) from the loaded mesh
     VOID* indices;
     iBuffer->Lock(0, 0, &indices, D3DLOCK_READONLY);
     WORD* index = (WORD*)indices;   // 16-bit index buffer

     UINT p = 0;
     for (DWORD i = 0; i < numFaces; i++, p += 3)
     {
         _terrainTriangles[i].indices[0] = index[p];
         _terrainTriangles[i].indices[1] = index[p + 1];
         _terrainTriangles[i].indices[2] = index[p + 2];
         _terrainTriangles[i].materialID = 0;
         _terrainTriangles[i].flag = neTriangle::NE_TRI_TRIANGLE;
     }
     iBuffer->Unlock();

     _terrain.triangles = _terrainTriangles;
     _simulator->SetTerrainMesh(&_terrain);   // crashes here!
  4. dhanji

    Rather silly Blting question

     What error is returned in the HRESULT? You have omitted the code that sets up the DirectDraw surface, so it is difficult to say what exactly is wrong...
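     For illustration only (not the poster's code), a hedged sketch of checking the HRESULT from a Blt call, assuming an IDirectDrawSurface7 back buffer and sprite surface with destRect/srcRect set up elsewhere:

     #include <windows.h>
     #include <ddraw.h>

     HRESULT hr = lpBackBuffer->Blt(&destRect, lpSprite, &srcRect, DDBLT_WAIT, NULL);
     if (FAILED(hr))
     {
         if (hr == DDERR_SURFACELOST)
             lpBackBuffer->Restore();   // surfaces must be restored after e.g. an alt-tab
         else
             OutputDebugString(TEXT("Blt failed\n"));   // log hr; DDERR_INVALIDRECT and
                                                        // DDERR_INVALIDPARAMS are common culprits
     }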
  5. Yeah, all versions of the DirectX SDK come with a plugin for Max that converts to .X files, but you have to compile it yourself. A better one is the Panda plugin (Google for it). Your animations (bone transforms) can be exported in the .X file; check the ID3DXSkinInfo interface and the SkinnedMesh sample in the SDK to learn more about the DirectX side. And yes, it will work with Managed DirectX.
  6. dhanji

    Linking error

     You probably created a Windows Console application when you should have created a Windows Application.
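     A brief sketch of why the project type matters (assuming the link error is an unresolved entry point): a Windows Application is linked against WinMain, whereas a Console application expects a plain main, so building GUI code with the wrong project type typically shows up as an "unresolved external symbol" at link time.

     #include <windows.h>

     // Entry point the linker looks for in a Windows Application (/SUBSYSTEM:WINDOWS):
     int WINAPI WinMain(HINSTANCE hInstance, HINSTANCE hPrevInstance,
                        LPSTR lpCmdLine, int nCmdShow)
     {
         // window creation and the message loop go here
         return 0;
     }
     // A Console application (/SUBSYSTEM:CONSOLE) expects int main(int argc, char* argv[]) instead.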
  7. dhanji

    HLSL Phong shader problem

     I agree, it would make more sense to transform your inNormal to tangent space. Also, I'm not sure this applies, but are you making sure both vectors are normalized before the cross product?
  8. Are the quads all changing every frame? If not, say it's something like a 128x96 map of 8x8 tiled grass, why not render the tiles once to one or more larger quads and use those as a starting point each frame, drawing the units (or whatever) on top? Some quick calculations: 128 * 8 = 1024 and 96 * 8 = 768, so you can create a 1024x1024 texture at load time, render all your quads once to it, then use it as a starting point every frame (scaling it down vertically, of course). You could optimize further by analyzing tiling boundaries: if one quarter of this giant map tiles (probably yes, since the 8x8 tiles repeat), you can store a 512x512 starting texture and render it four times, which will likely be faster since most cards prefer smaller textures.
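     A minimal sketch (not from the original post) of baking the tile quads into a render-target texture once at load time; device is assumed to be your IDirect3DDevice9, and RenderAllTiles() is a hypothetical placeholder for whatever currently draws the 128x96 tile quads:

     IDirect3DTexture9* tileCache = NULL;
     device->CreateTexture(1024, 1024, 1, D3DUSAGE_RENDERTARGET,
                           D3DFMT_X8R8G8B8, D3DPOOL_DEFAULT, &tileCache, NULL);

     IDirect3DSurface9* cacheSurface = NULL;
     IDirect3DSurface9* oldTarget = NULL;
     tileCache->GetSurfaceLevel(0, &cacheSurface);
     device->GetRenderTarget(0, &oldTarget);

     // Draw the tile quads once into the cache texture.
     device->SetRenderTarget(0, cacheSurface);
     device->BeginScene();
     RenderAllTiles();                       // hypothetical: draws the static tile quads
     device->EndScene();
     device->SetRenderTarget(0, oldTarget);  // restore the backbuffer

     cacheSurface->Release();
     oldTarget->Release();
     // Each frame: draw one quad textured with tileCache, then draw the units on top.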
  9. dhanji

    Transformed Vertex?

     Well, you should read the DX9 SDK docs on the world, view, and projection transformations. But basically, a model is situated in model space, so all its vertices are relative to its own origin and are thus said to be untransformed. If you place it somewhere in the world you need to transform it by the world matrix. XYZ vertices are said to be untransformed and unlit, meaning they get put through the transform and lighting pipeline (moved into the world, shaded based on light distance, etc.). RHW vertices are XYZ vertices that are already transformed and lit. These would generally be used for static geometry or if you are performing TnL on the CPU (correct me if I'm wrong, but RHW verts won't enter the fixed-function TnL pipeline but CAN be modified by the programmable pixel pipe if Effects are used?).
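     For reference (not from the post), the two FVF vertex layouts being contrasted:

     // Untransformed, unlit vertex: model-space position plus a normal; it goes
     // through the fixed-function world/view/projection transform and lighting stages.
     struct UnlitVertex
     {
         float x, y, z;      // model-space position
         float nx, ny, nz;   // normal used by the lighting stage
     };
     #define FVF_UNLIT (D3DFVF_XYZ | D3DFVF_NORMAL)

     // Pre-transformed, pre-lit vertex: x,y are already screen pixels, rhw is 1/w,
     // and the color is supplied directly, so transform and lighting are skipped.
     struct TransformedVertex
     {
         float x, y, z, rhw;
         DWORD color;
     };
     #define FVF_TRANSFORMED (D3DFVF_XYZRHW | D3DFVF_DIFFUSE)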
  10. Hi, I am trying to figure out how to sample my shadow mask in the pixel shader. My problem is generating texture coordinates that apply correctly to the shadow mask. Here's my procedure:
     - Render the scene of casters to the shadow mask from the light's view
     - Save the light's view-projection matrix
     - Render the scene from the camera's view, projecting each pixel using the saved light VP
     - Use the projected pixel xy to sample the shadow mask texture
     My problem is converting the projected pixel into texture coordinates. Currently the scene shadow is offset and thus appears incorrect. Here is the address shader which calculates the texture coordinates:

     //transform position and calculate pixel depth
     float4 tPos = mul( float4(InPos, 1), World );
     //project pixel into appropriate light-plane
     tPos = mul(tPos, LightViewProjection);
     Out.TextureUV.xy = tPos.xy * (-1);
     //clamp to prevent wrapped samples
     Out.TextureUV = saturate(Out.TextureUV);
     //calculate pixel position for camera
     float3 Pos = mul( float4(InPos, 1), (float4x3)World );
     Out.Pos = mul(float4(Pos,1), ViewProjection);

     Based on whether or not a colored texel is sampled, I shade the pixel lit or unlit. I guess what I'm really asking is how to convert the output position of a WVP transform into 0..1 texture-coordinate space. I have tried dividing by the viewport width and height (Projection._11 and Projection._22) but this just shifts my projected shadow to a different (more erroneous) position. Any help? PS: Also, can you tell me how to figure out where the first pixel (top left) and the last pixel (bottom right) of the backbuffer end up in Out.Pos.x,y? If I knew that I could shade my texture coordinates manually.
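     Not the thread's resolution, just the standard projective-texturing remap for reference, shown on the CPU with D3DX math. After the light view-projection transform and the divide by w, x and y lie in [-1, 1]; texture space runs 0..1 with v pointing down, hence the y flip. worldPos and lightViewProjection are assumed to be set up elsewhere:

     D3DXVECTOR4 clip;
     D3DXVec3Transform(&clip, &worldPos, &lightViewProjection);  // world-space point -> clip space

     float u =  0.5f * (clip.x / clip.w) + 0.5f;
     float v = -0.5f * (clip.y / clip.w) + 0.5f;   // flip: clip-space y is up, texture v is down
     // (In D3D9 a half-texel offset of 0.5f / shadowMapSize per axis is often added as well.)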
  11. I'm getting these weird lines from my extruded shadow volume. Here is the vertex shader:

     VS_OUTPUT Out = (VS_OUTPUT)0;
     float3 Pos = mul(float4(InPos, 1), (float4x3)World);
     float3 Normal = normalize(mul(InNormal, (float3x3)World));
     //float4 Pos = mul(float4(InPos, 1), World);
     Out.TextureUV = InTexCoord;
     //generate shadow projection vector
     float3 LightToVertex = -(Pos - LightSource);
     //calculate if current vertex is on a backface
     float backFace = dot(LightToVertex, Normal);
     //extrude backfaces to an arbitrary distance (with smoothing)
     if (backFace < 0.0)
         Pos -= (10*LightToVertex) * (-backFace);
     //transform position to projection-space
     Out.Pos = mul(float4(Pos, 1), ViewProjection);   // position (projected)
     return Out;

     I render one pass to the depth-stencil with CCW culling (incrementing the stencil) and a second with CW culling, decrementing the stencil value. Here's a screenshot of my problem. What could be wrong?
  12. From the DirectX SDK docs:

     Quote: "Each face in a mesh has a perpendicular normal vector. The vector's direction is determined by the order in which the vertices are defined and by whether the coordinate system is right- or left-handed. The face normal points away from the front side of the face. In Microsoft Direct3D, only the front of a face is visible. A front face is one in which vertices are defined in clockwise order. <there is a picture here of a face with the normal on it> Any face that is not a front face is a back face. Direct3D does not always render back faces; therefore, back faces are said to be culled. You can change the culling mode to render back faces if you want. See Culling State for more information."

     According to this, as I understand it anyway, a back face is any face that (not being a front face) doesn't have a normal in its direction. This says nothing about the eye point or the camera. From the tests I've done in the scene

     E 0

     the inner surface of 0 is culled in CCW cull mode, and if I switch the state to CW culling (reversing the triangle order) the outer surface becomes invisible. Neither acts on the "back" of the model. Am I still on the wrong track?
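     For reference (not from the post), the render state the discussion is about; D3D9 decides front versus back per triangle from its winding order as projected to the screen, not from any stored normal:

     device->SetRenderState(D3DRS_CULLMODE, D3DCULL_CCW);     // default: cull counter-clockwise faces
     // device->SetRenderState(D3DRS_CULLMODE, D3DCULL_CW);   // cull clockwise faces instead
     // device->SetRenderState(D3DRS_CULLMODE, D3DCULL_NONE); // draw both sides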
  13. dhanji

    Billboarding

     Find the angle between the camera look-at vector and the positive Z axis (0, 0, 1); that is the yaw amount for billboarding. Do the same for pitch, though for games that take place on a flat plane (most games) I find yaw is enough.
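     One way to compute that yaw (a sketch, not from the post): atan2 of the look vector's x against its z gives the rotation about +Y away from (0, 0, 1). cameraPosition and cameraTarget are assumed to be known D3DXVECTOR3s:

     #include <d3dx9.h>
     #include <math.h>

     D3DXVECTOR3 look = cameraTarget - cameraPosition;
     float yaw = atan2f(look.x, look.z);

     D3DXMATRIX billboard;
     D3DXMatrixRotationY(&billboard, yaw);   // rotate the quad about Y to face the camera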
  14. Quote (original post by dhanji): "If you don't mind losing control over the frame rate, can you not use StretchRect on a DirectShow video buffer with the Direct3D backbuffer as the source? It wouldn't be pretty of course (you couldn't take a video while playing at any decent fps), but I should think you could take a snapshot every frame and create a video out of it."

     Quote (reply): "I don't think you can. I am trying to do something very similar right now. The problem with this is the back buffer. All I do is save it to a file. I can't 'hack' into the back buffer of another application (at least it seems that way). I end up with garbage in the file; the colors and size match the front buffer, but the format seems to be wrong (it's all clipped and shifted around). I can upload a sample if someone wants to see it, along with the code. I might just be doing the surface handling wrong, though. Any examples of getting a render target or backbuffer and saving the resulting surface to a file? Or at least a procedure I should follow?"

     You can save the backbuffer to a file, that's cake: use GetRenderTarget or GetRenderTargetData, then use D3DXSaveSurfaceToFile. I have done it many times to take screenshots and they turn out fine.
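     A minimal sketch of the GetRenderTarget + D3DXSaveSurfaceToFile route mentioned above, within your own device (grabbing another application's backbuffer is a different problem); the file name is arbitrary:

     IDirect3DSurface9* backBuffer = NULL;
     if (SUCCEEDED(device->GetRenderTarget(0, &backBuffer)))
     {
         // D3DXIFF_BMP, D3DXIFF_JPG and D3DXIFF_PNG are all supported destination formats.
         D3DXSaveSurfaceToFile(TEXT("screenshot.bmp"), D3DXIFF_BMP, backBuffer, NULL, NULL);
         backBuffer->Release();
     }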
  15. I thought a back face was the side opposite the defined normal, i.e. the "back of the face". If you use CCW culling, it culls all the back faces of triangles formed by reading vertices counter-clockwise to generate faces. How is this the same as the back of the model? If you use cull-none you see back and front faces textured (if I get inside a textured sphere I can see the textures no matter where the normals are facing); with CCW culling, depending on your vertex stream, it will make either the inside or the outside visible, and CW culling does the opposite. What I mean is being able to see the exterior sphere surface (using a valid cull mode) so that when you're inside it you see nothing, but clipping the pixels that are on the exterior surface yet occluded by the convex surface facing you:

     E 0

     If E is the eye, I mean clipping the right half of 0, as opposed to backface culling, which (as I take it) culls either the outer or the inner surface of 0. Did I make a mistake? I am a bit confused...