
Hoping someone here can help me. I've gotten further with implementing shadow volumes based on John Carmack's Doom 3 work. Yes, I know about shadow mapping; basically, it's still to be implemented; one thing at a time... Here's the paper I'm working off of: http://fabiensanglard.net/doom3_documentation/37730293752.pdf

Obviously, I've modified it quite a bit. I've even changed much of it from C style to C++ style, something I'm not overly happy about doing (usually, from what I've seen, C stuff runs much faster than C++-style code...). Anyhow, to tame the beast a bit, it had to be done; problems with indexing dynamic pointer-style arrays with pointer arithmetic were occurring (no idea if pointer arithmetic even works on non-guaranteed-contiguous memory allocations), way, way too many parameters were being passed around from function to function, and when one function might go 2 more calls deep, tracking the proper pointer-type passings became miserable, especially since technically I was passing a pointer to a pointer-style dynamic array (as opposed to something like a vector or map).

Anyhow, I think I've narrowed the problem down to the creation of the silhouette. I'm using a simple, basic textured cube (2 triangles per face) I exported out of Maya using an exporter I wrote. The cube renders OK, but I'm not seeing the silhouette when I tell the engine to render it. At best I'm able to see a single line shooting way out from one of the cube's points. Even if the windings weren't correct, even if the created triangles were not closed, I should still see the lines, and I'm not.

Here's the function I'm using for generating the silhouette (once the edges have been found and stored, that is). Note: duplicate edges are not stored when the edge list is being generated. Also, the code below is the product of multiple attempts at different methods to try to achieve some kind of indication of vertices being generated properly, so not all variables may be in use.
The vertex list is the linear form of each face's vertices expanded into the list; duplicate vertices should be expected.

    unsigned int CreateSilTriangles(cShadowVolume *ShadowVolume, const cMeshVert *VertexList)
    {
        unsigned int SilIndex = 0;
        int i = 0;

        // Get a count of the number of silhouette indices needed...
        unsigned int SilIndices = 0;
        for (SilIndex = 0, i = ShadowVolume->nEdges - 1; i > -1; i--, SilIndex++)
        {
            //byte f1 = ShadowVolume->Facing[sil->p1];
            //byte f2 = ShadowVolume->Facing[sil->p2];
            float f1 = ShadowVolume->Facing[ShadowVolume->Edges[SilIndex].p1];
            float f2 = ShadowVolume->Facing[ShadowVolume->Edges[SilIndex].p2];
            // The triangles/faces straddling the edge are facing/non-facing, so it
            // represents an edge of the model (in other words, it's an "outside"
            // edge, or another way of putting it: an edge that contributes to the
            // model's silhouette).
            //if (f1 != f2)
            if ((f1 > f2) || (f1 < f2))
                SilIndices += 6;
        }

        ShadowVolume->nSilIndices = SilIndices;
        ShadowVolume->nVertices   = ShadowVolume->nSilIndices;
        ShadowVolume->pSilIndices = new unsigned int[ShadowVolume->nSilIndices];
        ShadowVolume->pVertices   = new cVector3[ShadowVolume->nVertices];

        unsigned int si_Index = 0;
        for (SilIndex = 0, i = ShadowVolume->nEdges - 1; i > -1; i--, SilIndex++)
        {
            // Due to a subtle math-logic issue, the bytes were changed to floats.
            // On a perfectly smooth sphere, where the geodesics of the surface
            // gradually wrap around towards the back, the degree of the surface
            // could probably be described as an extremely smooth gradient. The
            // problem with this is that, depending on where the points of a
            // triangle line up, the center of that triangle might be on the
            // "tipping" point of facing/not-facing, while the rearmost point of
            // triangle 1 may be on the opposite side of the facing-straddle mark,
            // which may create issues in determining facings. Thus, when the
            // difference between the triangle 1 face and triangle 2 face occurs,
            // the math may not work out to a desirable effect. So a float is used
            // to determine how far off triangle 1 and triangle 2 are from each
            // other, and in the future a sensitivity level could be used to
            // decide whether a triangle is part of the silhouette, allowing for
            // finer control just as cull bits do.
            float f1 = ShadowVolume->Facing[ShadowVolume->Edges[SilIndex].p1];
            float f2 = ShadowVolume->Facing[ShadowVolume->Edges[SilIndex].p2];
            //if (f1 != f2)
            if ((f1 > f2) || (f1 < f2))
            {
                unsigned int v1 = ShadowVolume->Edges[SilIndex].v1;
                unsigned int v2 = ShadowVolume->Edges[SilIndex].v2;

                cVector3 Vec1 = VertexList[v1].Coords;
                cVector3 Vec2 = VertexList[v2].Coords;
                cVector3 Vec3 = Vec1 * 500.0f;
                cVector3 Vec4 = Vec2 * 500.0f;

                // "To properly determine the regions in space that are in shadow
                // the vertices of the shadow volume triangles must consistently
                // wind counterclockwise so that the triangle normals point out of
                // the shadow volume. As such the triangle winding orders have to
                // be set based on which of the two triangles faces the light
                // source."
                //  - http://fabiensanglard.net/doom3_documentation/37730293752.pdf
                //if (f1)
                if (f1 > 0.0f)
                {
                    ShadowVolume->pSilIndices[si_Index]     = v1;
                    ShadowVolume->pSilIndices[si_Index + 1] = v2 + 1;
                    ShadowVolume->pSilIndices[si_Index + 2] = v2;
                    ShadowVolume->pSilIndices[si_Index + 3] = v1;
                    ShadowVolume->pSilIndices[si_Index + 4] = v1 + 1;
                    ShadowVolume->pSilIndices[si_Index + 5] = v2 + 1;

                    ShadowVolume->pVertices[si_Index]     = Vec1;
                    ShadowVolume->pVertices[si_Index + 1] = Vec2 * 500;
                    ShadowVolume->pVertices[si_Index + 2] = Vec2;
                    ShadowVolume->pVertices[si_Index + 3] = Vec1;
                    ShadowVolume->pVertices[si_Index + 4] = Vec1 * 500;
                    ShadowVolume->pVertices[si_Index + 5] = Vec2 * 500;
                }
                else
                {
                    ShadowVolume->pSilIndices[si_Index]     = v1;
                    ShadowVolume->pSilIndices[si_Index + 1] = v2;
                    ShadowVolume->pSilIndices[si_Index + 2] = v2 + 1;
                    ShadowVolume->pSilIndices[si_Index + 3] = v1 + 1;
                    ShadowVolume->pSilIndices[si_Index + 4] = v1;
                    ShadowVolume->pSilIndices[si_Index + 5] = v2 + 1;

                    ShadowVolume->pVertices[si_Index]     = Vec1;
                    ShadowVolume->pVertices[si_Index + 1] = Vec2;
                    ShadowVolume->pVertices[si_Index + 2] = Vec2 * 500;
                    ShadowVolume->pVertices[si_Index + 3] = Vec1 * 500;
                    ShadowVolume->pVertices[si_Index + 4] = Vec1;
                    ShadowVolume->pVertices[si_Index + 5] = Vec2 * 500;
                }
                si_Index += 6;
            }
        }

        return si_Index;
    } // End of unsigned int CreateSilTriangles(cShadowVolume *ShadowVolume, const cMeshVert *VertexList)

I know there's a winding problem, but even if I disable culling I still don't see the lines being generated (much less the triangles). So even without the culling being an issue, I'm not even sure the generation method is correct. Here's part of the log generated by my engine:

    [Sun Jun 29 14:05:09 2014] Model's shadow volume vertex list:
    [Sun Jun 29 14:05:09 2014] (0) X: 0.5 Y: 0.5 Z: 0.5 Vertex Index: 35
    [Sun Jun 29 14:05:09 2014] (1) X: 0.5 Y: 0.5 Z: 0.5 Vertex Index: 3
    [Sun Jun 29 14:05:09 2014] (2) X: 0.5 Y: 0.5 Z: 0.5 Vertex Index: 4
    [Sun Jun 29 14:05:09 2014] (3) X: 3.02486104e-034 Y: 0 Z: 0 Vertex Index: 36
    [Sun Jun 29 14:05:09 2014] (4) X: 0.5 Y: 0.5 Z: 0.5 Vertex Index: 35
    [Sun Jun 29 14:05:09 2014] (5) X: 0.5 Y: 0.5 Z: 0.5 Vertex Index: 4

(And yes, I know about the vertex 36 index not being present in the list just below. Like I said, though, I'm not even seeing the lines generated from the other 5 points.)

EDIT 1 (begin): A correction. The output log that shows what vertices are being pushed into the vertex buffer is wrong. It was logging based on VertexList[ShadowVolume->pSilIndices[VertIndex]].Coords.x, y, and z, NOT ShadowVolume->pVertices[VertIndex].x, y, and z.
Here's the corrected output:

    [Sun Jun 29 16:52:51 2014] Model's shadow volume vertex list:
    [Sun Jun 29 16:52:51 2014] (0) X: 0.5 Y: 0.5 Z: 0.5
    [Sun Jun 29 16:52:51 2014] (1) X: 250 Y: 250 Z: 250
    [Sun Jun 29 16:52:51 2014] (2) X: 0.5 Y: 0.5 Z: 0.5
    [Sun Jun 29 16:52:51 2014] (3) X: 0.5 Y: 0.5 Z: 0.5
    [Sun Jun 29 16:52:51 2014] (4) X: 250 Y: 250 Z: 250
    [Sun Jun 29 16:52:51 2014] (5) X: 250 Y: 250 Z: 250

EDIT 1 (end)

Here is part of the model file. NOTE: the vertex list is a raw vertex list; that is, each face's points are expanded into their vertices and placed into this list for each face. Thus, duplicate vertices should be present.

    *MESH_VERTEX: 0:0.5,0.5,0.5
    *MESH_VERTEX: 1:0.5,0.5,0.5
    *MESH_VERTEX: 2:0.5,0.5,0.5
    *MESH_VERTEX: 3:0.5,0.5,0.5
    *MESH_VERTEX: 4:0.5,0.5,0.5
    *MESH_VERTEX: 5:0.5,0.5,0.5
    *MESH_VERTEX: 6:0.5,0.5,0.5
    *MESH_VERTEX: 7:0.5,0.5,0.5
    *MESH_VERTEX: 8:0.5,0.5,0.5
    *MESH_VERTEX: 9:0.5,0.5,0.5
    *MESH_VERTEX: 10:0.5,0.5,0.5
    *MESH_VERTEX: 11:0.5,0.5,0.5
    *MESH_VERTEX: 12:0.5,0.5,0.5
    *MESH_VERTEX: 13:0.5,0.5,0.5
    *MESH_VERTEX: 14:0.5,0.5,0.5
    *MESH_VERTEX: 15:0.5,0.5,0.5
    *MESH_VERTEX: 16:0.5,0.5,0.5
    *MESH_VERTEX: 17:0.5,0.5,0.5
    *MESH_VERTEX: 18:0.5,0.5,0.5
    *MESH_VERTEX: 19:0.5,0.5,0.5
    *MESH_VERTEX: 20:0.5,0.5,0.5
    *MESH_VERTEX: 21:0.5,0.5,0.5
    *MESH_VERTEX: 22:0.5,0.5,0.5
    *MESH_VERTEX: 23:0.5,0.5,0.5
    *MESH_VERTEX: 24:0.5,0.5,0.5
    *MESH_VERTEX: 25:0.5,0.5,0.5
    *MESH_VERTEX: 26:0.5,0.5,0.5
    *MESH_VERTEX: 27:0.5,0.5,0.5
    *MESH_VERTEX: 28:0.5,0.5,0.5
    *MESH_VERTEX: 29:0.5,0.5,0.5
    *MESH_VERTEX: 30:0.5,0.5,0.5
    *MESH_VERTEX: 31:0.5,0.5,0.5
    *MESH_VERTEX: 32:0.5,0.5,0.5
    *MESH_VERTEX: 33:0.5,0.5,0.5
    *MESH_VERTEX: 34:0.5,0.5,0.5
    *MESH_VERTEX: 35:0.5,0.5,0.5

Thanks in advance! - StakFallT

UPDATE 1: OK, so I figured out that a large portion of the problem was that my edge generation/detection function was not logically sound.
I reworked it a bit, and this is what I have so far:

    unsigned int Generate_ShadowVolume_EdgeArray(cShadowVolume *ShadowVolume, const cMeshVert *VertPosTable,
                                                 cMeshFace **linearFaces_IN, const unsigned int NumLinearFaces)
    {
        bool Continue_OuterLoop = true;
        //if (nLinearFaces < 1)
        //    Continue_OuterLoop = false;
        int TableIndex = 0;
        int TableIndex2 = 1;
        std::map<int, cEdge> Temp_Edges;
        std::map<int, bool> Edge_Is_Duplicate;
        unsigned int Face1 = 0;
        unsigned int Face2 = 1;
        unsigned int Edge_Index = 0;

        // First, build the edge table -- check for redundant edges after.
        while (Continue_OuterLoop == true) // Loop through all faces
        {
            bool Continue_InnerLoop = true;
            while (Continue_InnerLoop == true) // Loop through all faces
            {
                unsigned int p1_1 = linearFaces_IN[Face1]->Points[0].vertIDX;
                unsigned int p1_2 = linearFaces_IN[Face1]->Points[1].vertIDX;
                unsigned int p1_3 = linearFaces_IN[Face1]->Points[2].vertIDX;
                cVector3 p1_1_Vec3 = VertPosTable[p1_1].Coords;
                cVector3 p1_2_Vec3 = VertPosTable[p1_2].Coords;
                cVector3 p1_3_Vec3 = VertPosTable[p1_3].Coords;

                unsigned int p2_1 = linearFaces_IN[Face2]->Points[0].vertIDX;
                unsigned int p2_2 = linearFaces_IN[Face2]->Points[1].vertIDX;
                unsigned int p2_3 = linearFaces_IN[Face2]->Points[2].vertIDX;
                cVector3 p2_1_Vec3 = VertPosTable[p2_1].Coords;
                cVector3 p2_2_Vec3 = VertPosTable[p2_2].Coords;
                cVector3 p2_3_Vec3 = VertPosTable[p2_3].Coords;

                unsigned int v1;
                unsigned int v2;
                bool Common_Edge_Found = false;
                if ((p1_1_Vec3 == p2_1_Vec3) && (p1_2_Vec3 == p2_2_Vec3)) {
                    v1 = p1_1; v2 = p1_2; Common_Edge_Found = true;
                } else if ((p1_2_Vec3 == p2_1_Vec3) && (p1_1_Vec3 == p2_2_Vec3)) {
                    v1 = p1_2; v2 = p1_1; Common_Edge_Found = true;
                } else if ((p1_2_Vec3 == p2_2_Vec3) && (p1_3_Vec3 == p2_3_Vec3)) {
                    v1 = p1_2; v2 = p1_3; Common_Edge_Found = true;
                } else if ((p1_3_Vec3 == p2_2_Vec3) && (p1_2_Vec3 == p2_3_Vec3)) {
                    v1 = p1_3; v2 = p1_2; Common_Edge_Found = true;
                } else if ((p1_3_Vec3 == p2_3_Vec3) && (p1_1_Vec3 == p2_1_Vec3)) {
                    v1 = p1_3; v2 = p1_1; Common_Edge_Found = true;
                } else if ((p1_1_Vec3 == p2_3_Vec3) && (p1_3_Vec3 == p2_1_Vec3)) {
                    v1 = p1_1; v2 = p1_3; Common_Edge_Found = true;
                }

                if (Common_Edge_Found == true) {
                    Temp_Edges[Edge_Index].FaceA = linearFaces_IN[Face1];
                    Temp_Edges[Edge_Index].FaceB = linearFaces_IN[Face2];
                    Temp_Edges[Edge_Index].p1 = Face1;
                    Temp_Edges[Edge_Index].p2 = Face2;
                    Temp_Edges[Edge_Index].v1 = v1;
                    Temp_Edges[Edge_Index].v2 = v2;
                    Edge_Index += 1;
                    Continue_InnerLoop = false;
                } else {
                    Face2 += 1;
                    if (Face2 >= NumLinearFaces)
                        Continue_InnerLoop = false;
                    else if (Face2 == Face1) {
                        Face2 += 1;
                        if (Face2 >= NumLinearFaces) {
                            Continue_OuterLoop = false;
                            Continue_InnerLoop = false;
                        }
                    }
                }
            } // while (Continue_InnerLoop == true)

            Face1 += 1;
            if (Face1 >= NumLinearFaces)
                Continue_OuterLoop = false;
            else {
                Face2 = Face1 + 1;
                if (Face2 >= NumLinearFaces) {
                    Continue_OuterLoop = false;
                    Continue_InnerLoop = false;
                }
            }
        } // while (Continue_OuterLoop == true)

        // Initialize the duplicate-flags array.
        for (int i = 0; i < Temp_Edges.size(); i++)
            Edge_Is_Duplicate[i] = false;

        // Now check for duplicate edge entries and mark them accordingly...
        bool Continue_DuplicateCheck = true;
        int RunningIndex = 0; // Unaffected by conditionals
        int Edge1 = 0;
        int Edge2 = 1;
        int p1_1, p1_2 = -1;
        int p2_1, p2_2 = -1;
        Continue_OuterLoop = true;
        while (Continue_OuterLoop == true) // Run through all the edges.
        {
            p1_1 = Temp_Edges[Edge1].p1;
            p1_2 = Temp_Edges[Edge1].p2;
            bool Continue_InnerLoop = true;
            while (Continue_InnerLoop == true) // Check whether this edge is a duplicate.
            {
                p2_1 = Temp_Edges[Edge2].p1;
                p2_2 = Temp_Edges[Edge2].p2;
                if ((((p1_1 == p2_1) || (p1_1 == p2_2)) && ((p1_2 == p2_2) || (p1_2 == p2_1))) ||
                    ((p1_1 == p2_1) && (p1_2 == p2_2)))
                    Edge_Is_Duplicate[Edge2] = true;
                Edge2 += 1;
                if (Edge2 >= Temp_Edges.size())
                    Continue_InnerLoop = false;
                else if (Edge2 == Edge1) {
                    Edge2 += 1;
                    if (Edge2 >= Temp_Edges.size()) {
                        Continue_OuterLoop = false;
                        Continue_InnerLoop = false;
                    }
                }
            }
            Edge1 += 1;
            if (Edge1 >= Temp_Edges.size())
                Continue_OuterLoop = false;
            else {
                Edge2 = Edge1 + 1;
                if (Edge2 >= Temp_Edges.size()) {
                    Continue_OuterLoop = false;
                    Continue_InnerLoop = false;
                }
            }
        } // while (Continue_OuterLoop == true)

        // Get a count of how many edge slots will actually be needed.
        unsigned int nUnique_Edges = 0;
        unsigned int Tmp_Edge_Index = 0;
        bool ContinueLoop = true;
        while (ContinueLoop == true) {
            if (Edge_Is_Duplicate[Tmp_Edge_Index] == false)
                nUnique_Edges += 1;
            Tmp_Edge_Index += 1;
            if (Tmp_Edge_Index >= Temp_Edges.size())
                ContinueLoop = false;
        }

        ShadowVolume->Edges = new cEdge[nUnique_Edges];
        Edge_Index = 0;
        Tmp_Edge_Index = 0;
        ContinueLoop = true;
        while (ContinueLoop == true) {
            if (Edge_Is_Duplicate[Tmp_Edge_Index] == false) {
                ShadowVolume->Edges[Edge_Index].p1 = Temp_Edges[Tmp_Edge_Index].p1;
                ShadowVolume->Edges[Edge_Index].p2 = Temp_Edges[Tmp_Edge_Index].p2;
                ShadowVolume->Edges[Edge_Index].FaceA = Temp_Edges[Tmp_Edge_Index].FaceA;
                ShadowVolume->Edges[Edge_Index].FaceB = Temp_Edges[Tmp_Edge_Index].FaceB;
                ShadowVolume->Edges[Edge_Index].v1 = Temp_Edges[Tmp_Edge_Index].v1;
                ShadowVolume->Edges[Edge_Index].v2 = Temp_Edges[Tmp_Edge_Index].v2;
                Edge_Index += 1;
            }
            Tmp_Edge_Index += 1;
            if (Tmp_Edge_Index >= Temp_Edges.size())
                ContinueLoop = false;
        }
        // Edge_Index is not decremented (-1'd) here because Edge_Index starts at 0
        // and only increments after an edge has been added.

        ShadowVolume->nEdges = nUnique_Edges;
        return ShadowVolume->nEdges;
    }

Problem is, the shadow volume is still off. Here's a couple of screenshots. For reference, here are the raw vert positions being pushed into the vertex buffer:

    [Sun Jun 29 19:27:58 2014] Model's shadow volume vertex list:
    [Sun Jun 29 19:27:58 2014] (0) X: 0.5 Y: 0.5 Z: 0.5
    [Sun Jun 29 19:27:58 2014] (1) X: 0.5 Y: 0.5 Z: 0.5
    [Sun Jun 29 19:27:58 2014] (2) X: 1 Y: 1 Z: 1
    [Sun Jun 29 19:27:58 2014] (3) X: 1 Y: 1 Z: 1
    [Sun Jun 29 19:27:58 2014] (4) X: 0.5 Y: 0.5 Z: 0.5
    [Sun Jun 29 19:27:58 2014] (5) X: 1 Y: 1 Z: 1
    [Sun Jun 29 19:27:58 2014] (6) X: 0.5 Y: 0.5 Z: 0.5
    [Sun Jun 29 19:27:58 2014] (7) X: 0.5 Y: 0.5 Z: 0.5
    [Sun Jun 29 19:27:58 2014] (8) X: 1 Y: 1 Z: 1
    [Sun Jun 29 19:27:58 2014] (9) X: 1 Y: 1 Z: 1
    [Sun Jun 29 19:27:58 2014] (10) X: 0.5 Y: 0.5 Z: 0.5
    [Sun Jun 29 19:27:58 2014] (11) X: 1 Y: 1 Z: 1
    [Sun Jun 29 19:27:58 2014] (12) X: 0.5 Y: 0.5 Z: 0.5
    [Sun Jun 29 19:27:58 2014] (13) X: 1 Y: 1 Z: 1
    [Sun Jun 29 19:27:58 2014] (14) X: 0.5 Y: 0.5 Z: 0.5
    [Sun Jun 29 19:27:58 2014] (15) X: 0.5 Y: 0.5 Z: 0.5
    [Sun Jun 29 19:27:58 2014] (16) X: 1 Y: 1 Z: 1
    [Sun Jun 29 19:27:58 2014] (17) X: 1 Y: 1 Z: 1
    [Sun Jun 29 19:27:58 2014] (18) X: 0.5 Y: 0.5 Z: 0.5
    [Sun Jun 29 19:27:58 2014] (19) X: 0.5 Y: 0.5 Z: 0.5
    [Sun Jun 29 19:27:58 2014] (20) X: 1 Y: 1 Z: 1
    [Sun Jun 29 19:27:58 2014] (21) X: 1 Y: 1 Z: 1
    [Sun Jun 29 19:27:58 2014] (22) X: 0.5 Y: 0.5 Z: 0.5
    [Sun Jun 29 19:27:58 2014] (23) X: 1 Y: 1 Z: 1
    [Sun Jun 29 19:27:58 2014] (24) X: 0.5 Y: 0.5 Z: 0.5
    [Sun Jun 29 19:27:58 2014] (25) X: 1 Y: 1 Z: 1
    [Sun Jun 29 19:27:58 2014] (26) X: 0.5 Y: 0.5 Z: 0.5
    [Sun Jun 29 19:27:58 2014] (27) X: 0.5 Y: 0.5 Z: 0.5
    [Sun Jun 29 19:27:58 2014] (28) X: 1 Y: 1 Z: 1
    [Sun Jun 29 19:27:58 2014] (29) X: 1 Y: 1 Z: 1
    [Sun Jun 29 19:27:58 2014] (30) X: 0.5 Y: 0.5 Z: 0.5
    [Sun Jun 29 19:27:58 2014] (31) X: 0.5 Y: 0.5 Z: 0.5
    [Sun Jun 29 19:27:58 2014] (32) X: 1 Y: 1 Z: 1
    [Sun Jun 29 19:27:58 2014] (33) X: 1 Y: 1 Z: 1
    [Sun Jun 29 19:27:58 2014] (34) X: 0.5 Y: 0.5 Z: 0.5
    [Sun Jun 29 19:27:58 2014] (35) X: 1 Y: 1 Z: 1
    [Sun Jun 29 19:27:58 2014] (36) X: 0.5 Y: 0.5 Z: 0.5
    [Sun Jun 29 19:27:58 2014] (37) X: 1 Y: 1 Z: 1
    [Sun Jun 29 19:27:58 2014] (38) X: 0.5 Y: 0.5 Z: 0.5
    [Sun Jun 29 19:27:58 2014] (39) X: 0.5 Y: 0.5 Z: 0.5
    [Sun Jun 29 19:27:58 2014] (40) X: 1 Y: 1 Z: 1
    [Sun Jun 29 19:27:58 2014] (41) X: 1 Y: 1 Z: 1
    [Sun Jun 29 19:27:58 2014] (42) X: 0.5 Y: 0.5 Z: 0.5
    [Sun Jun 29 19:27:58 2014] (43) X: 0.5 Y: 0.5 Z: 0.5
    [Sun Jun 29 19:27:58 2014] (44) X: 1 Y: 1 Z: 1
    [Sun Jun 29 19:27:58 2014] (45) X: 1 Y: 1 Z: 1
    [Sun Jun 29 19:27:58 2014] (46) X: 0.5 Y: 0.5 Z: 0.5
    [Sun Jun 29 19:27:58 2014] (47) X: 1 Y: 1 Z: 1

I'm thinking it's got something to do with the silhouette generation posted above, only I made one change: instead of the vectors being multiplied by 500, I brought it down to 2.0f.
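For comparison with the code above: multiplying a silhouette vertex by 500 extrudes it away from the origin, which only matches the shadow direction when the light happens to sit at the origin. Here's a minimal, hypothetical sketch of the usual per-edge extrusion away from the light position instead; Vec3, SilEdge, and ExtrudeSilhouette are illustrative stand-ins, not the engine's actual types.

```cpp
#include <vector>

// Hypothetical stand-ins for the engine's types, for illustration only.
struct Vec3 {
    float x, y, z;
    Vec3 operator+(const Vec3 &o) const { return {x + o.x, y + o.y, z + o.z}; }
    Vec3 operator-(const Vec3 &o) const { return {x - o.x, y - o.y, z - o.z}; }
    Vec3 operator*(float s) const { return {x * s, y * s, z * s}; }
};

// One silhouette edge: its two endpoint positions, plus which adjacent
// triangle faces the light (this drives the winding flip).
struct SilEdge { Vec3 a, b; bool frontFacing; };

// Extrude each silhouette edge away from a point light to form the sides of
// the shadow volume: one quad = two triangles = 6 vertices per edge. The far
// points are pushed along (vertex - lightPos), not along the vertex position
// itself.
std::vector<Vec3> ExtrudeSilhouette(const std::vector<SilEdge> &edges,
                                    const Vec3 &lightPos, float extrudeDist)
{
    std::vector<Vec3> tris;
    tris.reserve(edges.size() * 6);
    for (const SilEdge &e : edges) {
        Vec3 aFar = e.a + (e.a - lightPos) * extrudeDist;
        Vec3 bFar = e.b + (e.b - lightPos) * extrudeDist;
        if (e.frontFacing) {   // wind so the quad normal points out of the volume
            tris.push_back(e.a); tris.push_back(bFar); tris.push_back(e.b);
            tris.push_back(e.a); tris.push_back(aFar); tris.push_back(bFar);
        } else {               // opposite winding when the other triangle is lit
            tris.push_back(e.a); tris.push_back(e.b); tris.push_back(bFar);
            tris.push_back(aFar); tris.push_back(e.a); tris.push_back(bFar);
        }
    }
    return tris;
}
```

With a light at the origin this degenerates to the multiply-by-constant form used above, which may be why a cube centered at the origin still looks plausible while a moved light breaks it.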

Shadow Volumes - Edge finding
StakFallT replied to StakFallT's topic in Graphics and GPU Programming
Woah. The man himself! :) Thanks! 
Shadow Volumes - Edge finding
StakFallT replied to StakFallT's topic in Graphics and GPU Programming
Well, I think what I've just experienced was a serious herp-derp moment. Furthering my previous reply: of course the "solution sets" MUST be the same! If you compared vertex indices 100, 200, and 300 against vertex indices 50, 150, and 250, it wouldn't make sense, because the vertices aren't even in the same position in space; and if they were, and they were just redundant vertex entries, technically it'd be a hole (despite overlapping) in the model! I.e., you don't check an edge shared by two triangles by checking two completely different sets of vertices, otherwise it's no longer a shared edge, lol! ::doh:: So the assumption made is that, of the 3 points checked, two of them must be in another triangle for the edge to be shared, and it's quicker to check the index value rather than the float values. I kinda figured my brain was being dense, lol.

I think building the extra indices would actually cause a problem, because if vertex 1 is 0, 0, 0 (making it easy) and vertex (say) 53 is 0, 0, 0, sure, the values are the same, but the indices are different. Sure, you can check the values to see if they're the same, but not only is that slower, you've also introduced the implicit problem that the points aren't actually, technically, the same. They merely overlap exactly. Like drawing a line on a piece of paper over top of the same line: sure, it looks like one line, but technically there are two lines there, and I imagine that might cause some issues with the algorithm.

EDIT 1: (Rather than create a second follow-up post in a row, I figured I'd do this in the form of an edit.) And what is the deal with Edge **edgeArray? The function definition is:

    long BuildEdges(long vertexCount, long triangleCount, const Triangle *triangleArray, Edge **edgeArray)

and further into the function it does:

    Edge *edge = &edgeArray[edgeCount];
    edge->vertexIndex[0] = (unsigned short) i1;
    ...

According to the book, edgeArray is "A pointer to a location in which a pointer to the edge array is returned" (Lengyel, p. 296). Given the parameters, and the fact that edge equals the address of an element in edgeArray (which appears to be already allocated), this variable is supposed to already be allocated, yet the book says "The return value is the number of edges written to the array edgeArray" (Lengyel, p. 296). How can it write to memory without the memory already being allocated? How do I know it's not allocated yet? Because the function returns how many edges were written to it... If it's not allocated, then the above code is going to create an access violation.
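For what it's worth, the usual reading of an `Edge **` parameter described as "a pointer to a location in which a pointer to the edge array is returned" is that the function allocates the array itself and hands the pointer back through the double pointer, so the caller allocates nothing up front. A minimal, hypothetical sketch of that out-parameter pattern (my names, not Lengyel's actual code):

```cpp
// Hypothetical illustration of the Edge ** out-parameter pattern: the
// function allocates the edge array internally and returns ownership to the
// caller through *edgeArray.
struct Edge { unsigned short vertexIndex[2]; };

long BuildThreeEdges(Edge **edgeArray)
{
    const long edgeCount = 3;
    Edge *edges = new Edge[edgeCount];   // allocation happens inside the function
    for (long i = 0; i < edgeCount; i++) {
        edges[i].vertexIndex[0] = (unsigned short) i;
        edges[i].vertexIndex[1] = (unsigned short) (i + 1);
    }
    *edgeArray = edges;                  // the pointer comes back via the **
    return edgeCount;                    // "number of edges written to the array"
}
```

The caller then does `Edge *edges = nullptr; long n = BuildThreeEdges(&edges);` and later `delete[] edges;`, which would reconcile the two sentences from the book: nothing is allocated before the call, yet the function still "writes to" the array it returns.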
Shadow Volumes - Edge finding
StakFallT replied to StakFallT's topic in Graphics and GPU Programming
Hmm, OK, so if I understand this correctly: it's true that the actual index number has no real meaning, but it's also true that the function doesn't pretend it does, and the only reason the function actually works is that the index comparisons being made are over the same set, only in opposite directions. If I'm understanding this correctly, it's like saying we're comparing A, B, and C. We don't really care what A, B, and C are equal to, or what vertices in the list A, B, and C's values refer to, just as long as the two triangles are using AB, BC, or CA (and BA, CB, or AC). I guess maybe a better way of putting it would be: so long as edge 1 and edge 2 of triangle 1 and triangle 2 use the same solution set, a comparison can be made, and whether one is less than or greater than the other has more to do with winding than with actually caring about the value itself.

I think what I wasn't getting was that I kept thinking: if triangle 1 has indices 100, 200, and 300 and triangle 2 has indices 50, 150, and 250, how does comparing the two sets of indices help? In that case, it doesn't. The indices of any edge comparison made must be the same (obviously in different directions, hence > or < helps here, but the same nonetheless).

Something else that doesn't help is that he uses "long a" for the outer loop over all the triangles, yet just below he uses b for the points of the triangle, making them seem related in purpose. He should've renamed a to something like TriangleNum; then when he does:

    edge->faceIndex[0] = (unsigned short) a;
    edge->faceIndex[1] = (unsigned short) a;

it would read:

    edge->faceIndex[0] = (unsigned short) TriangleNum;
    edge->faceIndex[1] = (unsigned short) TriangleNum;

and "long b" could become something like "long PointNum". Much easier to understand, and it separates a's purpose from b's. Which is obvious on the surface, but when someone thinks they're missing something and that's why they don't understand, they start looking harder and trying to pull things out of the material.
Shadow Volumes - Edge finding
StakFallT replied to StakFallT's topic in Graphics and GPU Programming
OK, I think I follow that a little better, because even if you have two triangles that share an edge, if the normals of the two faces are different, then it means you're at the edge of the model where it starts "wrapping" around toward the back; basically you're leveraging the property of the normals being inverses of each other, combined with the shared edge, to determine which edges in the entire model are at that "wrap-around" part. OK, that makes sense, but what about the actual indices? In the function, Eric basically checks the index numbers. Here's a short blurb of what I'm referring to (hope this is OK):

    for (long a = 0; a < triangleCount; a++)
    {
        long i1 = triangle->index[2];
        for (long b = 0; b < 3; b++)
        {
            long i2 = triangle->index[b];
            if (i1 < i2)
            {
                ...

If you simulate an example, let's say that triangle->index[2] = 103 and b = 0 (following it further, long i2 = triangle->index[0] = 204). What difference does it make if i1 = 103 (for example) and i2 = 204 (again, for example)? 103 and 204 only refer to entry numbers in a vertex list, not the actual contents; how is checking 103 vs. 204 really helpful? I could see how checking the vector at entry 103 vs. the vector at entry 204 would be helpful, but the actual index numbers? Is this implicitly assuming that the order of the vectors in the global vector list (the ones referred to by the model's faces) follows some kind of special order? (NOTE: I'm not referring to the point order that makes up a face; obviously that matters. What I'm referring to are the actual index numbers themselves.) If that's the case, then yeah, I could see 103 and 204 actually having some sort of implicit yet significant meaning. Sorry if I'm being dense. I feel like this is something so simple, I'm just not there yet... Thanks so far, btw!

P.S. Yeah, that article is almost an exact copy out of his book; the only main difference, it seems, is that he left the function code and the GL shader code out of the article.
EDIT: Various edits in an attempt to articulate my point more clearly, also a couple of spelling mistakes 
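To make the index-only comparison discussed above concrete, here's a small, hypothetical sketch of edge matching that never looks at the float positions, only the index pairs. It relies on the same assumption the replies settle on: a shared, indexed vertex list with consistent winding, so each interior edge is traversed once in each direction. (Names and structs are mine, not Lengyel's.)

```cpp
#include <algorithm>
#include <map>
#include <utility>
#include <vector>

struct Triangle { long index[3]; };
struct FoundEdge { long v1, v2, faceA, faceB; };

// Match interior edges purely by vertex indices: an edge (i1, i2) seen by one
// triangle is seen again as (i2, i1) by exactly one neighbour on a closed,
// consistently wound mesh. Keying a map on the unordered pair finds the
// match without ever comparing positions. With a raw expanded vertex list,
// duplicate positions get distinct indices and this approach breaks, as
// noted in the thread.
std::vector<FoundEdge> FindSharedEdges(const std::vector<Triangle> &tris)
{
    std::map<std::pair<long, long>, long> firstFace; // edge key -> face index
    std::vector<FoundEdge> out;
    for (long t = 0; t < (long) tris.size(); t++) {
        for (int e = 0; e < 3; e++) {
            long i1 = tris[t].index[e];
            long i2 = tris[t].index[(e + 1) % 3];
            std::pair<long, long> key(std::min(i1, i2), std::max(i1, i2));
            auto it = firstFace.find(key);
            if (it == firstFace.end())
                firstFace[key] = t;  // first time this edge is seen
            else                     // second (opposite-direction) traversal
                out.push_back({key.first, key.second, it->second, t});
        }
    }
    return out;
}
```

Lengyel's `i1 < i2` test plays the same role as the `min`/`max` normalization here: it's just a cheap way to treat (i1, i2) and (i2, i1) as the same edge while remembering which direction each triangle traversed it.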
Hoping someone could help me understand the algorithm for finding edges. I've been reading and re-reading about shadow volumes and having a really tough go of it for some reason, namely with the edge-finding algorithm. The latest material I'm looking over comes from the book "Mathematics for 3D Game Programming and Computer Graphics" (3rd edition) by Eric Lengyel. In it he presents the code behind a BuildEdges function. But he does what I've seen done so many other times with shadow volumes: he seems to base some things off the vertex index of the points on a triangle, and he takes this even further by checking whether the index number is greater than or less than the previous one. Um... what? The index number, from my personal experience, is just a rudimentary number that correlates to a 3-float value (x, y, z) in a vertex-table list that only lists vertex values once; that is, a unique vertex list. But even if the vertex list were not unique in its entries, what good would checking an index number's ordinal value be? From what I can tell, the indices (the order of the vertices in that list) have no rhyme or reason; they're just in there in whatever order the 3D modeling program encounters them as it loops through the geometry to export. I don't really understand how the index's number is helpful... If someone could shed some light on this, I'd really appreciate it; thanks!

P.S. I'd post the function, but I'm not sure if that's something the publisher would mind. I can't see why they would, given it's only one function in a fairly thick book, but I'll defer that to the board mods. - StakFallT

Bizarre D3D9 hResult error: "The drive cannot find the sector requested."
StakFallT replied to StakFallT's topic in Graphics and GPU Programming
Even though it's not for the Xbox and isn't written using XNA?
Bizarre D3D9 hResult error: "The drive cannot find the sector requested."
StakFallT posted a topic in Graphics and GPU Programming
Anyone ever get this error: "hr = 0x0000001b The drive cannot find the sector requested." when calling something as simple as the DrawPrimitive function? I'm getting this error in my engine's world builder, which is being written in C#. The engine is written in C++, so I used SWIG to create the wrapper interface to bridge between native C++ and C#. I kept noticing the call stack errored in nvd3dum.dll (nVidia card, for anyone who couldn't tell); I was getting an unhandled exception, to be more specific. But when I went poking around in the various threads, I noticed my engine's thread was sitting at a DrawPrimitive call, so I breakpointed on it and spammed the F5 key a bunch; lo and behold, I eventually found it to be yielding that error upon checking the result of the call. My first instinct was to run scandisk, which I did, and it gave me a clean bill of health with no errors discovered. Not that I think it matters, but I'm running Windows 8.1, just in case anyone was wondering. The weird thing is, all my viewports for the various view windows render at least once just fine, but then all of a sudden the app just hits a brick wall on this draw call... Any thoughts on how to dig deeper on this? Thanks! - StakFallT
OpenGL Light Positions to D3D
StakFallT replied to StakFallT's topic in Graphics and GPU Programming
@C0lumbo: Yup, that's exactly what it's doing. I just saw the w in there, for lighting, and wasn't sure what to do with it. Setting w to 1 or 0, though, was the info I was after :) Btw, what about if it's a spotlight? I'm guessing I'd set w to 0 like I would for a directional? Shadow mapping is being favored over volumes? I thought the volume technique came out after mapping?

@Hodgman: Well, there's some optimization stuff in that first paper, and it does mention using some hardware here and there throughout, if it's available; I'm not really sure where the code in that first paper matches and/or differs from the Doom 3 source. My guess is it's some sort of hybrid; that is, use hardware if available, otherwise do it on the CPU. Btw, thanks guys for the quick replies!
Hi GameDev.net, I have a question that I'm hoping can be answered. I'm starting work on implementing shadow volumes in my game engine and I've been researching the topic a little here and there. I came across a PDF (located here: http://fabiensanglard.net/doom3_documentation/37730293752.pdf) that dissects part of the shadow volume code in the Doom 3 source. I tend to learn by getting something to work and then fiddling with it until I completely understand it, so I'm trying to implement it so I can master the concept before throwing it out and rewriting it, but I'm struggling with it a bit because the code seems to use quaternions for light rotations. This wouldn't be much of an issue except that everything that Direct3D 9 uses utilizes simply an x, y, and z value. There is a theta value...

Question 1) When I issue, to Direct3D 9, a statement to set a light's position or direction, is each of the components (x, y, and z) considered a separate rotational vector that has been formed by other values calculated internally by Direct3D? Hard to explain, but if you go to http://www.cs.princeton.edu/~gewang/projects/darth/stuff/quat_power.html and search in the document for the section "Euler to Quaternion" you'll see what I mean. I mean, are the x, y, and z values some independent set of rotation values for each axis (like some single value that encompasses the y and z rotations for x, the x and z rotations for y, and the x and y rotations for z), or are x, y, and z really just that and I'm overthinking them? If they really are just simple values to be taken at face value, then how would one get a w value to use with the implementation presented in that paper I linked at the beginning of the post? And how would this algorithm work with point lights? In that paper, the part that I'm referring to is on page 15 in Appendix A:

/*
    Calculating Triangle Facing
    Copyright (C) 2005 Id Software, Inc.
    Written by J.M.P. van Waveren

    This code is free software; you can redistribute it and/or modify it under
    the terms of the GNU Lesser General Public License as published by the
    Free Software Foundation; either version 2.1 of the License, or (at your
    option) any later version.

    This code is distributed in the hope that it will be useful, but WITHOUT
    ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or
    FITNESS FOR A PARTICULAR PURPOSE. See the GNU Lesser General Public
    License for more details.
*/

struct Vec4 { float x, y, z, w; };
struct Plane { float a, b, c, d; };

void CalculateFacing( const Plane *planes, const int numTriangles, const Vec4 &light, byte *facing ) {
    int i;
    for ( i = 0; i < numTriangles; i++ ) {
        facing[i] = planes[i].a * light.x +
                    planes[i].b * light.y +
                    planes[i].c * light.z +
                    planes[i].d * light.w > 0.0f;
    }
    facing[numTriangles] = 1; // for dangling edges to reference
}

If I'm using D3D 9 lights, can I just set light.w to 1.0f or 0.0f, or would that have some bad mathematical effect? Is there some trick so that if D3D really does just use x, y, and z, and OpenGL uses quaternions, I can make either one work, or do I really need a conditional to check what API I'm using? I know it may seem like a basic question, but I've never really had to think this hard about how D3D actually handles its directions or positions of lights. It's never ever been a problem up until now, now that I'm comparing it to OpenGL stuff. I would try reaching out to John Carmack on this but he's probably not exactly the most accessible guy :P Thanks in advance! -- StakFallT
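To make the w question above concrete, here is a sketch of the same facing test rewritten against std::vector (an assumption for readability; the paper uses raw arrays). With a triangle lying in the plane z = 0 with normal +z, a point light above the plane (w = 1) marks it front-facing, and one below marks it back-facing:

```cpp
#include <cassert>
#include <cstddef>
#include <vector>

typedef unsigned char byte;
struct Vec4  { float x, y, z, w; };
struct Plane { float a, b, c, d; };

// Same test as the paper's CalculateFacing: facing[i] is 1 when triangle i
// faces the light, 0 otherwise; one extra sentinel entry for dangling edges.
std::vector<byte> CalculateFacing(const std::vector<Plane>& planes, const Vec4& light) {
    std::vector<byte> facing(planes.size() + 1);
    for (std::size_t i = 0; i < planes.size(); ++i) {
        facing[i] = planes[i].a * light.x + planes[i].b * light.y +
                    planes[i].c * light.z + planes[i].d * light.w > 0.0f;
    }
    facing[planes.size()] = 1; // for dangling edges to reference
    return facing;
}
```

Usage: for the plane {a=0, b=0, c=1, d=0}, a point light at (0, 0, 5) with w = 1 gives a dot product of 5 > 0, so the triangle is front-facing; moving the light to (0, 0, -5) flips it.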

Creating volumetric mesh data from polygonal vertex set
StakFallT replied to StakFallT's topic in Math and Physics
Sorry to bother, but I've had some time to work with the algorithm a bit, and I have a couple of last-minute questions...

1) This would NOT be suitable for animated meshes, at least without some kind of trick, right? (There's just too much math for each test point during voxel generation), since the naive approach would be to regenerate the voxels every frame. I can't really picture a need to do this, but it was just something that dawned on me. Not to use a morbid example, and I definitely apologize for doing so, but say you took a ragdoll skeleton and wanted to shoot an arm off or something; that's really the only example of an animated mesh I could think of where voxels could be used. Most destructible things are solid non-living objects (buildings, tanks, etc.), and even though some non-living things "animate" (like tanks), generally speaking they either move, rotate, or scale as a collective whole of triangles. I suppose anything non-living that might move on its own, something flexible that might animate on its own (maybe by wind or something), would also be a candidate for on-the-fly voxels; the only things that come to mind are a flagpole or a tree swaying in the wind.

2) What about scaling of meshes? When I scale my models, it's merely visual (since scaling all vertices and re-updating the vertex buffers on demand would kind of be out of the question). I say this because, if the mesh gets bigger, the vertex data doesn't reflect it; only the model matrix does, which is eventually passed to D3D. This means the voxels are calculated off the original vertices, so if the mesh gets bigger there won't be enough voxels to fill the inside of the model, since the voxels being generated are calculated off the original, smaller triangles... Would I create a second set of vertex/face data that is scaled when voxels are generated to counter that problem, or is there a method that entails less overhead?

I've heard that some people make sophisticated use of billboards to achieve voxels in general, but I'm not sure how that would really work, and any kind of deformation would definitely require some spaghetti code, I'm thinking (as you'd then have to convert the billboards into cubes, otherwise at certain camera angles the voxels, being billboards, would "disappear").
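For question 2, one low-overhead version of the "second set of vertex data" idea is to bake only the scale into a temporary copy of the vertices right before voxelization, leaving the GPU vertex buffer untouched. A minimal sketch, assuming a simple Vec3 type and a hypothetical helper name (neither is from the posts above):

```cpp
#include <cassert>
#include <vector>

struct Vec3 { float x, y, z; };

// Hypothetical helper: apply the model matrix's scale factors to a
// throwaway copy of the mesh vertices. Only this copy feeds the voxel
// generator, so the original buffer never needs re-uploading to D3D.
std::vector<Vec3> ScaleVerticesForVoxelization(const std::vector<Vec3>& verts,
                                               float sx, float sy, float sz) {
    std::vector<Vec3> scaled;
    scaled.reserve(verts.size());
    for (const Vec3& v : verts) {
        scaled.push_back(Vec3{ v.x * sx, v.y * sy, v.z * sz });
    }
    return scaled;
}
```

The copy is transient, so the cost is one pass over the vertices per (re)voxelization rather than per frame.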
Creating volumetric mesh data from polygonal vertex set
StakFallT replied to StakFallT's topic in Math and Physics
Been looking over the implementation, thanks! The code is soooo clean... I follow it; I just wish I could visualize the formulas in my head better, though that usually comes over time through working with them. Thanks again! I think I'm on my way now! :)
Creating volumetric mesh data from polygonal vertex set
StakFallT replied to StakFallT's topic in Math and Physics
Well, I was saying a vertex because a vertex represents the very end of a triangle; it's impossible for a point on a face to be past all three of the encompassing vertices (one of those 3 will be the furthest point). But I see what you're saying about an edge: if 1 vertex is dismissed and you're down to two, and those two happen to be equidistant (side to side) from said point, then the midpoint of that edge would be closer. Thanks btw!

EDIT 1: It's taking some time to go through the algorithm; it's been a while since I read through a math paper, but I think I'm starting to understand parts of it. Currently, my question on it is: what is "B"? Here's the text I'm referring to: "where a = E0 * E0, b = E0 * E1, c = E1 * E1, d = E0 * (B - P), e = E1 * (B - P), and f = (B - P) * (B - P)." I get that the Es are the edges and P is the arbitrary point you're testing, but what is B? The paper doesn't seem to explain that. Also, s and t I'm guessing are the 2D coords of the triangle? So how do I flatten a 3D tri into 2D? Would I use barycentric coords?
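For later readers: in the usual formulation of this test (e.g. Eberly's point-triangle distance, which the quoted coefficients match), B is the triangle's base vertex, and the triangle is parameterized as T(s, t) = B + s*E0 + t*E1 with s >= 0, t >= 0, s + t <= 1, so s and t are barycentric-style parameters rather than a 2D flattening. The coefficients define the squared distance Q(s, t) = |T(s, t) - P|^2. A minimal sketch of evaluating that quadratic (type and function names are mine, not the paper's):

```cpp
#include <cassert>

struct Vec3 { float x, y, z; };

float Dot(const Vec3& u, const Vec3& v) { return u.x*v.x + u.y*v.y + u.z*v.z; }
Vec3  Sub(const Vec3& u, const Vec3& v) { return Vec3{ u.x-v.x, u.y-v.y, u.z-v.z }; }

// Squared distance from P to the point T(s, t) = B + s*E0 + t*E1 on the
// triangle's plane, using the paper's coefficients a..f.
float Q(const Vec3& B, const Vec3& E0, const Vec3& E1, const Vec3& P,
        float s, float t) {
    Vec3 BP = Sub(B, P);                       // (B - P) from the quoted text
    float a = Dot(E0, E0), b = Dot(E0, E1), c = Dot(E1, E1);
    float d = Dot(E0, BP), e = Dot(E1, BP), f = Dot(BP, BP);
    return a*s*s + 2.0f*b*s*t + c*t*t + 2.0f*d*s + 2.0f*e*t + f;
}
```

At (s, t) = (0, 0), Q reduces to |B - P|^2, which is a handy sanity check.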
Creating volumetric mesh data from polygonal vertex set
StakFallT replied to StakFallT's topic in Math and Physics
By the sounds of it, the gist is:

1) Create a bounding box
2) Select an arbitrary point that is inside that box
3) Take that arbitrary point and locate the closest triangle to said arbitrary point (by checking the distance of its vertices)
4) Take that triangle's normal and test it against the arbitrary point: if -, it's inside; if +, it's outside; if 0, it's on the surface
5) If - or 0, create a cube at that arbitrary point

Repeat steps 2 through 5.
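The sign test in steps 3-5 above can be sketched as a single dot product: the sign of the dot of the closest triangle's outward normal with the vector from a triangle vertex to the test point. This is a minimal illustration under those assumptions (names are hypothetical):

```cpp
#include <cassert>

struct Vec3 { float x, y, z; };

// Returns negative when `point` is behind the triangle's outward-facing
// normal (inside the mesh, per the steps above), positive when outside,
// and zero when the point lies on the triangle's plane.
float SideOfTriangle(const Vec3& point, const Vec3& triVertex, const Vec3& triNormal) {
    Vec3 d{ point.x - triVertex.x, point.y - triVertex.y, point.z - triVertex.z };
    return d.x * triNormal.x + d.y * triNormal.y + d.z * triNormal.z;
}
```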
Creating volumetric mesh data from polygonal vertex set
StakFallT posted a topic in Math and Physics
After doing a bunch of searching, I'm drowning in info that isn't quite what I'm looking for, so I'm hoping someone here can at least point me in the right direction. Unfortunately, it's probably not a quick answer, and to really get across where I'm stuck, I need to elaborate a bit... I'm looking to add voxelization to my game engine for destructibles and deformable objects (like terrain). Problem is, I see TONS of information (as I said, I'm drowning in info) about generating a mesh from (implicitly already created) voxel data (things like the Marching Cubes algorithm and various versions of it keep coming up), or calculating the volume of a mesh, or creating a giant cube of voxels from the ground up. In layman's terms, I need to go the other way: create the cubes for the inside of an already existing model that I made, one that is nothing more than a series of vertices, and faces that have vertex indices. Currently, I can create cubes centered on the vertices of a mesh, but that only places voxels at the vertices, i.e. nothing about the surfaces, much less anything about the inside of the mesh. I've heard of techniques that reference raycasting (see: [url]http://blog.wolfire.com/2009/11/Triangle-mesh-voxelization[/url]). In that example, the developer implicitly KNOWS which triangle to cast to.

A little aside about the raycasting technique: in my understanding (and implementation), this would work similarly to scene picking. In scene picking, your geometric ray starts at the near plane and is cast towards the far plane (which is a discrete value; there's no way of just saying "cast to a lambda and have an event handler tell me when it hits something" lol). You determine the selection by starting with a superset consisting of ALL triangles (again, a discrete value) that are part of objects you want to be selectable, then you cycle through each triangle and determine if the geometric ray is within the bounds of the polygon (for thoroughness, you can calculate the hit coords and return them). Then you store any face that passed the hit test and check the calculated hit-depth values; whichever one is the smallest is the face that was actually clicked and ultimately the mesh you're looking to highlight. Pretty straightforward stuff, but you start with a discrete set and work backwards towards a narrow subset. At no time does it matter how one face is oriented to the other.

With the raycasting technique, you kinda, albeit implicitly, need to absolutely know which triangle is directly across (meaning a change in only a single axis of the position), otherwise you wind up criss-crossing all over the place inside the mesh; even if you just picked one triangle and raycast all the others to it, I'm not sure that would work either. The only thing I can think of is to pick a face and loop through all other faces until you find one with an inverse normal (within some deviation), but I'd kinda like to do it scanline style, like the wolfire link I posted earlier, where only a single axis (just x, just y, or just z) is changed. Surely there are 3D modelling programs out there that can voxelize a mesh upon request, despite rendering it as a triangle-based mesh... How do they do it? Any help would greatly be appreciated! In the meantime, I'll probably be thinking about this on and off for a while yet... I may update this post with edits to make it more readable, or with thoughts I have along the way prior to a reply.

EDIT 1: I think I may understand it... You create a bounding box of the mesh, loop through every face, cast in a single direction toward the bounding box edge, and loop through each face to see if any faces are collided with. If they are, you create cubes of the specified size along the geometric ray to that collided face. Am I right? It seems the missing puzzle piece was that I forgot about the bounding box. -- StakFallT
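The scanline-style approach described above usually comes down to a parity rule: cast one ray along a single axis through the mesh, collect every triangle-crossing distance, and a sample point is inside wherever an odd number of crossings lie behind it. A minimal sketch of just that rule (the actual ray-triangle intersection is omitted; `crossings` stands in for the hit distances such a test would produce, and the function name is mine):

```cpp
#include <algorithm>
#include <cassert>
#include <vector>

// Parity test for one axis-aligned scanline: `crossings` holds the
// distances along the ray at which the surface was crossed, and
// `samplePos` is a candidate voxel center on that same ray. An odd
// number of crossings behind the sample means it is inside the mesh.
bool InsideByParity(std::vector<float> crossings, float samplePos) {
    std::sort(crossings.begin(), crossings.end());
    int behind = 0;
    for (float c : crossings)
        if (c < samplePos) ++behind;   // crossings the ray passed before samplePos
    return (behind % 2) == 1;
}
```

Usage: with surface crossings at distances 1 and 5, a sample at 3 is inside, while samples at 0.5 or 6 are outside; stepping samplePos in voxel-sized increments and emitting a cube wherever this returns true gives the scanline fill.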