About cozzie


  1. Hi. I personally don’t know the answer, and perhaps the other people who have seen your post so far didn’t either. Perhaps you’re not that stupid and it’s simply a difficult question. Tbh, I find it a bit harsh to burn down the forum because you feel your question isn’t answered quickly enough. Speaking from experience, the forum and a lot of people on it are pretty helpful.
  2. This is mostly just a decision taken when a library is initially created. Which could also be because the library was first used with DirectX or OpenGL (which traditionally use different handedness). In the end there’s no right or wrong; the most important thing is to stick with one convention throughout everything you do.
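     As a minimal illustration of what switching between conventions involves (using a hypothetical `Vec3` type, not any particular library):

```cpp
// Hypothetical minimal vector type, for illustration only.
struct Vec3 { float x, y, z; };

// Going between a right-handed convention (OpenGL-style) and a left-handed
// one (classic Direct3D-style) is typically done by negating one axis
// (commonly z) on positions and directions; meshes additionally need their
// triangle winding order reversed.
Vec3 FlipHandedness(const Vec3 &v) { return { v.x, v.y, -v.z }; }
```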
  3. cozzie

    Typedefing in a central *.h file

    No problem if you ask me; I do the same with uint (a 32-bit unsigned int), my IO folders, etc. It's actually no more than 15 lines of code, so it's not getting out of hand. I use the forced-include option in Visual Studio to make sure it's always there.
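    A sketch of what such a central typedef header could look like (hypothetical file and alias names; the actual file described above may differ):

```cpp
// types.h - hypothetical central typedef header, force-included everywhere
#pragma once
#include <cstdint>

typedef std::uint32_t uint;   // 32-bit unsigned int shorthand
typedef std::uint8_t  uchar;  // hypothetical additional alias
typedef std::int64_t  int64;

// catch platforms where the size assumption breaks, at compile time
static_assert(sizeof(uint) == 4, "uint is expected to be 32 bits");
```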
  4. Short update: managed to get the version as in Real-Time Collision Detection working. The issue was that the extents in my OBB weren't half-extents; multiplying by 0.5 fixed that implementation. In my test case I also made one of the two cubes static, to check whether the other implementation suffers from zero-cross-product issues, but it still doesn't work. Now figuring out what else could be the cause. Any thoughts appreciated. Edit: the other implementation is also solved: the static rPos..... was only initialized once within the collision detection function.
  5. Thanks both, I'll try to debug and see what's returned for which axis I check. My test case is two cubes beside each other (at (0, -3, 0) and (0, -3, -1.5)), which are rotating in the same direction, meaning all axes are parallel. This might be the reason implementation 1 doesn't work (no epsilon correction for a zero cross product). I'll try that first, since it's a quick fix/test. Or the other way around: disable rotation for one of the two cubes, so I don't have the potential 'parallel' issue. @Randy Gaul true, but there's always a risk of copy/pasting and making it 'fit' to my codebase.
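     One way to implement the correction mentioned above: skip any cross-product axis whose squared length is near zero, since parallel edges produce a degenerate SAT test axis. A sketch with a hypothetical `Vec3` type and epsilon value:

```cpp
// Hypothetical minimal vector type, for illustration only.
struct Vec3 { float x, y, z; };

Vec3 Cross(const Vec3 &a, const Vec3 &b)
{
    return { a.y * b.z - a.z * b.y,
             a.z * b.x - a.x * b.z,
             a.x * b.y - a.y * b.x };
}

// Returns false when a cross product of (near-)parallel edge directions is
// too short to be a meaningful SAT test axis, so the caller can skip it.
bool IsValidTestAxis(const Vec3 &axis, float epsilonSq = 1e-6f)
{
    float lenSq = axis.x * axis.x + axis.y * axis.y + axis.z * axis.z;
    return lenSq > epsilonSq;
}
```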
  6. Hi all, I've been struggling to get my OBB-OBB intersection test working, using the separating axis theorem. After doing two different implementations, the first one always returns false, and the other always returns true. Debugging shows valid OBB input passed to the functions (verified by debug-drawing the OBBs in the renderer). Any input on what I might be doing wrong is really appreciated.

Implementation 1:

```cpp
bool CBaseCollision::OBBOBBIntersect(const CR_OBB &pOBB1, const CR_OBB &pOBB2)
{
    static CR_VECTOR3 rPos = pOBB2.Center - pOBB1.Center;

    CR_VECTOR3 xAxis1 = pOBB1.XHalfExtent; xAxis1.Normalize();
    CR_VECTOR3 yAxis1 = pOBB1.YHalfExtent; yAxis1.Normalize();
    CR_VECTOR3 zAxis1 = pOBB1.ZHalfExtent; zAxis1.Normalize();
    CR_VECTOR3 xAxis2 = pOBB2.XHalfExtent; xAxis2.Normalize();
    CR_VECTOR3 yAxis2 = pOBB2.YHalfExtent; yAxis2.Normalize();
    CR_VECTOR3 zAxis2 = pOBB2.ZHalfExtent; zAxis2.Normalize();

    if(SeparatingPlaneExists(rPos, xAxis1, pOBB1, pOBB2) ||
       SeparatingPlaneExists(rPos, yAxis1, pOBB1, pOBB2) ||
       SeparatingPlaneExists(rPos, zAxis1, pOBB1, pOBB2) ||
       SeparatingPlaneExists(rPos, xAxis2, pOBB1, pOBB2) ||
       SeparatingPlaneExists(rPos, yAxis2, pOBB1, pOBB2) ||
       SeparatingPlaneExists(rPos, zAxis2, pOBB1, pOBB2) ||
       SeparatingPlaneExists(rPos, CMathHelper::CrossVec3(xAxis1, xAxis2), pOBB1, pOBB2) ||
       SeparatingPlaneExists(rPos, CMathHelper::CrossVec3(xAxis1, yAxis2), pOBB1, pOBB2) ||
       SeparatingPlaneExists(rPos, CMathHelper::CrossVec3(xAxis1, zAxis2), pOBB1, pOBB2) ||
       SeparatingPlaneExists(rPos, CMathHelper::CrossVec3(yAxis1, xAxis2), pOBB1, pOBB2) ||
       SeparatingPlaneExists(rPos, CMathHelper::CrossVec3(yAxis1, yAxis2), pOBB1, pOBB2) ||
       SeparatingPlaneExists(rPos, CMathHelper::CrossVec3(yAxis1, zAxis2), pOBB1, pOBB2) ||
       SeparatingPlaneExists(rPos, CMathHelper::CrossVec3(zAxis1, xAxis2), pOBB1, pOBB2) ||
       SeparatingPlaneExists(rPos, CMathHelper::CrossVec3(zAxis1, yAxis2), pOBB1, pOBB2) ||
       SeparatingPlaneExists(rPos, CMathHelper::CrossVec3(zAxis1, zAxis2), pOBB1, pOBB2))
        return false;
    return true;
}

bool CBaseCollision::SeparatingPlaneExists(const CR_VECTOR3 &pPos, const CR_VECTOR3 &pAxis,
                                           const CR_OBB &pObb1, const CR_OBB &pObb2)
{
    float baseDist = fabs(CMathHelper::DotProductVec3(pPos, pAxis));
    float compDist = fabs(CMathHelper::DotProductVec3(pObb1.XHalfExtent, pAxis)) +
                     fabs(CMathHelper::DotProductVec3(pObb1.YHalfExtent, pAxis)) +
                     fabs(CMathHelper::DotProductVec3(pObb1.ZHalfExtent, pAxis)) +
                     fabs(CMathHelper::DotProductVec3(pObb2.XHalfExtent, pAxis)) +
                     fabs(CMathHelper::DotProductVec3(pObb2.YHalfExtent, pAxis)) +
                     fabs(CMathHelper::DotProductVec3(pObb2.ZHalfExtent, pAxis));
    return (baseDist > compDist);
}
```

Implementation 2, based on the Real-Time Collision Detection book implementation:

```cpp
bool CBaseCollision::OBBOBBIntersect(const CR_OBB &pOBB1, const CR_OBB &pOBB2)
{
    float ra, rb;
    CR_VECTOR3 u_obb1[3];
    CR_VECTOR3 u_obb2[3];
    u_obb1[0] = pOBB1.XHalfExtent; u_obb1[0].Normalize();
    u_obb1[1] = pOBB1.YHalfExtent; u_obb1[1].Normalize();
    u_obb1[2] = pOBB1.ZHalfExtent; u_obb1[2].Normalize();
    u_obb2[0] = pOBB2.XHalfExtent; u_obb2[0].Normalize();
    u_obb2[1] = pOBB2.YHalfExtent; u_obb2[1].Normalize();
    u_obb2[2] = pOBB2.ZHalfExtent; u_obb2[2].Normalize();

    float e_obb1[3] = { pOBB1.Extents.x, pOBB1.Extents.y, pOBB1.Extents.z };
    float e_obb2[3] = { pOBB2.Extents.x, pOBB2.Extents.y, pOBB2.Extents.z };

    // compute rotation matrix expressing OBB2 in OBB1's coordinate frame
    float tR[3][3];
    for(int i = 0; i < 3; ++i)
    {
        for(int j = 0; j < 3; ++j)
        {
            tR[i][j] = CMathHelper::DotProductVec3(u_obb1[i], u_obb2[j]);
        }
    }

    // compute translation vector t
    CR_VECTOR3 orgt = pOBB2.Center - pOBB1.Center;
    float t[3];
    // bring translation into 1's coordinate space
    t[0] = CMathHelper::DotProductVec3(orgt, u_obb1[0]);
    t[1] = CMathHelper::DotProductVec3(orgt, u_obb1[2]);
    t[2] = CMathHelper::DotProductVec3(orgt, u_obb1[2]);

    // compute common subexpressions.
    // Add epsilon, to prevent cross product being 0 for parallel vectors
    float tAbsR[3][3];
    for(int i = 0; i < 3; ++i)
    {
        for(int j = 0; j < 3; ++j)
        {
            tAbsR[i][j] = abs(tR[i][j]) + 0.0002f;
        }
    }

    // test axes L = A0 / A1 / A2
    for(int i = 0; i < 3; ++i)
    {
        ra = e_obb1[i];
        rb = e_obb2[0] * tAbsR[i][0] + e_obb2[1] * tAbsR[i][1] + e_obb2[2] * tAbsR[i][2];
        if(fabs(t[i]) > ra + rb) return false;
    }

    // test axes L = B0 / B1 / B2
    for(int i = 0; i < 3; ++i)
    {
        ra = e_obb1[0] * tAbsR[0][i] + e_obb1[1] * tAbsR[1][i] + e_obb1[2] * tAbsR[2][i];
        rb = e_obb2[i];
        if(fabs(t[0] * tR[0][1] + t[1] * tR[1][i] + t[2] * tR[2][i]) > ra + rb) return false;
    }

    // Test axis L = A0 x B0
    ra = e_obb1[1] * tAbsR[2][0] + e_obb1[2] * tAbsR[1][0];
    rb = e_obb2[1] * tAbsR[0][2] + e_obb2[2] * tAbsR[0][1];
    if(fabs(t[2] * tR[1][0] - t[1] * tR[2][0]) > ra + rb) return false;

    // Test axis L = A0 x B1
    ra = e_obb1[1] * tAbsR[2][1] + e_obb1[2] * tAbsR[1][1];
    rb = e_obb2[0] * tAbsR[0][2] + e_obb2[2] * tAbsR[0][0];
    if(fabs(t[2] * tR[1][1] - t[1] * tR[2][1]) > ra + rb) return false;

    // Test axis L = A0 x B2
    ra = e_obb1[1] * tAbsR[2][2] + e_obb1[2] * tAbsR[1][2];
    rb = e_obb2[0] * tAbsR[0][1] + e_obb2[1] * tAbsR[0][0];
    if(fabs(t[2] * tR[1][2] - t[1] * tR[2][2]) > ra + rb) return false;

    // Test axis L = A1 x B0
    ra = e_obb1[0] * tAbsR[2][0] + e_obb1[2] * tAbsR[0][0];
    rb = e_obb2[1] * tAbsR[1][2] + e_obb2[2] * tAbsR[1][1];
    if(fabs(t[0] * tR[2][0] - t[2] * tR[0][0]) > ra + rb) return false;

    // Test axis L = A1 x B1
    ra = e_obb1[0] * tAbsR[2][1] + e_obb1[2] * tAbsR[0][1];
    rb = e_obb2[0] * tAbsR[1][2] + e_obb2[2] * tAbsR[1][0];
    if(fabs(t[0] * tR[2][1] - t[2] * tR[0][1]) > ra + rb) return false;

    // Test axis L = A1 x B2
    ra = e_obb1[0] * tAbsR[2][2] + e_obb1[2] * tAbsR[0][2];
    rb = e_obb2[0] * tAbsR[1][1] + e_obb2[1] * tAbsR[1][0];
    if(fabs(t[0] * tR[2][2] - t[2] * tR[0][2]) > ra + rb) return false;

    // Test axis L = A2 x B0
    ra = e_obb1[0] * tAbsR[1][0] + e_obb1[1] * tAbsR[0][0];
    rb = e_obb2[1] * tAbsR[2][2] + e_obb2[2] * tAbsR[2][1];
    if(fabs(t[1] * tR[0][0] - t[0] * tR[1][0]) > ra + rb) return false;

    // Test axis L = A2 x B1
    ra = e_obb1[0] * tAbsR[1][1] + e_obb1[1] * tAbsR[0][1];
    rb = e_obb2[0] * tAbsR[2][2] + e_obb2[2] * tAbsR[2][0];
    if(fabs(t[1] * tR[0][1] - t[0] * tR[1][1]) > ra + rb) return false;

    // Test axis L = A2 x B2
    ra = e_obb1[0] * tAbsR[1][2] + e_obb1[1] * tAbsR[0][2];
    rb = e_obb2[0] * tAbsR[2][1] + e_obb2[1] * tAbsR[2][0];
    if(fabs(t[1] * tR[0][2] - t[0] * tR[1][2]) > ra + rb) return false;

    return true;
}
```
  7. Thanks, with this input and a lot of chat on the GD Discord, I almost solved it. After some additional hour(s) of debugging, the last issue turned out to be the order of matrix multiplication (basis vectors and translation). All working like a charm now. For future reference/others, here's the fixed code:

```cpp
bool CDebugDraw::AddArrow(const CR_VECTOR3 &pPos1, const CR_VECTOR3 &pPos2, const CR_VECTOR3 &pColor,
                          const bool pDepthEnabled, const bool pPersistent, const uint64_t pLifeTime)
{
    if(!AddLine(pPos1, pPos2, pColor, pDepthEnabled, pPersistent, pLifeTime)) return false;

    /*
              p1
             / | \
            /  |  \
           /   |   \
         p2 ---|--- p3
               |
               |
               p0
    */

    // form new base vectors
    CR_VECTOR3 e3 = pPos2 - pPos1;
    e3.Normalize();
    CR_VECTOR3 e1 = CMathHelper::CrossVec3(e3, CR_VECTOR3(0.0f, 1.0f, 0.0f)); // how about arrow direction 0 1 0 ?
    e1.Normalize();
    CR_VECTOR3 e2 = CMathHelper::CrossVec3(e1, e3);

    // fill basis vectors into our 'B' matrix (transposed to work with Mathhelper/DXmath)
    CR_MATRIX4X4 bMat = CR_MATRIX4X4(CR_VECTOR4(e1, 0.0f), CR_VECTOR4(e2, 0.0f),
                                     CR_VECTOR4(e3, 0.0f), CR_VECTOR4(0.0f, 0.0f, 0.0f, 1.0f));

    // translation matrix
    CR_MATRIX4X4 transMat;
    transMat.m41 = pPos2.x; transMat.m42 = pPos2.y; transMat.m43 = pPos2.z; transMat.m44 = 1.0f;

    // multiply B and translation
    CR_MATRIX4X4 finalMat = CMathHelper::MatrixMultiply(bMat, transMat);

    // model space arrowhead lines
    CR_VECTOR3 p1 = CR_VECTOR3(0.0f, 0.0f, 0.0f);
    CR_VECTOR3 p2 = CR_VECTOR3(-0.1f, 0.0f, -0.1f);
    CR_VECTOR3 p3 = CR_VECTOR3(0.1f, 0.0f, -0.1f);

    // transform them
    p1 = CMathHelper::TransformVec3Coord(p1, finalMat);
    p2 = CMathHelper::TransformVec3Coord(p2, finalMat);
    p3 = CMathHelper::TransformVec3Coord(p3, finalMat);

    // draw lines
    if(!AddLine(p1, p2, pColor, pDepthEnabled, pPersistent, pLifeTime)) return false;
    if(!AddLine(p1, p3, pColor, pDepthEnabled, pPersistent, pLifeTime)) return false;
    return true;
}
```
  8. Hi guys, I've been struggling to create a function that draws an arrow, with two lines making up the arrowhead. The challenge is that I somehow can't get the rotation right. I've pasted the code below and an image of the result. Findings so far: when I replace the 3x3 part of the transform matrix with identity, I get the same visual result. Also, when I swap columns A and C in the matrix, this specific arrow looks good (which is pointing in the positive X direction). Any input would be appreciated.

```cpp
bool CDebugDraw::AddArrow(const CR_VECTOR3 &pPos1, const CR_VECTOR3 &pPos2, const CR_VECTOR3 &pColor,
                          const bool pDepthEnabled, const bool pPersistent, const uint64_t pLifeTime)
{
    if(!AddLine(pPos1, pPos2, pColor, pDepthEnabled, pPersistent, pLifeTime)) return false;

    /*
              p1 ------- X
             / | \       |
            /  |  \      |
           /   |   \     |
         p2 ---|--- p3   Z
               |
               |
               p0
    */

    CR_VECTOR3 arrowDir = pPos2 - pPos1;
    arrowDir.Normalize();

    // model space
    CR_VECTOR3 p1 = CR_VECTOR3(0.0f, 0.0f, 0.0f);
    CR_VECTOR3 p2 = CR_VECTOR3(0.2f, 0.0f, -0.2f);
    CR_VECTOR3 p3 = CR_VECTOR3(-0.2f, 0.0f, -0.2f);

    // transformation: translate and rotate
    CR_VECTOR3 transl = pPos2;
    CR_VECTOR3 colA = arrowDir;
    CR_VECTOR3 tVec;
    if(colA.x != 0 && colA.z != 0) tVec = CR_VECTOR3(0.0f, 1.0f, 0.0f);
    else tVec = CR_VECTOR3(0.0f, 0.0f, 1.0f);
    CR_VECTOR3 colB = CMathHelper::CrossVec3(colA, tVec);
    CR_VECTOR3 colC = CMathHelper::CrossVec3(colB, colA);

    CR_MATRIX4X4 transform;
    transform.m11 = colA.x;   transform.m12 = colB.x;   transform.m13 = colC.x;   transform.m14 = 0.0f;
    transform.m21 = colA.y;   transform.m22 = colB.y;   transform.m23 = colC.y;   transform.m24 = 0.0f;
    transform.m31 = colA.z;   transform.m32 = colB.z;   transform.m33 = colC.z;   transform.m34 = 0.0f;
    transform.m41 = transl.x; transform.m42 = transl.y; transform.m43 = transl.z; transform.m44 = 1.0f;

    // transform = CMathHelper::ComposeWorldMatrix(transform, CR_VECTOR3(1.0f), CR_VECTOR3(0.0f, 90.0f, 0.0f), pPos2);

    // transform to worldspace
    p1 = CMathHelper::TransformVec3Coord(p1, transform);
    p2 = CMathHelper::TransformVec3Coord(p2, transform);
    p3 = CMathHelper::TransformVec3Coord(p3, transform);

    if(!AddLine(p1, p2, CR_VECTOR3(1.0f, 0.0f, 0.0f), pDepthEnabled, pPersistent, pLifeTime)) return false;
    if(!AddLine(p1, p3, CR_VECTOR3(1.0f, 0.0f, 0.0f), pDepthEnabled, pPersistent, pLifeTime)) return false;
    if(!AddCross(p2, 0.02f, CR_VECTOR3(0.0f, 0.0f, 1.0f), pDepthEnabled, pPersistent, pLifeTime)) return false;
    if(!AddCross(p3, 0.02f, CR_VECTOR3(0.0f, 0.0f, 1.0f), pDepthEnabled, pPersistent, pLifeTime)) return false;
    return true;
}
```

Incorrect result: Aimed/correct result (independent of arrow direction):
  9. @d07RiV thanks for digging it up. Articles and papers are not very clear/consistent on if/when to use the sign for the bitangent. My use case is that I provide the normal and tangent to the GPU (via the vertex buffer), and then in the VS I calculate the bitangent with a simple cross product:

```hlsl
if(gPerMaterial.Material.UseNormalMap == 1)
{
    float3 tBitangent = cross(vin.Tangent, vin.NormalL);

    // TBN for normal mapping
    vout.TangentW   = mul(vin.Tangent,  (float3x3)gPerObject.WorldInvTranspose);
    vout.BitangentW = mul(tBitangent,   (float3x3)gPerObject.WorldInvTranspose);
    vout.NormalW    = mul(vin.NormalL,  (float3x3)gPerObject.WorldInvTranspose);
}
else
    vout.NormalW = mul(vin.NormalL, (float3x3)gPerObject.WorldInvTranspose);
```

Now the question is: is this the way to go (independent of mirrored/flipped situations), or do I have to do something like this:
     - check whether the untransformed normal.z is > 0 or < 0; if > 0 the sign is +, if < 0 the sign is -
     - apply that sign to all three transformed vectors (T, B and N)? Or before transforming?
     Note that the vectors are float3's, so there's no w component for storing anything. Note 2: this is the original vertex normal; no normal map is involved yet.
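     For reference, a common convention (used e.g. by glTF and MikkTSpace-based pipelines) sidesteps the sign question by exporting the tangent as a float4 whose w component is +1 or -1, and reconstructing the bitangent as cross(N, T) * w. A C++ sketch of that math, with hypothetical vector types:

```cpp
// Hypothetical minimal vector types, for illustration only.
struct Vec3 { float x, y, z; };
struct Vec4 { float x, y, z, w; }; // w carries the handedness sign (+1 or -1)

Vec3 Cross(const Vec3 &a, const Vec3 &b)
{
    return { a.y * b.z - a.z * b.y,
             a.z * b.x - a.x * b.z,
             a.x * b.y - a.y * b.x };
}

// bitangent = cross(normal, tangent.xyz) * tangent.w; the sign flips the
// bitangent for mirrored UVs without needing extra per-vertex data.
Vec3 ComputeBitangent(const Vec3 &n, const Vec4 &t)
{
    Vec3 b = Cross(n, { t.x, t.y, t.z });
    return { b.x * t.w, b.y * t.w, b.z * t.w };
}
```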
  10. Thanks, this helps. I’ll go for providing the normal and tangent through the vertex buffer (so calculated offline) and just calculate the bitangent in the VS with one cross product. I’ll do the normal mapping in world space (pass the TBN in world space to the PS).
  11. Hi all, now that I’ve managed to get my normal mapping working, I’m at the point of making a “clean” implementation. Would you know if there’s a general consensus on the following two things:
     1. Normal mapping in world space vs tangent space (multiply T, B and N in the PS versus converting the light direction to tangent space in the VS and passing it to the PS)
     2. Providing tangents and bitangents to the GPU in the vertex buffer versus calculating them in the shader
     What I’ve learned so far is that performing it in tangent space is preferred (convert the light direction in the VS and pass that to the PS), combined with calculating the bitangent (and perhaps also the tangent) in the VS, per vertex. I know the answer could be profiling, but that’s not something I’m doing at this stage. I just want to decide my first approach. Any input is appreciated.
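     For point 1, the world-space variant boils down to rotating the sampled tangent-space normal by the world-space TBN basis in the pixel shader. A C++ sketch of just that math, assuming an orthonormal T/B/N and a hypothetical `Vec3` type:

```cpp
// Hypothetical minimal vector type, for illustration only.
struct Vec3 { float x, y, z; };

// worldN = nTS.x * T + nTS.y * B + nTS.z * N
// i.e. treat the tangent-space normal's components as coordinates in the
// TBN basis and express the result in world space.
Vec3 TangentToWorld(const Vec3 &nTS, const Vec3 &T, const Vec3 &B, const Vec3 &N)
{
    return { nTS.x * T.x + nTS.y * B.x + nTS.z * N.x,
             nTS.x * T.y + nTS.y * B.y + nTS.z * N.y,
             nTS.x * T.z + nTS.y * B.z + nTS.z * N.z };
}
```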
  12. cozzie

    Tips on 3D level collision

    I think it's a good idea to flag the objects as dynamic or not; that way you only have to check collisions for colliders which are dynamic (against the other objects). OBBs shouldn't be necessary, as you concluded, if the objects (cubes/walls) don't rotate (everything stays axis-aligned).
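    The flagging idea above can be sketched like this (hypothetical `Collider` type): only dynamic colliders initiate tests, so static-vs-static pairs are never considered.

```cpp
#include <vector>

// Hypothetical collider record; real colliders would also carry shape data.
struct Collider { bool isDynamic; };

// Count the candidate pairs a broad phase would test: each dynamic collider
// against every other collider. Static objects never initiate a test.
int CountCandidatePairs(const std::vector<Collider> &colliders)
{
    int pairs = 0;
    for (std::size_t i = 0; i < colliders.size(); ++i)
    {
        if (!colliders[i].isDynamic) continue; // static: skip entirely
        for (std::size_t j = 0; j < colliders.size(); ++j)
        {
            if (i == j) continue;
            ++pairs;
        }
    }
    return pairs;
}
```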
  13. cozzie

    Accepted to college but..

    Hi. That's bad news indeed )-; Is there a question hidden in there, on how people might be able to help out? (Or did you just want to share?)
  14. cozzie

    Gamma correct? freshen up

    Yep, this makes sense indeed. I even wonder if it makes sense to give a light source different colors for ambient, diffuse and specular. Most articles/documentation I looked up use only one color for the light, which is then applied to the different material properties where the light 'hits' (ambient, diffuse, specular).
  15. cozzie

    Gamma correct? freshen up

    OK, clear. That means that every lighting and material property containing a color, as used in the HLSL shader code, needs to be in linear space. So either provide sRGB source data and do the conversion to linear in the shader code (roughly squaring, i.e. pow(c, 2.2)), or provide linear-space values directly. I also did some more reading: the way the _SRGB buffer formats in DX11 work, you don't have to manually convert back from linear space to sRGB/gamma (in the PS), because DX11 does that for you. The same goes for using source color textures (DDS) in sRGB format: if the texture and view use an xxxx_SRGB format, that conversion is also 'free'.
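    For reference, these are the exact sRGB transfer functions that the _SRGB formats apply for you on read/write; the shader square/square-root trick is a cheap approximation of them:

```cpp
#include <cmath>

// sRGB encoded value -> linear light, per the sRGB specification
// (piecewise: a linear toe below ~0.04, a 2.4 power curve above it).
float SrgbToLinear(float c)
{
    return (c <= 0.04045f) ? c / 12.92f
                           : std::pow((c + 0.055f) / 1.055f, 2.4f);
}

// linear light -> sRGB encoded value (inverse of the above)
float LinearToSrgb(float c)
{
    return (c <= 0.0031308f) ? c * 12.92f
                             : 1.055f * std::pow(c, 1.0f / 2.4f) - 0.055f;
}
```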