About cozzie

  1. Thanks, with this input and a lot of chat on the GDNet Discord, I almost solved it. After some additional hours of debugging, the last issue turned out to be the order of matrix multiplication (basis vectors and translation). All working like a charm now. For future reference and for others, here's the fixed code:

     ```cpp
     bool CDebugDraw::AddArrow(const CR_VECTOR3 &pPos1, const CR_VECTOR3 &pPos2, const CR_VECTOR3 &pColor,
                               const bool pDepthEnabled, const bool pPersistent, const uint64_t pLifeTime)
     {
         if(!AddLine(pPos1, pPos2, pColor, pDepthEnabled, pPersistent, pLifeTime)) return false;

         /*        p1
                  / | \
                 /  |  \
                /   |   \
              p2 ---|--- p3
                    |
                    |
                    p0
         */

         // form new basis vectors
         CR_VECTOR3 e3 = pPos2 - pPos1;
         e3.Normalize();
         CR_VECTOR3 e1 = CMathHelper::CrossVec3(e3, CR_VECTOR3(0.0f, 1.0f, 0.0f)); // how about arrow direction (0, 1, 0)?
         e1.Normalize();
         CR_VECTOR3 e2 = CMathHelper::CrossVec3(e1, e3);

         // fill basis vectors into our 'B' matrix (transposed to work with MathHelper/DXMath)
         CR_MATRIX4X4 bMat = CR_MATRIX4X4(CR_VECTOR4(e1, 0.0f),
                                          CR_VECTOR4(e2, 0.0f),
                                          CR_VECTOR4(e3, 0.0f),
                                          CR_VECTOR4(0.0f, 0.0f, 0.0f, 1.0f));

         // translation matrix
         CR_MATRIX4X4 transMat;
         transMat.m41 = pPos2.x;
         transMat.m42 = pPos2.y;
         transMat.m43 = pPos2.z;
         transMat.m44 = 1.0f;

         // multiply B and translation
         CR_MATRIX4X4 finalMat = CMathHelper::MatrixMultiply(bMat, transMat);

         // model-space arrowhead lines
         CR_VECTOR3 p1 = CR_VECTOR3( 0.0f, 0.0f,  0.0f);
         CR_VECTOR3 p2 = CR_VECTOR3(-0.1f, 0.0f, -0.1f);
         CR_VECTOR3 p3 = CR_VECTOR3( 0.1f, 0.0f, -0.1f);

         // transform them
         p1 = CMathHelper::TransformVec3Coord(p1, finalMat);
         p2 = CMathHelper::TransformVec3Coord(p2, finalMat);
         p3 = CMathHelper::TransformVec3Coord(p3, finalMat);

         // draw lines
         if(!AddLine(p1, p2, pColor, pDepthEnabled, pPersistent, pLifeTime)) return false;
         if(!AddLine(p1, p3, pColor, pDepthEnabled, pPersistent, pLifeTime)) return false;
         return true;
     }
     ```
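     One caveat in the code above, flagged by its own comment: when the arrow direction is (nearly) parallel to the up vector (0, 1, 0), the first cross product collapses to a zero vector and the normalize produces garbage. A minimal standalone sketch of a guard for that case (using a hypothetical `Vec3` stand-in, not the engine's CR_VECTOR3):

     ```cpp
     #include <cassert>
     #include <cmath>

     // Hypothetical stand-in for CR_VECTOR3 / CMathHelper, for illustration only.
     struct Vec3 { float x, y, z; };

     static Vec3 cross(const Vec3 &a, const Vec3 &b) {
         return { a.y * b.z - a.z * b.y,
                  a.z * b.x - a.x * b.z,
                  a.x * b.y - a.y * b.x };
     }
     static float length(const Vec3 &v) { return std::sqrt(v.x*v.x + v.y*v.y + v.z*v.z); }
     static Vec3 normalize(const Vec3 &v) { float l = length(v); return { v.x/l, v.y/l, v.z/l }; }

     // Build an orthonormal basis around the arrow direction (e3).
     // If the direction is (nearly) parallel to the up vector, cross(e3, up)
     // collapses to ~zero, so fall back to a different helper axis.
     static void buildBasis(const Vec3 &dir, Vec3 &e1, Vec3 &e2, Vec3 &e3) {
         e3 = normalize(dir);
         Vec3 up = { 0.0f, 1.0f, 0.0f };
         Vec3 c = cross(e3, up);
         if (length(c) < 1e-4f) {              // degenerate: dir ~ +/-Y
             up = Vec3{ 0.0f, 0.0f, 1.0f };    // fall back to the Z axis
             c = cross(e3, up);
         }
         e1 = normalize(c);
         e2 = cross(e1, e3);                   // already unit length
     }
     ```

     With this guard, an arrow pointing straight up still gets a well-formed arrowhead basis.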
  2. Hi guys, I've been struggling to create a function to draw an arrow, with 2 lines making up the arrowhead. The challenge is that I somehow can't get the rotation right. I've pasted the code below and an image of the result. Findings so far: when I replace the 3x3 part of the transform matrix with identity, I get the same visual result. Also, when I swap columns A and C in the matrix, this specific arrow (which is pointing in the positive X direction) looks good. Any input would be appreciated.

     ```cpp
     bool CDebugDraw::AddArrow(const CR_VECTOR3 &pPos1, const CR_VECTOR3 &pPos2, const CR_VECTOR3 &pColor,
                               const bool pDepthEnabled, const bool pPersistent, const uint64_t pLifeTime)
     {
         if(!AddLine(pPos1, pPos2, pColor, pDepthEnabled, pPersistent, pLifeTime)) return false;

         /*        p1            ------- X
                  / | \          |
                 /  |  \         |
                /   |   \        |
              p2 ---|--- p3      Z
                    |
                    |
                    p0
         */

         CR_VECTOR3 arrowDir = pPos2 - pPos1;
         arrowDir.Normalize();

         // model space
         CR_VECTOR3 p1 = CR_VECTOR3( 0.0f, 0.0f,  0.0f);
         CR_VECTOR3 p2 = CR_VECTOR3( 0.2f, 0.0f, -0.2f);
         CR_VECTOR3 p3 = CR_VECTOR3(-0.2f, 0.0f, -0.2f);

         // transformation: translate and rotate
         CR_VECTOR3 transl = pPos2;
         CR_VECTOR3 colA = arrowDir;

         CR_VECTOR3 tVec;
         if(colA.x != 0 && colA.z != 0) tVec = CR_VECTOR3(0.0f, 1.0f, 0.0f);
         else                           tVec = CR_VECTOR3(0.0f, 0.0f, 1.0f);

         CR_VECTOR3 colB = CMathHelper::CrossVec3(colA, tVec);
         CR_VECTOR3 colC = CMathHelper::CrossVec3(colB, colA);

         CR_MATRIX4X4 transform;
         transform.m11 = colA.x;   transform.m12 = colB.x;   transform.m13 = colC.x;   transform.m14 = 0.0f;
         transform.m21 = colA.y;   transform.m22 = colB.y;   transform.m23 = colC.y;   transform.m24 = 0.0f;
         transform.m31 = colA.z;   transform.m32 = colB.z;   transform.m33 = colC.z;   transform.m34 = 0.0f;
         transform.m41 = transl.x; transform.m42 = transl.y; transform.m43 = transl.z; transform.m44 = 1.0f;

         // transform = CMathHelper::ComposeWorldMatrix(transform, CR_VECTOR3(1.0f), CR_VECTOR3(0.0f, 90.0f, 0.0f), pPos2);

         // transform to world space
         p1 = CMathHelper::TransformVec3Coord(p1, transform);
         p2 = CMathHelper::TransformVec3Coord(p2, transform);
         p3 = CMathHelper::TransformVec3Coord(p3, transform);

         if(!AddLine(p1, p2, CR_VECTOR3(1.0f, 0.0f, 0.0f), pDepthEnabled, pPersistent, pLifeTime)) return false;
         if(!AddLine(p1, p3, CR_VECTOR3(1.0f, 0.0f, 0.0f), pDepthEnabled, pPersistent, pLifeTime)) return false;
         if(!AddCross(p2, 0.02f, CR_VECTOR3(0.0f, 0.0f, 1.0f), pDepthEnabled, pPersistent, pLifeTime)) return false;
         if(!AddCross(p3, 0.02f, CR_VECTOR3(0.0f, 0.0f, 1.0f), pDepthEnabled, pPersistent, pLifeTime)) return false;
         return true;
     }
     ```

     Incorrect result: (screenshot) Aimed/correct result, independent of arrow direction: (screenshot)
  3. @d07RiV thanks for digging it up. Articles and papers are not very clear/consistent on if/when to use the sign for the bitangent. My use case is that I provide the normal and tangent to the GPU (via vertex buffer), and then in the VS I calculate the bitangent with a simple cross product:

     ```hlsl
     if(gPerMaterial.Material.UseNormalMap == 1)
     {
         float3 tBitangent = cross(vin.Tangent, vin.NormalL);

         // TBN for normal mapping
         vout.TangentW   = mul(vin.Tangent, (float3x3)gPerObject.WorldInvTranspose);
         vout.BitangentW = mul(tBitangent,  (float3x3)gPerObject.WorldInvTranspose);
         vout.NormalW    = mul(vin.NormalL, (float3x3)gPerObject.WorldInvTranspose);
     }
     else
         vout.NormalW = mul(vin.NormalL, (float3x3)gPerObject.WorldInvTranspose);
     ```

     Now the question is: is this the way to go (independent of mirrored/flipped situations), or do I have to do something like this:
     - check whether the untransformed normal.z is > 0 or < 0; if > 0 the sign is +, if < 0 the sign is -
     - apply that sign to all 3 transformed vectors (T, B and N)? Or before transforming?

     Note that the vectors are float3's, so there's no w component for storing anything. Note 2: this is the original vertex normal, no normal map involved yet.
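     For reference, the usual way to detect a mirrored (left-handed) UV frame is to compare cross(N, T) against the stored bitangent; the resulting sign is what tools commonly bake into a tangent's w component. A minimal standalone sketch (hypothetical `V3` stand-in types, not the engine's float3):

     ```cpp
     #include <cassert>

     struct V3 { float x, y, z; };  // hypothetical stand-in for float3

     static V3 crossV3(const V3 &a, const V3 &b) {
         return { a.y*b.z - a.z*b.y, a.z*b.x - a.x*b.z, a.x*b.y - a.y*b.x };
     }
     static float dotV3(const V3 &a, const V3 &b) { return a.x*b.x + a.y*b.y + a.z*b.z; }

     // Handedness of a TBN frame: +1 when cross(N, T) agrees with B
     // (right-handed), -1 when the UVs are mirrored. With only float3
     // attributes this sign can be stored in a separate per-vertex value
     // instead of a tangent w component.
     static float tbnHandedness(const V3 &n, const V3 &t, const V3 &b) {
         return (dotV3(crossV3(n, t), b) < 0.0f) ? -1.0f : 1.0f;
     }
     ```

     In the VS the sign would then multiply the computed bitangent, so mirrored patches light correctly.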
  4. Thanks, this helps. I'll go for providing the normal and tangent through the vertex buffer (so calculated offline) and just calculate the bitangent in the VS with 1 cross product. I'll do the normal mapping in world space (pass the TBN in world space to the PS).
  5. Hi all, now that I've managed to get my normal mapping working, I'm at the point of making a "clean" implementation. Would you know if there's a general consensus on the following 2 things:
     1. Normal mapping in world space vs. tangent space (multiply by T, B and N in the PS versus converting the light direction to tangent space in the VS and passing it to the PS)
     2. Providing tangents and bitangents to the GPU in the vertex buffer versus calculating them in the shader

     What I've learned so far is that doing it in tangent space is preferred (convert the light direction in the VS and pass that to the PS), combined with calculating the bitangent (and perhaps also the tangent) in the VS, per vertex. I know the answer could be profiling, but that's not something I'm doing at this stage. I just want to decide my first approach. Any input is appreciated.
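     For the world-space variant in point 1, the PS rotates the sampled tangent-space normal into world space using the interpolated T, B and N as the columns of a TBN matrix. A minimal CPU-side sketch of that rotation (hypothetical `V3` stand-in, not actual HLSL):

     ```cpp
     #include <cassert>
     #include <cmath>

     struct V3 { float x, y, z; };  // hypothetical stand-in for float3

     static V3 normalizeV3(const V3 &v) {
         float l = std::sqrt(v.x*v.x + v.y*v.y + v.z*v.z);
         return { v.x / l, v.y / l, v.z / l };
     }

     // Rotate a tangent-space normal sample s into world space:
     // world = s.x * T + s.y * B + s.z * N  (T, B, N as matrix columns).
     static V3 tangentToWorld(const V3 &s, const V3 &T, const V3 &B, const V3 &N) {
         V3 r = { s.x*T.x + s.y*B.x + s.z*N.x,
                  s.x*T.y + s.y*B.y + s.z*N.y,
                  s.x*T.z + s.y*B.z + s.z*N.z };
         return normalizeV3(r);  // renormalize after interpolation
     }
     ```

     A flat normal-map sample (0, 0, 1) maps back onto the vertex normal N, which is a quick sanity check for either approach.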
  6. cozzie

    Tips on 3D level collision

    I think it's a good idea to flag the objects as dynamic or not; that way you only have to check collisions for colliders which are dynamic (against the other objects). OBB shouldn't be necessary, as you concluded, if the objects (cubes/walls) don't rotate (all axis-aligned).
  7. cozzie

    Accepted to college but..

    Hi. That's bad news indeed )-; Is there a question hidden in there, on how people might be able to help out? (Or did you just want to share?)
  8. cozzie

    Gamma correct? freshen up

    Yep, this makes sense indeed. I even wonder if it makes sense to give a light source different colors for ambient, diffuse and specular. Most articles/documentation I looked up use only 1 color for the light, which is then applied to the different material properties where the light 'hits' (ambient, diffuse, specular).
  9. cozzie

    Gamma correct? freshen up

    OK, clear. That means that every lighting and material property containing a color, used in the HLSL shader code, needs to be in linear space. So either provide sRGB source data and do the conversion to linear in the shader code (approximately squaring, or pow(x, 2.2)), or provide linear-space values directly. I also did some more reading: the way the _SRGB buffer formats in DX11 work, you don't have to manually go back from linear space to sRGB/gamma correct (in the PS), because DX11 does that for you. The same goes for using source color textures (DDS) in sRGB format: if the texture and view are in an xxxx_SRGB format, that conversion is also 'free'.
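    For completeness: shaders commonly approximate sRGB-to-linear with pow(c, 2.2) or even c*c, but the exact transfer functions per the sRGB specification are piecewise, with a small linear toe near black. A standalone C++ sketch:

    ```cpp
    #include <cassert>
    #include <cmath>

    // Exact piecewise sRGB <-> linear transfer functions (per the sRGB spec).
    // The linear segment near zero avoids an infinite derivative at black.
    static float srgbToLinear(float c) {
        return (c <= 0.04045f) ? c / 12.92f
                               : std::pow((c + 0.055f) / 1.055f, 2.4f);
    }
    static float linearToSrgb(float c) {
        return (c <= 0.0031308f) ? c * 12.92f
                                 : 1.055f * std::pow(c, 1.0f / 2.4f) - 0.055f;
    }
    ```

    The two functions round-trip, which is exactly what the _SRGB texture read and render-target write do for you in hardware.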
  10. Hi all, it's been a while since I've been working on my HLSL shaders, and I found out I'm not 100% sure if I'm applying gamma correctness correctly. So here's what I do:
      - create the backbuffer in this format: DXGI_FORMAT_R8G8B8A8_UNORM_SRGB
      - source textures (DDS) are always in sRGB format
      - this way the textures should be gamma correct, because DX11 helps me out here

      Now my question is about material and light colors. I'm not sure if I need to convert those to linear space. The colors are handpicked on screen, so I guess they are gamma corrected. Below are 2 screenshots; the darker one includes converting those colors (``return float4(linearColor.rgb * linearColor.rgb, linearColor.a);``), in the lighter shot I didn't do this conversion. These are the properties of the brick material and the light source (there are no other light sources in the scene, and also no global ambient):

      ```cpp
      // Material
      CR_VECTOR4(0.51f, 0.26f, 0.22f, 1.0f),  // ambient
      CR_VECTOR4(0.51f, 0.26f, 0.22f, 1.0f),  // diffuse RGB + alpha
      CR_VECTOR4(0.51f, 0.26f, 0.22f, 4.0f)); // specular RGB + power

      // Directional light
      mDirLights[0].Ambient   = CR_VECTOR4(0.1f, 0.1f, 0.1f, 1.0f);
      mDirLights[0].Diffuse   = CR_VECTOR4(0.75f, 0.75f, 0.75f, 1.0f);
      mDirLights[0].Specular  = CR_VECTOR4(1.0f, 1.0f, 1.0f, 16.0f);
      mDirLights[0].Direction = CR_VECTOR3(0.0f, 1.0f, 0.0f);
      ```

      So in short: should I or should I not do this conversion in the lighting calculation in the shader? (And/or what else are you seeing? :)) Note that I don't do anything with the texture color after it's fetched in the shader (no conversions), which I believe is correct.
  11. cozzie

    IDXGISwapChain::Present crashing

    Do you get an HRESULT back from the Present call, or doesn't it come to that point? (Perhaps paste the code so we can see what you're doing.)
  12. For future reference/whoever's interested, the answer from the GD chat: because in this case you use the same render target for 3D and 2D, it's better to avoid doing stuff with the same resource and risking conflicts. Therefore option 1 is preferred (it could also be 2D first, then 3D, as long as you either BeginDraw after the 3D is done, or EndDraw before you draw the 3D). @jpetrie: thanks!
  13. Thanks. Good to know the different strategies that can be used. Do you know if the moment when you call BeginDraw/EndDraw on the D2D render target has any influence? (As in the original post/question.)
  14. cozzie

    Why are member variables called out?

    I think it's very use-case specific which ones 'win' the logic for getting a prefix/convention, so I always do both. The tricky thing when you only apply it to, for example, member vars OR function parameters: how would you then distinguish local vars within member functions? :)
  15. cozzie

    Why are member variables called out?

    First of all, I don't think it's a nonsense topic :) In the end I think it all depends on your preference and the team's 'playing rules'/coding conventions. Personally I'm a prefix guy: all my members are prefixed with m, e.g. mWorldPos, mWorldMat etc., and function parameters for me are always pSomething (which can also be interpreted as pointer, but in my case it isn't per se :)) For all conventions, I think if it's in your system/head, it helps. For me, seeing mSomething makes it directly clear that it's a member variable (without thinking about it), same for pSomething.