Everything posted by Obliique

  1. The original form of your camera matrix in world space is the result of a series of transformations, typically a rotation R followed by a translation T, so the world-space matrix is W = RT. What we want is the inverse of this transform: multiplying every object by it makes the camera the reference coordinate system, so every object's coordinates become relative to camera space. We could simply compute the inverse the usual, general way, but that's not a good idea because it is very costly. What you want instead is a cheaper computation. The easier route is to decompose the world matrix into R and T and invert each one individually using a cheaper method.

     For the rotation R, we know the camera basis vectors are orthonormal, which lets us invert by simply transposing, giving R^T:

         R^T = | Ux Vx Wx 0 |
               | Uy Vy Wy 0 |
               | Uz Vz Wz 0 |
               | 0  0  0  1 |

     where U, V and W are the camera basis vectors from the rotation part of the original world matrix:

         R = | Ux Uy Uz 0 |
             | Vx Vy Vz 0 |
             | Wx Wy Wz 0 |
             | 0  0  0  1 |

     To invert the translation T, we just negate the translation portion, giving T^-1:

         T^-1 = | 1   0   0   0 |
                | 0   1   0   0 |
                | 0   0   1   0 |
                | -Tx -Ty -Tz 1 |

     derived from T:

         T = | 1  0  0  0 |
             | 0  1  0  0 |
             | 0  0  1  0 |
             | Tx Ty Tz 1 |

     Since we have computed both inverses the easy way, we can multiply T^-1 R^T to get the view matrix. Note that when you carry out this multiplication you end up with exactly the scenario you just stated for the fourth row: multiplying the fourth row of T^-1 by R^T is just taking the dot product of the negated translation with each of the transposed basis vectors. The resulting view matrix is:

         T^-1 R^T = | Ux    Vx    Wx    0 |
                    | Uy    Vy    Wy    0 |
                    | Uz    Vz    Wz    0 |
                    | -T·U  -T·V  -T·W  1 |
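     A minimal sketch of this construction in C++ (the Vec3 type and Dot helper are hypothetical stand-ins; any math library works, and the row-vector convention matches the matrices above):

         struct Vec3 { float x, y, z; };

         float Dot(const Vec3& a, const Vec3& b) {
             return a.x * b.x + a.y * b.y + a.z * b.z;
         }

         // Build the view matrix directly in the T^-1 R^T form derived above.
         // u, v, w are the camera's orthonormal basis vectors in world space,
         // t is the camera position; the result is stored row-major.
         void BuildViewMatrix(const Vec3& u, const Vec3& v, const Vec3& w,
                              const Vec3& t, float out[4][4]) {
             out[0][0] = u.x; out[0][1] = v.x; out[0][2] = w.x; out[0][3] = 0.0f;
             out[1][0] = u.y; out[1][1] = v.y; out[1][2] = w.y; out[1][3] = 0.0f;
             out[2][0] = u.z; out[2][1] = v.z; out[2][2] = w.z; out[2][3] = 0.0f;
             out[3][0] = -Dot(t, u); out[3][1] = -Dot(t, v);
             out[3][2] = -Dot(t, w); out[3][3] = 1.0f;
         }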
  2. Hi. I have been programming DX12 for nearly 6 months now and I think I still have a misunderstanding of swapchain flags and how they affect presentation. Please correct me if I am wrong. My understanding is the following:

     - DX12 only supports the two flip-model swap effect flags, i.e. DXGI_SWAP_EFFECT_FLIP_DISCARD and DXGI_SWAP_EFFECT_FLIP_SEQUENTIAL. My understanding is that neither flag needs a redirection surface, hence the contents of the backbuffers are displayed to the screen directly from the app. DXGI_SWAP_EFFECT_FLIP_DISCARD allows for an option where, if the presentation queue is full and a call to IDXGISwapChain::Present() is made, whatever is at the end of the queue is discarded without ever making it to the screen. Is this correct? DXGI_SWAP_EFFECT_FLIP_SEQUENTIAL inserts the frame to be presented at the end of the queue. Does this mean the queue can only contain one buffer at a time?

     - Neither DXGI_SWAP_EFFECT_FLIP_DISCARD nor DXGI_SWAP_EFFECT_FLIP_SEQUENTIAL supports multisampling, so I have had to set the sample count to 1 and the sample quality to 0 in my swapchain desc structure. My question is how we would add support for multisampling like 4x MSAA if these are the only flags supported in DX12 (see the sketch after this post). I have seen some usages where the sample count was set > 1 and the quality level queried, which leaves me confused. I still haven't tested multisampling as I don't use it in my experimental engine.

     - The waitable swapchain option blocks the presenting thread of the calling application until the specified wait time elapses. But why would we explicitly specify a wait time on the swapchain?

     - Tearing support is provided by the GPU vendor, which allows options like FreeSync and G-Sync to be used. I am using an Intel GPU and I don't really know how to test this.

     - It isn't a requirement for apps to toggle vsync on in a windowed app. This is also confusing: won't screen tearing happen anyway if my app is not synchronized with the next vertical blank?
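     On the MSAA bullet above: a common pattern with flip-model swapchains (a sketch of the usual workaround, not something confirmed in this thread) is to keep the swapchain itself single-sampled, render into a separate 4x MSAA render target, and resolve it into the backbuffer each frame. Assuming an already-recorded command list, a backbuffer, and an msaaTarget created with SampleDesc.Count = 4:

         // The flip-model swapchain itself must stay single-sampled.
         DXGI_SWAP_CHAIN_DESC1 scDesc = {};
         scDesc.Format             = DXGI_FORMAT_R8G8B8A8_UNORM;
         scDesc.SampleDesc.Count   = 1;   // flip model requires 1
         scDesc.SampleDesc.Quality = 0;
         scDesc.BufferCount        = 2;
         scDesc.BufferUsage        = DXGI_USAGE_RENDER_TARGET_OUTPUT;
         scDesc.SwapEffect         = DXGI_SWAP_EFFECT_FLIP_DISCARD;

         // Each frame, after rendering into the 4x MSAA target: transition
         // msaaTarget to RESOLVE_SOURCE and the backbuffer to RESOLVE_DEST,
         // then collapse the samples into the single-sampled backbuffer.
         cmdList->ResolveSubresource(backBuffer, 0, msaaTarget, 0,
                                     DXGI_FORMAT_R8G8B8A8_UNORM);
         // Transition the backbuffer to PRESENT, execute, then Present().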
  3. Obliique

    SwapChain in DX12

    Thank you so much for taking the time to respond. This has cleared things up for me. I read it carefully and will consider these options when I refactor my code.
  4. Obliique

    Memory Leak Detection and DX12

    I am not sure that carries over to DirectX, since it's built around COM. The best I can think of, as mentioned already, is enabling the debug layer and calling ReportLiveObjects() after you've released all your D3D objects, to see whether any live objects remain.
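    A minimal sketch of that check, assuming a debug build with the DXGI debug interface available (dxgidebug.h, linked against dxguid.lib):

        #include <dxgidebug.h>
        #include <wrl/client.h>

        using Microsoft::WRL::ComPtr;

        // Call this after releasing all D3D/DXGI objects; anything still
        // alive is dumped to the debugger output window.
        void ReportLiveDxgiObjects() {
            ComPtr<IDXGIDebug1> dxgiDebug;
            if (SUCCEEDED(DXGIGetDebugInterface1(0, IID_PPV_ARGS(&dxgiDebug)))) {
                dxgiDebug->ReportLiveObjects(
                    DXGI_DEBUG_ALL,
                    DXGI_DEBUG_RLDO_FLAGS(DXGI_DEBUG_RLDO_DETAIL |
                                          DXGI_DEBUG_RLDO_IGNORE_INTERNAL));
            }
        }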
  5. Hi all, I was wondering if DXR has a software implementation like WARP. I ask because I am not using NV hardware. Is it possible to get an app running using DXR on hardware that isn't NVIDIA, on Microsoft's software drivers?
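     For what it's worth, recent versions of WARP do advertise a raytracing tier on newer Windows builds. One way to find out on your machine is to create a device on the WARP adapter and query the tier (a sketch, error paths collapsed):

         #include <d3d12.h>
         #include <dxgi1_4.h>
         #include <wrl/client.h>

         using Microsoft::WRL::ComPtr;

         // Create a device on the WARP (software) adapter and query its DXR tier.
         bool WarpSupportsDxr() {
             ComPtr<IDXGIFactory4> factory;
             if (FAILED(CreateDXGIFactory2(0, IID_PPV_ARGS(&factory)))) return false;

             ComPtr<IDXGIAdapter> warpAdapter;
             if (FAILED(factory->EnumWarpAdapter(IID_PPV_ARGS(&warpAdapter)))) return false;

             ComPtr<ID3D12Device> device;
             if (FAILED(D3D12CreateDevice(warpAdapter.Get(), D3D_FEATURE_LEVEL_11_0,
                                          IID_PPV_ARGS(&device)))) return false;

             D3D12_FEATURE_DATA_D3D12_OPTIONS5 opts5 = {};
             if (FAILED(device->CheckFeatureSupport(D3D12_FEATURE_D3D12_OPTIONS5,
                                                    &opts5, sizeof(opts5)))) return false;

             return opts5.RaytracingTier >= D3D12_RAYTRACING_TIER_1_0;
         }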
  6. Hi, I am wondering: if I have something like a cube with 8 vertices referenced through an index buffer, how would I go about assigning unique vertex normals to each vertex, which I figure means 24 of them? From my current knowledge I think I would need about 24 normals, assigning 4 identical normals to each face, for the lighting to work correctly; for that to work I would need 24 vertices, which eliminates the benefit of an index buffer. I figured vertex averaging was working wrongly here because of the very sharp edges. Is it possible to still use normals on cube geometry with an index buffer such that my vertex count remains 8, or is the only way to use non-indexed geometry with a plain DrawInstanced (DX12)?
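     A sketch of the usual layout (the Vertex struct here is a hypothetical position + normal layout): you do duplicate positions along the hard edges, 24 vertices for a cube, but the index buffer still pays off because each face is 4 vertices referenced by 6 indices instead of 6 standalone vertices:

         // Hypothetical vertex layout: position + normal.
         struct Vertex { float px, py, pz; float nx, ny, nz; };

         // One face of a unit cube: 4 vertices sharing the same face normal.
         // The other 5 faces follow the same pattern, 24 vertices in total.
         Vertex frontFace[4] = {
             { -0.5f, -0.5f, -0.5f,  0.0f, 0.0f, -1.0f },
             { -0.5f,  0.5f, -0.5f,  0.0f, 0.0f, -1.0f },
             {  0.5f,  0.5f, -0.5f,  0.0f, 0.0f, -1.0f },
             {  0.5f, -0.5f, -0.5f,  0.0f, 0.0f, -1.0f },
         };

         // Two triangles per face; for face f with base index b = f * 4:
         //   b, b+1, b+2,  b, b+2, b+3
         uint16_t frontIndices[6] = { 0, 1, 2, 0, 2, 3 };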
  7. I think I will read this properly when I settle down 😄 as it's slightly overwhelming. I am actually trying to read OBJ files into my application, and one file I exported has more vertex normals than there are vertices, which was surprising to me because I have yet to learn how these are grouped together. I don't have knowledge of smoothing groups, but I have used something similar back when I used 3ds Max 🙂 I will look through this, thanks!
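     On the normal-count surprise: in OBJ, positions (v) and normals (vn) are separate lists that face (f) records index independently, so their counts need not match. A hypothetical fragment for one quad:

         v  -0.5 -0.5 -0.5          # positions: one list...
         vn  0.0  0.0 -1.0          # normals: a separate, independent list
         # faces pair them explicitly as position//normal (1-based indices),
         # so the same position can use different normals on different faces:
         f 1//1 2//1 3//1 4//1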
  8. Thanks for the helpful response again, pcmaster. Would you know a better way to handle complicated meshes that have both smooth and sharp edges? Would I need to do away with indices to be safe, or should I somehow detect the angles and keep using an index buffer?
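     A sketch of the crease-angle idea (my own illustration, not something from the thread): average face normals around a position only across faces whose normals are within a threshold angle of each other; corners that end up with different normals become distinct vertices when the index buffer is rebuilt.

         #include <cmath>
         #include <vector>

         struct Vec3 { float x, y, z; };

         Vec3 Add(Vec3 a, Vec3 b) { return { a.x + b.x, a.y + b.y, a.z + b.z }; }
         float Dot(Vec3 a, Vec3 b) { return a.x * b.x + a.y * b.y + a.z * b.z; }
         Vec3 Normalize(Vec3 v) {
             float len = std::sqrt(Dot(v, v));
             return { v.x / len, v.y / len, v.z / len };
         }

         // Normal for one (face, position) corner: average the normals of the
         // adjacent faces that meet this face across a "smooth" edge only.
         // faceNormals: one unit normal per triangle; facesAtVertex: triangle
         // indices touching each position.
         Vec3 CornerNormal(int face, int vertex,
                           const std::vector<Vec3>& faceNormals,
                           const std::vector<std::vector<int>>& facesAtVertex,
                           float creaseAngleRadians) {
             const float cosThreshold = std::cos(creaseAngleRadians);
             Vec3 sum = { 0, 0, 0 };
             for (int other : facesAtVertex[vertex]) {
                 // Includes the face itself plus sufficiently coplanar neighbours.
                 if (Dot(faceNormals[face], faceNormals[other]) >= cosThreshold)
                     sum = Add(sum, faceNormals[other]);
             }
             return Normalize(sum);
         }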