Community Reputation: 271 Neutral

About KarimIO

  1. @Hodgman Any ideas about the question above?
  2. Okay, thank you a lot, Hodgman, I finally got it to work! But I do have a question: my main vertex.hlsl, which takes the actual geometry and pushes it into the G-buffer, requires row-major matrices, whereas the rest works fine with column-major. Do you have any idea why that could be?
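For readers hitting the same convention mismatch: the bytes in the constant buffer never change, only how each side indexes them. A minimal C++ sketch (plain float arrays rather than GLM, purely illustrative) of why a CPU-side transpose compensates for the shader reading the other convention:

```cpp
#include <array>
#include <cassert>

using Mat4 = std::array<float, 16>; // 16 contiguous floats, as uploaded to a cbuffer

// Interpret the same 16 floats as column-major (GLM's convention):
// element (row r, col c) lives at index c*4 + r.
inline float colMajorAt(const Mat4 &m, int r, int c) { return m[c * 4 + r]; }

// Interpret them as row-major: element (row r, col c) lives at index r*4 + c.
inline float rowMajorAt(const Mat4 &m, int r, int c) { return m[r * 4 + c]; }

// Transposing the floats before upload makes a row-major reader
// see exactly the matrix the column-major writer intended (and vice versa).
inline Mat4 transpose(const Mat4 &m) {
    Mat4 t{};
    for (int r = 0; r < 4; ++r)
        for (int c = 0; c < 4; ++c)
            t[r * 4 + c] = m[c * 4 + r];
    return t;
}
```

So a stage that "requires row-major" while the rest works column-major usually means that stage is either missing a transpose, or multiplying `mul(v, M)` where the others use `mul(M, v)` — the two differences cancel each other out.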
  3. I've tried this, but now it's far too zoomed in. Originally, it did look quite like my OpenGL results. Is there a reason for this?

        projection *= glm::translate(glm::vec3(0.0f, 0.0f, 0.5f)) * glm::scale(glm::vec3(1.0f, 1.0f, 0.5f));
  4. Thanks for the quick response. Firstly, GL is column-major whereas DirectX is row-major; I've already had to transpose for my first geometry stage, and it works well. Second, will I need to change my first stage to accommodate this change as well? Also, can I just multiply it as glm::translate(0, 0, 0.5) x projection? EDIT: I've switched to row-major matrices in DirectX using the following:

        #pragma pack_matrix( row_major )

     I guess HLSL just uses column-major by default, hence the pragma. I'm still having the same issues, though. I tried using the following in ViewPosFromDepth:

        float z = depth * 0.5 + 0.5;
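The z-range fix being discussed here is plain arithmetic: GL's NDC depth spans [-1, 1] while Direct3D's spans [0, 1], and pre-multiplying the projection by translate(0, 0, 0.5) * scale(1, 1, 0.5) is exactly the affine remap below. A minimal sketch (function names are illustrative, not from the engine):

```cpp
#include <cassert>

// After the perspective divide, GL clip space has z in [-1, 1];
// Direct3D expects z in [0, 1]. Scaling z by 0.5 then adding 0.5
// is the remap that translate(0,0,0.5) * scale(1,1,0.5) performs.
inline float glDepthToD3D(float zNdc) {
    return zNdc * 0.5f + 0.5f;
}

// The same remap applied before the divide, on raw clip coordinates:
// the translation picks up a factor of w, so z' = 0.5*z + 0.5*w.
inline float glClipZToD3D(float zClip, float wClip) {
    return zClip * 0.5f + wClip * 0.5f;
}
```

Note this only affects z; x and y conventions match, which is why the scale is (1, 1, 0.5). If the result looks "too zoomed in", the suspect is usually applying this remap twice (once in the matrix and once again in the shader).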
  5. I'm trying to port my engine to DirectX, and I'm currently having issues with depth reconstruction. It works perfectly in OpenGL (even though I use a somewhat expensive method). Every part besides the depth reconstruction works so far. I use GLM because it's a good math library that doesn't require the user to install any dependencies. So, basically, I get my GLM matrices:

        struct DefferedUBO {
            glm::mat4 view;
            glm::mat4 invProj;
            glm::vec4 eyePos;
            glm::vec4 resolution;
        };

        DefferedUBO deffUBOBuffer;
        // ...
        glm::mat4 projection = glm::perspective(engine.settings.fov, aspectRatio, 0.1f, 100.0f);

        // Get my camera.
        CTransform *transform = &engine.transformSystem.components[engine.entities[entityID].components[COMPONENT_TRANSFORM]];

        // Get the view matrix.
        glm::mat4 view = glm::lookAt(
            transform->GetPosition(),
            transform->GetPosition() + transform->GetForward(),
            transform->GetUp()
        );

        deffUBOBuffer.invProj = glm::inverse(projection);
        deffUBOBuffer.view = glm::inverse(view);
        if (engine.settings.graphicsLanguage == GRAPHICS_DIRECTX) {
            deffUBOBuffer.invProj = glm::transpose(deffUBOBuffer.invProj);
            deffUBOBuffer.view = glm::transpose(deffUBOBuffer.view);
        }

        // Abstracted so I can use OGL, DX, VK, or even Metal when I get around to it.
        deffUBO->UpdateUniformBuffer(&deffUBOBuffer);
        deffUBO->Bind();

     Then, in HLSL, I simply use the following:

        cbuffer MatrixInfoType {
            matrix invView;
            matrix invProj;
            float4 eyePos;
            float4 resolution;
        };

        float4 ViewPosFromDepth(float depth, float2 TexCoord) {
            float z = depth; // * 2.0 - 1.0;
            float4 clipSpacePosition = float4(TexCoord * 2.0 - 1.0, z, 1.0);
            float4 viewSpacePosition = mul(invProj, clipSpacePosition);
            viewSpacePosition /= viewSpacePosition.w;
            return viewSpacePosition;
        }

        float3 WorldPosFromViewPos(float4 view) {
            float4 worldSpacePosition = mul(invView, view);
            return worldSpacePosition.xyz;
        }

        float3 WorldPosFromDepth(float depth, float2 TexCoord) {
            return WorldPosFromViewPos(ViewPosFromDepth(depth, TexCoord));
        }

        // ...

        // Sample the hardware depth buffer.
        float depth = shaderTexture[3].Sample(SampleType[0], input.texCoord).r;
        float3 position = WorldPosFromDepth(depth, input.texCoord).rgb;

     Here's the result: [screenshot] This just looks like random colors multiplied with the depth. Ironically, when I remove the transposing, I get something closer to the truth, but not quite: [screenshot] You're looking at Crytek Sponza. As you can see, the green area moves and rotates with the bottom of the camera. I have no idea at all why. [screenshots: the correct version, along with albedo, specular, and normals]
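For debugging cases like this, the shader's ViewPosFromDepth can be mirrored on the CPU and fed known values. A self-contained sketch with hand-rolled matrix math (identifiers are hypothetical; a real invProj would come from glm::inverse of the projection):

```cpp
#include <array>
#include <cassert>

using Vec4 = std::array<float, 4>;
using Mat4 = std::array<std::array<float, 4>, 4>; // m[row][col]

// Column-vector convention: out = m * v, matching the shader's
// mul(invProj, clipSpacePosition) when the matrix is packed column-major.
inline Vec4 mul(const Mat4 &m, const Vec4 &v) {
    Vec4 out{};
    for (int r = 0; r < 4; ++r)
        for (int c = 0; c < 4; ++c)
            out[r] += m[r][c] * v[c];
    return out;
}

// CPU mirror of ViewPosFromDepth: lift (texCoord, depth) into clip space,
// apply the inverse projection, then the perspective divide.
// In D3D the sampled depth is already in [0, 1], so no "* 2 - 1" on z,
// but the texcoord still needs the [0, 1] -> [-1, 1] remap.
inline Vec4 viewPosFromDepth(const Mat4 &invProj, float depth,
                             float u, float v) {
    Vec4 clip{u * 2.0f - 1.0f, v * 2.0f - 1.0f, depth, 1.0f};
    Vec4 view = mul(invProj, clip);
    float w = view[3];
    for (float &x : view) x /= w;
    return view;
}
```

Checking a few pixels this way against the shader output quickly separates "wrong matrix contents" (transpose/packing issues) from "wrong depth range" (the [0, 1] vs [-1, 1] remap).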
  6. Another bump. Still no solution. Also, it works perfectly on Linux.
  7. @Hodgman @TheChubu Sorry but buuump. Any ideas, guys?
  8. DX11 Trying with SSAO

    Are you sure all the inputs are in the correct space?
  9. Yeah, the debug output. I only get info messages and one low-severity warning, which I think is just giving me buffer sizes. The latter shows up every frame. Like I said, it's glBindTexture, but only when used with a framebuffer texture. I've checked its creation a hundred times over and don't think there are any problems.
  10. Like I said, the only issue comes from glBindTexture on a framebuffer texture. Regular textures work fine. Blitting works fine too, so I know it's not an issue of populating the framebuffer asynchronously. I used breakpoints to figure out the timing and what causes the issue. Keep in mind vsync is on, so that's why there's not much work to be done. The scene is a simple Crytek Sponza with no lighting yet (it's enabled on my Intel GPU, but I disabled it for now), so there aren't many commands. I haven't multithreaded anything yet, so it shouldn't matter if it's idle. This happens no matter how many times I restart, so it's not an issue of Nvidia working on something else. The rest of the frame takes 12 ms due to vsync. Also, I had all this and so much more running before I improved the rendering architecture (I wrote my own parser and exporter for faster loading, redesigned the rendering wrappers to make Vulkan work better, and made everything draw based on shader and material first rather than per object). Edit: Oh, and thank you so much for helping me so far!
  11. 3D Help! How to make "Fog of War"

    It largely depends on how your game works. Does it use a grid system like Age of Empires (which uses a very fine grid, IIRC), a hex system like Civilization, or is everything freely positioned? For a grid-based solution, you can cull all objects in hidden grid cells and simply draw a hex- or grid-shaped fog tile over them. Otherwise, you'll probably need to render based on distance fields, or keep a texture that starts black and render circles near buildings and units to "remove the fog", then overlay it on the terrain to hide it. I'm not a strategy game developer, but this is my best advice.
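The black-texture-plus-circles idea above can be sketched as a coarse CPU-side visibility mask; grid size and radius here are illustrative, and a real implementation would upload the mask to a texture each frame (or only when units move):

```cpp
#include <vector>

// A coarse grid that starts fully fogged; circles are "stamped" around
// each unit or building to clear the fog, then the grid is overlaid on
// the terrain (e.g. as a darkening texture).
struct FogOfWar {
    int w, h;
    std::vector<bool> visible; // true = fog removed at this cell

    FogOfWar(int w_, int h_) : w(w_), h(h_), visible(w_ * h_, false) {}

    // Clear fog in a circle of the given radius around (cx, cy), in cells.
    void revealCircle(float cx, float cy, float radius) {
        for (int y = 0; y < h; ++y)
            for (int x = 0; x < w; ++x) {
                float dx = x - cx, dy = y - cy;
                if (dx * dx + dy * dy <= radius * radius)
                    visible[y * w + x] = true;
            }
    }

    bool isVisible(int x, int y) const { return visible[y * w + x]; }
};
```

For "explored but not currently seen" areas (the classic grey fog), the same idea extends to two masks: one permanent "explored" grid and one rebuilt per frame for current line of sight.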
  12. DX11 3D Model Coding Question

    I'm not sure about the DX specifics, but I think it's pretty much the same as in OpenGL. You upload the bone weights and bone IDs per vertex, then upload each bone's matrix in a uniform buffer (a constant buffer in DX). These matrices are obtained by interpolating between the matrices of the animation keyframes, based on the current time. There are plenty of tutorials online if you need in-depth info.
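The per-vertex weighting described above is standard linear blend skinning. A minimal CPU sketch (hand-rolled matrices and hypothetical names, purely to show the weighted sum a skinning vertex shader performs):

```cpp
#include <array>
#include <cassert>
#include <vector>

using Vec4 = std::array<float, 4>;
using Mat4 = std::array<std::array<float, 4>, 4>; // m[row][col]

inline Vec4 mul(const Mat4 &m, const Vec4 &v) {
    Vec4 out{};
    for (int r = 0; r < 4; ++r)
        for (int c = 0; c < 4; ++c)
            out[r] += m[r][c] * v[c];
    return out;
}

inline Mat4 translation(float x, float y, float z) {
    Mat4 m{};
    for (int i = 0; i < 4; ++i) m[i][i] = 1.0f;
    m[0][3] = x; m[1][3] = y; m[2][3] = z;
    return m;
}

// Linear blend skinning: each vertex stores up to four bone indices and
// weights (summing to 1); the skinned position is the weighted sum of the
// position transformed by each referenced bone's matrix.
inline Vec4 skin(const Vec4 &pos,
                 const std::array<int, 4> &boneIds,
                 const std::array<float, 4> &weights,
                 const std::vector<Mat4> &bones) {
    Vec4 out{};
    for (int i = 0; i < 4; ++i) {
        if (weights[i] == 0.0f) continue;
        Vec4 p = mul(bones[boneIds[i]], pos);
        for (int c = 0; c < 4; ++c) out[c] += weights[i] * p[c];
    }
    return out;
}
```

In a shader, `bones` lives in the constant buffer and `boneIds`/`weights` come in as vertex attributes; the keyframe interpolation that produces each bone matrix happens on the CPU before upload.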
  13. You could use AssImp and then output to your own proprietary format.
  14. Okay, I'll try to figure it out tomorrow, as I've never used it before and it looks super confusing.
  15. Sorry guys, but bump. I still can't find an answer, nor anyone with this issue.