ChuckNovice

Member
  • Content count

    9
  • Joined

  • Last visited

Community Reputation

105 Neutral

About ChuckNovice

  • Rank
    Newbie

Personal Information

  • Interests
    Programming
  1. Good work, I've seen a few of your topics about this engine in the past, and they helped me make decisions about how to build my own abstraction layer in my project. It's not only a nice project but also a good source of information.
  2. I wouldn't be surprised if the 960M doesn't support that 11.3 feature. Conservative rasterization is still "recent". Have you checked whether your card supports it with "CheckFeatureSupport"? https://msdn.microsoft.com/en-us/library/windows/desktop/dn770364(v=vs.85).aspx
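    A minimal sketch of that query, assuming a D3D11.3-capable device is already created (the function name here is illustrative; the feature struct and tier enum are the real D3D11 ones):

```cpp
#include <d3d11_3.h>

// Hedged sketch: query whether the device supports conservative rasterization
// via ID3D11Device::CheckFeatureSupport with the D3D11_OPTIONS2 feature block.
bool SupportsConservativeRasterization(ID3D11Device* device)
{
    D3D11_FEATURE_DATA_D3D11_OPTIONS2 options = {};
    HRESULT hr = device->CheckFeatureSupport(
        D3D11_FEATURE_D3D11_OPTIONS2, &options, sizeof(options));
    return SUCCEEDED(hr) &&
           options.ConservativeRasterizationTier !=
               D3D11_CONSERVATIVE_RASTERIZATION_NOT_SUPPORTED;
}
```

    The tier value also tells you which of the three conservative rasterization tiers the hardware implements, not just a yes/no.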
  3. Hello zmic, I have absolutely no experience doing what you are trying to do (copying data directly to the swap chain buffer), but I noticed one thing in your last comment after reading what clemensx said. I'm not sure if clemensx meant that the formats have to be exactly the same or only that the byte alignment of the formats must match. If the formats really have to be identical, are you sure that the buffer of your swap chain is not actually B8G8R8A8_UNORM instead of R8G8B8A8_UNORM? I've seen swap chains using the B8G8R8A8_UNORM, R11G11B10_FLOAT and R16G16B16A16_FLOAT formats, but never R8G8B8A8_UNORM like common textures. I've also had to do the same thing as you in the past, and I did it with an extra draw call and a quad covering the whole buffer, after reading everywhere that it was normal to have to do such a thing.
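    The extra-draw-call approach mentioned above can be done without even a vertex buffer, using the common fullscreen-triangle trick. A hedged HLSL sketch, where "SourceTexture" and "LinearSampler" are assumed names bound by the application:

```hlsl
Texture2D SourceTexture : register(t0);
SamplerState LinearSampler : register(s0);

struct VSOut { float4 pos : SV_Position; float2 uv : TEXCOORD0; };

// One triangle generated from SV_VertexID covers the whole render target;
// draw 3 vertices with no input layout.
VSOut VSMain(uint id : SV_VertexID)
{
    VSOut o;
    o.uv  = float2((id << 1) & 2, id & 2);
    o.pos = float4(o.uv * float2(2, -2) + float2(-1, 1), 0, 1);
    return o;
}

float4 PSMain(VSOut i) : SV_Target
{
    return SourceTexture.Sample(LinearSampler, i.uv);
}
```

    Since the sample goes through the pixel shader, the source and swap chain formats no longer need to match at all.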
  4. DX11 Shadow Map Details

    It is hard to tell what you got as a result. Could you modify your shader to output the actual texture data instead of those frac() operations? In the end your depth buffer is supposed to look something like this (the depth value is simply a shade of red): You can also use the Visual Studio graphics debugger to visualize the textures that you loaded, so you don't need to write a shader and tons of code just to see your result. Also be aware that since you used a perspective projection you are actually simulating a spot light, not a directional light.
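    For reference, a pixel shader that outputs the raw depth as a shade of red could look like this ("DepthMap" and "PointSampler" are assumed names, not from the original code):

```hlsl
Texture2D DepthMap : register(t0);
SamplerState PointSampler : register(s0);

// Hedged sketch: visualize the shadow map directly instead of frac() bands.
float4 PSMain(float4 pos : SV_Position, float2 uv : TEXCOORD0) : SV_Target
{
    float depth = DepthMap.Sample(PointSampler, uv).r;
    return float4(depth, 0, 0, 1); // raw depth in the red channel
}
```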
  5. Animating characters on sloping ground

    I don't think there is any "standard" way, and as far as I know things can get very fancy here. Take for example HumanIK, which dynamically adjusts the model and has been used in many AAA games:
  6. DX11 Shadow Map Details

    Hello Bartosz, I am not a DX god, but in the end this is how I understood it. You are wondering what the position of the light should be because you want to position the camera correctly for the depth pass, right? In my case I had to either assume the bounds of my scene or pre-calculate them. Knowing the bounds of my scene, I was able to choose a position for my light that guarantees no 3D object is behind the projection. Knowing the bounds, we can also calculate the minimum near/far range for the projection, which keeps the depth precision optimal while still keeping all the scene objects inside the projection, and we can figure out the left/top/right/bottom of the ortho projection so it covers the whole scene. Of course it requires some math. Your lookAt can safely be the position that you gave to the light minus the light direction; lookAt is always a position your camera points at relative to its current position. Also, as mentioned above, an orthographic projection should be used for directional lights, since a directional light mimics a light so far away that all its rays appear to travel in the same direction, and that is exactly what an orthographic projection does.

    First of all, I've never heard of a 16-bit depth format. I either use DXGI_FORMAT_R32_FLOAT for full depth precision or DXGI_FORMAT_R24_UNORM_X8_TYPELESS when I need depth + stencil. See this link: https://msdn.microsoft.com/en-us/library/windows/desktop/ff476464(v=vs.85).aspx When you set your render target, you specify both the render target AND the depth buffer to use. So your render target may be an R8G8B8A8_UNORM texture, but your depth buffer must also be set to your R16_UNORM texture. The depth is written to the depth buffer while the pixel shader writes colors to the R8G8B8A8_UNORM texture; those are two completely different textures.
    It is possible to omit the render target and specify only a depth buffer when you only want a depth-only pass with a vertex shader, which is probably what you should do when generating the depth map for your shadows.
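    The bounds-to-projection step above can be sketched in plain C++. This assumes the scene AABB has already been transformed into light view space; the struct and function names are illustrative, not from any particular engine:

```cpp
// Hedged sketch: derive directional-light orthographic projection parameters
// from a scene bounding box expressed in light view space.
struct Bounds3 { float minX, minY, minZ, maxX, maxY, maxZ; };
struct OrthoParams { float left, right, bottom, top, nearZ, farZ; };

OrthoParams OrthoFromLightSpaceBounds(const Bounds3& b)
{
    OrthoParams p;
    p.left   = b.minX;  p.right = b.maxX;  // cover the scene horizontally
    p.bottom = b.minY;  p.top   = b.maxY;  // and vertically
    p.nearZ  = b.minZ;  p.farZ  = b.maxZ;  // tightest near/far keeps depth precision
    return p;
}
```

    Feeding these values to something like XMMatrixOrthographicOffCenterLH gives a projection that tightly encloses the scene, which is what keeps the shadow map's depth precision optimal.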
  7. Class linkage system with DX12?

    I will look into that, thank you for taking the time.
  8. Class linkage system with DX12?

    Hello galop1n, Thank you for the answer, I will definitely keep those new 6.1 features in mind. Is there a link that references what you are talking about? I can't find much about shader model 6.1 other than all those Wave...() functions that were added. Also, am I right to think that such a library system would work this way?
    - A shader can contain a big library of functions and be compiled without an entry point.
    - The bytecode of that function library can later be linked into other shaders, to avoid re-compiling the library every time from #include directives.
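    If I understand it correctly, that workflow roughly matches DXIL libraries in the newer dxc compiler. A hedged sketch ("MyLibrary.hlsl" is a hypothetical file; lib_6_3 is one of dxc's library target profiles, and the exact linking workflow depends on your dxc version):

```
# Compile a library of functions once, with no entry point.
dxc -T lib_6_3 MyLibrary.hlsl -Fo MyLibrary.dxil
```

    The resulting DXIL library can then be linked against other shaders instead of re-including and re-compiling the source every time.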
  9. Hello, I am fairly experienced with DX11 and recently gave myself the challenge of keeping up to date with graphics APIs, so I am learning DX12 in parallel. After compiling one of my old DX11 shaders that used interfaces / classes as a hack to fake function pointers, I got a nice message in the output saying that DX12 doesn't support interfaces in shaders. Am I right to assume that this whole class linkage system was pure candy from the DX11 drivers, and that it used multiple root signatures behind the scenes to achieve this? I want to be sure before I switch everything around and start handling all that stuff manually. These are the interfaces / classes that I am talking about: https://msdn.microsoft.com/en-us/library/windows/desktop/ff471421(v=vs.85).aspx Thanks for your time
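    For readers unfamiliar with the feature, the DX11 interface/class hack being discussed looks roughly like this (the names here are illustrative, not from the original shader):

```hlsl
// Hedged sketch of DX11 dynamic shader linkage (interfaces/classes in HLSL).
interface ILight
{
    float3 Shade(float3 normal);
};

class DirectionalLight : ILight
{
    float3 Direction;
    float3 Color;
    float3 Shade(float3 normal)
    {
        return Color * saturate(dot(normal, -Direction));
    }
};

// On DX11 the concrete class bound to this interface instance was selected
// at shader-creation time through ID3D11ClassLinkage; DX12 removed this.
ILight gLight;

float4 PSMain(float3 normal : NORMAL) : SV_Target
{
    return float4(gLight.Shade(normalize(normal)), 1.0);
}
```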