DX11 [SharpDX] Questions about DirectX11

Recommended Posts

I've been trying to convert a dx9 project of mine to dx11, and there are some concepts that are still confusing to me.

All of the samples I have seen use texture arrays for multiple textures, and they use PixelShader.SetShaderResource(0, myTextureView) to set the variously named texture global variable in the shader; let's say it's called 'picture' or whatever. Since there is only one texture resource in the shader in these samples, there is no ambiguity (I guess), but what if you do need multiple texture resources? Will it organize them in order by slot number?

For example, I have a simple water plane shader in my dx9 project that, instead of using fog in the distance, blends with the same cubemap that I use for the sky box. So in this shader I need a texture array for the cubemap, plus another texture for the water ripples. So, if in my shader I have the globals:
[source lang="plain"]Texture2D skyTexture;
Texture2D waterTexture;[/source]
Will the texture parameters be filled in the order I call the SetShaderResource function? In slot order? How does it know which texture view to put into which variable?

I'm still a little confused about just what a 'slot' is. It's kind of like the different vertex streams in dx9, but not really. There's a limited number of slots, so there's a limited number of each type of resource, right? I'm sure I don't get it.

You have to specify slots in the shader; that's how it knows. For example:

[source lang="plain"]SamplerState sampler0 : register(s0);
Texture2D texNormal : register(t0);
Texture2D texDepth : register(t1);[/source]
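
The matching SharpDX calls for those declarations would be something like this (a minimal sketch; the context, sampler, and views are assumed to have been created elsewhere, and the variable names are made up):

[source lang="csharp"]
// Bind each resource to the slot matching its explicit register above.
context.PixelShader.SetSampler(0, linearSampler);        // s0 -> sampler0
context.PixelShader.SetShaderResource(0, normalMapView); // t0 -> texNormal
context.PixelShader.SetShaderResource(1, depthMapView);  // t1 -> texDepth
[/source]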

You don't have to specify registers in the shader. It can be convenient to do so if you want to be explicit, but it's not required. If you don't assign one yourself, the compiler will bind every used resource to a register based on the resource type. There are 16 "s" registers for samplers, 128 "t" registers for shader resource views, 14 "b" registers for constant buffers, and 8 "u" registers for unordered access views. These registers then correspond to the "slots" that you specify in API calls like PSSetShaderResources. So if a texture is bound to register t2, then in your app code you'll want to bind your shader resource view to slot 2.

If you're going to hard-code the slots in your app code, then you'll probably want to explicitly specify the registers in your shader code. Otherwise you might run into cases where the compiler doesn't allocate the register that you think it will, often because a resource doesn't end up getting used in the shader and therefore gets optimized out entirely.
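
As a sketch of that pairing (hypothetical resource names; the buffers and view are assumed to exist already), explicit registers in the shader make hard-coded slots in the app code safe:

[source lang="csharp"]
// HLSL side, with explicit registers so the compiler can't reassign them:
//   cbuffer PerFrame  : register(b0) { ... };
//   cbuffer PerObject : register(b1) { ... };
//   Texture2D diffuse : register(t2);
context.PixelShader.SetConstantBuffer(0, perFrameBuffer);  // b0
context.PixelShader.SetConstantBuffer(1, perObjectBuffer); // b1
context.PixelShader.SetShaderResource(2, diffuseView);     // t2, the slot-2 case above
[/source]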

Also, in case you want to access a resource by the name of its variable in your shader and get the slot associated with that name, you can query these slots using D3DReflect (via ID3D11ShaderReflection, or ShaderReflection in SharpDX) and the GetResourceBindingDescription() method. This is what the legacy D3DX Effect system uses. Note that the reflection API is not a certified API for Windows Store apps, so if you really need this access at runtime, you will have to store the slots alongside the bytecode of your shader.
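
A rough sketch of that lookup in SharpDX (assuming the SharpDX.D3DCompiler assembly is referenced and psBytecode holds the compiled pixel shader as a byte array):

[source lang="csharp"]
using SharpDX.D3DCompiler;
using System.Collections.Generic;

// Build a name -> slot map once, from the compiled bytecode.
var slotByName = new Dictionary<string, int>();
using (var reflection = new ShaderReflection(psBytecode))
{
    for (int i = 0; i < reflection.Description.BoundResources; i++)
    {
        InputBindingDescription binding = reflection.GetResourceBindingDescription(i);
        slotByName[binding.Name] = binding.BindPoint; // e.g. "waterTexture" -> 1
    }
}
// Later: context.PixelShader.SetShaderResource(slotByName["waterTexture"], waterView);
[/source]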

While it can be practical to hardcode these specific register slots in the original shader, you then have less opportunity to combine includes/shaders, since you can get conflicts: for example, if an include TextureA.h declares a TextureA mapped to register t0, and TextureB.h declares a TextureB mapped to register t0, the HLSL compiler will fail to compile a shader that uses both variables in the same stage. That said, assigning a specific register can be handy in cases where you want a specific resource to be accessible at a specific slot (for example, a particular constant buffer used across all your shaders, etc.), while the other variables are assigned automatically to the available registers.
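
To make the conflict concrete, here is a sketch (hypothetical shader source; depending on the SharpDX version, the compiler error is thrown as an exception or returned in the compilation result):

[source lang="csharp"]
using SharpDX.D3DCompiler;

// Both declarations claim t0 and both are used, so compilation fails
// with a register-overlap error instead of silently picking one.
const string source = @"
    Texture2D TextureA : register(t0); // as if included from TextureA.h
    Texture2D TextureB : register(t0); // as if included from TextureB.h
    SamplerState samp : register(s0);
    float4 main(float2 uv : TEXCOORD0) : SV_TARGET
    {
        return TextureA.Sample(samp, uv) + TextureB.Sample(samp, uv);
    }";
var result = ShaderBytecode.Compile(source, "main", "ps_5_0"); // reports the t0 conflict
[/source]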

Thanks everyone. So in the SharpDX case of PixelShader.SetShaderResource(0, myTextureView), the function specifically requires a slot number.

If the effects framework is legacy, I probably don't want to become dependent on it.

So let me get this straight. Let's say I set 3 constant buffers with these API calls:

[source lang="csharp"]
PixelShader.SetConstantBuffer(0,cb0);
PixelShader.SetConstantBuffer(1,cb1);
PixelShader.SetConstantBuffer(2,cb2);
[/source]

If I don't specify registers in the shader code, and cb1 is not used and optimized away, then cb2 would end up in register b1? I still don't understand how it's possible to know which buffer goes into which global variable, especially if they happen to be the same size. Perhaps it's the order that the names appear in the shader file?

If you set a constant buffer to slot 2, it gets bound to register b2. No exceptions.

What we were talking about was how the compiler assigns registers to resources declared in your shader code. In general the compiler will assign in order of declaration, but this isn't really something you can rely on. If you're not going to assign registers in your code, then you probably want to use the reflection interfaces to query the register based on the resource name and type.
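
Following that advice, a small sketch (hypothetical helper name and cbuffer name; same SharpDX reflection API mentioned earlier in the thread):

[source lang="csharp"]
using SharpDX.D3DCompiler;

// Returns the register the compiler actually assigned to a named resource,
// or -1 if the resource was optimized out of the shader entirely.
static int GetBindSlot(byte[] bytecode, string resourceName)
{
    using (var reflection = new ShaderReflection(bytecode))
    {
        for (int i = 0; i < reflection.Description.BoundResources; i++)
        {
            var binding = reflection.GetResourceBindingDescription(i);
            if (binding.Name == resourceName)
                return binding.BindPoint;
        }
    }
    return -1; // declared but never used, so it got no register
}

// Usage, with a cbuffer declared as "cbuffer PerObject { ... };" in HLSL:
// int slot = GetBindSlot(psBytecode, "PerObject");
// if (slot >= 0) PixelShader.SetConstantBuffer(slot, perObjectBuffer);
[/source]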
