DX11: How to generate mipmaps for a texture



Hi all,

 

Here is my case:

An empty 2D texture is created and then filled with image data that contains no mipmap information.
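In code, the situation is roughly the following (a minimal sketch; the format, size, and variable names are placeholders, since the actual code isn't shown):

// Create a plain sampled texture with only the top mip level.
D3D11_TEXTURE2D_DESC desc = {};
desc.Width            = 256;                         // placeholder size
desc.Height           = 256;
desc.MipLevels        = 1;                           // just level 0, no mip chain
desc.ArraySize        = 1;
desc.Format           = DXGI_FORMAT_R8G8B8A8_UNORM;  // placeholder format
desc.SampleDesc.Count = 1;
desc.Usage            = D3D11_USAGE_DEFAULT;
desc.BindFlags        = D3D11_BIND_SHADER_RESOURCE;  // not a render target

ID3D11Texture2D* texture = nullptr;
device->CreateTexture2D(&desc, nullptr, &texture);

// Later, fill mip level 0 from a block of memory; there is no mip data.
context->UpdateSubresource(texture, 0, nullptr, pixels, rowPitch, 0);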

 

I want to generate mipmaps for this texture.

This is DX11. I see there is a method called "GenerateMips" that could do some sort of job.

But "GenerateMips" requires the "D3D11_RESOURCE_MISC_GENERATE_MIPS" flag, which is only valid if the texture is created as both a render target and a shader resource.

My texture is just used for texture mapping; it's not a render target, of course.

 

So how can I generate mipmaps using the DX11 API?
(I know I can generate mipmaps in CPU code, but I just want to know the professional way of doing it.)

 

Thanks


So how can I generate mipmaps using the DX11 API?
(I know I can generate mipmaps in CPU code, but I just want to know the professional way of doing it.)

 

If the DX11 API doesn't provide that functionality, you'll have to do it yourself. It's actually a great opportunity to implement mipmaps that don't blend the background into the edges of sprite textures. Odds are that either way (your code or DX11 code) the work will be done by the CPU, not the GPU.
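For example, one 2:1 downsample step of the chain could look like this (a minimal sketch for an RGBA8 image, assuming even power-of-two dimensions; real code would also handle odd sizes, sRGB, and the sprite-edge/alpha handling mentioned above):

#include <cstdint>
#include <vector>

// One 2:1 box-filter downsample step for an RGBA8 image stored as
// 4 bytes per pixel, row-major. Assumes width and height are even.
std::vector<uint8_t> DownsampleRGBA8(const std::vector<uint8_t>& src,
                                     int width, int height)
{
    const int dstW = width / 2;
    const int dstH = height / 2;
    std::vector<uint8_t> dst(dstW * dstH * 4);

    for (int y = 0; y < dstH; ++y)
        for (int x = 0; x < dstW; ++x)
            for (int c = 0; c < 4; ++c)
            {
                // Average the 2x2 block of source texels.
                int sum = src[((y * 2)     * width + (x * 2))     * 4 + c]
                        + src[((y * 2)     * width + (x * 2 + 1)) * 4 + c]
                        + src[((y * 2 + 1) * width + (x * 2))     * 4 + c]
                        + src[((y * 2 + 1) * width + (x * 2 + 1)) * 4 + c];
                dst[(y * dstW + x) * 4 + c] = static_cast<uint8_t>(sum / 4);
            }
    return dst;
}

Each result feeds the next step until a 1x1 level is reached, and each level is uploaded into the matching mip subresource with UpdateSubresource.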


- If you have Photoshop, you can generate the mipmap chain using the NVIDIA DDS plugin:

  https://developer.nvidia.com/nvidia-texture-tools-adobe-photoshop

- For mipmap generation, you could integrate NVIDIA Texture Tools into your pipeline: http://code.google.com/p/nvidia-texture-tools/

- If you are looking for a general how-to on generating and loading the chain, I would suggest checking out the replacement code for the D3DX texture-loading functions:

  (This uses WIC (Windows Imaging Components), and I would not touch it with a 10 ft pole; however, it does show you the steps needed to load the mipmap chain, etc.)

  http://msdn.microsoft.com/en-us/library/windows/apps/jj651550.aspx

- I think MJP's framework supports this, along with Hieroglyph, if you feel like diving through their code:

  (I think Hieroglyph integrated the texture code from the Microsoft example when D3DX was phased out.)

  http://mynameismjp.wordpress.com/

  http://hieroglyph3.codeplex.com/


- I think MJP's framework supports this, along with Hieroglyph, if you feel like diving through their code:

(I think Hieroglyph integrated the texture code from the Microsoft example when D3DX was phased out.)

http://mynameismjp.wordpress.com/

http://hieroglyph3.codeplex.com/

Hieroglyph uses DirectXTK now that D3DX is deprecated, which is the same WIC method that you mentioned above.  Actually there is an alternative to WIC available in DirectXTK too, so you should take a look and see if it will work for you.

 

But the OP was specifically talking about generating the texture at runtime and then producing the mipmaps. @jerrycao_1985: Have you tried creating your texture with the render target bind flag, even if you won't use it as a render target? It would be worth a try to see if it allows you to generate your mips that way. Otherwise, what the others have said is right: you will probably have to generate the mips on the CPU...
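Concretely, that approach would look something like this (a sketch, untested here; the format, size, and variable names are placeholders):

// Create the texture with a full mip chain and the flags GenerateMips needs.
D3D11_TEXTURE2D_DESC desc = {};
desc.Width            = 256;  // placeholder size
desc.Height           = 256;
desc.MipLevels        = 0;    // 0 = allocate the complete mip chain
desc.ArraySize        = 1;
desc.Format           = DXGI_FORMAT_R8G8B8A8_UNORM;
desc.SampleDesc.Count = 1;
desc.Usage            = D3D11_USAGE_DEFAULT;
desc.BindFlags        = D3D11_BIND_SHADER_RESOURCE | D3D11_BIND_RENDER_TARGET;
desc.MiscFlags        = D3D11_RESOURCE_MISC_GENERATE_MIPS;

ID3D11Texture2D* texture = nullptr;
device->CreateTexture2D(&desc, nullptr, &texture);

// Upload the source pixels into mip level 0 only.
context->UpdateSubresource(texture, 0, nullptr, pixels, rowPitch, 0);

// Create an SRV over the whole chain and let the GPU fill the lower mips.
ID3D11ShaderResourceView* srv = nullptr;
device->CreateShaderResourceView(texture, nullptr, &srv);
context->GenerateMips(srv);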


Yes.

- If he is targeting Direct3D 11 with non-Direct3D 11-conformant hardware, he will have to generate the pyramid on the CPU, which is why I suggested looking at the NVIDIA Texture Tools library.

- If he is targeting Direct3D 11 with Direct3D 11-conformant hardware, he can use compute shaders (and apply compression as needed):

http://code.msdn.microsoft.com/windowsdesktop/BC6HBC7-DirectCompute-35e8884a
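On the C++ side, the per-mip dispatch could look roughly like this (a sketch under several assumptions: downsampleCS is a compiled compute shader with an 8x8 thread group reading Texture2D t0 and writing RWTexture2D u0, and scratch is a second texture with the same format and mip chain created with D3D11_BIND_UNORDERED_ACCESS, used to avoid binding the same resource for read and write at once):

for (UINT mip = 1; mip < mipLevels; ++mip)
{
    const UINT w = (width  >> mip) ? (width  >> mip) : 1;
    const UINT h = (height >> mip) ? (height >> mip) : 1;

    // SRV that exposes only the previous mip of the source texture.
    D3D11_SHADER_RESOURCE_VIEW_DESC srvDesc = {};
    srvDesc.Format                    = DXGI_FORMAT_R8G8B8A8_UNORM;
    srvDesc.ViewDimension             = D3D11_SRV_DIMENSION_TEXTURE2D;
    srvDesc.Texture2D.MostDetailedMip = mip - 1;
    srvDesc.Texture2D.MipLevels       = 1;
    ID3D11ShaderResourceView* srcView = nullptr;
    device->CreateShaderResourceView(texture, &srvDesc, &srcView);

    // UAV on the matching mip of the scratch texture.
    D3D11_UNORDERED_ACCESS_VIEW_DESC uavDesc = {};
    uavDesc.Format             = DXGI_FORMAT_R8G8B8A8_UNORM;
    uavDesc.ViewDimension      = D3D11_UAV_DIMENSION_TEXTURE2D;
    uavDesc.Texture2D.MipSlice = mip;
    ID3D11UnorderedAccessView* dstView = nullptr;
    device->CreateUnorderedAccessView(scratch, &uavDesc, &dstView);

    context->CSSetShader(downsampleCS, nullptr, 0);
    context->CSSetShaderResources(0, 1, &srcView);
    context->CSSetUnorderedAccessViews(0, 1, &dstView, nullptr);
    context->Dispatch((w + 7) / 8, (h + 7) / 8, 1);

    // Unbind, then copy the finished level back into the real texture.
    ID3D11ShaderResourceView*  nullSRV = nullptr;
    ID3D11UnorderedAccessView* nullUAV = nullptr;
    context->CSSetShaderResources(0, 1, &nullSRV);
    context->CSSetUnorderedAccessViews(0, 1, &nullUAV, nullptr);
    context->CopySubresourceRegion(texture, mip, 0, 0, 0, scratch, mip, nullptr);
    srcView->Release();
    dstView->Release();
}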

 

I have not tried applying render target flags to a texture that I'm uploading data to from a staging resource before (never had the need).

I would be intrigued to hear if this works. If it does, this is of course the simplest line to follow; however, you then miss out on writing a bunch of code that will be useful to you for as long as you continue to render (as long as you don't write it using DirectCompute).


I would be intrigued to hear if this works. If it does, this is of course the simplest line to follow; however, you then miss out on writing a bunch of code that will be useful to you for as long as you continue to render (as long as you don't write it using DirectCompute).

Maybe as a learning exercise, but I haven't ever had the need to generate my own mipmaps... Can you think of other cases where this would be beneficial code to write? It seems like a good thing to use a library for, but that is only my opinion.
