DX11 Fastest way to draw Quads?


Recommended Posts

I am working on a DX11 2D engine. What is the fastest way to draw lots of quads? Currently I create an immutable vertex buffer with the quad geometry during initialization, and then draw the quads using instancing. However, there are multiple ways to do this:

- triangle list (6 vertices in the immutable vertex buffer) or triangle strip (4 vertices in the immutable vertex buffer)
- with or without an index buffer

What should I choose? A sketch of the variants I'm comparing is below. It seems to me that triangle strips and indices have more to do with memory/bandwidth optimization for large or dynamic models, so couldn't they actually hurt raw rendering performance for simple quad instances?
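Here is a minimal sketch of the two variants (identifiers such as context and quadCount are placeholders, and the buffers and shaders are assumed to be created and bound already, so this is illustrative rather than my exact engine code):

#include <d3d11.h>

// Variant A: 4 unique vertices + 6 indices ({0,1,2, 2,1,3}), instanced.
void DrawQuadsIndexedList(ID3D11DeviceContext* context, UINT quadCount)
{
    context->IASetPrimitiveTopology(D3D11_PRIMITIVE_TOPOLOGY_TRIANGLELIST);
    context->DrawIndexedInstanced(6, quadCount, 0, 0, 0);
}

// Variant B: 4 vertices in strip order, no index buffer, instanced.
void DrawQuadsStrip(ID3D11DeviceContext* context, UINT quadCount)
{
    context->IASetPrimitiveTopology(D3D11_PRIMITIVE_TOPOLOGY_TRIANGLESTRIP);
    context->DrawInstanced(4, quadCount, 0, 0);
}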


Also consider that for a 2D engine, performance of drawing quads may not be that important.  So long as you get some reasonable batching going, you're probably more likely to be bottlenecked on fillrate and ROP than on vertices or draw calls.

Share this post


Link to post
Share on other sites
Profiling would normally be the way to decide, but in this case I don't know if performance will be an issue at all.
You could use a geometry shader that takes a single vertex and outputs a quad (basically point sprites, as mentioned above).
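Untested sketch of that idea, with the HLSL as a string you would feed to D3DCompile (the struct layout and semantics are my own, purely for illustration):

// One input point per quad; the geometry shader expands it to a 4-vertex strip.
const char* kQuadGS = R"hlsl(
struct GSIn  { float2 center : POSITION; float2 halfSize : SIZE; };
struct GSOut { float4 pos : SV_Position; float2 uv : TEXCOORD0; };

[maxvertexcount(4)]
void main(point GSIn input[1], inout TriangleStream<GSOut> stream)
{
    const float2 corners[4] = { float2(-1, -1), float2( 1, -1),
                                float2(-1,  1), float2( 1,  1) };
    for (int i = 0; i < 4; ++i)
    {
        GSOut v;
        v.pos = float4(input[0].center + corners[i] * input[0].halfSize, 0, 1);
        v.uv  = corners[i] * 0.5 + 0.5;
        stream.Append(v);
    }
}
)hlsl";

// Draw side: one point per quad.
//   context->IASetPrimitiveTopology(D3D11_PRIMITIVE_TOPOLOGY_POINTLIST);
//   context->Draw(quadCount, 0);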


Another approach is not to bind a vertex buffer at all and use an instanced triangle strip: you then draw n instances of four non-existent verts and use SV_VertexID to position them. However, maybe someone could shed light on whether the overhead of using instancing still applies in this case? Otherwise, the approach described in that linked presentation (no vertex buffer, but drawing an indexed triangle list) would still be better, if a bit more complex.
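Roughly like this, as a sketch (the corner derivation is mine, and the per-quad data fetch is left abstract):

// No vertex buffer and no input layout bound: the vertex shader derives the
// corner from SV_VertexID and picks per-quad data via SV_InstanceID.
const char* kQuadVS = R"hlsl(
float4 main(uint vid : SV_VertexID, uint iid : SV_InstanceID) : SV_Position
{
    // vid 0..3 -> (0,0) (1,0) (0,1) (1,1), which is valid strip order.
    float2 corner = float2(vid & 1, vid >> 1);
    // ...fetch this instance's quad rect (e.g. from a StructuredBuffer
    //    indexed by iid) and transform 'corner' with it...
    return float4(corner * 2.0 - 1.0, 0.0, 1.0);
}
)hlsl";

// context->IASetInputLayout(nullptr);
// context->IASetPrimitiveTopology(D3D11_PRIMITIVE_TOPOLOGY_TRIANGLESTRIP);
// context->DrawInstanced(4, quadCount, 0, 0);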

Edited by Oetker


It's covered in Matias' link, but for reinforcement: don't use instancing for small meshes. There's a decent performance penalty when the instanced mesh has fewer than about 500 verts.

 

Is there a rationale for the performance penalty encountered with small instanced meshes versus the DrawIndexed method, by the way?

As far as I know, modern (desktop?) GPUs no longer have fixed-function vertex attribute fetch and use general buffer reads under the hood.
So the only difference I see between Bilodeau's method and the DrawInstanced method, from the hardware's point of view, is that the instanced call provides an extra SV_InstanceID input.

Since it looks like it's faster to compute an instance id and a vertex id from a single value using modulo and divide operations than to use the hardware-provided SV_InstanceID, is there any reason to use instancing at all? The only reason I can see is that a "coupled" vertex id/instance id may require 32-bit indices instead of 16-bit ones.
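For reference, the single-value variant I mean would look something like this (sketch; the names are mine):

// One big non-instanced draw, 6 vertices per quad; both ids are recovered
// from SV_VertexID with a divide and a modulo.
const char* kBigDrawVS = R"hlsl(
float4 main(uint vid : SV_VertexID) : SV_Position
{
    uint quadId   = vid / 6;   // plays the role of SV_InstanceID
    uint listVert = vid % 6;   // 0..5 within this quad's two triangles
    static const uint lut[6] = { 0, 1, 2, 2, 1, 3 };
    uint c = lut[listVert];
    float2 corner = float2(c & 1, c >> 1);
    // ...fetch per-quad data for 'quadId' and transform 'corner'...
    return float4(corner * 2.0 - 1.0, 0.0, 1.0);
}
)hlsl";

// context->Draw(6 * quadCount, 0);

Done this way there is no index buffer at all, which would also sidestep the 16-bit vs 32-bit index question.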


Is there a rationale for the performance penalty encountered with small instanced meshes versus the DrawIndexed method, by the way?

GCN's wavefront size is 64.
That means GCN works on 64 vertices at a time.
If you make two DrawPrimitive calls of 32 vertices each, GCN will process them using 2 wavefronts, wasting half of its processing power.

It's actually a bit more complex, as GCN has compute units, and each CU has 4 SIMD units. Each SIMD unit can execute between 1 and 10 wavefronts. There are also some fixed-function parts, like the rasterizer, which may have some overhead when small meshes are involved.

Long story short, it's all about load balancing, and small meshes leave a lot of idle space; hence the sweet spot is around 128-256 vertices for NVIDIA and around 500-600 vertices for AMD (based on benchmarks).
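As a back-of-the-envelope illustration of that load-balancing point (using the 64-wide wavefront figure above; the helper is just arithmetic, not a vendor tool):

// Each draw call fills whole wavefronts on its own, so small draws waste lanes.
int WavefrontsUsed(int vertsPerDraw, int draws, int waveSize = 64)
{
    int perDraw = (vertsPerDraw + waveSize - 1) / waveSize; // round up
    return perDraw * draws;
}

// WavefrontsUsed(32, 2) == 2 wavefronts for 64 vertices -> half the lanes idle.
// WavefrontsUsed(64, 1) == 1 wavefront, fully occupied.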


Wouldn't that apply to non-instanced draw calls as well?

Yes indeed. Note I said two DrawPrimitive calls, not two instances in one instanced DrawPrimitive call.
Old hardware (GeForce 6000 & 7000 series) had an overhead when using instancing with small instance counts that was not present with non-instanced draw calls. But that hardware worked quite differently.

Edited by Matias Goldberg
