HD86

Members
  • Content count: 10
  • Joined
  • Last visited

Community Reputation: 110 Neutral

3 Followers

About HD86
  • Rank: Member
Personal Information

  • Interests
    Programming
    QA
  1. I have a vertex buffer on a default heap. I need a CPU pointer to that buffer so I can loop through the vertices and change one value (the color) in some of them. In the past this was possible by creating the buffer with the D3DUSAGE_DYNAMIC/D3D11_USAGE_DYNAMIC flag and getting a pointer with IDirect3DVertexBuffer9::Lock or ID3D11DeviceContext::Map. What is the correct way to do the same in DX 12? As far as I understand, ID3D12Resource::Map cannot be used on a default heap, because default heaps cannot be accessed directly from the CPU. The documentation says that upload heaps are intended for CPU-write-once, GPU-read-once usage, so I don't think they are equivalent to the old "dynamic" buffers. Is the readback heap equivalent to what used to be called a dynamic buffer? Or should I create a custom heap? I am thinking of doing the following:
     - Create a temporary readback heap.
     - Copy the data from the default heap to the readback heap using UpdateSubresources.
     - Get a CPU pointer to the readback heap using Map and edit the data.
     - Copy the data back to the default heap using UpdateSubresources.
     What do you think about this?
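For comparison, the pattern most D3D12 samples use in place of a D3D11 dynamic buffer is a persistently mapped upload-heap buffer that the CPU writes each frame, followed by a GPU-side copy into the default-heap buffer; readback heaps are meant for GPU-to-CPU transfers, so a readback round trip would stall. A minimal sketch, assuming the d3dx12.h helpers and placeholder names `device`, `cmdList`, `vertexBuffer`, `vertices`, and `bufferSize`:

```cpp
// 1) Create an upload-heap buffer once and keep it mapped for the app's lifetime.
CD3DX12_HEAP_PROPERTIES uploadHeapProps(D3D12_HEAP_TYPE_UPLOAD);
CD3DX12_RESOURCE_DESC   bufDesc = CD3DX12_RESOURCE_DESC::Buffer(bufferSize);
Microsoft::WRL::ComPtr<ID3D12Resource> uploadBuffer;
device->CreateCommittedResource(&uploadHeapProps, D3D12_HEAP_FLAG_NONE, &bufDesc,
    D3D12_RESOURCE_STATE_GENERIC_READ, nullptr, IID_PPV_ARGS(&uploadBuffer));

void* mapped = nullptr;
CD3DX12_RANGE readRange(0, 0);        // the CPU never reads from this buffer
uploadBuffer->Map(0, &readRange, &mapped);

// 2) Each frame: edit the vertex data in CPU memory, then record a GPU copy.
memcpy(mapped, vertices, bufferSize); // e.g. after changing some color values

auto toCopyDest = CD3DX12_RESOURCE_BARRIER::Transition(vertexBuffer.Get(),
    D3D12_RESOURCE_STATE_VERTEX_AND_CONSTANT_BUFFER, D3D12_RESOURCE_STATE_COPY_DEST);
cmdList->ResourceBarrier(1, &toCopyDest);
cmdList->CopyBufferRegion(vertexBuffer.Get(), 0, uploadBuffer.Get(), 0, bufferSize);
auto toVertexBuffer = CD3DX12_RESOURCE_BARRIER::Transition(vertexBuffer.Get(),
    D3D12_RESOURCE_STATE_COPY_DEST, D3D12_RESOURCE_STATE_VERTEX_AND_CONSTANT_BUFFER);
cmdList->ResourceBarrier(1, &toVertexBuffer);
```

If several frames can be in flight, one upload buffer per frame (or per in-flight region) keeps the CPU from overwriting data the GPU has not copied yet.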
  2. As far as I know, XMMATRIX is 64 bytes, which seems far too big to be returned by value from a function. Yet the DirectXMath functions do return this struct. I suppose this has something to do with SIMD optimization. Should I return this large struct from my own functions, or should I pass it by reference or pointer? This question will look silly to you if you know how SIMD works, but I don't.
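On the SIMD point: DirectXMath is designed for XMMATRIX to be returned by value and provides calling-convention aliases so the data travels in SIMD registers instead of through the stack. A sketch following the library's documented conventions (`ComposeWorld` is a made-up function name):

```cpp
#include <DirectXMath.h>
using namespace DirectX;

// XM_CALLCONV expands to __vectorcall where the compiler supports it, so the
// four XMVECTOR rows of an XMMATRIX can be passed and returned in SIMD
// registers rather than copied through memory. The documented parameter
// convention: take the first XMMATRIX parameter as FXMMATRIX and any further
// XMMATRIX parameters as CXMMATRIX.
XMMATRIX XM_CALLCONV ComposeWorld(FXMMATRIX scaleRotation, CXMMATRIX translation)
{
    return XMMatrixMultiply(scaleRotation, translation);  // returned by value
}
```

Passing by const reference is not wrong, but the by-value, register-based path is what the library's conventions were built for.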
  3. I don't know in advance the total number of textures my app will be using. I wanted to use this approach, but it turned out to be impractical because D3D11 hardware may not allow binding more than 128 SRVs to the shaders. Next I decided to keep all the texture SRVs in a descriptor heap that is invisible to the shaders, and when I need to render a texture I would copy its SRV from the invisible heap to another heap that is bound to the pixel shader. But this also seems impractical, because ID3D12Device::CopyDescriptorsSimple cannot be used in a command list; it executes immediately when it is called. I would need to close, execute, and reset the command list every time I need to switch the texture. What is the correct way to do this?
  4. In DirectX 9 I would use this vertex element: { 0, 0, D3DDECLTYPE_SHORT4, D3DDECLMETHOD_DEFAULT, D3DDECLUSAGE_POSITION, 0 } with this vertex shader input: float4 Position : POSITION0. That is, I would use the SHORT4 vertex buffer format with a corresponding float4 in the shader, and everything would work great. In DirectX 12 this does not work. When I use the format DXGI_FORMAT_R16G16B16A16_SINT with float4 in the shader, I get all zeros in the shader. If I use int4 in the shader instead of float4, I get numbers, but they are messed up. I can't figure out exactly what is wrong with them, because I can't see them; the Visual Studio shader debugger keeps crashing. The debug layer says nothing when I use int4, but it gives a warning when I use float4. How can I use the R16G16B16A16_SINT format in the input layout?
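For context on the behavior difference: from D3D10 onward the input assembler performs no integer-to-float conversion for *_SINT vertex formats, so the HLSL input must be an integer type and any conversion to float has to be explicit; the D3D9-style automatic SHORT4-to-float4 conversion no longer exists. A sketch of the matching declaration and shader side:

```cpp
// The SINT format must pair with an integer HLSL input; a float4 input is an
// invalid pairing (hence the debug-layer warning and the zeros).
D3D12_INPUT_ELEMENT_DESC layout[] = {
    { "POSITION", 0, DXGI_FORMAT_R16G16B16A16_SINT, 0, 0,
      D3D12_INPUT_CLASSIFICATION_PER_VERTEX_DATA, 0 },
};
// HLSL side:
//   int4 pos : POSITION;        // matches the SINT format
//   float4 p = (float4)pos;     // explicit cast reproduces the D3D9 behavior
// Alternatively, store the data as DXGI_FORMAT_R16G16B16A16_SNORM (values
// normalized to [-1, 1]) or _FLOAT if a float4 input is wanted directly.
```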
  5. I know some people will find this question stupid, but it is not stupid if you don't know exactly how SetPipelineState works. If I switch from the current PSO to one with different shaders and a different root signature, will that be more expensive than switching to a PSO with the same shaders and root signature as the current PSO? In other words, does changing the shaders and root signature add to the overhead of switching the PSO?
  6. I can't delete posts. I can't edit posts. Is this forum supposed to be this way?
  7. I used D3DTADDRESS_CLAMP and the problem seems to be largely gone.
  8. Hello, I am drawing polygons that have different textures with DirectX 9. If I leave the default texture filtering (D3DTEXF_POINT), I get pixelated textures. If I change the filtering to D3DTEXF_ANISOTROPIC, I get spaces or lines between the textures. How can I get rid of the lines between the textures when I use anisotropic filtering? I tried setting D3DSAMP_MAXANISOTROPY to the maximum value my device supports, but that did not seem to do anything. Before using DirectX 9 I was using the Viewport3D of the .NET Framework, which is built on DirectX 9. I had the same problem there, but I managed to solve it with RenderOptions.SetBitmapScalingMode(Viewport3D, BitmapScalingMode.Fant). That gave me textures that were neither pixelated nor separated by spaces. I wonder how I could do the same in DirectX 9.
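For anyone hitting the same seams: with the default D3DTADDRESS_WRAP addressing, a wide anisotropic footprint can sample texels from the opposite edge of the texture at polygon borders, which shows up as lines between adjacent textured polygons. A sketch of sampler state that typically avoids this, assuming sampler stage 0 and a valid IDirect3DDevice9* named Device:

```cpp
// Query the device's anisotropy limit rather than guessing a value.
D3DCAPS9 caps;
Device->GetDeviceCaps(&caps);

// Clamp addressing stops the filter from wrapping around texture edges.
Device->SetSamplerState(0, D3DSAMP_ADDRESSU,      D3DTADDRESS_CLAMP);
Device->SetSamplerState(0, D3DSAMP_ADDRESSV,      D3DTADDRESS_CLAMP);
Device->SetSamplerState(0, D3DSAMP_MINFILTER,     D3DTEXF_ANISOTROPIC);
Device->SetSamplerState(0, D3DSAMP_MAGFILTER,     D3DTEXF_LINEAR); // anisotropic MAG is not supported on all devices
Device->SetSamplerState(0, D3DSAMP_MIPFILTER,     D3DTEXF_LINEAR);
Device->SetSamplerState(0, D3DSAMP_MAXANISOTROPY, caps.MaxAnisotropy);
```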
  9. Yes, the line does not move at all. Strangely, it does move if I put the transform command inside a loop and use the loop variable as a parameter. The following works:
     for (FLOAT i = 0; i <= 1024; i += 1024)
     {
         D3DXMATRIXA16 TranslationMatrix;
         D3DXMatrixTranslation(&TranslationMatrix, i, 0.0F, 0.0F);
         Device->SetTransform(D3DTS_WORLD, &TranslationMatrix);
         Device->DrawPrimitive(D3DPT_LINELIST, 0, 1);
     }
     Perhaps there is something wrong in the organization of my code. I don't understand what is going on.
  10. Hello, I have two vertices for a line:
      CUSTOMVERTEX LineVertices[] =
      {
          { 2560.0F, 0.0F, -2560.0F, 0xFFFFFFFF },
          { 2560.0F, 0.0F, 2560.0F, 0xFFFFFFFF }
      };
      I apply a translation matrix to the vertices:
      D3DXMATRIXA16 TranslationMatrix;
      D3DXMatrixTranslation(&TranslationMatrix, 1024.0F, 0.0F, 0.0F);
      Device->SetTransform(D3DTS_WORLD, &TranslationMatrix);
      Device->DrawPrimitive(D3DPT_LINELIST, 0, 1);
      The translation has no effect on the X or Z coordinates, but it does work on the Y coordinate, which is set to 0 in the vertices. Why doesn't it work?