ajmiles

Member
  • Content count

    422
  • Joined

  • Last visited

Community Reputation

3397 Excellent

About ajmiles

  • Rank
    Member

Personal Information

  • Role
    Programmer
  • Interests
    Programming

Social

  • Twitter
    @adamjmiles
  • Steam
    ajmiles

Recent Profile Visitors

6814 profile views
  1. That makes it sound as though there's no longer a good reason for the semantics to exist. They're still required for binding the Input Layout to the Vertex Shader: a struct declared in a shader needs to mark up its members so that the driver/hardware know which member corresponds to which element in the input layout. A VS_INPUT struct can declare its elements in any order, and can even omit elements that are included in the input layout; the mapping from the Input Layout to the VS input is handled by the input semantic names, and without them some other mechanism would need to be added to achieve this.
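     As a rough sketch of what that mapping looks like in D3D12 (the struct members, semantic choices and offsets below are purely illustrative, not taken from anyone's actual code):

         // HLSL side, shown here only as a comment for reference:
         //   struct VS_INPUT
         //   {
         //       float2 uv  : TEXCOORD0;   // declared in a different order to the layout...
         //       float3 pos : POSITION;    // ...and matched purely by semantic name
         //   };

         // C++ side: each element is tied to a struct member by SemanticName,
         // not by the order either side happens to declare things in.
         D3D12_INPUT_ELEMENT_DESC inputElements[] =
         {
             { "POSITION", 0, DXGI_FORMAT_R32G32B32_FLOAT, 0, 0,
               D3D12_INPUT_CLASSIFICATION_PER_VERTEX_DATA, 0 },
             { "TEXCOORD", 0, DXGI_FORMAT_R32G32_FLOAT,    0, 12,
               D3D12_INPUT_CLASSIFICATION_PER_VERTEX_DATA, 0 },
             // A "COLOR" element could also live here and simply go unused
             // if the vertex shader's input struct chooses not to declare it.
         };

         D3D12_INPUT_LAYOUT_DESC inputLayoutDesc = { inputElements, _countof(inputElements) };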
  2. Have you tried running it with the Software WARP renderer? What about another GPU or another machine? Are the graphics drivers up to date?
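     If it does turn out to be driver or hardware related, switching to WARP in D3D12 is only a few lines (a minimal sketch, error handling omitted):

         #include <d3d12.h>
         #include <dxgi1_4.h>
         #include <wrl/client.h>
         using Microsoft::WRL::ComPtr;

         // Create the device on the software WARP adapter instead of the GPU.
         ComPtr<IDXGIFactory4> factory;
         CreateDXGIFactory2(0, IID_PPV_ARGS(&factory));

         ComPtr<IDXGIAdapter> warpAdapter;
         factory->EnumWarpAdapter(IID_PPV_ARGS(&warpAdapter));

         ComPtr<ID3D12Device> device;
         D3D12CreateDevice(warpAdapter.Get(), D3D_FEATURE_LEVEL_11_0, IID_PPV_ARGS(&device));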
  3. To be honest, I'm not sure how you're getting red in your "working" case. PIX doesn't show any referenced textures for your pixel shader, yet the pixel shader does appear to be written as if it's expecting 6 textures to be bound. They should be showing up here in the PS group as a descriptor table / SRVs. I've had a look into why they aren't showing up, and I'm a little worried by how your root signature looks. For the root parameter describing the descriptor table, the "OffsetInDescriptorsFromTableStart" field is set to 2147483648, which is 0x80000000. I would have expected this value to be either 0 or D3D12_DESCRIPTOR_RANGE_OFFSET_APPEND, which is 0xFFFFFFFF. There's always the possibility that PIX is wrong, but could you check what you think you're setting this field to?
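     For comparison, this is roughly how I'd expect the descriptor range for those 6 SRVs to be set up (the register assignments and shader visibility here are assumptions on my part):

         // One contiguous range of 6 SRVs (t0..t5) inside a descriptor table.
         D3D12_DESCRIPTOR_RANGE srvRange = {};
         srvRange.RangeType          = D3D12_DESCRIPTOR_RANGE_TYPE_SRV;
         srvRange.NumDescriptors     = 6;
         srvRange.BaseShaderRegister = 0;   // t0
         srvRange.RegisterSpace      = 0;
         // Normally 0 or D3D12_DESCRIPTOR_RANGE_OFFSET_APPEND (0xFFFFFFFF),
         // rather than the 0x80000000 that PIX is reporting.
         srvRange.OffsetInDescriptorsFromTableStart = D3D12_DESCRIPTOR_RANGE_OFFSET_APPEND;

         D3D12_ROOT_PARAMETER tableParam = {};
         tableParam.ParameterType                       = D3D12_ROOT_PARAMETER_TYPE_DESCRIPTOR_TABLE;
         tableParam.DescriptorTable.NumDescriptorRanges = 1;
         tableParam.DescriptorTable.pDescriptorRanges   = &srvRange;
         tableParam.ShaderVisibility                    = D3D12_SHADER_VISIBILITY_PIXEL;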
  4. Sharing a PIX capture with us would probably also solve the problem.
  5. Does that mean you still haven't figured out what's wrong?
  6. The Visual Studio Graphics Debugger (VSGD) should work fine for simple applications, but that particular feature is no longer under active development and is likely to become less useful to DX12 developers as time goes on. "PIX For Windows" is a separate standalone graphics profiler and debugger and is in constant development. I would recommend installing PIX For Windows and using this for any investigations you might want to do in the future.
  7. You can see the value of a root constant and how it changes with each draw, yes.
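     As a sketch of the kind of thing PIX will show (the root parameter index and the meaning of the constant are made up for the example, and commandList is assumed to be an open ID3D12GraphicsCommandList):

         // Root parameter 0 is assumed to be declared as a single 32-bit root constant.
         for (UINT i = 0; i < drawCount; ++i)
         {
             // PIX records this value as part of each draw's root signature state,
             // so you can watch it change from one draw call to the next.
             commandList->SetGraphicsRoot32BitConstant(0, i, 0);
             commandList->DrawInstanced(3, 1, 0, 0);
         }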
  8. Have you tried using PIX For Windows to debug this?
  9. You don't want to use D3D11_FILTER_MAXIMUM_MIN_MAG_MIP_POINT. You want D3D11_FILTER_MIN_MAG_MIP_POINT instead.
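     For reference, a minimal sampler description using the plain point filter (the address modes and LOD range below are just reasonable defaults, not something from your code):

         // Ordinary point filtering for min, mag and mip, with no MIN/MAX reduction.
         D3D11_SAMPLER_DESC samplerDesc = {};
         samplerDesc.Filter         = D3D11_FILTER_MIN_MAG_MIP_POINT;
         samplerDesc.AddressU       = D3D11_TEXTURE_ADDRESS_CLAMP;
         samplerDesc.AddressV       = D3D11_TEXTURE_ADDRESS_CLAMP;
         samplerDesc.AddressW       = D3D11_TEXTURE_ADDRESS_CLAMP;
         samplerDesc.ComparisonFunc = D3D11_COMPARISON_NEVER;
         samplerDesc.MinLOD         = 0.0f;
         samplerDesc.MaxLOD         = D3D11_FLOAT32_MAX;

         ID3D11SamplerState* samplerState = nullptr;
         device->CreateSamplerState(&samplerDesc, &samplerState);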
  10. This shader compiles just fine for me even if I uncomment the two lines you've marked as "failing line". What compile error are you getting? Are you using fxc from the Windows SDK? The one I'm using is:
     C:\Program Files (x86)\Windows Kits\10\bin\10.0.17134.0\x64>fxc /?
     Microsoft (R) Direct3D Shader Compiler 10.1
     (using C:\Program Files (x86)\Windows Kits\10\bin\10.0.17134.0\x64\D3DCOMPILER_47.dll)
  11. Can you provide a shader that doesn't compile? Then we might be able to help.
  12. Yes, that'll run as you expect. Just bear in mind that not every piece of HW runs wave sizes of 64, so the number of waves that go down this path will vary from GPU to GPU.
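     If you'd rather check than assume, the wave widths a device actually supports can be queried at runtime (a sketch, assuming you already have an ID3D12Device with wave op support):

         // Query the range of wave (SIMD) widths the GPU exposes.
         D3D12_FEATURE_DATA_D3D12_OPTIONS1 options1 = {};
         device->CheckFeatureSupport(D3D12_FEATURE_D3D12_OPTIONS1, &options1, sizeof(options1));

         // e.g. 32 on some hardware, 64 on others; code written around a fixed
         // wave size of 64 will only take that path on a subset of GPUs.
         UINT waveLaneCountMin = options1.WaveLaneCountMin;
         UINT waveLaneCountMax = options1.WaveLaneCountMax;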
  13. With PowerPC now largely consigned to the scrap heap in terms of hardware that might run a game, I think you can be pretty sure that little-endian will cover all your x86 and ARM needs. If in the future it doesn't for some reason, either produce two sets of assets that are pre-swapped into the right format, or get the GPU to do the endian swapping for you on load rather than burden the CPU. With D3D12 (and Vulkan?) you can even set up your ShaderResourceViews to have an arbitrary swizzle on sampling, so for 'free' you could always swap RGBA back around to ABGR (and vice versa) without ever touching the underlying data.
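     A sketch of that swizzle in D3D12 (the format, mip count, resource and descriptor handle here are placeholders): the component mapping on the SRV hands the channels back in reversed order without the texel data ever being touched:

         // SRV that reads an RGBA8 texture back as ABGR purely via the view's swizzle.
         D3D12_SHADER_RESOURCE_VIEW_DESC srvDesc = {};
         srvDesc.Format              = DXGI_FORMAT_R8G8B8A8_UNORM;
         srvDesc.ViewDimension       = D3D12_SRV_DIMENSION_TEXTURE2D;
         srvDesc.Texture2D.MipLevels = 1;
         srvDesc.Shader4ComponentMapping = D3D12_ENCODE_SHADER_4_COMPONENT_MAPPING(
             D3D12_SHADER_COMPONENT_MAPPING_FROM_MEMORY_COMPONENT_3,   // .r sourced from A
             D3D12_SHADER_COMPONENT_MAPPING_FROM_MEMORY_COMPONENT_2,   // .g sourced from B
             D3D12_SHADER_COMPONENT_MAPPING_FROM_MEMORY_COMPONENT_1,   // .b sourced from G
             D3D12_SHADER_COMPONENT_MAPPING_FROM_MEMORY_COMPONENT_0);  // .a sourced from R

         device->CreateShaderResourceView(texture, &srvDesc, srvHandle);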
  14. Reading a file a byte or a DWORD at a time is incredibly inefficient. I would liken that approach to issuing a Draw Call per Triangle when rendering a model rather than exploiting the fact that there's an API available to draw multiple triangles in a single call. When reading the pixel data, why are you not reading at least an entire mip's worth of bytes in a single call to ReadFile?
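     A sketch of what I mean, assuming the mip's size in bytes is already known from the file's header and fileHandle came from CreateFile:

         #include <windows.h>
         #include <cstdint>
         #include <vector>

         // Read the entire mip level in one call rather than a byte/DWORD at a time.
         std::vector<uint8_t> mipData(mipSizeInBytes);

         DWORD bytesRead = 0;
         BOOL ok = ReadFile(fileHandle,
                            mipData.data(),
                            static_cast<DWORD>(mipData.size()),
                            &bytesRead,
                            nullptr);   // no overlapped I/O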
  15. ajmiles

    SSAO running slow

    WARP can be surprisingly fast sometimes. I've written and run little samples for days at a time, having forgotten I'd hardcoded my "use WARP" flag to true, and only discovered the mistake later! Throw enough work at it, though, and you eventually realise you're not running on a GPU at all.