pcmaster

Member
  • Content Count

    304
  • Joined

  • Last visited

Community Reputation

1061 Excellent

2 Followers

About pcmaster

  • Rank
    Member

Personal Information

  • Role
    Programmer
  • Interests
    Programming

  1. I agree. Also, congrats on the good work on your engine in general!
  2. A very interesting introduction, thank you for that. I was excited right up until the end, where you revealed that you aren't trying to fully replicate what the driver does for us in D3D11, especially in the context of multi-threaded command recording. Good for you, and good overall! :) I've got my hands on a multi-platform engine built around the logic of D3D11 and, unfortunately, high-level users sometimes do all kinds of things to resources on all threads in parallel. This brought me a lot of headaches, because trying to support that is very difficult and error-prone, and it will certainly never be more performant than the battle-hardened drivers. Just not doing that, i.e. issuing no transitions inside parallel recording jobs, only at their boundaries, is obviously the only way to go.
  3. I'm afraid I can't; I unfortunately don't have time to try it. In the version I have, both DepthsDiag and DepthsAxis are divided by CenterDepth, which is about the only significant difference. I expect it will just produce a slightly different edge in the horizontal/vertical directions.
  4. pcmaster

    Vulkan and DX12 similarities

    One interesting evolution with D3D12 and VK is that it's almost impossible for an unfamiliar/non-rendering programmer to "throw something together" in the pure APIs, without some encapsulation/simplification framework on top of them. It's just too painful: so many things the drivers were doing for us in the past simply aren't there anymore. Watch out, I'm not saying that's wrong, on the contrary, and I surely don't want to start a flame war! :)
  5. Have you looked at https://en.wikipedia.org/wiki/Sobel_operator for an explanation of the Sobel filter? The final value "Sobel" is how much 'edge' there is (1: edge, 0: no edge, or vice versa, I'm not sure off the top of my head). The rest is just a way to present it to the user/screen. Just try to solve the equation on paper for Sobel == 1 and Sobel == 0 and you'll see what it does with the colours.
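    To make the Wikipedia formulas concrete, here's a small CPU-side C++ sketch of the Sobel gradient magnitude on a 3x3 neighbourhood. The function name and the tiny test patches are my own, for illustration only; in a shader you'd do the same maths per pixel on sampled luminance values.

    ```cpp
    #include <array>
    #include <cassert>
    #include <cmath>
    #include <cstdio>

    using Patch = std::array<std::array<float, 3>, 3>;

    // Hypothetical helper (my naming): applies the two 3x3 Sobel kernels to
    // the centre pixel of a 3x3 luminance patch, returning gradient magnitude.
    float sobelMagnitude(const Patch& p) {
        // Horizontal and vertical Sobel kernels, as on the Wikipedia page.
        const int gx[3][3] = { {-1, 0, 1}, {-2, 0, 2}, {-1, 0, 1} };
        const int gy[3][3] = { {-1, -2, -1}, { 0, 0, 0}, { 1, 2, 1} };
        float sx = 0.0f, sy = 0.0f;
        for (int y = 0; y < 3; ++y)
            for (int x = 0; x < 3; ++x) {
                sx += gx[y][x] * p[y][x];
                sy += gy[y][x] * p[y][x];
            }
        return std::sqrt(sx * sx + sy * sy);
    }

    int main() {
        const Patch flat = {{ {1, 1, 1}, {1, 1, 1}, {1, 1, 1} }};  // no edge
        const Patch edge = {{ {0, 0, 1}, {0, 0, 1}, {0, 0, 1} }};  // vertical edge
        // prints flat: 0.0  edge: 4.0
        std::printf("flat: %.1f  edge: %.1f\n",
                    sobelMagnitude(flat), sobelMagnitude(edge));
        return 0;
    }
    ```

    A flat patch produces magnitude 0 ("no edge") and a hard vertical step produces a large magnitude; mapping that number to a colour is the presentation part the post mentions.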
  6. @Vilem Otte He says he's going to render a 2D Mario-like sprite :) So I think that's about how the scene looks.
  7. The only possible answer is: implement both and measure. From my experience, discarding big groups of transparent pixels usually helps on the GCN architecture, but it "depends". Measure :)
  8. No, it isn't safe to assume that. A render-target/depth-stencil target differs from a shader resource. The GPU needs to know the RTV/DSV sizes, tiling modes, formats, and cache or bus settings, for example, when scheduling the individual threads/groups, along with the other 'fixed' GPU settings, all of it up front. A shader-resource descriptor (SRV, UAV, CBV), on the other hand, is usually just some 32-64 bytes (on AMD GCN) needed to manipulate the resource quite simply via the texture units or plain loads: just the address, format, size and a few flags. So it's safe to fetch these SRV descriptors from anywhere in memory, any way you like (you can even hard-code or conjure them in the shader code itself), and as many times as needed. You exchange your SRVs on the GPU much more often than the CPU changes the render passes (and hence RTVs/DSVs). A GPU descriptor handle is just a simple pointer, nothing more: a pointer to those few dwords with the format, size, etc., which a shader can dereference and feed into the texturing load/store instructions. A CPU descriptor handle is, interestingly, the same concept; it's just manipulated by the CPU, much less frequently (ideally), to set a 'greater' state of the GPU. On some architectures they might become the same soon, but that isn't the case on contemporary GPUs.
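    To make the "a descriptor is just a few dwords, and a handle is just a pointer to them" idea concrete, here's a purely conceptual C++ sketch. The struct layout and every field name are invented for illustration; real GCN descriptor layouts are documented in AMD's ISA guides and look different.

    ```cpp
    #include <cassert>
    #include <cstdint>

    // Invented layout, illustration only: a shader-resource descriptor as a
    // small self-contained blob describing where and how to read a texture.
    struct SrvDescriptor {
        uint64_t baseAddress;  // where the texel data lives
        uint32_t format;       // some enum value, e.g. an RGBA8 code
        uint32_t width;
        uint32_t height;
        uint32_t flags;        // a few mode bits
    };

    // A "GPU descriptor handle" is then nothing more than a pointer to such a
    // blob; the shader dereferences it and feeds the fields to texture units.
    using GpuDescriptorHandle = const SrvDescriptor*;

    int main() {
        SrvDescriptor desc{0x1000u, /*format*/ 28u, 256u, 256u, 0u};
        GpuDescriptorHandle handle = &desc;  // passing the handle around is cheap
        // "Dereferencing" the handle recovers everything needed for a load.
        assert(handle->width == 256u && handle->format == 28u);
        // The whole descriptor fits the 32-64 byte budget mentioned above.
        assert(sizeof(SrvDescriptor) <= 64);
        return 0;
    }
    ```

    The point of the sketch: because the descriptor is just a few self-describing dwords, it can live anywhere in memory and be consumed as often as needed, unlike the heavier fixed-function state behind RTVs/DSVs.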
  9. That's a very good question! The driver might make a 'copy' of your descriptor, or rather compose what it needs for the specific GPU. The question is why not do this inside ID3D12Device::CreateRenderTargetView? If anyone has insights, I'm also interested in why they did it this way.
  10. I think you should create the resource first with DXGI_FORMAT_R32G32B32A32_TYPELESS and then create SRVs with the final types DXGI_FORMAT_R32G32B32A32_UINT or DXGI_FORMAT_R32G32B32A32_FLOAT from that resource. Also, the texture descriptor will need to have D3D11_BIND_UNORDERED_ACCESS and/or D3D11_BIND_SHADER_RESOURCE, depending on how you want to access it in your shaders.
  11. pcmaster

    To prototype or not to prototype?

    I would suggest taking an intermediate step and making a much smaller, offline, single-player game first. You mentioned you have zero experience, and taking on a multiplayer game of uncertain scope might (and WILL!) demoralise you. My suggestion is to make an arguably uninteresting game "clone" first (don't stop reading here) - snake? Tetris? Space Invaders? Flappy Bird? Anything you liked with a well-defined scope and features. Make it work, make it playable. By doing this, you'll learn what challenges lie before you when making a game. You will NOT be wasting considerable amounts of time making something that already exists. No. You'll learn your tools and gain invaluable experience, which will later allow you to assess what needs to be done (for a complete product) and how much it costs (time, resources and/or money). TL;DR With zero experience, take small steps, finish something very small and well defined first, and be surprised how much longer than expected it takes.
  12. There's a special case, though, where reading from and writing to the same texture is possible (on some architectures?): 1) it's a single draw call over the whole texture (no overdraw); 2) the previous writes to the memory have already settled in main memory; 3) every pixel shader reads only its own pixel (and no other pixels!); 4) the results of this draw call become usable only after all pixels have finished executing and everything has settled in main memory. Point 2) is important for point 3), in order not to read stale values. This also goes for compute shaders.
  13. Unfortunately, in general you cannot read a texture aliased to the memory of a render target you're writing to in that very draw call. The reasons include: the order in which individual pixels are shaded can't be predicted; and the cache hierarchies used to write RTVs and read SRVs can (and will) be different, so cache flushing to main memory is needed. So you have all kinds of hazards.
  14. pcmaster

    different formats for glsl color

    Have you tried this? http://lmgtfy.com/?q=opengl+pixel+output+uint 😈 Okay, so I'm not just evil:

    #version 440
    layout(location = 0) out uvec4 rgba0;

    https://www.khronos.org/opengl/wiki/Data_Type_(GLSL)
  15. There can be no resorting to anything; the promise of contiguity shall not be broken - if you demand a block, you're guaranteed to get a block or nothing. Imagine that the pool administered by an allocator is 16B and the granularity is 1B with zero overhead (totally fine for our demonstration). You allocate eight 2B blocks. All is good. Then you deallocate every other allocation you got. All is good, there's 8B free. Yet when you now try to allocate 4B, the allocation shall fail and you get nothing. It is THAT simple.
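    The 16B scenario above can be reproduced with a few lines of C++. This is a deliberately tiny illustrative pool (names and structure are mine, not any real allocator): 16 one-byte slots, where an allocation must occupy a contiguous run of slots.

    ```cpp
    #include <cassert>
    #include <cstddef>
    #include <cstdio>

    // Toy pool allocator for illustration: 16 one-byte slots, first-fit,
    // allocations must be contiguous runs. A sketch, not production code.
    struct TinyPool {
        bool used[16] = {};  // false = free

        // Returns the start index of a contiguous free run of n slots, or -1.
        int allocate(std::size_t n) {
            for (std::size_t start = 0; start + n <= 16; ++start) {
                std::size_t len = 0;
                while (len < n && !used[start + len]) ++len;
                if (len == n) {
                    for (std::size_t i = 0; i < n; ++i) used[start + i] = true;
                    return static_cast<int>(start);
                }
            }
            return -1;  // no contiguous block, even if enough bytes are free in total
        }

        void free(int start, std::size_t n) {
            for (std::size_t i = 0; i < n; ++i) used[start + i] = false;
        }
    };

    int main() {
        TinyPool pool;
        int blocks[8];
        for (int i = 0; i < 8; ++i) blocks[i] = pool.allocate(2);  // pool now full
        for (int i = 0; i < 8; i += 2) pool.free(blocks[i], 2);    // free every other one
        // 8 bytes are free, but only as four scattered 2-byte holes:
        assert(pool.allocate(4) == -1);  // the contiguous 4B request fails
        std::puts("4B allocation failed despite 8B free: fragmentation");
        return 0;
    }
    ```

    After freeing every other 2B block, the free space is 2B-hole, 2B-used, 2B-hole, and so on; no run of four free slots exists anywhere, so the 4B request correctly returns nothing.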