DX12 Any good DirectX 12 "2D Platform" Tutorials?

Recommended Posts

Hey everyone,

 

I was just curious if anyone has any links, PDFs, info, etc. Heck, even a good book, since I can't seem to find one on DirectX 12... and the Microsoft online documentation (though good and thorough) seems to be geared towards developers who are converting projects (and large ones at that) from DX11 to DX12.

 

I am currently working my way through some of the DirectX 12 samples, and I've figured out a few things... haha, but I still just want to make a simple 2D game in DirectX 12 for Windows 10. Any help would be appreciated, and of course, if I come up with something, I will post it here as well.

 

My background in programming is both unimpressive and extensive. The last time I looked at DirectX was around versions 7, 8, and 9. I moved to Dark GDK for a while, then DirectX 10 and 11 sprang up. I moved to Windows 10, and all my Dark GDK projects won't compile on Visual Studio 2015. I've decided to try my hand at the latest and greatest DirectX 12 and would like to start simple and small with 2D stuff. I can also program in many languages, though I prefer C/C++ or C# (learning that one... of course, when does learning stop? haha).

 

All of my projects have been hobbies, and I just program for fun. However, I am getting older, and if I can actually put something interesting into production, I'll invest a little after the initial demo and get some help to actually publish a title. Well, cheers.

 

Thanks,

Jeff


Alrighty, I was just checking. Perhaps I can write a tutorial on my adventures in learning and creating a 2D game (it's easily possible to do in 3D). I am actually having a good time, with more successes than frustrations (overall).

 

Thanks,

Jeff


Don't forget D3D's little brother: https://msdn.microsoft.com/en-us/library/windows/desktop/dd370990(v=vs.85).aspx

 

Remember that if you want to use low-level APIs (even if they come with one or more layers of abstraction), you will need to handle tons of things manually, just as you will with Direct3D.
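One example of that manual bookkeeping is frame fencing. Here's a simplified, illustrative model (the real thing uses ID3D12Fence and a command queue; this version replaces the GPU with a plain completed-value counter so the logic runs anywhere): each in-flight frame is tagged with an increasing fence value, and its resources must not be reused until the GPU has signaled past that value.

```cpp
#include <array>
#include <cstdint>

// Simplified model of per-frame fence tracking in a low-level API.
// All names here are illustrative; in real D3D12 you would compare
// ID3D12Fence::GetCompletedValue() against the value you signaled.
constexpr int kFrameCount = 3; // triple-buffered frame resources

struct FrameRing {
    std::array<uint64_t, kFrameCount> fenceValues{}; // value signaled per slot
    uint64_t nextFence = 1;    // monotonically increasing fence value
    uint64_t gpuCompleted = 0; // what the "GPU" has finished so far

    // True if the CPU must wait before reusing this slot's resources.
    bool MustWait(int slot) const { return fenceValues[slot] > gpuCompleted; }

    // "Submit" a frame: remember which fence value guards this slot.
    void SubmitFrame(int slot) { fenceValues[slot] = nextFence++; }
};
```

Higher-level APIs like D3D11 do this kind of tracking for you behind the scenes; in D3D12 and Vulkan, getting it wrong means overwriting buffers the GPU is still reading.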

Edited by Alessio1989

The general approach to creating a 2D game using a 3D API is the same as ever -- textured, screen-aligned quads, batching, etc... But the direction D3D12 pushed in -- utmost performance by getting closer to the metal -- isn't really something 2D games need. If this is a learning venture, have fun; if this is a business venture, you'll have a lot more to gain by taking advantage of the larger user base of more-established APIs.
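To make the textured-quad approach concrete, here's a minimal C++ sketch (all names are made up for illustration, not from any real engine or sample): one screen-aligned quad per sprite, with pixel coordinates mapped to clip space the same way a 2D orthographic projection matrix would.

```cpp
#include <array>

// Illustrative sprite vertex: position in pixels, UV in [0, 1].
struct Vertex { float x, y, u, v; };

// Build the four corners of a screen-aligned quad for one sprite.
// Two triangles (e.g. a strip) over these vertices draw the sprite.
std::array<Vertex, 4> MakeSpriteQuad(float x, float y, float w, float h) {
    return {{
        { x,     y,     0.0f, 0.0f }, // top-left
        { x + w, y,     1.0f, 0.0f }, // top-right
        { x,     y + h, 0.0f, 1.0f }, // bottom-left
        { x + w, y + h, 1.0f, 1.0f }, // bottom-right
    }};
}

// Map pixel coordinates to clip space for a given backbuffer size --
// the same mapping a 2D orthographic projection matrix performs.
void PixelToClip(float px, float py, float screenW, float screenH,
                 float& clipX, float& clipY) {
    clipX = px / screenW * 2.0f - 1.0f; // [0, W] -> [-1, 1]
    clipY = 1.0f - py / screenH * 2.0f; // [0, H] -> [1, -1], Y up in clip space
}
```

In an actual renderer you'd write the quads for all sprites sharing a texture into one vertex buffer and draw them with a single call -- the same batching trick 2D engines have used since long before D3D12.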


The general approach to creating a 2D game using a 3D API is the same as ever -- textured, screen-aligned quads, batching, etc... But the direction D3D12 pushed in -- utmost performance by getting closer to the metal -- isn't really something 2D games need. If this is a learning venture, have fun; if this is a business venture, you'll have a lot more to gain by taking advantage of the larger user base of more-established APIs.

 

The only exception I can imagine is mobile games, where low-overhead APIs could potentially improve battery life a lot even for 2D games... But AFAIK there are no D3D12 drivers on Windows 10 Mobile yet...

Edited by Alessio1989


The only exception I can imagine is mobile games, where low-overhead APIs could potentially improve battery life a lot even for 2D games


That's a valid point to consider, though I don't know how much simpler games stand to gain.

One thing to consider more generally is that Direct3D 12 and Vulkan won't raise your frame rate if you are (or would be) GPU-bound. 90% of the benefit is on the CPU side, and the biggest gains there only come if you're willing and able to multithread your engine. In fact, most naive ports from 11 to 12 perform 10-25% *worse* to start, and it takes some moderate refactoring to get back to rough parity in frame rate (with the advantage usually being a higher, more stable minimum frame rate and lower CPU load). It takes significant rework to get the big gains that were promised, and they're mostly only there to be had if you were CPU-bound in the first place.
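As a sketch of where that CPU-side benefit comes from (hypothetical names throughout, with the actual ID3D12GraphicsCommandList calls stubbed out by a counter so the logic runs anywhere): each worker thread records its own slice of the frame's draw calls into its own command list, and only the submission step is serialized, as with ExecuteCommandLists.

```cpp
#include <atomic>
#include <thread>
#include <vector>

// Stand-in for work done per draw call; a real engine would call
// cmdList->DrawIndexedInstanced(...) on a per-thread command list here.
std::atomic<int> recordedDraws{0};

void RecordSlice(int count) {
    for (int i = 0; i < count; ++i)
        recordedDraws.fetch_add(1, std::memory_order_relaxed);
}

// Split totalDraws across `workers` threads, each recording in parallel;
// joining and "submitting" is the only serialized part of the frame.
int RecordFrame(int totalDraws, int workers) {
    std::vector<std::thread> pool;
    int per = totalDraws / workers;
    for (int w = 0; w < workers; ++w) {
        // Last worker takes the remainder so every draw is recorded.
        int count = (w == workers - 1) ? totalDraws - w * per : per;
        pool.emplace_back(RecordSlice, count);
    }
    for (auto& t : pool) t.join();    // all command lists recorded
    return recordedDraws.exchange(0); // "submit" and reset for next frame
}
```

D3D11 can't parallelize this step nearly as well (deferred contexts exist but rarely scale), which is why the D3D12 wins only show up once recording is genuinely spread across threads.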


Completely agree, but do not forget that low-overhead APIs can potentially be a big deal on mobile, and on UMA systems in general, where the TDP is shared/balanced between CPU and iGPU. Reducing CPU overhead can potentially both lower power consumption and increase GPU performance by letting the GPU clock speed rise. Also, Vulkan looks like it has pretty good support for hardware tiled rendering (D3D12 too, but it does not have anything like render passes).

Edited by Alessio1989


Gonna rock the boat a bit here, but I don't think wanting to make a 2D game is a legitimate reason not to use DX12. If you want to write graphics engines, picking up DX12 from scratch isn't going to be easy, but with enough time you'll learn a lot of important things that you basically never will if you constantly shy away because there's an easier way to do things.

UE4 isn't going anywhere any time soon, so the same argument can be made against learning DX11 or pretty much any graphics API. Why bother? In a weird way it creates a vicious cycle: as more new programmers are steered away from DX12, UE4 (and Unity and other big-name engines) become even more entrenched, etc.

 

Of course, if you have no interest in writing your own graphics engine, then don't write your own graphics engine.

Edited by Dingleberry

