  • Similar Content

    • By lubbe75
      As far as I understand, there is no real random or noise function in HLSL.
      I have a big water polygon, and I'd like to fake water wave normals in my pixel shader. I know it's not efficient and the standard way is really to use a pre-calculated noise texture, but anyway...
      Does anyone have any quick and dirty HLSL shader code that fakes water normals and doesn't look too repetitive?
    • By turanszkij
      Hi,
      I finally managed to get my DX11-emulating Vulkan device working, but everything is flipped vertically now because Vulkan has a different clip space. What are the best practices out there for keeping these implementations consistent? I tried using a vertically flipped viewport, and while it works on an Nvidia 1050, the Vulkan debug layer throws error messages saying this is not supported by the spec, so it might not work on other hardware. There is also the possibility of flipping the clip-space Y coordinate before writing it out from the vertex shader, but that requires changing and recompiling every shader. I could also bake it into the camera projection matrices, though I want to avoid that because then I would need to track down everywhere in the engine where I upload matrices... Any chance of an easy extension or something? If not, I will probably go with changing the vertex shaders.
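      For reference, the common workaround is the negative-viewport-height trick: VK_KHR_maintenance1 (promoted to core in Vulkan 1.1) makes a negative viewport height legal and flips the Y axis for you, so no shaders or projection matrices need to change. A minimal sketch, assuming the extension is enabled and using placeholder names for the command buffer and render size:

      // Requires VK_KHR_maintenance1 (core in Vulkan 1.1+), which allows a negative viewport height.
      VkViewport viewport = {};
      viewport.x        = 0.0f;
      viewport.y        = (float)renderHeight;    // start at the bottom edge...
      viewport.width    = (float)renderWidth;
      viewport.height   = -(float)renderHeight;   // ...and flip Y upward, DX11-style
      viewport.minDepth = 0.0f;
      viewport.maxDepth = 1.0f;
      vkCmdSetViewport(commandBuffer, 0, 1, &viewport);

      With the extension enabled the validation layer stops complaining about the flipped viewport; depending on your setup you may also need to adjust frontFace in the rasterization state, since the flip changes screen-space winding.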
    • By NikiTo
      Some people say "discard" has not a positive effect on optimization. Other people say it will at least spare the fetches of textures.
       
      if (color.a < 0.1f)
      {
          discard;    // or: clip(-1);
      }
      // tons of texture reads follow here, and loops too
      Some people say that "discard" will only mask out the output of the pixel shader, while still evaluating all the statements after the "discard" instruction.

      MSDN:
      discard: Do not output the result of the current pixel.
      clip: Discards the current pixel.

      As usual it is unclear, but it suggests that "clip" could discard the whole pixel (maybe stopping execution too).

      I think that, at least for thermal and energy-consumption reasons, the GPU should not evaluate the statements after "discard", but some people on the internet say the GPU computes them anyway. What I am more worried about are the texture fetches after discard/clip.

      (What if, after discard, I have an expensive branch decision that makes the neighboring pixels which took the approved cheap branch stall for nothing? This is crazy.)
    • By NikiTo
      I have a problem. My shaders are huge, meaning they have a lot of code inside. Many of my pixels should be completely discarded. I could use a comparison and discard at the very beginning of the shader, but as far as I understand, the discard statement does not save workload at all, as the pixel has to stall until its long, huge neighbor shaders complete.
      Initially I wanted to use stencil to discard pixels before the execution flow even enters the shader, before the GPU distributes/allocates resources for it, avoiding stalls in the pixel shader execution flow. I assumed that the depth/stencil test discards pixels before the pixel shader, but I see now that it logically happens in the very last Output Merger stage. It seems extremely inefficient to render a little mirror in a scene with a big viewport that way. Why did they put the stencil test in the Output Merger anyway? Handling of stencil is so limited compared to other resources. Do people use stencil functionality at all for games, or do they prefer discard/clip?

      Will the GPU stall the pixel if I issue a discard at the very beginning of the pixel shader, or will it already start using the freed-up resources to render another pixel?
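      For what it's worth, a hedged host-side sketch of the stencil approach in D3D12 (psoDesc and commandList are placeholder names): mark the interesting pixels in the stencil buffer in a cheap pre-pass, then bind the expensive pixel shader with a stencil test of EQUAL so only the marked pixels run it. The test logically belongs to the Output Merger, but typical hardware performs it early, before launching the pixel shader, as long as that shader doesn't write depth or use discard itself.

      // Depth/stencil state for the expensive pass: only pixels whose stencil value
      // equals the reference value reach the pixel shader.
      D3D12_DEPTH_STENCIL_DESC ds = {};
      ds.DepthEnable      = FALSE;
      ds.StencilEnable    = TRUE;
      ds.StencilReadMask  = 0xFF;
      ds.StencilWriteMask = 0x00;                         // read-only in this pass
      ds.FrontFace.StencilFunc        = D3D12_COMPARISON_FUNC_EQUAL;
      ds.FrontFace.StencilPassOp      = D3D12_STENCIL_OP_KEEP;
      ds.FrontFace.StencilFailOp      = D3D12_STENCIL_OP_KEEP;
      ds.FrontFace.StencilDepthFailOp = D3D12_STENCIL_OP_KEEP;
      ds.BackFace = ds.FrontFace;
      psoDesc.DepthStencilState = ds;                     // psoDesc: your D3D12_GRAPHICS_PIPELINE_STATE_DESC

      // At draw time, pixels that fail the early stencil test never launch the huge shader.
      commandList->OMSetStencilRef(1);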



       
    • By Axiverse
      I'm wondering when upload buffers are actually copied to / read by the GPU. Basically I want to pool buffers, and I want to know when I can reuse them and write new data into them.
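      In case it helps, the usual answer is that an upload-heap buffer is read when the command lists that reference it actually execute on the GPU, so it can only be reused once a fence confirms that execution has finished. A rough sketch with placeholder names (commandQueue, commandList, uploadFence, fenceValue, fenceEvent):

      // Submit the work that reads from the upload buffer, then signal a fence behind it.
      ID3D12CommandList* lists[] = { commandList };
      commandQueue->ExecuteCommandLists(1, lists);
      const UINT64 doneValue = ++fenceValue;
      commandQueue->Signal(uploadFence, doneValue);        // fence reaches doneValue when the GPU is done

      // Later, before writing new data into the same upload buffer:
      if (uploadFence->GetCompletedValue() < doneValue)
      {
          // The GPU may still be reading it: wait, or grab another buffer from the pool.
          uploadFence->SetEventOnCompletion(doneValue, fenceEvent);   // fenceEvent from CreateEvent()
          WaitForSingleObject(fenceEvent, INFINITE);
      }
      // Now it is safe to Map() the buffer and overwrite its contents.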

DX12 Specific config on DX12 swapchain for working through RDP


Recommended Posts

Hey Guys,

 

Recently I ran some of my working DX12 programs remotely through Microsoft Remote Desktop (RDP) and found that they crash, probably on the first Present call (the error message I got said nvwgf2umx.dll threw an access violation reading location 0x0000000000000010).

The programs run properly locally and also run properly through Chrome Remote Desktop.

 

My guess is that Chrome Remote Desktop just runs the program locally and sends 'screenshots', while RDP is doing something different that needs extra care when creating the swap chain?

 

Any thoughts? Thanks!
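When the failure is less violent than an outright access violation, checking what Present and the device report can narrow things down. A hedged sketch, assuming the usual swapChain and device objects:

// Check Present's return value instead of assuming it succeeded.
HRESULT hr = swapChain->Present(1, 0);
if (hr == DXGI_ERROR_DEVICE_REMOVED || hr == DXGI_ERROR_DEVICE_RESET)
{
    // The removal reason (e.g. DXGI_ERROR_DEVICE_HUNG, DXGI_ERROR_DRIVER_INTERNAL_ERROR)
    // is often more informative than the crash address.
    HRESULT reason = device->GetDeviceRemovedReason();
    // log `reason`, then tear down and recreate the device and swap chain
}
else if (FAILED(hr))
{
    // handle other Present failures
}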

 


My program works fine over RDP. I just reviewed the commit history of my project and didn't find anything related to bugs when working over RDP.

 

So I can only give general suggestions: check HRESULTs for errors, enable the debug layer if you haven't already, and don't forget that you can't switch to full-screen mode when working over RDP.
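For completeness, a minimal sketch of enabling the debug layer; it has to happen before the device is created, and the DXGI factory can be created with its own debug flag as well (variable names are placeholders):

// Enable the D3D12 debug layer (must be done before D3D12CreateDevice).
ID3D12Debug* debugController = nullptr;
if (SUCCEEDED(D3D12GetDebugInterface(IID_PPV_ARGS(&debugController))))
{
    debugController->EnableDebugLayer();
}

// Optionally create the DXGI factory with debug support too.
IDXGIFactory4* factory = nullptr;
HRESULT hr = CreateDXGIFactory2(DXGI_CREATE_FACTORY_DEBUG, IID_PPV_ARGS(&factory));
// From here on, check every HRESULT (device, queue, swap chain, Present, ...)
// rather than assuming the calls succeeded.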

Edited by red75prime


you can't switch to full-screen mode when working over RDP

 

Thanks for the reply. So it seems RDP does do something different compared to launching the program locally. Also, my debug layer is enabled, and yes, I should check all possible HRESULTs.


So it seems RDP does do something different compared to launching the program locally.

 

It is not an RDP-only thing. If your computer has two video cards and you render on the first but the display is attached to the second, you can't switch the swap chain into full-screen mode either.


It is not an RDP-only thing. If your computer has two video cards and you render on the first but the display is attached to the second, you can't switch the swap chain into full-screen mode either.
 

Thanks red75prime. I am not super familiar with hardware-related swap-chain details. Just curious: what happens when we switch to full-screen? Isn't it just like having a borderless window cover the whole monitor? Why is full-screen only possible when the display is attached to the video card we are rendering on? It sounds like full-screen mode should render faster compared to borderless full-screen...

 

Sorry to dump all these questions on you, I just want to learn.

 

Thanks


Actually, there are two full-screen modes. A borderless full-screen window is one of them, but I had in mind "true" (exclusive) full-screen mode, which is set by passing TRUE to IDXGISwapChain::SetFullscreenState() or by setting the DXGI_SWAP_CHAIN_DESC Windowed field to FALSE when creating the swap chain.

 

"True full-screen mode" can be more efficient by eliminating data transfer from swap-chain buffers to Desktop Window Manager, but there are many details to that.

You should consult https://msdn.microsoft.com/en-us/library/windows/desktop/bb205075(v=vs.85).aspx for up-to-date information on the topic. My knowledge of this is a bit stale.
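To make the two paths concrete, a hedged sketch (swapChainDesc and swapChain are placeholders; see the MSDN link above for the authoritative details):

// Option 1: request exclusive full-screen at creation time.
DXGI_SWAP_CHAIN_DESC swapChainDesc = {};
// ... fill in buffer count, format, OutputWindow, etc. ...
swapChainDesc.Windowed = FALSE;                             // FALSE = "true" full-screen

// Option 2: toggle an existing swap chain at runtime.
HRESULT hr = swapChain->SetFullscreenState(TRUE, nullptr);  // nullptr lets DXGI pick the output
if (FAILED(hr))
{
    // Expected to fail over RDP or when the display hangs off another adapter;
    // fall back to a borderless full-screen window in that case.
}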

