DX12: How can we properly separate the UI thread from the rendering/logic thread?


Hi guys,

I'm sorry if you feel this topic belongs in another forum, but since the problem is Windows-specific, this seemed like a good place to ask.

I have a DX12 application that needs to keep rendering even while its window is being moved or resized. If the Windows message handling lives in your rendering thread, rendering pauses whenever you move or resize the window (the thread can block inside DefWindowProc).

My first approach was to separate the render thread and do all event handling on the UI thread. But data shared by both threads causes sync issues, ImGuiIO for example: ImGui's event handling needs to be called before or after rendering, but since the two run on separate threads, you have to synchronize them at some point, which doesn't seem like an ideal solution to me.

So I tried another approach: keep a separate render thread, and have the UI thread forward the needed messages to the render thread's message queue via PostThreadMessage. This time event handling runs on the same thread as rendering, which looks good. But there is a big problem: the render thread can lose important messages like WM_LBUTTONDOWN/WM_LBUTTONUP if the UI thread posts faster than the render thread can process them, even if I make the UI thread wait for the render thread's ready signal...

Another big issue with the second approach is that not every message can be posted to another thread's message queue. For example, I am using Microsoft's InteractionContext library, which needs to handle WM_POINTER* events, and sadly those events cannot be posted to another thread...

Having had so much trouble separating the UI thread, I feel like I must have done something fundamentally wrong, so I've come here for help: how do you properly separate the UI thread?

Thanks

Just put your window and message loop in a separate thread by themselves, and forward to your user code the things that may be of interest to your app (close requests, fullscreen requests, loss of focus, ...).

I usually have two extra threads: one for the window, and one for a message-only window that handles raw input. That way you never stall on messages, whatever happens in your game/render logic.
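
For illustration, here is a minimal C++/Win32 sketch of that layout, assuming a simple lock-based queue between the two threads; the EventQueue and AppEvent names are hypothetical, not taken from galop1n's actual code:

#include <windows.h>
#include <deque>
#include <mutex>

// One recorded input event, forwarded from the window thread.
struct AppEvent { UINT msg; WPARAM wParam; LPARAM lParam; };

// Thread-safe queue the render thread drains once per frame.
class EventQueue {
    std::mutex mutex_;
    std::deque<AppEvent> queue_;
public:
    void push(const AppEvent& e) {
        std::lock_guard<std::mutex> lock(mutex_);
        queue_.push_back(e);
    }
    std::deque<AppEvent> drain() {
        std::lock_guard<std::mutex> lock(mutex_);
        std::deque<AppEvent> out;
        out.swap(queue_); // hand everything over, leave the queue empty
        return out;
    }
};

EventQueue g_events;

LRESULT CALLBACK WndProc(HWND hwnd, UINT msg, WPARAM wParam, LPARAM lParam) {
    switch (msg) {
    case WM_LBUTTONDOWN: case WM_LBUTTONUP: case WM_MOUSEMOVE:
    case WM_KEYDOWN: case WM_KEYUP: case WM_SIZE:
        g_events.push({ msg, wParam, lParam }); // record only, no heavy work here
        break;
    case WM_DESTROY:
        PostQuitMessage(0);                     // ends the loop in WindowThread
        return 0;
    }
    // DefWindowProc's modal move/resize loop now blocks only this thread.
    return DefWindowProcW(hwnd, msg, wParam, lParam);
}

// Runs on its own thread; a window must be created on the thread that pumps it.
void WindowThread() {
    WNDCLASSW wc = {};
    wc.lpfnWndProc   = WndProc;
    wc.hInstance     = GetModuleHandleW(nullptr);
    wc.lpszClassName = L"Dx12WindowClass";
    RegisterClassW(&wc);
    HWND hwnd = CreateWindowW(wc.lpszClassName, L"App", WS_OVERLAPPEDWINDOW,
                              CW_USEDEFAULT, CW_USEDEFAULT, 1280, 720,
                              nullptr, nullptr, wc.hInstance, nullptr);
    ShowWindow(hwnd, SW_SHOW);
    MSG msg;
    while (GetMessageW(&msg, nullptr, 0, 0)) { // may block; the render thread never does
        TranslateMessage(&msg);
        DispatchMessageW(&msg);
    }
}

The main function would then just launch both threads (e.g. std::thread winThread(WindowThread);) while the render thread runs its own render/present loop and drains the queue once per frame, as in the sketch further down the thread.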



Thanks, galop1n, for such a prompt reply.

Could you elaborate on that a little? Say you have a model-view camera class that responds to mouse events; how does it fit into the two-thread scenario? Do you keep all camera calls on the UI thread and just have the render thread pull the view matrix from it? Or do you put the camera calls on the render thread and have the UI thread send every mouse event over? (You may miss important events when your render thread runs slower than the UI thread.)

Or, if you are familiar with ImGui: how would you use it in this two-thread model? (I found ImGui tricky to use in such cases.)

Thanks


GUIs are usually event driven, so unless the controls are actively being interacted with, they sit in an idle loop. A few applications, if not most, drive their event loop during the idle phase, or drive it actively whenever a control invalidates the view.



Thanks, cgrant.

If I understand correctly, my case is special then, since I use an immediate-mode GUI: ImGui needs its event-handling function called every frame (correct me if I've got that wrong).


Windows messages are not supposed to take long to handle; that is by design, and any long processing should be banned from the message handler. You are perfectly fine recording input events from one rendered frame to the next and resolving the actions in a single aggregate update, once and for good.
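
A sketch of that aggregate update, reusing the hypothetical EventQueue/AppEvent types from the earlier sketch (Camera is a stand-in for your own class): the render thread drains everything recorded since the last frame in one pass, so no events are lost and the message loop never stalls.

#include <windowsx.h> // GET_X_LPARAM / GET_Y_LPARAM (assumes windows.h is included)

// Stand-in for your own camera; updates view state from recorded input.
struct Camera {
    bool dragging = false;
    void onMouseMove(int x, int y) { /* e.g. orbit/pan using x, y deltas */ }
};

void RenderFrame(Camera& camera) {
    // Resolve everything recorded since the previous frame in one pass.
    for (const AppEvent& e : g_events.drain()) {
        switch (e.msg) {
        case WM_LBUTTONDOWN: camera.dragging = true;  break;
        case WM_LBUTTONUP:   camera.dragging = false; break;
        case WM_MOUSEMOVE:
            if (camera.dragging)
                camera.onMouseMove(GET_X_LPARAM(e.lParam), GET_Y_LPARAM(e.lParam));
            break;
        // ImGui fits the same pattern: translate the queued events into
        // ImGuiIO state here, then run ImGui::NewFrame() on this thread.
        }
    }
    // ... record command lists, execute, present ...
}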
