  • Similar Content

    • By lubbe75
      As far as I understand, there is no real random or noise function in HLSL.
      I have a big water polygon, and I'd like to fake water wave normals in my pixel shader. I know it's not efficient and the standard way is really to use a pre-calculated noise texture, but anyway...
      Does anyone have any quick and dirty HLSL shader code that fakes water normals and doesn't look too repetitive?
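      For illustration, a minimal sketch of one common fake: a hash-based pseudo-random value feeding a small sum of sines whose gradient is turned into a normal. The function names and constants below are made up for the example and not tuned for any particular scene:

      // Cheap hash-based pseudo-random value in [0, 1) -- a common trick, not true noise.
      float hash21(float2 p)
      {
          p = frac(p * float2(123.34, 456.21));
          p += dot(p, p + 45.32);
          return frac(p.x * p.y);
      }

      // Fake water normal from a few sine waves with hashed directions and phases.
      // uv is the world-space XZ position on the water polygon, t is time in seconds.
      float3 FakeWaterNormal(float2 uv, float t)
      {
          float2 grad = 0;
          [unroll]
          for (int i = 0; i < 4; i++)
          {
              float2 dir  = normalize(float2(hash21(float2(i, 1)) - 0.5, hash21(float2(i, 7)) - 0.5));
              float freq  = 1.5 + 2.0 * i;
              float amp   = 0.08 / (i + 1);
              float phase = 6.2831 * hash21(float2(i, 13));
              // Accumulate the analytic slope of amp * sin(freq * dot(dir, uv) + 1.3 * t + phase).
              grad += dir * (amp * freq * cos(freq * dot(dir, uv) + 1.3 * t + phase));
          }
          // Interpret grad as the slope of a height field and build a normal from it.
          return normalize(float3(-grad.x, 1.0, -grad.y));
      }

      In the pixel shader you would call something like FakeWaterNormal(worldPos.xz, time) and use the result in place of a sampled normal map.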
    • By turanszkij
      Hi,
      I finally managed to get the DX11-emulating Vulkan device working, but everything is flipped vertically now because Vulkan has a different clip space. What are the best practices out there for keeping these implementations consistent? I tried using a vertically flipped viewport, and while it works on an Nvidia 1050, the Vulkan debug layer is throwing errors that this is not supported by the spec, so it might not work on other hardware. There is also the possibility of flipping the clip-space position's Y coordinate before writing it out from the vertex shader, but that requires changing and recompiling every shader. I could also bake it into the camera projection matrices, though I want to avoid that because then I would need to track down every place in the engine where I upload matrices... Any chance of an easy extension or something? If not, I will probably go with changing the vertex shaders.
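      As a sketch of the vertex-shader option mentioned above (flipping the clip-space Y before writing it out), something along these lines could be appended to each vertex shader; the cbuffer layout and the gFlipClipY flag are purely illustrative:

      cbuffer PerFrame : register(b0)
      {
          float4x4 gWorldViewProj;
          float    gFlipClipY;   // assumed flag: 1.0 for the Vulkan backend, 0.0 for DX11/DX12
      };

      struct VSOut
      {
          float4 pos : SV_Position;
          float2 uv  : TEXCOORD0;
      };

      VSOut main(float3 posOS : POSITION, float2 uv : TEXCOORD0)
      {
          VSOut o;
          o.pos = mul(float4(posOS, 1.0), gWorldViewProj);
          // Flip the clip-space Y coordinate when targeting the Vulkan clipping convention.
          o.pos.y = lerp(o.pos.y, -o.pos.y, gFlipClipY);
          o.uv = uv;
          return o;
      }

      The downside, as noted in the post, is that every vertex shader has to be touched and recompiled.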
    • By NikiTo
      Some people say "discard" has no positive effect on optimization. Other people say it will at least spare the texture fetches.
       
      if (color.a < 0.1f)
      {
          //discard;
          clip(-1);
      }
      // tons of texture reads follow here
      // and loops too
      Some people say that "discard" will only mask out the output of the pixel shader, while still evaluating all the statements after the "discard" instruction.

      MSDN>
      discard: Do not output the result of the current pixel.
      clip: Discards the current pixel.
      <MSDN

      As usual it is unclear, but it suggests that "clip" could discard the whole pixel (maybe stopping execution too).

      I think that, at least for thermal and power-consumption reasons, the GPU should not evaluate the statements after "discard", but some people on the internet say the GPU computes the statements anyway. What I am more worried about are the texture fetches after discard/clip.

      (What if, after discard, I have an expensive branch decision that makes the neighboring pixels that took the approved cheap branch stall for nothing? That seems crazy.)
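      For reference, a minimal sketch of the pattern being discussed: do the cheap read, clip/discard as early as possible, and only then start the expensive fetches. Whether the hardware actually skips the work after the clip for rejected pixels is exactly the open question above; the resource names here are made up:

      Texture2D    gMask      : register(t0);
      Texture2D    gExpensive : register(t1);
      SamplerState gSampler   : register(s0);

      float4 main(float4 pos : SV_Position, float2 uv : TEXCOORD0) : SV_Target
      {
          // Cheap read first, then reject as early as possible.
          float alpha = gMask.Sample(gSampler, uv).a;
          clip(alpha - 0.1f);   // roughly equivalent to: if (alpha < 0.1f) discard;

          // Expensive work only after the early-out; the compiler/driver may or may not
          // skip it for clipped pixels.
          float4 acc = 0;
          [loop]
          for (int i = 0; i < 16; i++)
          {
              acc += gExpensive.SampleLevel(gSampler, uv + 0.01f * i, 0);
          }
          return acc * alpha;
      }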
    • By NikiTo
      I have a problem. My shaders are huge, in the sense that they contain a lot of code. Many of my pixels should be completely discarded. I could use a comparison and a discard at the very beginning of the shader, but as far as I understand, the discard statement does not save any workload at all, since the pixel still has to stall until the long, huge neighboring shaders complete.
      Initially I wanted to use the stencil buffer to discard pixels before the execution flow even enters the shader, before the GPU distributes/allocates resources for it, avoiding the stall of the pixel shaders' execution flow. I assumed that the depth/stencil test discards pixels before the pixel shader, but I see now that it happens in the very last Output Merger stage. Rendering, say, a little mirror in a scene with a big viewport that way seems extremely inefficient. Why did they put the stencil test in the Output Merger anyway? Stencil handling is so limited compared to other resources. Do people use stencil functionality at all for games, or do they prefer discard/clip?

      Will the GPU stall the pixel if I issue a discard at the very beginning of the pixel shader, or will it immediately start using the freed-up resources to render another pixel?
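      One related tool worth mentioning: the SM 5.0 attribute [earlydepthstencil] asks the hardware to run the depth/stencil test before the pixel shader executes, so stencil-rejected pixels never launch the expensive shader body (with restrictions, e.g. it does not make sense together with writing SV_Depth from the shader). A minimal sketch with made-up resource names:

      Texture2D    gAlbedo  : register(t0);
      SamplerState gSampler : register(s0);

      [earlydepthstencil]
      float4 main(float4 pos : SV_Position, float2 uv : TEXCOORD0) : SV_Target
      {
          // Expensive shading that should only run for pixels that pass the stencil test.
          return gAlbedo.Sample(gSampler, uv);
      }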



       
    • By Axiverse
      I'm wondering when upload buffers are actually copied to the GPU. Basically I want to pool buffers, and I'd like to know when I can reuse them and write new data into them.

DX12 GPU video decoder


Recommended Posts

Hey

 

I need to decompress several videos in parallel for in-game content. Preferably the input format should be as compressed as possible while the decompression process doesn't consume too much processing power, so using the GPU seems like a good idea. This needs to run on all PC vendors as well as on XB1, so I don't want to use any Nvidia- or ATI-specific solution. It also needs to support DX12, so I can't use Media Foundation or any closed-source API that is based on DX11.

 

Does anyone know of some open source project or demo/sample I could look into? Preferably I would like something that works straight out of the box, or some code that's well-contained enough that it's easy to port to DX12. It's a big plus if it comes with no fees for commercial use.

 

 

Thanks!


Actually, DX12 is not a very important requirement. If the code is open source I can just convert it myself.

Well, you can always try to use a shader decoder.
Since all DX12 GPUs are SM 5 capable, you can also have some fun with CS 5.0... But I've never tried writing a shader decoder, so I'm not aware of the issues you could encounter (I can guess GPU stalls, hazards, throttling, etc.).
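
For what it's worth, here is a minimal CS 5.0 skeleton of the kind of per-block pass such a shader decoder would be built from. It is not a real codec: the entropy decoding and inverse transform are omitted, and the resource names, cbuffer layout and block size are made up for the example.

// One thread group decodes one 8x8 block; each thread handles one pixel of the block.
StructuredBuffer<int> gCoefficients : register(t0); // 64 coefficients per 8x8 block (assumed layout)
RWTexture2D<float>    gLumaOut      : register(u0); // decoded luma plane

cbuffer DecodeParams : register(b0)
{
    uint gBlocksPerRow;
};

[numthreads(8, 8, 1)]
void main(uint3 groupId : SV_GroupID, uint3 threadId : SV_GroupThreadID)
{
    uint blockIndex = groupId.y * gBlocksPerRow + groupId.x;
    uint coeffIndex = blockIndex * 64 + threadId.y * 8 + threadId.x;

    // Placeholder "decode": just scale the stored value back to [0, 1].
    // A real decoder would run its inverse transform (and earlier stages) here.
    float value = saturate(gCoefficients[coeffIndex] / 255.0f);

    uint2 pixel = groupId.xy * 8 + threadId.xy;
    gLumaOut[pixel] = value;
}

On the CPU side this would be launched with one group per block (Dispatch(blocksPerRow, blocksPerColumn, 1)), with separate passes or an extended kernel for the chroma planes.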

Let me just clarify: I don't expect to gain any particular decoding power from DX12, but since the rest of the engine runs on DX12 it has to be that way.

Sure, using CS seems like a good candidate.
