DX12: D3D12 WARP driver on Windows 7


Recommended Posts

Hello.

For some reason I'd like to use DX12 on Windows 7. I thought I could just use the WARP driver DLL - after all, it's only a software implementation (which would explain why it works on Windows 10 even with WDDM versions below 2.0). I checked with Dependency Walker which libraries I need and grabbed them from a Windows 10 machine, but that isn't enough - those libraries seem to require yet other libraries, and it's hard to tell which ones. Has anyone already figured out how to do this? Or am I totally wrong and is this impossible to achieve?
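A quick sanity check in this direction: rather than copying DLLs around, you can probe at runtime whether the D3D12 runtime is loadable and a device can be created at all. A minimal C++ sketch, untested here, assuming you only need a yes/no answer:

#include <windows.h>
#include <d3d12.h>
#include <cstdio>

// Returns true if d3d12.dll is present and a device on the default adapter could be created.
bool CanCreateD3D12Device()
{
    HMODULE d3d12 = LoadLibraryW(L"d3d12.dll");
    if (!d3d12)
        return false; // No D3D12 runtime on this OS (e.g. stock Windows 7).

    // Resolve D3D12CreateDevice dynamically instead of linking d3d12.lib,
    // so the executable still starts on systems without D3D12.
    auto createDevice = reinterpret_cast<PFN_D3D12_CREATE_DEVICE>(
        GetProcAddress(d3d12, "D3D12CreateDevice"));
    if (!createDevice)
        return false;

    // Passing nullptr for the output pointer only tests whether creation would succeed.
    HRESULT hr = createDevice(nullptr, D3D_FEATURE_LEVEL_11_0,
                              __uuidof(ID3D12Device), nullptr);
    return SUCCEEDED(hr);
}

int main()
{
    std::printf("D3D12 available: %s\n", CanCreateD3D12Device() ? "yes" : "no");
    return 0;
}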


I do not think you can do this. You still need WDDM 2.0 and the related DirectX graphics kernel bits to run a WARP12 adapter device. Some of the bits you need probably require, or are part of, the Windows 10 kernel.

Edited by Alessio1989


> But I'm able to run WARP on an integrated Intel GPU with WDDM 1.3 (on Win10).

 

WDDM is not just the display driver interface; it also involves the compositor (the DWM). Windows 10 comes with a new compositor and a new presentation model. I am also pretty sure you need the corresponding DXGI and graphics kernel (DXGK) bits installed. There is also the new memory residency model, which is not supported under Windows 7/8/8.1.

Do you have any particular - non-subjective - reason not to use Windows 10? (i.e. technical issues or the like?)

Edited by Alessio1989


> But I'm able to run WARP on an integrated Intel GPU with WDDM 1.3 (on Win10).

Considering this, you MIGHT be able to hack enough to get it running on a WDDM 1.3-capable OS. But even then, WDDM 1.3 shipped with Windows 8.1, while Windows 7 only supports up to WDDM 1.1.

The only way to get it to run on Win7 is to heavily reverse engineer the DLLs and hack a lot, until you end up writing your own pseudo-OS layer, like Wine does on Linux. Definitely not something quick or trivial.

Edited by Matias Goldberg


This won't work.

The d3d10warp.dll and even the optional d3d12warp.dll that ship with Windows 10 builds have tight ties to OS components that only exist from Windows 8.1 onwards. We also removed 'old DDI table support' from these drivers to minimize our testing and to drop old code that is no longer used. This means the only DDI tables these Win10 binaries expose are a WDDM 1.3 table (Win 8.1) and a WDDM 2.0 table (Win 10), neither of which will be recognized by the runtime in Windows 7. You can take either of these binaries and run them on Win8.1 (after renaming d3d12warp.dll to d3d10warp.dll), but that still won't give you D3D12 on anything other than Windows 10 - there is a *lot* more to D3D12 in the kernel / runtime / OS.


Thank you, guys. I believe you and will drop that crazy idea.

> Do you have any particular - non-subjective - reason not to use Windows 10? (i.e. technical issues or the like?)

At work I only have Win7, but in my free time I wanted to try/test something.

Well, you can still use a VM and create a WARP device. Obviously the performance will be atrocious if you want to render anything more than a triangle or a single cube, but it should be enough to learn the very basics of the API.
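For reference, explicitly asking DXGI for the WARP adapter and creating the D3D12 device on it looks roughly like this - a minimal sketch, untested here, with error handling reduced to asserts (link against d3d12.lib and dxgi.lib):

#include <cassert>
#include <d3d12.h>
#include <dxgi1_4.h>
#include <wrl/client.h>

using Microsoft::WRL::ComPtr;

int main()
{
    // DXGI factory; IDXGIFactory4 is needed for EnumWarpAdapter.
    ComPtr<IDXGIFactory4> factory;
    HRESULT hr = CreateDXGIFactory1(IID_PPV_ARGS(&factory));
    assert(SUCCEEDED(hr));

    // Ask DXGI explicitly for the WARP (software rasterizer) adapter.
    ComPtr<IDXGIAdapter> warpAdapter;
    hr = factory->EnumWarpAdapter(IID_PPV_ARGS(&warpAdapter));
    assert(SUCCEEDED(hr));

    // Create the D3D12 device on the WARP adapter instead of a hardware GPU.
    ComPtr<ID3D12Device> device;
    hr = D3D12CreateDevice(warpAdapter.Get(), D3D_FEATURE_LEVEL_11_0,
                           IID_PPV_ARGS(&device));
    assert(SUCCEEDED(hr));

    return 0;
}

From there the usual command queue / swap chain setup is the same as on a hardware adapter.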


> Well, you can still use a VM and create a WARP device. Obviously the performance will be atrocious if you want to render anything more than a triangle or a single cube, but it should be enough to learn the very basics of the API.

WARP isn't that bad. Give the VM enough CPUs and it should run pretty well. I've written entire small test apps before with WARP left on by accident and not realised until I was almost done.


> Well, you can still use a VM and create a WARP device. Obviously the performance will be atrocious if you want to render anything more than a triangle or a single cube, but it should be enough to learn the very basics of the API.

> WARP isn't that bad. Give the VM enough CPUs and it should run pretty well. I've written entire small test apps before with WARP left on by accident and not realised until I was almost done.

I remember I was able to run the multi-threading samples at ~1 FPS on an i5 Ivy Bridge laptop using a hypervisor on Windows 8.1 last spring (and it was not even the RTM). Yes, that is not bad at all for learning the very basics of the API. Without a VM in the middle I guess WARP12 runs pretty well on a recent mid-range CPU; it is also suitable for trying heterogeneous multi-adapter coding.

Edited by Alessio1989
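If you do want to experiment with heterogeneous multi-adapter, the starting point is simply enumerating every adapter DXGI exposes (hardware and WARP) and creating a device per adapter. A rough sketch, untested here:

#include <cstdio>
#include <d3d12.h>
#include <dxgi1_4.h>
#include <vector>
#include <wrl/client.h>

using Microsoft::WRL::ComPtr;

int main()
{
    ComPtr<IDXGIFactory4> factory;
    if (FAILED(CreateDXGIFactory1(IID_PPV_ARGS(&factory))))
        return 1;

    std::vector<ComPtr<ID3D12Device>> devices;

    // Walk all adapters: discrete/integrated GPUs plus the software (WARP) adapter.
    ComPtr<IDXGIAdapter1> adapter;
    for (UINT i = 0; factory->EnumAdapters1(i, &adapter) != DXGI_ERROR_NOT_FOUND; ++i)
    {
        DXGI_ADAPTER_DESC1 desc = {};
        adapter->GetDesc1(&desc);

        const bool isSoftware = (desc.Flags & DXGI_ADAPTER_FLAG_SOFTWARE) != 0;
        std::wprintf(L"Adapter %u: %s%s\n", i, desc.Description,
                     isSoftware ? L" (software/WARP)" : L"");

        // Try to create a D3D12 device on this adapter; skip adapters that can't.
        ComPtr<ID3D12Device> device;
        if (SUCCEEDED(D3D12CreateDevice(adapter.Get(), D3D_FEATURE_LEVEL_11_0,
                                        IID_PPV_ARGS(&device))))
            devices.push_back(device);
    }

    std::wprintf(L"Created %zu device(s)\n", devices.size());
    return 0;
}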
