
DX12 D3D12 warp driver on windows 7


Volgogradetzzz    1101

Hello.

 

For various reasons I'd like to use DX12 on Windows 7. I thought I could just use the WARP driver DLL - after all, it's just a software implementation (which explains why it works on Win10 with WDDM lower than 2.0). I checked which libraries I need with Dependency Walker and grabbed them from a Win10 install, but that isn't enough - those libraries seem to require yet more libraries, and it's hard to tell which. Has anybody already figured out how to do this? Or am I totally wrong and is it impossible to achieve?
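For context, on Windows 10 a WARP12 device is created through the normal public API rather than by loading the DLL by hand; here is a minimal sketch of that path (the function name and error handling are mine, not from any sample):

#include <dxgi1_4.h>
#include <d3d12.h>
#include <wrl/client.h>
using Microsoft::WRL::ComPtr;

// Minimal sketch: create a D3D12 device on the WARP (software) adapter.
// On Windows 7 this fails at runtime because the D3D12 runtime and the
// WDDM 2.0 kernel support it relies on are missing.
bool CreateWarpDevice(ComPtr<ID3D12Device>& outDevice)
{
    ComPtr<IDXGIFactory4> factory;
    if (FAILED(CreateDXGIFactory1(IID_PPV_ARGS(&factory))))
        return false;

    // Ask DXGI for the software (WARP) adapter instead of a hardware GPU.
    ComPtr<IDXGIAdapter> warpAdapter;
    if (FAILED(factory->EnumWarpAdapter(IID_PPV_ARGS(&warpAdapter))))
        return false;

    // Create the device on that adapter, exactly as for a hardware one.
    return SUCCEEDED(D3D12CreateDevice(warpAdapter.Get(),
                                       D3D_FEATURE_LEVEL_11_0,
                                       IID_PPV_ARGS(&outDevice)));
}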

Alessio1989    4634

I do not think you can do this. You still need WDDM 2.0 and the related DirectX graphics kernel bits to run a WARP12 adapter device. Some of the bits you need probably require, or are part of, the Windows 10 kernel.


Alessio1989    4634

But I'm able to run WARP on an integrated Intel GPU with WDDM 1.3 (on Win10).

 

WDDM is not just the display driver interface; it also involves the compositor (the DWM). Windows 10 comes with a new compositor and a new presentation model. I am also pretty sure you need the corresponding DXGI and DirectX graphics kernel bits installed. There is also the new memory residency model, which is not supported under Windows 7/8/8.1.
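(To show what that residency model looks like from the API side, here is a rough sketch; the helper name and the 'device'/'heap' arguments are placeholders, not anything from this thread:)

#include <d3d12.h>

// Rough sketch of the explicit residency API in D3D12 (part of the
// WDDM 2.0 memory model): the application can hand memory back to the
// OS and bring it back before it is used again.
void DemoteAndRestore(ID3D12Device* device, ID3D12Heap* heap)
{
    ID3D12Pageable* pageables[1] = { heap };

    // Tell the OS this heap's memory may be paged out.
    device->Evict(1, pageables);

    // ...later, before any resource placed in the heap is used again:
    device->MakeResident(1, pageables);
}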

 

Do you have any particular, non-subjective reason not to use Windows 10 (i.e. technical issues or the like)?


Matias Goldberg    9580

But I'm able to run WARP on an integrated Intel GPU with WDDM 1.3 (on Win10).

Considering this, you MIGHT be able to hack a lot until you get it to run on a WDDM 1.3 capable OS.
But even then, WDDM 1.3 shipped with Windows 8.1; Windows 7 only supports up to WDDM 1.1.

The only way to get it to run on Win 7 is to heavily reverse engineer the DLLs and hack a lot, until you end up writing your own pseudo-OS layer, like Wine does on Linux. Definitely not something quick or trivial.

Andy Glaister    136

This won't work.

 

The d3d10warp.dll, and even the optional d3d12warp.dll, that ship with Windows 10 builds have tight ties to OS components that only exist from Windows 8.1 onwards. We also removed 'old DDI table support' from these drivers to minimize our testing and drop old code that is no longer used. This means the only DDI tables these Win10 binaries expose are a WDDM 1.3 table (Win 8.1) and a WDDM 2.0 table (Win 10), neither of which will be recognized by the runtime in Windows 7. You can take either of these binaries and run them on Win 8.1 (after renaming d3d12warp.dll to d3d10warp.dll) - but that still won't give you D3D12 on anything other than Windows 10; there is a *lot* more to D3D12 in the kernel / runtime / OS.

Volgogradetzzz    1101

Thank you, guys. I believe you and will drop that crazy idea.

 


Do you have any particular, non-subjective reason not to use Windows 10 (i.e. technical issues or the like)?

At work I only have Win7, but I wanted to try and test something in my free time.

Alessio1989    4634
Well, you can still use a VM and create a WARP device. Obviously the performance will be atrocious if you want to create anything that is not a triangle or a single cube, but it should be enough to learn the very basics of the API.

ajmiles    3319

Well, you can still use a VM and create a WARP device. Obviously the performance will be atrocious if you want to create anything that is not a triangle or a single cube, but it should be enough to learn the very basics of the API.

WARP isn't that bad. Give the VM enough CPUs and it should run pretty well. I've written entire small test apps before with WARP left on by accident and not realised it until I was almost done.
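For anyone who wants to avoid the same accident, here is a rough sketch of one way to detect that a device ended up on WARP; it assumes you still have the IDXGIFactory4 the device was created from, and the helper name is made up:

#include <dxgi1_4.h>
#include <d3d12.h>
#include <wrl/client.h>
using Microsoft::WRL::ComPtr;

// Rough sketch: returns true if 'device' was created on the software
// (WARP) adapter, e.g. to log a warning during development.
bool IsRunningOnWarp(ID3D12Device* device, IDXGIFactory4* factory)
{
    // Look up the adapter the device was created on by its LUID.
    ComPtr<IDXGIAdapter1> adapter;
    if (FAILED(factory->EnumAdapterByLuid(device->GetAdapterLuid(),
                                          IID_PPV_ARGS(&adapter))))
        return false;

    DXGI_ADAPTER_DESC1 desc = {};
    adapter->GetDesc1(&desc);

    // WARP reports itself as a software adapter
    // ("Microsoft Basic Render Driver").
    return (desc.Flags & DXGI_ADAPTER_FLAG_SOFTWARE) != 0;
}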

Alessio1989    4634

Well, you can still use a VM and create a WARP device. Obviously the performance will be atrocious if you want to create anything that is not a triangle or a single cube, but it should be enough to learn the very basics of the API.

WARP isn't that bad. Give the VM enough CPUs and it should run pretty well. I've written entire small test apps before with WARP left on by accident and not realised it until I was almost done.

I remember being able to run the multi-threading samples at ~1 FPS on an i5 Ivy Bridge laptop under a hypervisor on Windows 8.1 last spring (and it was not even the RTM build).
Yes, that is not bad at all for learning the very basics of the API. Without a VM in the middle, I guess WARP12 runs pretty well on a recent mid-range CPU; it is also suitable for trying heterogeneous multi-adapter coding.
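As a rough sketch of that kind of heterogeneous setup (assuming a machine with at least one hardware adapter; the function name and error handling are mine, not from any sample):

#include <dxgi1_4.h>
#include <d3d12.h>
#include <wrl/client.h>
using Microsoft::WRL::ComPtr;

// Rough sketch of a heterogeneous multi-adapter setup: one device on the
// first hardware adapter found, a second one on WARP. Error handling omitted.
void CreateHeterogeneousDevices(ComPtr<ID3D12Device>& hwDevice,
                                ComPtr<ID3D12Device>& warpDevice)
{
    ComPtr<IDXGIFactory4> factory;
    CreateDXGIFactory1(IID_PPV_ARGS(&factory));

    // Pick the first adapter that is not the software one.
    ComPtr<IDXGIAdapter1> hwAdapter;
    for (UINT i = 0;
         factory->EnumAdapters1(i, &hwAdapter) != DXGI_ERROR_NOT_FOUND;
         ++i)
    {
        DXGI_ADAPTER_DESC1 desc = {};
        hwAdapter->GetDesc1(&desc);
        if ((desc.Flags & DXGI_ADAPTER_FLAG_SOFTWARE) == 0)
            break; // keep this hardware adapter
    }
    D3D12CreateDevice(hwAdapter.Get(), D3D_FEATURE_LEVEL_11_0,
                      IID_PPV_ARGS(&hwDevice));

    // The WARP adapter becomes the second, CPU-based device.
    ComPtr<IDXGIAdapter> warpAdapter;
    factory->EnumWarpAdapter(IID_PPV_ARGS(&warpAdapter));
    D3D12CreateDevice(warpAdapter.Get(), D3D_FEATURE_LEVEL_11_0,
                      IID_PPV_ARGS(&warpDevice));
}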
