DX11: Fullscreen Direct3D 11 multi-device (GPU), multiple swap chains per device rendering


In my application I seem unable to get more than one device (GPU) into fullscreen mode at the same time. After much testing and googling I still remain puzzled. Windowed mode is not an option for me, and multi-device rendering is essential. I am using Windows 8.
 
I can get multiple swap chains on a single device to go fullscreen at the same time, but as soon as more than one device goes fullscreen, only one of them actually remains in fullscreen mode.
 
I also found a blog post describing the same problem.
 
Is it possible to achieve fullscreen rendering on multiple devices using DirectX 11 on Windows 8? If yes, how would that be achieved? If not, what's the rationale behind it?
 
I also found a post stating that DXGI 1.1 has issues with multi-device fullscreen rendering, but I have not found any information on whether this has been fixed in DXGI 1.2 or 1.3.
 
 
Looking for solutions for DirectX 11, I found a webpage stating that DirectX 9 can be used in fullscreen mode on multiple devices if certain restrictions are met:
 
"... The practical implication is that a multiple monitor application can place several devices in full-screen mode, but only if all these devices are for different adapters, were created by the same Direct3D9 object, and all share the same focus window..."
 
 
Are there similar restrictions for DX11 on Windows 8, and if so, how can they be met?
 
If fullscreen rendering on multiple devices is not possible using DirectX 11 (and if so, I would like to know why), would it be possible to interop between DX9 and DX11 to get it working?
 
I also read in a blog that Windows Vista might work with multiple fullscreen devices since it uses DXGI 1.0. Could this be correct? 
 
 
I would really appreciate some comments on this issue.
 
Best regards,
Jørn Skaarud Karlsen

Fake it with fullscreen borderless windows.

From your links it seems to have been this way for a few years, so I doubt it will start working again any time soon.

 

Vista does work, I remember the DX multimonitor sample working correctly back then.

Edited by Erik Rufelt

One way to handle this is to call ChangeDisplaySettingsEx to change each monitor's screen resolution, then create a borderless window that is the same size as the display mode you just set. This simulates exactly what DirectX does when you go fullscreen. At my job we have to handle 4 monitors in fullscreen, and that is how I did it.
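 
Roughly, the idea looks like this (a simplified sketch, not my exact code; the window class is assumed to be registered elsewhere, and x/y come from the target monitor's position on the desktop):
 
// Sketch: force a display mode with ChangeDisplaySettingsEx, then cover that
// monitor with a borderless topmost window of the same size.
#include <windows.h>
 
void EnterFakeFullscreen(const wchar_t* deviceName, // e.g. "\\.\DISPLAY1" from EnumDisplayDevices
                         int x, int y, int width, int height, int hz)
{
    DEVMODEW dm = {};
    dm.dmSize             = sizeof(dm);
    dm.dmPelsWidth        = width;
    dm.dmPelsHeight       = height;
    dm.dmDisplayFrequency = hz;
    dm.dmFields           = DM_PELSWIDTH | DM_PELSHEIGHT | DM_DISPLAYFREQUENCY;
 
    // Change the mode of this specific monitor only.
    ChangeDisplaySettingsExW(deviceName, &dm, nullptr, CDS_FULLSCREEN, nullptr);
 
    // Borderless popup window covering the monitor.
    HWND hwnd = CreateWindowExW(WS_EX_TOPMOST, L"MyRenderWindowClass", L"",
                                WS_POPUP | WS_VISIBLE,
                                x, y, width, height,
                                nullptr, nullptr, GetModuleHandleW(nullptr), nullptr);
 
    // ... create the swap chain for hwnd and render as usual ...
    (void)hwnd;
}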

Hi guys, thank you for the info!

 

Long story short: I am working with a special display that requires properly vsync'ed images running at 184 Hz per output, and to achieve a good update rate I need to use up to 8 outputs to feed information to the display. In windowed mode I have not been able to achieve properly vsync'ed images at 184 Hz across multiple outputs. The display is very sensitive to vsync; fullscreen seems to work, but windowed mode does not.

 

As I see it, my options are:
 
1) Get properly vsync'ed and refresh-rate-synchronized images from windowed mode - which I have not been able to achieve so far.
2) Try using DX11/DX9 interop, since DX9 might (?) be able to achieve fullscreen mode on several GPUs.
3) Go back to Windows Vista - not really a solution since it is not future proof.
4) Try using GPUs in SLI - this is suboptimal since modern graphics cards only support 4 outputs, whereas I would like to use all 8 inputs on the 3D display, and it would involve moving data between GPUs at high refresh rates, reducing performance.
5) Use more computers with one GPU each (very much not an ideal solution).
 
If you know how to get 1) working, that would be highly appreciated, since the ideal solution of fullscreen mode does not seem to be achievable. (Windowed mode used to work on Windows XP, so it has been done with an older version, but I have not been able to get it working properly on Windows 7/8, maybe due to DWM or other reasons.)
 
Number 2) is also an option if it works. I would consider this a valid solution to my problem - though it is a hack.
 
Any last comments on this issue?


Try calling Flush() before Presenting. Also try using SetMaximumFrameLatency, and check the other available flags for swap chains and devices. It is difficult to get proper vsync in windowed mode, and I believe part of the reason is that frame buffering is disabled, though desktop composition could be an even bigger factor.
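 
Something along these lines (just a sketch, assuming an already-created device, immediate context and swap chain):
 
// Sketch: cap DXGI's frame queue once at startup, then flush before every Present.
#include <d3d11.h>
#include <dxgi.h>
 
void SetupLowLatency(ID3D11Device* device)
{
    // Limit how many frames DXGI may queue ahead (1 = lowest latency).
    IDXGIDevice1* dxgiDevice = nullptr;
    if (SUCCEEDED(device->QueryInterface(__uuidof(IDXGIDevice1),
                                         reinterpret_cast<void**>(&dxgiDevice))))
    {
        dxgiDevice->SetMaximumFrameLatency(1);
        dxgiDevice->Release();
    }
}
 
void PresentFrame(ID3D11DeviceContext* context, IDXGISwapChain* swapChain)
{
    // Push all queued commands to the GPU before presenting...
    context->Flush();
    // ...then present with vsync (sync interval 1).
    swapChain->Present(1, 0);
}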

Also try using one device per graphics card but multiple swap-chains, or the other way around.

 

2) can probably work. It should be fairly easy to test DX9, and if it works, render to a shared texture in DX11. I know OpenGL can have multiple fullscreen windows (on a single graphics card with multiple monitor outputs).
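 
If you try it, the sharing part would look roughly like this (a sketch only; the format and size are placeholders and error handling is left out):
 
// Sketch: render in D3D11 into a shared texture, open the same surface on a
// D3D9Ex device that owns the fullscreen output.
#include <d3d11.h>
#include <d3d9.h>
#include <dxgi.h>
 
void CreateSharedSurface(ID3D11Device* d3d11Device,
                         IDirect3DDevice9Ex* d3d9Device,
                         UINT width, UINT height,
                         ID3D11Texture2D** outD3D11Tex,
                         IDirect3DTexture9** outD3D9Tex)
{
    // D3D11 side: a shareable render target (BGRA matches D3DFMT_A8R8G8B8).
    D3D11_TEXTURE2D_DESC desc = {};
    desc.Width            = width;
    desc.Height           = height;
    desc.MipLevels        = 1;
    desc.ArraySize        = 1;
    desc.Format           = DXGI_FORMAT_B8G8R8A8_UNORM;
    desc.SampleDesc.Count = 1;
    desc.Usage            = D3D11_USAGE_DEFAULT;
    desc.BindFlags        = D3D11_BIND_RENDER_TARGET | D3D11_BIND_SHADER_RESOURCE;
    desc.MiscFlags        = D3D11_RESOURCE_MISC_SHARED;
    d3d11Device->CreateTexture2D(&desc, nullptr, outD3D11Tex);
 
    // Get the shared handle from the DXGI side of the texture.
    IDXGIResource* dxgiRes = nullptr;
    HANDLE sharedHandle = nullptr;
    (*outD3D11Tex)->QueryInterface(__uuidof(IDXGIResource),
                                   reinterpret_cast<void**>(&dxgiRes));
    dxgiRes->GetSharedHandle(&sharedHandle);
    dxgiRes->Release();
 
    // D3D9Ex side: passing a non-null *pSharedHandle opens the existing resource.
    d3d9Device->CreateTexture(width, height, 1, D3DUSAGE_RENDERTARGET,
                              D3DFMT_A8R8G8B8, D3DPOOL_DEFAULT,
                              outD3D9Tex, &sharedHandle);
 
    // The D3D9Ex device can then StretchRect this texture's surface to its
    // fullscreen backbuffer and Present it.
}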

 

 

Actually this might just be a case of bad docs. I didn't run any proper tests, but based on the focus-window docs for D3D9Ex I ran a quick experiment with MakeWindowAssociation on the DXGI factory, and it seems there can be two different fullscreen windows simultaneously.
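 
Something along these lines (a sketch only; the exact flags, and whether they are what makes two fullscreen windows possible, are assumptions):
 
// Sketch: stop DXGI from monitoring the windows' focus/Alt-Enter transitions,
// then put each swap chain (one per device/adapter) into fullscreen explicitly.
#include <dxgi.h>
 
void TryDualFullscreen(IDXGIFactory* factory,
                       IDXGISwapChain* swapChainA, HWND hwndA,
                       IDXGISwapChain* swapChainB, HWND hwndB)
{
    // Keep DXGI from reacting to window messages on these windows
    // (no automatic Alt-Enter handling, no mode changes behind our back).
    factory->MakeWindowAssociation(hwndA, DXGI_MWA_NO_WINDOW_CHANGES | DXGI_MWA_NO_ALT_ENTER);
    factory->MakeWindowAssociation(hwndB, DXGI_MWA_NO_WINDOW_CHANGES | DXGI_MWA_NO_ALT_ENTER);
 
    // Enter fullscreen on both swap chains ourselves.
    swapChainA->SetFullscreenState(TRUE, nullptr);
    swapChainB->SetFullscreenState(TRUE, nullptr);
}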


This is a bit of a long shot, but you might be able to get option 1 working in Windows 8 (and possibly not 8.1). It depends on a couple of things:

1. Is the problem with windowed mode that your application image is tearing, that is, not actually vsync'd?
2. Is the desktop itself vsync'd fine at the same time, i.e. if you drag a File Explorer window around, does it display normally without vsync problems?

If that is the case, the windowed-mode problem may be caused by your application's backbuffer presents being out of sync with the DWM's presentation to the display. By default a windowed app's backbuffer is blitted to the DWM surface (see: http://msdn.microsoft.com/en-us/library/windows/hardware/ff557525(v=vs.85).aspx ), and that blit isn't perfectly synced to the desktop's output to the display.

With DXGI 1.2 in Windows 8 they added a newer backbuffer presentation mode, the flip model, which shares the application backbuffer with the DWM rather than copying (blitting) it. You can find information on how to use it here: http://msdn.microsoft.com/en-us/library/windows/desktop/hh706346(v=vs.85).aspx .

Essentially you have to do the following (a rough sketch follows the list):

- Create a DXGI 1.2 factory.

- Create the device and swapchain separately rather than with a single function (use D3D11CreateDevice() and factory->CreateSwapChainForHwnd() with the newer DXGI_SWAP_CHAIN_DESC1 struct).

- Use a minimum of 2 buffers in the swapchain desc, along with SwapEffect = DXGI_SWAP_EFFECT_FLIP_SEQUENTIAL.

- Don't use an MSAA backbuffer surface directly (if you need MSAA, use a separate render target for the resolve).

- Don't use more than one flip mode swapchain per hwnd.
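 
In code, that setup looks roughly like this (a sketch assuming a single hwnd and the default adapter; error handling omitted):
 
// Sketch: D3D11 device + DXGI 1.2 flip-model swap chain, created separately.
#include <d3d11.h>
#include <dxgi1_2.h>
 
void CreateFlipModelSwapChain(HWND hwnd, UINT width, UINT height,
                              ID3D11Device** outDevice,
                              ID3D11DeviceContext** outContext,
                              IDXGISwapChain1** outSwapChain)
{
    // Create the device first (no swap chain yet).
    D3D11CreateDevice(nullptr, D3D_DRIVER_TYPE_HARDWARE, nullptr, 0,
                      nullptr, 0, D3D11_SDK_VERSION,
                      outDevice, nullptr, outContext);
 
    // Walk device -> adapter -> DXGI 1.2 factory.
    IDXGIDevice*   dxgiDevice  = nullptr;
    IDXGIAdapter*  dxgiAdapter = nullptr;
    IDXGIFactory2* factory     = nullptr;
    (*outDevice)->QueryInterface(__uuidof(IDXGIDevice),
                                 reinterpret_cast<void**>(&dxgiDevice));
    dxgiDevice->GetAdapter(&dxgiAdapter);
    dxgiAdapter->GetParent(__uuidof(IDXGIFactory2),
                           reinterpret_cast<void**>(&factory));
 
    // Flip-model swap chain: at least 2 buffers, no MSAA on the backbuffer.
    DXGI_SWAP_CHAIN_DESC1 desc = {};
    desc.Width            = width;
    desc.Height           = height;
    desc.Format           = DXGI_FORMAT_B8G8R8A8_UNORM;
    desc.SampleDesc.Count = 1;                    // no MSAA here; resolve separately
    desc.BufferUsage      = DXGI_USAGE_RENDER_TARGET_OUTPUT;
    desc.BufferCount      = 2;                    // minimum of 2 for flip model
    desc.SwapEffect       = DXGI_SWAP_EFFECT_FLIP_SEQUENTIAL;
 
    factory->CreateSwapChainForHwnd(*outDevice, hwnd, &desc,
                                    nullptr, nullptr, outSwapChain);
 
    factory->Release();
    dxgiAdapter->Release();
    dxgiDevice->Release();
}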

That last requirement might be a problem for you, assuming you're currently using a single window and splitting its client area amongst the many swapchains. I'd assume you could get around this by giving each swapchain its own hwnd, then sizing each window to match its swapchain's display area and making it borderless. Basically, tiling multiple borderless hwnds to create the same appearance as a single large maximized borderless window.

Flip mode should solve any windowed vsync presentation issues, but the DWM might still cause an issue by itself not refreshing at the full display rate (i.e. the desktop surface only refreshing every second frame, at 92Hz). It might not be a problem in the first place, but if it is, apparently you can override the DWM refresh rate with a function called DwmSetPresentParameters ( http://msdn.microsoft.com/en-us/library/windows/desktop/aa969523(v=vs.85).aspx ).
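 
I haven't used it myself, but a call would look roughly like this (a sketch; which DWM_PRESENT_PARAMETERS fields actually matter for forcing the rate is an assumption on my part, so check the linked docs):
 
// Sketch: ask the DWM to use a specific source rate for this window.
#include <windows.h>
#include <dwmapi.h>
#pragma comment(lib, "dwmapi.lib")
 
void RequestDwmSourceRate(HWND hwnd, UINT32 refreshHz)
{
    DWM_PRESENT_PARAMETERS pp = {};
    pp.cbSize                   = sizeof(pp);
    pp.fQueue                   = TRUE;       // use queued presentation
    pp.cBuffer                  = 2;          // number of queued buffers (assumed)
    pp.fUseSourceRate           = TRUE;       // honor the rate below
    pp.rateSource.uiNumerator   = refreshHz;  // e.g. 184
    pp.rateSource.uiDenominator = 1;
    pp.cRefreshesPerFrame       = 1;          // present every refresh
    pp.eSampling                = DWM_SOURCE_FRAME_SAMPLING_POINT;
 
    DwmSetPresentParameters(hwnd, &pp);
}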

That function is only available in Windows 8 and not Windows 8.1, so whether or not you need it limits your choice between the two OSes that support the flip-model backbuffer presentation method.

Edited by backstep
