DX11 [SlimDX] Issues with KeyedMutex and WPF

I am porting my graphics system over to work inside a WPF application using the D3DImage, and I am running into an issue. My existing system uses DX11 for 3D rendering and Direct2D and DirectWrite to generate overlaid 2D content. This interop is handled via a shared texture and coordinated with keyed mutexes.

Currently, the port uses a modified version of the WpfSample10 D3DImageSlimDX, adapted to work with DirectX 11. Everything works fine when rendering the 3D scene. However, when I introduce the shared texture and KeyedMutex, I get some strange behavior. It seems that any 3D commands issued after first acquiring the KeyedMutex for the texture only get executed in roughly 50% of the frames, resulting in a blinking 3D scene. I have modified the WpfSample10 to show the minimum code necessary to reproduce this issue. [url="http://www.youtube.com/watch?v=K_BGJgbOZIw"]Here[/url] is a video that shows the blinking problem. If I move the KeyedMutex acquire and release to after the scene is rendered, or remove it altogether, everything works, as seen [url="http://www.youtube.com/watch?v=3h5oyuaNb-k"]here[/url]. I have tried several permutations, including drawing 3D objects before and after the acquire and release, and it always results in the same behavior: anything drawn after the acquire and release has about a 50% chance of showing up in the final frame.

The effect is easy to reproduce; just change the WpfSample10 to use a DirectX 11 device instead. Below are the changes I made to add the new shared texture and the KeyedMutex, as well as the render code (I checked the return values of the acquire and release, and they are always S_OK):

Initialization
[source lang="csharp"]
void InitD3D()
{
D3DDevice = new Direct3D11.Device(Direct3D11.DriverType.Hardware, Direct3D11.DeviceCreationFlags.Debug | Direct3D11.DeviceCreationFlags.BgraSupport, Direct3D11.FeatureLevel.Level_10_1);

Direct3D11.Texture2DDescription colordesc = new Direct3D11.Texture2DDescription();
colordesc.BindFlags = Direct3D11.BindFlags.RenderTarget | Direct3D11.BindFlags.ShaderResource;
colordesc.Format = DXGI.Format.B8G8R8A8_UNorm;
colordesc.Width = WindowWidth;
colordesc.Height = WindowHeight;
colordesc.MipLevels = 1;
colordesc.SampleDescription = new DXGI.SampleDescription(1, 0);
colordesc.Usage = Direct3D11.ResourceUsage.Default;
colordesc.OptionFlags = Direct3D11.ResourceOptionFlags.Shared;
colordesc.CpuAccessFlags = Direct3D11.CpuAccessFlags.None;
colordesc.ArraySize = 1;

Direct3D11.Texture2DDescription depthdesc = new Direct3D11.Texture2DDescription();
depthdesc.BindFlags = Direct3D11.BindFlags.DepthStencil;
depthdesc.Format = DXGI.Format.D32_Float_S8X24_UInt;
depthdesc.Width = WindowWidth;
depthdesc.Height = WindowHeight;
depthdesc.MipLevels = 1;
depthdesc.SampleDescription = new DXGI.SampleDescription(1, 0);
depthdesc.Usage = Direct3D11.ResourceUsage.Default;
depthdesc.OptionFlags = Direct3D11.ResourceOptionFlags.None;
depthdesc.CpuAccessFlags = Direct3D11.CpuAccessFlags.None;
depthdesc.ArraySize = 1;

Direct3D11.Texture2DDescription testdesc = new Direct3D11.Texture2DDescription();
testdesc.BindFlags = Direct3D11.BindFlags.RenderTarget | Direct3D11.BindFlags.ShaderResource;
testdesc.Format = DXGI.Format.B8G8R8A8_UNorm;
testdesc.Width = WindowWidth;
testdesc.Height = WindowHeight;
testdesc.MipLevels = 1;
testdesc.SampleDescription = new DXGI.SampleDescription(1, 0);
testdesc.Usage = Direct3D11.ResourceUsage.Default;
testdesc.OptionFlags = Direct3D11.ResourceOptionFlags.KeyedMutex;
testdesc.CpuAccessFlags = Direct3D11.CpuAccessFlags.None;
testdesc.ArraySize = 1;

using (var bytecode = D3DCompiler.ShaderBytecode.CompileFromFile(@"Shaders\MiniTri.fx", "fx_5_0", D3DCompiler.ShaderFlags.Debug, D3DCompiler.EffectFlags.None))
{
SampleEffect = new Direct3D11.Effect(D3DDevice, bytecode);
}

SharedTexture = new Direct3D11.Texture2D(D3DDevice, colordesc);
DepthTexture = new Direct3D11.Texture2D(D3DDevice, depthdesc);
//new texture to use for sharing if we can get the mutex issue resolved
TestTexture = new Direct3D11.Texture2D(D3DDevice, testdesc);
SampleRenderView = new Direct3D11.RenderTargetView(D3DDevice, SharedTexture);
SampleDepthView = new Direct3D11.DepthStencilView(D3DDevice, DepthTexture);

//new mutex defined as private class field
mutex = new DXGI.KeyedMutex(TestTexture);

//SampleEffect = Direct3D11.Effect.(D3DDevice, "MiniTri.fx", "fx_4_0");
Direct3D11.EffectTechnique technique = SampleEffect.GetTechniqueByIndex(0);
Direct3D11.EffectPass pass = technique.GetPassByIndex(0);
SampleLayout = new Direct3D11.InputLayout(D3DDevice, pass.Description.Signature, new[] {
new Direct3D11.InputElement("POSITION", 0, DXGI.Format.R32G32B32A32_Float, 0, 0),
new Direct3D11.InputElement("COLOR", 0, DXGI.Format.R32G32B32A32_Float, 16, 0)
});

SampleStream = new DataStream(3 * 32, true, true);
SampleStream.WriteRange(new[] {
new Vector4(0.0f, 0.5f, 0.5f, 1.0f), new Vector4(1.0f, 0.0f, 0.0f, 1.0f),
new Vector4(0.5f, -0.5f, 0.5f, 1.0f), new Vector4(0.0f, 1.0f, 0.0f, 1.0f),
new Vector4(-0.5f, -0.5f, 0.5f, 1.0f), new Vector4(0.0f, 0.0f, 1.0f, 1.0f)
});
SampleStream.Position = 0;

SampleVertices = new Direct3D11.Buffer(D3DDevice, SampleStream, new Direct3D11.BufferDescription()
{
BindFlags = Direct3D11.BindFlags.VertexBuffer,
CpuAccessFlags = Direct3D11.CpuAccessFlags.None,
OptionFlags = Direct3D11.ResourceOptionFlags.None,
SizeInBytes = 3 * 32,
Usage = Direct3D11.ResourceUsage.Default
});

D3DDevice.ImmediateContext.Flush();
}
[/source]

Rendering
[source lang="csharp"]
public void Render(int arg)
{
D3DDevice.ImmediateContext.OutputMerger.SetTargets(SampleDepthView, SampleRenderView);
D3DDevice.ImmediateContext.Rasterizer.SetViewports(new Direct3D11.Viewport(0, 0, WindowWidth, WindowHeight, 0.0f, 1.0f));

D3DDevice.ImmediateContext.ClearDepthStencilView(SampleDepthView, Direct3D11.DepthStencilClearFlags.Depth | Direct3D11.DepthStencilClearFlags.Stencil, 1.0f, 0);
float c = ((float)(arg % 1000)) / 999.0f;
D3DDevice.ImmediateContext.ClearRenderTargetView(SampleRenderView, new SlimDX.Color4(1.0f, c, c, c));

D3DDevice.ImmediateContext.InputAssembler.InputLayout = SampleLayout;
D3DDevice.ImmediateContext.InputAssembler.PrimitiveTopology = Direct3D11.PrimitiveTopology.TriangleList;
D3DDevice.ImmediateContext.InputAssembler.SetVertexBuffers(0, new Direct3D11.VertexBufferBinding(SampleVertices, 32, 0));

mutex.Acquire(0, int.MaxValue);
mutex.Release(0);

Direct3D11.EffectTechnique technique = SampleEffect.GetTechniqueByIndex(0);
Direct3D11.EffectPass pass = technique.GetPassByIndex(0);

for (int i = 0; i < technique.Description.PassCount; ++i)
{
pass.Apply(D3DDevice.ImmediateContext);
D3DDevice.ImmediateContext.Draw(3, 0);
}

D3DDevice.ImmediateContext.Flush();
}
[/source]

Any ideas?

I tried the sample again on a different machine using feature level 11, and the problem persists, although it is much less noticeable (losing maybe 10% of frames).

Is there any reason why this should not conceptually work? Any suggestions on how I can synchronize DX11 and Direct2D some other way?

Edit: Let me clarify the usage scenario here...

According to [url="http://www.gamedev.net/topic/547920-how-to-use-d2d-with-d3d11/"]this[/url] thread, the KeyedMutex can be used to synchronize access to a shared resource so that you can do the following:

[source lang="csharp"]
mutexD3D10.Acquire(0, int.MaxValue);
// render Direct2D stuff here
mutexD3D10.Release(1);

mutexD3D11.Acquire(1, int.MaxValue);
// render shared resource to 3D scene with Direct3D 11
mutexD3D11.Release(0);
[/source]

I have this working flawlessly in a standard WinForms-hosted 3D application. I am now porting my code to be used in a WPF D3DImage, and, just as above, once the mutex is acquired on the Direct3D 11 resource, anything drawn afterwards in the frame (prior to the device flush) is randomly not drawn. In my case, that means my 2D overlay rapidly blinks on top of my 3D scene.

I think we'll need more code to work out what the problem is. The code blocks in the first post don't show the usage scenario that you describe in the second post. Otherwise, it sounds like you have the right approach in mind. Be sure that the keyed mutex is used around every access to the shared keyed mutex resource:


create the keyed-mutex shared render target in D3D11.
get a D2D handle from the above render target.
get the keyed mutex for both versions of the render target.



[source lang="csharp"]
mutex10.Acquire(0);
// D2D or other device rendering
mutex10.Release(1);

mutex11.Acquire(1);
// D3D11 rendering
// then do one of the following:
// 1. use as shader resource
// 2. copy to swapchain back buffer
// 3. call present if this is the swapchain's buffer
mutex11.Release(0);
[/source]

I don't know whether you're having both devices render to the same shared render target, or whether one is rendering to it and the other is using it as a shader resource for drawing. Either way, provided that the resource is always protected while being used, there shouldn't be any sync issues like the ones you're describing.
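
To make option 1 concrete, the D3D11 consumer side would look something like the sketch below in SlimDX. This is untested and borrows names from your sample; sharedTexture11 and mutex11 stand in for the D3D11-side view of the shared keyed-mutex resource and its mutex, and binding to pixel shader slot 0 is just one possible way to feed the texture to your shader.

[source lang="csharp"]
// Created once: a shader view of the keyed-mutex texture on the D3D11 device.
// (sharedTexture11 is a placeholder for however you hold the D3D11-side texture.)
var overlayView = new Direct3D11.ShaderResourceView(D3DDevice, sharedTexture11);

// Per frame, after the other device has released the resource with key 1:
mutex11.Acquire(1, int.MaxValue);

// Bind the shared texture and draw it over the 3D scene (e.g. a full-screen quad).
D3DDevice.ImmediateContext.PixelShader.SetShaderResource(overlayView, 0);
// ... draw call(s) that sample the overlay ...

// Hand ownership back to the other device.
mutex11.Release(0);
[/source]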

The code in the first post is just a minimal reproduction of the issue. The actual usage scenario is a texture shared between a Direct3D 11 device and a Direct3D 10.1 device (created by Direct3D 11 and opened on Direct3D 10 via a DXGI resource, exactly as presented in [url="http://www.gamedev.net/topic/547920-how-to-use-d2d-with-d3d11/"]this[/url] thread). Then, after the 3D scene is rendered to the color buffer by Direct3D 11, I basically do what you have outlined, except that I am not using a swap chain in this instance. That is because of the D3DImage interop: the color buffer is shared with a Direct3D 9Ex device, as per the WpfSample10, instead of being presented. In my existing WinForms-based renderer, everything works splendidly. I am under the assumption that even the code presented in the first post should "just work" and not have any issues.
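
For reference, the sharing setup I am describing (which is not shown in the sample code above) looks roughly like the following sketch; it is not my actual code, and the OpenSharedResource call and the D3DDevice10 name are just placeholders for however your SlimDX build exposes them.

[source lang="csharp"]
// Query the DXGI shared handle of the keyed-mutex texture created on the D3D11 device.
IntPtr sharedHandle;
using (var dxgiResource = new DXGI.Resource(TestTexture))
    sharedHandle = dxgiResource.SharedHandle;

// Open the same resource on the Direct3D 10.1 device so Direct2D can draw into it.
// (OpenSharedResource is how I recall SlimDX wrapping ID3D10Device::OpenSharedResource.)
var texture10 = D3DDevice10.OpenSharedResource<SlimDX.Direct3D10.Texture2D>(sharedHandle);

// One KeyedMutex per device-specific view of the same underlying resource.
var mutex10 = new DXGI.KeyedMutex(texture10);
var mutex11 = new DXGI.KeyedMutex(TestTexture);

// The Direct2D render target is then created from texture10's DXGI surface,
// exactly as in the linked thread.
[/source]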

So basically, if you look at the code in the first post, all I am doing is creating a texture with the keyed mutex resource option and a KeyedMutex based on that texture resource. During the render call, I am clearing the color and depth buffers (in this case, both are textures/views, as the color buffer is shared with Direct3D 9) and setting the input assembler data. Then I am simply acquiring and releasing the mutex before making the draw call, and then the device is flushed. Once the render method is complete, the D3DImage back buffer is invalidated (this is part of the unmodified WpfSample10 code). During this process, I am doing nothing with the texture belonging to the mutex, simply acquiring and releasing the mutex. However, this has the effect shown in the videos in the first post, causing the drawn geometry to be missing from some frames.

I will post the complete modified sample files below.

D3DImageSlimDX.cs
[source lang="csharp"]
using System;
using System.Collections.Generic;
using System.Linq;
using System.Text;
using System.Runtime.InteropServices;
using System.Windows;
using System.Windows.Interop;

using SlimDX.Direct3D9;

namespace DirectX11WpfTest
{
class D3DImageSlimDX : D3DImage, IDisposable
{
[DllImport("user32.dll", SetLastError = false)]
static extern IntPtr GetDesktopWindow();

static int NumActiveImages = 0;
static Direct3DEx D3DContext;
static DeviceEx D3DDevice;

Texture SharedTexture;

public D3DImageSlimDX()
{
InitD3D9();
NumActiveImages++;
}

public void Dispose()
{
SetBackBufferSlimDX(null);
if (SharedTexture != null)
{
SharedTexture.Dispose();
SharedTexture = null;
}

NumActiveImages--;
ShutdownD3D9();
}

public void InvalidateD3DImage()
{
if (SharedTexture != null)
{
Lock();
AddDirtyRect(new Int32Rect(0, 0, PixelWidth, PixelHeight));
Unlock();
}
}

public void SetBackBufferSlimDX(SlimDX.Direct3D11.Texture2D Texture)
{
if (SharedTexture != null)
{
SharedTexture.Dispose();
SharedTexture = null;
}

if (Texture == null)
{
if (SharedTexture != null)
{
SharedTexture = null;
Lock();
SetBackBuffer(D3DResourceType.IDirect3DSurface9, IntPtr.Zero);
Unlock();
}
}
else if (IsShareable(Texture))
{
Format format = TranslateFormat(Texture);
if (format == Format.Unknown)
throw new ArgumentException("Texture format is not compatible with OpenSharedResource");

IntPtr Handle = GetSharedHandle(Texture);
if (Handle == IntPtr.Zero)
throw new ArgumentNullException("Handle");

SharedTexture = new Texture(D3DDevice, Texture.Description.Width, Texture.Description.Height, 1, Usage.RenderTarget, format, Pool.Default, ref Handle);
using (Surface Surface = SharedTexture.GetSurfaceLevel(0))
{
Lock();
SetBackBuffer(D3DResourceType.IDirect3DSurface9, Surface.ComPointer);
Unlock();
}
}
else
throw new ArgumentException("Texture must be created with ResourceOptionFlags.Shared");
}

void InitD3D9()
{
if (NumActiveImages == 0)
{
D3DContext = new Direct3DEx();

PresentParameters presentparams = new PresentParameters();
presentparams.Windowed = true;
presentparams.SwapEffect = SwapEffect.Discard;
presentparams.DeviceWindowHandle = GetDesktopWindow();
presentparams.PresentationInterval = PresentInterval.Immediate;

D3DDevice = new DeviceEx(D3DContext, 0, DeviceType.Hardware, IntPtr.Zero, CreateFlags.HardwareVertexProcessing | CreateFlags.Multithreaded | CreateFlags.FpuPreserve, presentparams);
}
}

void ShutdownD3D9()
{
if (NumActiveImages == 0)
{
if (SharedTexture != null)
{
SharedTexture.Dispose();
SharedTexture = null;
}

if (D3DDevice != null)
{
D3DDevice.Dispose();
D3DDevice = null;
}

if (D3DContext != null)
{
D3DContext.Dispose();
D3DContext = null;
}
}
}

IntPtr GetSharedHandle(SlimDX.Direct3D11.Texture2D Texture)
{
SlimDX.DXGI.Resource resource = new SlimDX.DXGI.Resource(Texture);
IntPtr result = resource.SharedHandle;

resource.Dispose();

return result;
}

Format TranslateFormat(SlimDX.Direct3D11.Texture2D Texture)
{
switch (Texture.Description.Format)
{
case SlimDX.DXGI.Format.R10G10B10A2_UNorm:
return SlimDX.Direct3D9.Format.A2B10G10R10;

case SlimDX.DXGI.Format.R16G16B16A16_Float:
return SlimDX.Direct3D9.Format.A16B16G16R16F;

case SlimDX.DXGI.Format.B8G8R8A8_UNorm:
return SlimDX.Direct3D9.Format.A8R8G8B8;

default:
return SlimDX.Direct3D9.Format.Unknown;
}
}

bool IsShareable(SlimDX.Direct3D11.Texture2D Texture)
{
return (Texture.Description.OptionFlags & SlimDX.Direct3D11.ResourceOptionFlags.Shared) != 0;
}
}
}
[/source]

Scene.cs
[source lang="csharp"]
using System;
using System.Collections.Generic;
using System.Linq;
using System.Text;

using SlimDX;
using DXGI = SlimDX.DXGI;
using Direct3D11 = SlimDX.Direct3D11;
using D3DCompiler = SlimDX.D3DCompiler;

namespace DirectX11WpfTest
{
class Scene : IDisposable
{
Direct3D11.Device D3DDevice;
DataStream SampleStream;
Direct3D11.InputLayout SampleLayout;
Direct3D11.Buffer SampleVertices;
Direct3D11.RenderTargetView SampleRenderView;
Direct3D11.DepthStencilView SampleDepthView;
Direct3D11.Effect SampleEffect;
Direct3D11.Texture2D DepthTexture;
int WindowWidth;
int WindowHeight;

private Direct3D11.Texture2D TestTexture;
private DXGI.KeyedMutex mutex;

public Direct3D11.Texture2D SharedTexture
{
get;
set;
}

public Scene()
{
WindowWidth = 100;
WindowHeight = 100;
InitD3D();
}

public void Dispose()
{
DestroyD3D();
}

public void Render(int arg)
{
D3DDevice.ImmediateContext.OutputMerger.SetTargets(SampleDepthView, SampleRenderView);
D3DDevice.ImmediateContext.Rasterizer.SetViewports(new Direct3D11.Viewport(0, 0, WindowWidth, WindowHeight, 0.0f, 1.0f));

D3DDevice.ImmediateContext.ClearDepthStencilView(SampleDepthView, Direct3D11.DepthStencilClearFlags.Depth | Direct3D11.DepthStencilClearFlags.Stencil, 1.0f, 0);
float c = ((float)(arg % 1000)) / 999.0f;
D3DDevice.ImmediateContext.ClearRenderTargetView(SampleRenderView, new SlimDX.Color4(1.0f, c, c, c));

D3DDevice.ImmediateContext.InputAssembler.InputLayout = SampleLayout;
D3DDevice.ImmediateContext.InputAssembler.PrimitiveTopology = Direct3D11.PrimitiveTopology.TriangleList;
D3DDevice.ImmediateContext.InputAssembler.SetVertexBuffers(0, new Direct3D11.VertexBufferBinding(SampleVertices, 32, 0));

mutex.Acquire(0, int.MaxValue);
mutex.Release(0);

Direct3D11.EffectTechnique technique = SampleEffect.GetTechniqueByIndex(0);
Direct3D11.EffectPass pass = technique.GetPassByIndex(0);

for (int i = 0; i < technique.Description.PassCount; ++i)
{
pass.Apply(D3DDevice.ImmediateContext);
D3DDevice.ImmediateContext.Draw(3, 0);
}

D3DDevice.ImmediateContext.Flush();
}

void InitD3D()
{
D3DDevice = new Direct3D11.Device(Direct3D11.DriverType.Hardware, Direct3D11.DeviceCreationFlags.Debug | Direct3D11.DeviceCreationFlags.BgraSupport, Direct3D11.FeatureLevel.Level_10_1);

Direct3D11.Texture2DDescription colordesc = new Direct3D11.Texture2DDescription();
colordesc.BindFlags = Direct3D11.BindFlags.RenderTarget | Direct3D11.BindFlags.ShaderResource;
colordesc.Format = DXGI.Format.B8G8R8A8_UNorm;
colordesc.Width = WindowWidth;
colordesc.Height = WindowHeight;
colordesc.MipLevels = 1;
colordesc.SampleDescription = new DXGI.SampleDescription(1, 0);
colordesc.Usage = Direct3D11.ResourceUsage.Default;
colordesc.OptionFlags = Direct3D11.ResourceOptionFlags.Shared;
colordesc.CpuAccessFlags = Direct3D11.CpuAccessFlags.None;
colordesc.ArraySize = 1;

Direct3D11.Texture2DDescription depthdesc = new Direct3D11.Texture2DDescription();
depthdesc.BindFlags = Direct3D11.BindFlags.DepthStencil;
depthdesc.Format = DXGI.Format.D32_Float_S8X24_UInt;
depthdesc.Width = WindowWidth;
depthdesc.Height = WindowHeight;
depthdesc.MipLevels = 1;
depthdesc.SampleDescription = new DXGI.SampleDescription(1, 0);
depthdesc.Usage = Direct3D11.ResourceUsage.Default;
depthdesc.OptionFlags = Direct3D11.ResourceOptionFlags.None;
depthdesc.CpuAccessFlags = Direct3D11.CpuAccessFlags.None;
depthdesc.ArraySize = 1;

Direct3D11.Texture2DDescription testdesc = new Direct3D11.Texture2DDescription();
testdesc.BindFlags = Direct3D11.BindFlags.RenderTarget | Direct3D11.BindFlags.ShaderResource;
testdesc.Format = DXGI.Format.B8G8R8A8_UNorm;
testdesc.Width = WindowWidth;
testdesc.Height = WindowHeight;
testdesc.MipLevels = 1;
testdesc.SampleDescription = new DXGI.SampleDescription(1, 0);
testdesc.Usage = Direct3D11.ResourceUsage.Default;
testdesc.OptionFlags = Direct3D11.ResourceOptionFlags.KeyedMutex;
testdesc.CpuAccessFlags = Direct3D11.CpuAccessFlags.None;
testdesc.ArraySize = 1;

using (var bytecode = D3DCompiler.ShaderBytecode.CompileFromFile(@"Shaders\MiniTri.fx", "fx_5_0", D3DCompiler.ShaderFlags.Debug, D3DCompiler.EffectFlags.None))
{
SampleEffect = new Direct3D11.Effect(D3DDevice, bytecode);
}

SharedTexture = new Direct3D11.Texture2D(D3DDevice, colordesc);
DepthTexture = new Direct3D11.Texture2D(D3DDevice, depthdesc);
TestTexture = new Direct3D11.Texture2D(D3DDevice, testdesc);
SampleRenderView = new Direct3D11.RenderTargetView(D3DDevice, SharedTexture);
SampleDepthView = new Direct3D11.DepthStencilView(D3DDevice, DepthTexture);

mutex = new DXGI.KeyedMutex(TestTexture);

//SampleEffect = Direct3D11.Effect.(D3DDevice, "MiniTri.fx", "fx_4_0");
Direct3D11.EffectTechnique technique = SampleEffect.GetTechniqueByIndex(0);
Direct3D11.EffectPass pass = technique.GetPassByIndex(0);
SampleLayout = new Direct3D11.InputLayout(D3DDevice, pass.Description.Signature, new[] {
new Direct3D11.InputElement("POSITION", 0, DXGI.Format.R32G32B32A32_Float, 0, 0),
new Direct3D11.InputElement("COLOR", 0, DXGI.Format.R32G32B32A32_Float, 16, 0)
});

SampleStream = new DataStream(3 * 32, true, true);
SampleStream.WriteRange(new[] {
new Vector4(0.0f, 0.5f, 0.5f, 1.0f), new Vector4(1.0f, 0.0f, 0.0f, 1.0f),
new Vector4(0.5f, -0.5f, 0.5f, 1.0f), new Vector4(0.0f, 1.0f, 0.0f, 1.0f),
new Vector4(-0.5f, -0.5f, 0.5f, 1.0f), new Vector4(0.0f, 0.0f, 1.0f, 1.0f)
});
SampleStream.Position = 0;

SampleVertices = new Direct3D11.Buffer(D3DDevice, SampleStream, new Direct3D11.BufferDescription()
{
BindFlags = Direct3D11.BindFlags.VertexBuffer,
CpuAccessFlags = Direct3D11.CpuAccessFlags.None,
OptionFlags = Direct3D11.ResourceOptionFlags.None,
SizeInBytes = 3 * 32,
Usage = Direct3D11.ResourceUsage.Default
});

D3DDevice.ImmediateContext.Flush();
}

void DestroyD3D()
{
if (SampleVertices != null)
{
SampleVertices.Dispose();
SampleVertices = null;
}

if (SampleLayout != null)
{
SampleLayout.Dispose();
SampleLayout = null;
}

if (SampleEffect != null)
{
SampleEffect.Dispose();
SampleEffect = null;
}

if (SampleRenderView != null)
{
SampleRenderView.Dispose();
SampleRenderView = null;
}

if (SampleDepthView != null)
{
SampleDepthView.Dispose();
SampleDepthView = null;
}

if (SampleStream != null)
{
SampleStream.Dispose();
SampleStream = null;
}

if (SampleLayout != null)
{
SampleLayout.Dispose();
SampleLayout = null;
}

if (SharedTexture != null)
{
SharedTexture.Dispose();
SharedTexture = null;
}

if (TestTexture != null)
{
TestTexture.Dispose();
TestTexture = null;
}

if (DepthTexture != null)
{
DepthTexture.Dispose();
DepthTexture = null;
}

if (mutex != null)
{
mutex.Dispose();
mutex = null;
}

if (D3DDevice != null)
{
D3DDevice.Dispose();
D3DDevice = null;
}
}
}
}
[/source]

Codebehind for MainWindow.xaml
[source lang="csharp"]
using System;
using System.Collections.Generic;
using System.ComponentModel;
using System.Diagnostics;
using System.Linq;
using System.Text;
using System.Windows;
using System.Windows.Controls;
using System.Windows.Data;
using System.Windows.Documents;
using System.Windows.Input;
using System.Windows.Media;
using System.Windows.Media.Imaging;
using System.Windows.Navigation;
using System.Windows.Shapes;

namespace DirectX11WpfTest
{
/// <summary>
/// Interaction logic for MainWindow.xaml
/// </summary>
public partial class MainWindow : Window
{
public MainWindow()
{
timer = new Stopwatch();
Loaded += Window_Loaded;
Closing += Window_Closing;
InitializeComponent();
}

private Stopwatch timer;
private Scene scene;
private D3DImageSlimDX slimDXImage;

private void Window_Loaded(object sender, RoutedEventArgs e)
{
slimDXImage = new D3DImageSlimDX();
slimDXImage.IsFrontBufferAvailableChanged += OnIsFrontBufferAvailableChanged;

SlimDXImage.Source = slimDXImage;

scene = new Scene();

slimDXImage.SetBackBufferSlimDX(scene.SharedTexture);
BeginRenderingScene();
}

private void Window_Closing(object sender, CancelEventArgs e)
{
if (slimDXImage != null)
{
slimDXImage.Dispose();
slimDXImage = null;
}

if (scene != null)
{
scene.Dispose();
scene = null;
}
}

void OnRendering(object sender, EventArgs e)
{
scene.Render(timer.Elapsed.Milliseconds);
slimDXImage.InvalidateD3DImage();
}

void BeginRenderingScene()
{
if (slimDXImage.IsFrontBufferAvailable)
{
foreach (var item in SlimDX.ObjectTable.Objects)
{
}

SlimDX.Direct3D11.Texture2D Texture = scene.SharedTexture;
slimDXImage.SetBackBufferSlimDX(Texture);
CompositionTarget.Rendering += OnRendering;

timer.Start();
}
}

void StopRenderingScene()
{
timer.Stop();
CompositionTarget.Rendering -= OnRendering;
}

void OnIsFrontBufferAvailableChanged(object sender, DependencyPropertyChangedEventArgs e)
{
// This fires when the screensaver kicks in, the machine goes into sleep or hibernate
// and any other catastrophic losses of the d3d device from WPF's point of view
if (slimDXImage.IsFrontBufferAvailable)
{
BeginRenderingScene();
}
else
{
StopRenderingScene();
}
}
}
}
[/source]

So I'm not sure about this approach to using D3D11 with DX9. It looks as though the DX11 call to Flush is expected to block and therefore enforce synchronization, so that all DX11 content is rendered before proceeding. However, as described [url="http://msdn.microsoft.com/en-us/library/ff476425(v=VS.85).aspx"]here[/url], the call to Flush is asynchronous. It may or may not return before all rendering is actually done. It looks like the acquire/release on the other resource has enough overhead to cause Flush to return before the triangle is drawn. When that happens, a small percentage of the time, D3D9 presents the incomplete frame.

The main trouble is that the shared resource on the DX11 side has no locking mechanism to ensure that rendering completes before DX9 grabs the data. And since Flush doesn't block until rendering is guaranteed to be complete, you're left with having to poll the device with a query instead, which will eat time. This is a good example of why the keyed mutex approach was added.


[quote name='DieterVW' timestamp='1310754700' post='4835754']
So I'm not sure about this approach to using D3D11 with DX9. It looks as though the DX11 call to Flush is expected to block and therefore enforce synchronization, so that all DX11 content is rendered before proceeding. However, as described [url="http://msdn.microsoft.com/en-us/library/ff476425%28v=VS.85%29.aspx"]here[/url], the call to Flush is asynchronous. It may or may not return before all rendering is actually done. It looks like the acquire/release on the other resource has enough overhead to cause Flush to return before the triangle is drawn. When that happens, a small percentage of the time, D3D9 presents the incomplete frame.

The main trouble is that the shared resource on the DX11 side has no locking mechanism to ensure that rendering completes before DX9 grabs the data. And since Flush doesn't block until rendering is guaranteed to be complete, you're left with having to poll the device with a query instead, which will eat time. This is a good example of why the keyed mutex approach was added.
[/quote]

I was starting to arrive at a similar conclusion when I was typing the last post, but I was not sure if the flush command was synchronous or not. However, the problem is that I can render a pretty complex scene and have the full results displayed without any problematic frames as long as I don't include the mutex code. Also, this method (minus the mutex acquire and release) seems to be a pretty popular way of integrating DX10+ with D3DImage and I have not seen anything previously about synchronization issues.

Do you have a recommendation on how I might be able to force synchronization? You mentioned polling the device with a query, but I have never had to do this before, so any advice on how to get started?

My speculation about the Flush command is that the runtime or driver may see the acquire/release as being very costly and so return early rather than block the caller for such a long period of time. Other basic commands may not suffer from this. I honestly couldn't say for sure, but the documentation does say it's not a reliable mechanism. Perhaps in practice most people find it to be reliable, but it's not guaranteed, even between releases or patches.


The query is just a simple object that can indicate when a command has finished. You'll create an ID3D11Query object using the ID3D11Device::CreateQuery method and a description with the D3D11_QUERY_EVENT type. Then in your code you will call ID3D11DeviceContext::End() with the query object once the drawing of all D3D11 content is complete. Now, before DX9 can continue with its work, you'll have to keep checking ID3D11DeviceContext::GetData() until a result of TRUE is returned. The function will return S_OK, and the out data, which is a BOOL, should read TRUE; if the out value is FALSE, rendering has not completed. This call lets you check the status immediately, or it can block and flush the pipeline until the result returns TRUE. TRUE indicates that the pipeline has completed the query and all previously submitted D3D commands.

Spinning on this will be costly and could drive CPU usage to 100% so I recommend either making the blocking call or having the thread do some other work and checking the result periodically.
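
In SlimDX terms, the whole thing boils down to something like the sketch below; the Sleep(0) is just a placeholder for doing other useful work between checks.

[source lang="csharp"]
// Event query: GetData returns true once the GPU has finished everything
// that was submitted before End() was called.
using (var query = new Query(D3DDevice, new QueryDescription
{
    Type = QueryType.Event,
    Flags = QueryFlags.None
}))
{
    // Issue the query after the last D3D11 draw call of the frame.
    D3DDevice.ImmediateContext.End(query);

    // Poll until the GPU catches up; yield the time slice instead of burning a core.
    while (!D3DDevice.ImmediateContext.GetData<bool>(query))
        System.Threading.Thread.Sleep(0);
}
[/source]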

I added the following code to the end of the render method (not concerned about dealing with the effects of spinning until I solve this issue). I assume this is how GetData should be used in SlimDX. The problem still persists.


[source lang="csharp"]

Query query = new Query(D3DDevice, new QueryDescription
{
Flags = QueryFlags.None,
Type = QueryType.Event
});

D3DDevice.ImmediateContext.End(query);

while (!D3DDevice.ImmediateContext.GetData<bool>(query))
{
System.Diagnostics.Debug.WriteLine("Waiting...");
}

query.Dispose();

[/source]



[quote name='Mike.Popoloski' timestamp='1310923165' post='4836422']
You need to add the Begin(query) command as well before you start drawing. After making those modifications, the flickering seems to go away for me.
[/quote]


Would you mind posting your render method? I put the Begin(query) in mine as well, and I still have flickering. Strangely, if I change the conditional on the spinning while loop, it seems to diminish the flickering.

[quote name='arbitus' timestamp='1310934779' post='4836480']
[quote name='Mike.Popoloski' timestamp='1310923165' post='4836422']
You need to add the Begin(query) command as well before you start drawing. After making those modifications, the flickering seems to go away for me.
[/quote]


Would you mind posting your render method? I put the Begin(query) in mine as well, and I still have flickering. Strangely, if I change the conditional on the spinning while loop, it seems to diminish the flickering.
[/quote]

[source lang="csharp"]
public void Render(int arg)
{
var query = new Query(actualDevice, new QueryDescription(QueryType.Event, QueryFlags.None));

D3DDevice.ClearDepthStencilView(SampleDepthView, DepthStencilClearFlags.Depth | DepthStencilClearFlags.Stencil, 1.0f, 0);
float c = ((float)(arg % 1000)) / 999.0f;
D3DDevice.ClearRenderTargetView(SampleRenderView, new SlimDX.Color4(1.0f, c, c, c));

D3DDevice.Begin(query);

D3DDevice.InputAssembler.InputLayout = SampleLayout;
D3DDevice.InputAssembler.PrimitiveTopology = PrimitiveTopology.TriangleList;
D3DDevice.InputAssembler.SetVertexBuffers(0, new VertexBufferBinding(SampleVertices, 32, 0));


mutex.Acquire(0, int.MaxValue);
mutex.Release(0);

EffectTechnique technique = SampleEffect.GetTechniqueByIndex(0);
EffectPass pass = technique.GetPassByIndex(0);

for (int i = 0; i < technique.Description.PassCount; ++i)
{
pass.Apply(D3DDevice);
D3DDevice.Draw(3, 0);
}

D3DDevice.End(query);

while (!D3DDevice.GetData<bool>(query))
System.Diagnostics.Debug.WriteLine("Hello!");

query.Dispose();
}
[/source]

[quote name='arbitus' timestamp='1310942880' post='4836524']
Sigh, mine is still happily blinking. :angry:
[/quote]

Try zipping up your whole project and sending it to me. I'll try debugging it on another computer and see if the problem manifests there as well.

After further investigation, I have decided to resurrect this thread with new information. My suspicion is that this incredibly contrived use case might be manifesting a bug.

I brought this to the attention of a DX team member, and he directed me [url="http://archive.msdn.microsoft.com/D3D9ExDXGISharedSurf"]here[/url]. I had already seen something similar [url="http://blogs.windowsclient.net/rob_relyea/archive/2010/04/30/gizmodo-posts-wpf-direct2d-sample-wow.aspx"]here[/url], integrating Direct2D and WPF. I dove into the SurfaceQueue code and found that it contains two synchronization mechanisms. One is for multithreaded enqueue/dequeue of surfaces, which does nothing in my use case, as I am simply producing a surface that is to be immediately consumed. The second is the render synchronization, which is intended to force the frame to finish rendering synchronously with the enqueue of the surface.

The magic is in the enqueue method of the SurfaceQueue:

[source]
// Copy a small portion of the surface onto the staging surface
hr = m_pProducer->GetDevice()->CopySurface(pStagingResource, pSurface, width, height);
...
//
// Force rendering to complete by locking the staging resource.
//
if (FAILED(hr = m_pProducer->GetDevice()->LockSurface(pStagingResource, Flags)))
{
goto end;
}
if (FAILED(hr = m_pProducer->GetDevice()->UnlockSurface(pStagingResource)))
{
goto end;
}
ASSERT(QueueEntry.pStagingResource == NULL);
//
// The call to lock the surface completed successfully, meaning the surface is flushed
// and ready for dequeue. Mark the surface as such and add it to the fifo queue.
//
[/source]

In a nutshell, the SurfaceQueue copies a portion of the surface to be enqueued to a staging resource, and then (in the case of Direct3D 10 and 11) maps and unmaps that resource into CPU space. In theory, this should force the rendering to the original surface to complete, so that the staging resource will be up to date when it is (potentially) read in CPU space. This makes sense, so armed with this information, I added the following code to my previous example's render method:

[source lang="csharp"]
D3DDevice.ImmediateContext.CopyResource(SharedTexture, StagingTexture);
var data = D3DDevice.ImmediateContext.MapSubresource(StagingTexture, 0, StagingTexture.Description.Width * StagingTexture.Description.Height * sizeof(float), MapMode.Read, MapFlags.None);
D3DDevice.ImmediateContext.UnmapSubresource(StagingTexture, 0);
[/source]

and initialization:
[source lang="csharp"]
Texture2DDescription stagingdesc = new Texture2DDescription();
stagingdesc.BindFlags = BindFlags.None;
stagingdesc.Format = DXGI.Format.B8G8R8A8_UNorm;
stagingdesc.Width = WindowWidth;
stagingdesc.Height = WindowHeight;
stagingdesc.MipLevels = 1;
stagingdesc.SampleDescription = new DXGI.SampleDescription(1, 0);
stagingdesc.Usage = ResourceUsage.Staging;
stagingdesc.OptionFlags = ResourceOptionFlags.None;
stagingdesc.CpuAccessFlags = CpuAccessFlags.Read;
stagingdesc.ArraySize = 1;

StagingTexture = new Texture2D(D3DDevice, stagingdesc);
[/source]

Running the sample results in the same blinking behavior as before. In addition, I was playing around with the original Direct3D 10 sample and added Direct2D code to clear the render target using the SharedTexture surface, and that resulted in the behavior I had previously noted: any 3D content drawn before the Acquire call on the KeyedMutex would complete 100% of the time, but anything after it would blink. Except in this case, any Direct2D content drawn after the 3D triangle would blink, while the triangle was always present. So, I remade the current sample to add an additional triangle. Code follows:

Initialization:
[source lang="csharp"]
SampleStream1 = new DataStream(3 * 32, true, true);
SampleStream1.WriteRange(new[] {
new Vector4(0.25f, 0.5f, 0.5f, 1.0f), new Vector4(1.0f, 0.0f, 0.0f, 1.0f),
new Vector4(0.75f, -0.5f, 0.5f, 1.0f), new Vector4(0.0f, 1.0f, 0.0f, 1.0f),
new Vector4(-0.25f, -0.5f, 0.5f, 1.0f), new Vector4(0.0f, 0.0f, 1.0f, 1.0f)
});
SampleStream1.Position = 0;

SampleVertices1 = new Buffer(D3DDevice, SampleStream1, new BufferDescription()
{
BindFlags = BindFlags.VertexBuffer,
CpuAccessFlags = CpuAccessFlags.None,
OptionFlags = ResourceOptionFlags.None,
SizeInBytes = 3 * 32,
Usage = ResourceUsage.Default
});
[/source]

And the new completed Render:
[source lang="csharp"]
D3DDevice.ImmediateContext.ClearDepthStencilView(SampleDepthView, DepthStencilClearFlags.Depth | DepthStencilClearFlags.Stencil, 1.0f, 0);
float c = ((float)(arg % 1000)) / 999.0f;
D3DDevice.ImmediateContext.ClearRenderTargetView(SampleRenderView, new SlimDX.Color4(1.0f, c, c, c));

D3DDevice.ImmediateContext.InputAssembler.InputLayout = SampleLayout;
D3DDevice.ImmediateContext.InputAssembler.PrimitiveTopology = PrimitiveTopology.TriangleList;
D3DDevice.ImmediateContext.InputAssembler.SetVertexBuffers(0, new VertexBufferBinding(SampleVertices, 32, 0));

EffectTechnique technique = SampleEffect.GetTechniqueByIndex(0);
EffectPass pass = technique.GetPassByIndex(0);

for (int i = 0; i < technique.Description.PassCount; ++i)
{
pass.Apply(D3DDevice.ImmediateContext);
D3DDevice.ImmediateContext.Draw(3, 0);
}

Mutex.Acquire(0, int.MaxValue);
Mutex.Release(0);

D3DDevice.ImmediateContext.InputAssembler.SetVertexBuffers(0, new VertexBufferBinding(SampleVertices1, 32, 0));

for (int i = 0; i < technique.Description.PassCount; ++i)
{
pass.Apply(D3DDevice.ImmediateContext);
D3DDevice.ImmediateContext.Draw(3, 0);
}

D3DDevice.ImmediateContext.CopyResource(SharedTexture, StagingTexture);
var data = D3DDevice.ImmediateContext.MapSubresource(StagingTexture, 0, StagingTexture.Description.Width * StagingTexture.Description.Height * sizeof(float), MapMode.Read, MapFlags.None);

data.Data.Position = 3377 * 4;

var read1 = data.Data.ReadByte();
var read2 = data.Data.ReadByte();
var read3 = data.Data.ReadByte();
var read4 = data.Data.ReadByte();

System.Diagnostics.Debug.WriteLine("Actual1: " + read1 + " " + read2 + " " + read3 + " " + read4);

data.Data.Position = 3390 * 4;

read1 = data.Data.ReadByte();
read2 = data.Data.ReadByte();
read3 = data.Data.ReadByte();
read4 = data.Data.ReadByte();

System.Diagnostics.Debug.WriteLine("Actual2: " + read1 + " " + read2 + " " + read3 + " " + read4);
System.Diagnostics.Debug.WriteLine("");

D3DDevice.ImmediateContext.UnmapSubresource(StagingTexture, 0);
[/source]

First of all, I know this is stupid, but how can I find the pitch so that I can request the correct data size?
Aside from that, there are a few things to note here (it only takes a few minutes to set the sample up and it is interesting to say the least):
1) The second triangle will flicker, while the original triangle remains solid. This is expected, though not desired.
2) Those data readings represent the near tip of each triangle (although I am not sure this is uniform across all hardware, due to pitch). Every frame, my debug output reads as follows:
Actual1: 6 1 247 255
Actual2: 4 4 247 255

This means that every frame, by the end of the Render method, the staging surface clearly has data showing that both triangles were drawn (I have also created additional tests sampling other pixels to confirm this is absolutely true). I must assume that the original shared surface has the triangles drawn as well, because the staging surface is a copy of it. Immediately following the render method, the D3DImage's version of the surface is synchronously locked and (supposedly) copied to an image source inside the bowels of the D3DImage. Yet this does not seem possible, because visually, some frames clearly do not have the second triangle visible, even though my tests show that every frame it is produced before the D3DImage locks and copies the surface.

So if the map/unmap is forcing synchronization, as the SurfaceQueue code (and my CPU read tests) suggests, I am at a loss as to why this does not work. If this were a synchronization issue with D3D9Ex, why is the first triangle ALWAYS visible? Am I missing something?

We changed MapSubresource in the current build of SlimDX to not require a size, since, as you've realized, knowing the pitch beforehand is pretty much impossible.
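
With the new overload, the row pitch comes back with the mapped data, so reading the pixel at (x, y) looks roughly like this (a sketch against the current build; the exact overload may differ in older releases):

[source lang="csharp"]
// Map without specifying a size; the returned DataBox carries the row pitch.
var data = D3DDevice.ImmediateContext.MapSubresource(StagingTexture, 0, MapMode.Read, MapFlags.None);

// For a B8G8R8A8 surface, each pixel is 4 bytes and each row is RowPitch bytes wide.
data.Data.Position = y * data.RowPitch + x * 4;
var b = data.Data.ReadByte();
var g = data.Data.ReadByte();
var r = data.Data.ReadByte();
var a = data.Data.ReadByte();

D3DDevice.ImmediateContext.UnmapSubresource(StagingTexture, 0);
[/source]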

As for your actual issue, I don't really know what's going on there, and I was unable to make it work properly. Maybe it really is just broken for this use case.

[quote name='Mike.Popoloski' timestamp='1311737935' post='4840939']
We changed MapSubresource in the current build of SlimDX to not require a size, since, as you've realized, knowing the pitch beforehand is pretty much impossible.
[/quote]

Well thanks for this at least, I was really scratching my head.
