
stoneMcClane

Members
  • Content count: 11
  • Joined
  • Last visited

Community Reputation: 114 Neutral

About stoneMcClane
  • Rank: Member
  1. Thanks a lot for such a detailed answer. Yes, I forgot to mention that they are one-dimensional resources, which is why I'm using y=1, z=1 for the numthreads definition. I had already figured that I might need to do something like that (dividing into thread groups), but I didn't really understand the concept in relation to the hardware. Your explanation helps me a lot there. Indeed, 1-2 million is not that much when it comes to GPU computing ... it's just that, compared to the 65535 (2^16 - 1) per-dimension dispatch limit, it is quite a step which I didn't know how to take. Thanks for the great help (+rep)
  2. Hi,

I'm currently trying to solve a parallel computation problem with the help of compute shaders. The problem I'm facing is that the number of computed elements can be quite high (1-2 million), and I'm not entirely sure how to operate a compute shader on such big buffers with reasonable performance.

My shader setup basically looks like this...

[code]
struct InputData { ... };
struct OutputData { ... };

StructuredBuffer<InputData> InputBuffer;
RWStructuredBuffer<OutputData> OutputBuffer;

OutputData CS_Calculation(InputData i)
{
    // the computational workload happens here
    ...
}

[numthreads(1, 1, 1)]
void CSMain(uint3 DTid : SV_DispatchThreadID)
{
    // the number of elements in the input and output buffers is the same;
    // each element in the output buffer is computed from its matching
    // element in the input buffer
    InputData input = InputBuffer[DTid.x];
    OutputBuffer[DTid.x] = CS_Calculation(input);
}
[/code]

For each element of the input buffer I want to run a calculation and store the result in the output buffer (both are structured buffers). As you can see, I address the elements in the buffers via the X component of the SV_DispatchThreadID semantic, but the problem with this is that I am then limited to 65535 elements that the shader can process (see http://msdn.microsoft.com/en-us/library/windows/desktop/ff476405(v=vs.85).aspx), since I am calling it with parameters like the following...

[code]
dc->Dispatch(element_count, 1, 1);
[/code]

What I figured is that I should somehow use a different combination of the [numthreads(x, y, z)] declaration and the Dispatch(x, y, z) parameters to support higher element counts, but I'm unable to figure out how to correctly derive the index of the current input/output element in the shader if I do so.

I have already tried several ways to use the other available CS semantics (http://msdn.microsoft.com/en-us/library/windows/desktop/ff471566(v=vs.85).aspx), but I never got the wanted result; most of the time, certain elements in the output buffer were not written at all.

Could someone please explain how to correctly combine the numthreads declaration, the Dispatch call, and a corresponding CS input semantic to achieve the linear indexing that I need?

Thanks
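For reference, here is a minimal sketch of the thread-group split that the replies describe and that resolved this. The group size of 64 and the ElementCount constant are illustrative assumptions, not the exact code that ended up being used.

[code]
// Minimal sketch, reusing InputData / OutputData / CS_Calculation from above.
#define GROUP_SIZE 64

StructuredBuffer<InputData> InputBuffer;
RWStructuredBuffer<OutputData> OutputBuffer;

cbuffer cbDispatchParams
{
    uint ElementCount; // real element count, since the dispatch is rounded up
};

[numthreads(GROUP_SIZE, 1, 1)]
void CSMain(uint3 DTid : SV_DispatchThreadID)
{
    // SV_DispatchThreadID.x = SV_GroupID.x * GROUP_SIZE + SV_GroupThreadID.x,
    // so it is already a linear index across all thread groups.
    uint index = DTid.x;
    if (index >= ElementCount)
        return; // guard threads in the last, partially filled group

    OutputBuffer[index] = CS_Calculation(InputBuffer[index]);
}

// CPU side: dispatch one group per GROUP_SIZE elements, rounded up, e.g.
//   dc->Dispatch((element_count + GROUP_SIZE - 1) / GROUP_SIZE, 1, 1);
// 2,000,000 elements / 64 = 31,250 groups, well below the 65535 limit.
[/code]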
  3. Hi,

I've written my own file format in which I store mip-level slices of a texture in the order highest to lowest (the data is compressed with BC1).

Now I'm trying to initialize a D3D11 texture with the binary data from my file format. Updating only the first mip level of the texture works without any problems, but what I'd like to do is upload all mip levels from the file to the GPU in one go.

Is this even possible? Currently I'm using UpdateSubresource to upload the binary data. Is this maybe possible with a staging buffer and CopySubresourceRegion?

Thanks for any hints
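For anyone landing here later, one way to do it, sketched with SlimDX-style D3D11 calls since the other posts in this thread use SlimDX. The BC1 pitch math is standard; fileData, width, height, mipCount and the already-created texture are assumptions about the surrounding code. SlimDX also exposes a Texture2D constructor that accepts one DataRectangle per subresource, which would upload the whole chain in a single call at creation time.

[code]
// Sketch: walk the mip chain (highest to lowest, as stored in the file)
// and upload each level with UpdateSubresource.
int offset = 0;
for (int mip = 0; mip < mipCount; mip++)
{
    int mipWidth = Math.Max(1, width >> mip);
    int mipHeight = Math.Max(1, height >> mip);

    // BC1: 4x4 texel blocks, 8 bytes per block
    int rowPitch = Math.Max(1, (mipWidth + 3) / 4) * 8;
    int numRows = Math.Max(1, (mipHeight + 3) / 4);
    int mipSize = rowPitch * numRows;

    byte[] mipBytes = new byte[mipSize];
    Array.Copy(fileData, offset, mipBytes, 0, mipSize);

    using (DataStream stream = new DataStream(mipBytes, true, false))
    {
        // subresource index = mip + arraySlice * mipCount (array slice 0 here)
        device.ImmediateContext.UpdateSubresource(
            new DataBox(rowPitch, mipSize, stream), texture, mip);
    }

    offset += mipSize;
}
[/code]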
  4. Turns out that the one thing that differed between the original code and my SlimDX port was at fault for the issue ;)

To run the message loop for the application I simply used the SlimDX message pump on the first window...

[CODE]
MessagePump.Run(g_WindowObjects.First().Form, () =>
{
    RenderToAllMonitors();
    PresentToAllMonitors();
});
[/CODE]

... but this seems to have interfered with something that DXGI does during the fullscreen transition of this window. I replaced the above code with a separate message loop (just like in the original sample code), and after that change it worked as expected.

I've put the whole code of the port on my Bitbucket account, so if anyone happens to need something like that, feel free to go there and grab it: [url="https://bitbucket.org/stoneMcClane/slimdx_multimon11"]https://bitbucket.org/stoneMcClane/slimdx_multimon11[/url]
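A sketch of what such a separate message loop can look like in C#, assuming standard user32 PInvoke declarations (and using System.Runtime.InteropServices); the actual code in the repository linked above may differ.

[code]
[StructLayout(LayoutKind.Sequential)]
struct NativeMessage
{
    public IntPtr hWnd;
    public uint msg;
    public IntPtr wParam;
    public IntPtr lParam;
    public uint time;
    public System.Drawing.Point pt;
}

[DllImport("user32.dll")]
static extern bool PeekMessage(out NativeMessage lpMsg, IntPtr hWnd, uint wMsgFilterMin, uint wMsgFilterMax, uint wRemoveMsg);
[DllImport("user32.dll")]
static extern bool TranslateMessage(ref NativeMessage lpMsg);
[DllImport("user32.dll")]
static extern IntPtr DispatchMessage(ref NativeMessage lpMsg);

const uint PM_REMOVE = 0x0001;
const uint WM_QUIT = 0x0012;

static void RunMessageLoop()
{
    bool done = false;
    while (!done)
    {
        NativeMessage msg;
        // drain all pending window messages before rendering, like the C++ sample
        while (PeekMessage(out msg, IntPtr.Zero, 0, 0, PM_REMOVE))
        {
            if (msg.msg == WM_QUIT)
            {
                done = true;
                break;
            }
            TranslateMessage(ref msg);
            DispatchMessage(ref msg);
        }

        RenderToAllMonitors();
        PresentToAllMonitors();
    }
}
[/code]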
  5. [size=5][Update][/size]

As announced, I've now tried to create the windows in the exact same way the DX sample does, using PInvoke for all the necessary Win32 calls ... the behaviour stays exactly the same as when using .NET Forms. So I guess the issue might be related to SlimDX, or the .NET runtime is doing something that causes problems?!

I'm kind of lost here right now; what else could I try to narrow the problem down?
  6. Hi,

after having tried without success to implement switching multiple .NET Form windows to fullscreen mode on different monitors simultaneously, I've now ported the DirectX SDK MultiMon10 sample to SlimDX to see whether this is an issue caused by my code, by using Forms + DXGI, or by a bug in SlimDX. (I get the exact same behaviour for the ported sample code as in my engine.)

Below is the code of the ported MultiMon10 sample. I'm using D3D11 here, but I also tried it with D3D10; it didn't make a difference for me. I tried to match the original sample's code as closely as possible while using only C# & SlimDX ... the only thing I added for debugging purposes is a debug output message when the WM_SIZE message is dispatched to a window, since this is an indication of an ongoing fullscreen transition.

The output of the original C++ DxSDK sample looks like this on my 2-monitor workstation:

[CODE]
resize: #0 1680 x 1050
resize: #1 1920 x 1080
resize: #0 834 x 497
resize: #1 954 x 512
[/CODE]

... the first two resize messages are the fullscreen switch; the other two are caused by closing the application and therefore leaving the fullscreen modes again.

The debug messages for the C# / SlimDX port of the sample look like this:

[CODE]
resize: #0 1680 x 1050
resize: #1 1920 x 1080
resize: #0 834 x 497
resize: #1 954 x 512
resize: #0 1680 x 1050
resize: #1 1920 x 1080
resize: #0 834 x 497
resize: #1 954 x 512
[/CODE]

... as you can see, after the first fullscreen resize messages DXGI seems to fall back to windowed mode, tries to switch to fullscreen a second time, and finally falls back to windowed and stays like that.

My first suspicion was that this could be caused by some window focus events occurring after the fullscreen mode has been entered, but I debugged this and the same seems to happen for the C++ version as well; I also tried several methods to keep the windows from gaining/losing focus, but the issue stayed the same.

Does anyone have an idea what could be causing this? Could this be a problem of using .NET Forms instead of plain Win32 windows? Or should it work with .NET Forms no matter what, and this might be a SlimDX bug?

PS: I'll try to use PInvoke to create plain Win32 windows tomorrow to confirm whether this is related to .NET Forms; I'll update the status of my investigations then.
Thanks for any hints in the meantime

[CODE]
using System;
using System.Collections.Generic;
using System.Drawing;
using System.Linq;
using System.Windows.Forms;
using SlimDX;
using SlimDX.DXGI;
using SlimDX.Direct3D11;
using SlimDX.Windows;
using Device = SlimDX.Direct3D11.Device;
using Resource = SlimDX.Direct3D11.Resource;

namespace MultiMon11
{
    [Flags]
    enum WindowStyles : uint
    {
        WS_OVERLAPPED = 0x00000000,
        WS_POPUP = 0x80000000,
        WS_CHILD = 0x40000000,
        WS_MINIMIZE = 0x20000000,
        WS_VISIBLE = 0x10000000,
        WS_DISABLED = 0x08000000,
        WS_CLIPSIBLINGS = 0x04000000,
        WS_CLIPCHILDREN = 0x02000000,
        WS_MAXIMIZE = 0x01000000,
        WS_BORDER = 0x00800000,
        WS_DLGFRAME = 0x00400000,
        WS_VSCROLL = 0x00200000,
        WS_HSCROLL = 0x00100000,
        WS_SYSMENU = 0x00080000,
        WS_THICKFRAME = 0x00040000,
        WS_GROUP = 0x00020000,
        WS_TABSTOP = 0x00010000,
        WS_MINIMIZEBOX = 0x00020000,
        WS_MAXIMIZEBOX = 0x00010000,
        WS_CAPTION = WS_BORDER | WS_DLGFRAME,
        WS_TILED = WS_OVERLAPPED,
        WS_ICONIC = WS_MINIMIZE,
        WS_SIZEBOX = WS_THICKFRAME,
        WS_TILEDWINDOW = WS_OVERLAPPEDWINDOW,
        WS_OVERLAPPEDWINDOW = WS_OVERLAPPED | WS_CAPTION | WS_SYSMENU | WS_THICKFRAME | WS_MINIMIZEBOX | WS_MAXIMIZEBOX,
        WS_POPUPWINDOW = WS_POPUP | WS_BORDER | WS_SYSMENU,
        WS_CHILDWINDOW = WS_CHILD,

        // Extended Window Styles
        WS_EX_DLGMODALFRAME = 0x00000001,
        WS_EX_NOPARENTNOTIFY = 0x00000004,
        WS_EX_TOPMOST = 0x00000008,
        WS_EX_ACCEPTFILES = 0x00000010,
        WS_EX_TRANSPARENT = 0x00000020,
        //#if(WINVER >= 0x0400)
        WS_EX_MDICHILD = 0x00000040,
        WS_EX_TOOLWINDOW = 0x00000080,
        WS_EX_WINDOWEDGE = 0x00000100,
        WS_EX_CLIENTEDGE = 0x00000200,
        WS_EX_CONTEXTHELP = 0x00000400,
        WS_EX_RIGHT = 0x00001000,
        WS_EX_LEFT = 0x00000000,
        WS_EX_RTLREADING = 0x00002000,
        WS_EX_LTRREADING = 0x00000000,
        WS_EX_LEFTSCROLLBAR = 0x00004000,
        WS_EX_RIGHTSCROLLBAR = 0x00000000,
        WS_EX_CONTROLPARENT = 0x00010000,
        WS_EX_STATICEDGE = 0x00020000,
        WS_EX_APPWINDOW = 0x00040000,
        WS_EX_OVERLAPPEDWINDOW = (WS_EX_WINDOWEDGE | WS_EX_CLIENTEDGE),
        WS_EX_PALETTEWINDOW = (WS_EX_WINDOWEDGE | WS_EX_TOOLWINDOW | WS_EX_TOPMOST),
        //#endif /* WINVER >= 0x0400 */
        //#if(WIN32WINNT >= 0x0500)
        WS_EX_LAYERED = 0x00080000,
        //#endif /* WIN32WINNT >= 0x0500 */
        //#if(WINVER >= 0x0500)
        WS_EX_NOINHERITLAYOUT = 0x00100000, // Disable inheritence of mirroring by children
        WS_EX_LAYOUTRTL = 0x00400000, // Right to left mirroring
        //#endif /* WINVER >= 0x0500 */
        //#if(WIN32WINNT >= 0x0500)
        WS_EX_COMPOSITED = 0x02000000,
        WS_EX_NOACTIVATE = 0x08000000
        //#endif /* WIN32WINNT >= 0x0500 */
    }

    class DEVICE_OBJECT
    {
        public int Ordinal;
        public Device pDevice;
    };

    class WINDOW_OBJECT
    {
        public IntPtr hWnd;
        public Adapter pAdapter;
        public Output pOutput;
        public SwapChain pSwapChain;
        public DEVICE_OBJECT pDeviceObj;
        public RenderTargetView pRenderTargetView;
        public DepthStencilView pDepthStencilView;
        public int Width;
        public int Height;
        public AdapterDescription AdapterDesc;
        public OutputDescription OutputDesc;
        public FormBase Form;
    };

    class ADAPTER_OBJECT
    {
        public Adapter pDXGIAdapter;
        public List<Output> DXGIOutputArray = new List<Output>();
    };

    public class FormBase : RenderForm
    {
        public const uint WM_SIZE = 0x0005;

        public static int InitialWidth;
        public static int InitialHeight;
        public static int InitialX;
        public static int InitialY;

        public FormBase()
        {
            WmSize = (f, lo, hi) =>
            {
                for (int i = 0; i < Program.g_WindowObjects.Count; i++)
                {
                    WINDOW_OBJECT pObj = Program.g_WindowObjects.ElementAt(i);
                    if (Handle == pObj.hWnd)
                    {
                        // Cleanup the views
                        pObj.pRenderTargetView.Dispose();
                        pObj.pDepthStencilView.Dispose();

                        //RECT rcCurrentClient;
                        //GetClientRect(hWnd, &rcCurrentClient);
                        Rectangle rcCurrentClient = ClientRectangle;

                        SwapChainDescription Desc = pObj.pSwapChain.Description;
                        pObj.pSwapChain.ResizeBuffers(Desc.BufferCount,
                            rcCurrentClient.Right,  // passing in 0 here will automatically calculate the size from the client rect
                            rcCurrentClient.Bottom, // passing in 0 here will automatically calculate the size from the client rect
                            Desc.ModeDescription.Format, SwapChainFlags.None);
                        pObj.Width = rcCurrentClient.Right;
                        pObj.Height = rcCurrentClient.Bottom;

                        // recreate the views
                        Program.CreateViewsForWindowObject(pObj);
                    }
                }
            };
        }

        protected override CreateParams CreateParams
        {
            get
            {
                CreateParams ret = base.CreateParams;
                ret.Style = (int)(WindowStyles.WS_OVERLAPPEDWINDOW & ~(WindowStyles.WS_MAXIMIZEBOX | WindowStyles.WS_MINIMIZEBOX | WindowStyles.WS_THICKFRAME));
                ret.X = InitialX;
                ret.Y = InitialY;
                ret.Width = InitialWidth;
                ret.Height = InitialHeight;
                return ret;
            }
        }

        protected override void WndProc(ref Message m)
        {
            if (m.Msg == WM_SIZE && WmSize != null)
            {
                int lo = m.LParam.ToInt32() & 0xffff;
                int hi = m.LParam.ToInt32() >> 16;
                WmSize(this, lo, hi);
                System.Diagnostics.Debug.WriteLine("resize: #{0} {1} x {2}", WindowIndex, lo, hi);
                return;
            }
            base.WndProc(ref m);
        }

        public int WindowIndex;
        public Action<FormBase, int, int> WmSize { get; set; }
    }

    class Program
    {
        static void EnumerateAdapters()
        {
            int ac = g_pDXGIFactory.GetAdapterCount();
            for (int i = 0; i < ac; i++)
            {
                ADAPTER_OBJECT pAdapterObj = new ADAPTER_OBJECT();
                pAdapterObj.pDXGIAdapter = g_pDXGIFactory.GetAdapter(i);

                // get the description of the adapter
                AdapterDescription AdapterDesc = pAdapterObj.pDXGIAdapter.Description;

                // Enumerate outputs for this adapter
                EnumerateOutputs(pAdapterObj);

                // add the adapter to the list
                if (pAdapterObj.DXGIOutputArray.Count > 0)
                    g_AdapterArray.Add(pAdapterObj);
            }
        }

        static List<ADAPTER_OBJECT> g_AdapterArray = new List<ADAPTER_OBJECT>();

        static void EnumerateOutputs(ADAPTER_OBJECT pAdapterObj)
        {
            int oc = pAdapterObj.pDXGIAdapter.GetOutputCount();
            for (int i = 0; i < oc; i++)
            {
                Output pOutput = pAdapterObj.pDXGIAdapter.GetOutput(i);

                // get the description
                OutputDescription OutputDesc = pOutput.Description;

                // only add outputs that are attached to the desktop
                // TODO: AttachedToDesktop seems to be always 0
                /*if( !OutputDesc.AttachedToDesktop )
                {
                    pOutput.Release();
                    continue;
                }*/

                pAdapterObj.DXGIOutputArray.Add(pOutput);
            }
        }

        private static bool g_bFullscreen = false;

        static void CreateMonitorWindows()
        {
            for (int a = 0; a < g_AdapterArray.Count; a++)
            {
                ADAPTER_OBJECT pAdapter = g_AdapterArray.ElementAt(a);
                for (int o = 0; o < pAdapter.DXGIOutputArray.Count; o++)
                {
                    Output pOutput = pAdapter.DXGIOutputArray.ElementAt(o);
                    OutputDescription OutputDesc = pOutput.Description;

                    int X = OutputDesc.DesktopBounds.Left;
                    int Y = OutputDesc.DesktopBounds.Top;
                    int Width = OutputDesc.DesktopBounds.Right - X;
                    int Height = OutputDesc.DesktopBounds.Bottom - Y;

                    WINDOW_OBJECT pWindowObj = new WINDOW_OBJECT();

                    if (g_bFullscreen)
                    {
                        //pWindowObj.hWnd = CreateWindow( g_szWindowClass,
                        //    g_szWindowedName,
                        //    WS_POPUP,
                        //    X, Y, Width, Height,
                        //    NULL, 0, g_WindowClass.hInstance, NULL );
                    }
                    else
                    {
                        X += 100;
                        Y += 100;
                        Width /= 2;
                        Height /= 2;

                        FormBase.InitialX = X;
                        FormBase.InitialY = Y;
                        FormBase.InitialWidth = Width;
                        FormBase.InitialHeight = Height;

                        FormBase form = new FormBase();
                        form.WindowIndex = o;
                        form.Show();
                        pWindowObj.hWnd = form.Handle;
                        pWindowObj.Form = form;

                        //DWORD dwStyle = WS_OVERLAPPEDWINDOW & ~( WS_MAXIMIZEBOX | WS_MINIMIZEBOX | WS_THICKFRAME );
                        //pWindowObj.hWnd = CreateWindow( g_szWindowClass,
                        //    g_szWindowedName,
                        //    dwStyle,
                        //    X, Y, Width, Height,
                        //    NULL, 0, g_WindowClass.hInstance, NULL );
                    }

                    // set width and height
                    pWindowObj.Width = Width;
                    pWindowObj.Height = Height;

                    // add this to the window object array
                    g_WindowObjects.Add(pWindowObj);
                }
            }
        }

        static List<DEVICE_OBJECT> g_DeviceArray = new List<DEVICE_OBJECT>();
        public static List<WINDOW_OBJECT> g_WindowObjects = new List<WINDOW_OBJECT>();

        static void CreateDevicePerAdapter(DriverType DriverType = DriverType.Hardware)
        {
            int iWindowObj = 0;
            for (int a = 0; a < g_AdapterArray.Count; a++)
            {
                ADAPTER_OBJECT pAdapterObj = g_AdapterArray.ElementAt(a);

                Adapter pAdapter = null;
                if (DriverType.Hardware == DriverType)
                    pAdapter = pAdapterObj.pDXGIAdapter;

                // Create a device for this
                Device pd3dDevice = new Device(pAdapter, DeviceCreationFlags.Debug);

                DEVICE_OBJECT pDeviceObj = new DEVICE_OBJECT();
                pDeviceObj.pDevice = pd3dDevice;

                // add the device
                pDeviceObj.Ordinal = g_DeviceArray.Count;
                g_DeviceArray.Add(pDeviceObj);

                // Init stuff needed for the device
                //OnD3D10CreateDevice( pDeviceObj );

                // go through the outputs and set the device, adapter, and output
                for (int o = 0; o < pAdapterObj.DXGIOutputArray.Count; o++)
                {
                    Output pOutput = pAdapterObj.DXGIOutputArray.ElementAt(o);
                    WINDOW_OBJECT pWindowObj = g_WindowObjects.ElementAt(iWindowObj);
                    pWindowObj.pDeviceObj = pDeviceObj;
                    pWindowObj.pAdapter = pAdapter;
                    pWindowObj.pOutput = pOutput;
                    iWindowObj++;
                }
            }
        }

        static void CreateSwapChainPerOutput()
        {
            for (int i = 0; i < g_WindowObjects.Count; i++)
            {
                WINDOW_OBJECT pWindowObj = g_WindowObjects.ElementAt(i);

                // get the dxgi device
                SlimDX.DXGI.Device pDXGIDevice = new SlimDX.DXGI.Device(pWindowObj.pDeviceObj.pDevice);

                // create a swap chain
                SwapChainDescription SwapChainDesc = new SwapChainDescription();
                ModeDescription BufferDesc = new ModeDescription();
                BufferDesc.Width = pWindowObj.Width;
                BufferDesc.Height = pWindowObj.Height;
                BufferDesc.RefreshRate = new Rational(60, 1);
                BufferDesc.Format = Format.R8G8B8A8_UNorm;
                BufferDesc.ScanlineOrdering = DisplayModeScanlineOrdering.Unspecified;
                BufferDesc.Scaling = DisplayModeScaling.Unspecified;
                SwapChainDesc.ModeDescription = BufferDesc;
                SwapChainDesc.SampleDescription = new SampleDescription(1, 0);
                SwapChainDesc.Usage = Usage.RenderTargetOutput;
                SwapChainDesc.BufferCount = 3;
                SwapChainDesc.OutputHandle = pWindowObj.hWnd;
                SwapChainDesc.IsWindowed = (g_bFullscreen == false);
                SwapChainDesc.SwapEffect = SwapEffect.Discard;
                SwapChainDesc.Flags = SwapChainFlags.None;

                pWindowObj.pSwapChain = new SwapChain(g_pDXGIFactory, pDXGIDevice, SwapChainDesc);

                pDXGIDevice.Dispose();
                pDXGIDevice = null;

                CreateViewsForWindowObject(pWindowObj);
            }
        }

        public static void CreateViewsForWindowObject(WINDOW_OBJECT pWindowObj)
        {
            // get the backbuffer
            Texture2D pBackBuffer = null;
            pBackBuffer = Resource.FromSwapChain<Texture2D>(pWindowObj.pSwapChain, 0);

            // get the backbuffer desc
            Texture2DDescription BBDesc = pBackBuffer.Description;

            // create the render target view
            RenderTargetViewDescription RTVDesc = new RenderTargetViewDescription();
            RTVDesc.Format = BBDesc.Format;
            RTVDesc.Dimension = RenderTargetViewDimension.Texture2D;
            RTVDesc.MipSlice = 0;
            pWindowObj.pRenderTargetView = new RenderTargetView(pWindowObj.pDeviceObj.pDevice, pBackBuffer, RTVDesc);

            pBackBuffer.Dispose();
            pBackBuffer = null;

            // Create depth stencil texture
            Texture2D pDepthStencil = null;
            Texture2DDescription descDepth = new Texture2DDescription();
            descDepth.Width = pWindowObj.Width;
            descDepth.Height = pWindowObj.Height;
            descDepth.MipLevels = 1;
            descDepth.ArraySize = 1;
            descDepth.Format = Format.D16_UNorm;
            descDepth.SampleDescription = new SampleDescription(1, 0);
            descDepth.Usage = ResourceUsage.Default;
            descDepth.BindFlags = BindFlags.DepthStencil;
            descDepth.CpuAccessFlags = CpuAccessFlags.None;
            descDepth.OptionFlags = ResourceOptionFlags.None;
            pDepthStencil = new Texture2D(pWindowObj.pDeviceObj.pDevice, descDepth);

            // Create the depth stencil view
            DepthStencilViewDescription descDSV = new DepthStencilViewDescription();
            descDSV.Format = descDepth.Format;
            descDSV.Dimension = DepthStencilViewDimension.Texture2D;
            descDSV.MipSlice = 0;
            pWindowObj.pDepthStencilView = new DepthStencilView(pWindowObj.pDeviceObj.pDevice, pDepthStencil, descDSV);
            pDepthStencil.Dispose();

            // get various information
            if (pWindowObj.pAdapter != null)
                pWindowObj.AdapterDesc = pWindowObj.pAdapter.Description;
            if (pWindowObj.pOutput != null)
                pWindowObj.OutputDesc = pWindowObj.pOutput.Description;
        }

        private static Factory g_pDXGIFactory;

        static Color4[] colors = new[] { new Color4(1, 0, 0), new Color4(0, 1, 0), new Color4(0, 0, 1) };

        static void RenderToAllMonitors()
        {
            // Clear them all
            for (int w = 0; w < g_WindowObjects.Count; w++)
            {
                WINDOW_OBJECT pWindowObj = g_WindowObjects.ElementAt(w);

                // set the render target
                pWindowObj.pDeviceObj.pDevice.ImmediateContext.OutputMerger.SetTargets(pWindowObj.pDepthStencilView, pWindowObj.pRenderTargetView);

                // set the viewport
                Viewport Viewport = new Viewport();
                Viewport.X = 0;
                Viewport.Y = 0;
                Viewport.Width = pWindowObj.Width;
                Viewport.Height = pWindowObj.Height;
                Viewport.MinZ = 0.0f;
                Viewport.MaxZ = 1.0f;
                pWindowObj.pDeviceObj.pDevice.ImmediateContext.Rasterizer.SetViewports(Viewport);

                // Call the render function
                pWindowObj.pDeviceObj.pDevice.ImmediateContext.ClearRenderTargetView(pWindowObj.pRenderTargetView, colors[w]);
            }
        }

        static void PresentToAllMonitors()
        {
            for (int w = 0; w < g_WindowObjects.Count; w++)
            {
                WINDOW_OBJECT pWindowObj = g_WindowObjects.ElementAt(w);
                pWindowObj.pSwapChain.Present(0, 0);
            }
        }

        static void Main(string[] args)
        {
            g_pDXGIFactory = new Factory();

            EnumerateAdapters();
            CreateMonitorWindows();
            CreateDevicePerAdapter();
            CreateSwapChainPerOutput();

            MessagePump.Run(g_WindowObjects.First().Form, () =>
            {
                RenderToAllMonitors();
                PresentToAllMonitors();
            });
        }
    }
}
[/CODE]
  7. Turns out this was caused by the [url="http://msdn.microsoft.com/en-us/library/bb172408(v=vs.85).aspx"]D3D10_RASTERIZER_DESC's AntialiasedLineEnable[/url] property (which I had set to "true") ... with it set to "false" it now works regardless of the blend mode I'm using (is this a driver issue? I guess this isn't supposed to happen). Anyway, I'd still love to hear suggestions on how I could improve the math in my geometry shader. Cheers
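For anyone hitting the same artifact, here is what the fixed state looks like, sketched in SlimDX naming since the other posts in this thread use SlimDX; the remaining rasterizer settings are assumptions about the surrounding code.

[code]
// Sketch: rasterizer state with line antialiasing disabled,
// which avoided the blending artifacts described above.
var rsDesc = new RasterizerStateDescription
{
    FillMode = FillMode.Solid,
    CullMode = CullMode.None,            // assumption: quads from the GS are two-sided
    IsDepthClipEnabled = true,
    IsAntialiasedLineEnabled = false     // true is what triggered the artifacts
};
device.Rasterizer.State = RasterizerState.FromDescription(device, rsDesc);
[/code]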
  8. Hi,

I need to render lines with adjustable width in DX10, so I went ahead and thought this would be a good time to try out geometry shaders (for the first time), and I came up with the following shader, which essentially does this:

Application:
* draws lines with LineList or LineStrip topology and my shader enabled

VertexShader:
* just transforms vertices to non-homogeneous screen space

GeometryShader:
* constructs a 4-vertex quad per line primitive
* vertex positions are calculated based on the line direction / normal and the given line width (in screen space)
* extends the vertex positions in the line direction so the anchor points between two connected lines overlap

PixelShader:
* just outputs a simple color for now

That's the shader code...

[code]
//#include "Include/ShaderStates.fx"

Texture2D LineTexture;

//-----------------------------------------------------------------------------
// Constant Buffers
//-----------------------------------------------------------------------------
cbuffer cbChangeRare
{
    float2 RenderTargetSize;
}

cbuffer cbChangePerFrame
{
}

cbuffer cbChangePerObject
{
    matrix WorldViewProjection;
}

//-----------------------------------------------------------------------------
// Shader Input / Output Structures
//-----------------------------------------------------------------------------
struct VS_INPUT
{
    float4 Position : POSITION0;
};

struct GEO_IN
{
    float4 Position : POSITION0;
};

struct GEO_OUT
{
    float4 Position : SV_POSITION;
    float2 TexCoord : TEXCOORD;
};

struct PS_OUTPUT
{
    float4 Color : SV_Target0;
};

GEO_IN VS_LineV2(in VS_INPUT input)
{
    GEO_IN output;
    output.Position = mul(input.Position, WorldViewProjection);
    //output.Position = input.Position;
    return output;
}

[maxvertexcount(6)]
void GS(line GEO_IN points[2], inout TriangleStream<GEO_OUT> output)
{
    float4 p0 = points[0].Position;
    float4 p1 = points[1].Position;

    float w0 = p0.w;
    float w1 = p1.w;

    p0.xyz /= p0.w;
    p1.xyz /= p1.w;

    float3 line01 = p1 - p0;
    float3 dir = normalize(line01);

    // scale to correct window aspect ratio
    float3 ratio = float3(RenderTargetSize.y, RenderTargetSize.x, 0);
    ratio = normalize(ratio);

    float3 unit_z = normalize(float3(0, 0, -1));
    float3 normal = normalize(cross(unit_z, dir) * ratio);

    float width = 0.01;

    GEO_OUT v[4];

    float3 dir_offset = dir * ratio * width;
    float3 normal_scaled = normal * ratio * width;

    float3 p0_ex = p0 - dir_offset;
    float3 p1_ex = p1 + dir_offset;

    v[0].Position = float4(p0_ex - normal_scaled, 1) * w0;
    v[0].TexCoord = float2(0, 0);

    v[1].Position = float4(p0_ex + normal_scaled, 1) * w0;
    v[1].TexCoord = float2(0, 0);

    v[2].Position = float4(p1_ex + normal_scaled, 1) * w1;
    v[2].TexCoord = float2(0, 0);

    v[3].Position = float4(p1_ex - normal_scaled, 1) * w1;
    v[3].TexCoord = float2(0, 0);

    output.Append(v[2]);
    output.Append(v[1]);
    output.Append(v[0]);
    output.RestartStrip();

    output.Append(v[3]);
    output.Append(v[2]);
    output.Append(v[0]);
    output.RestartStrip();
}

PS_OUTPUT PS_LineV2(GEO_OUT input)
{
    PS_OUTPUT output;
    //output.Color = LineTexture.Sample(LinearSampler, input.TexCoord);
    //output.Color = float4(input.TexCoord.xy, 0, 1);
    output.Color = float4(1, 0.5, 0, 0.5);
    return output;
}

technique10 LineV2Technique
{
    pass P0
    {
        SetVertexShader(CompileShader(vs_4_0, VS_LineV2()));
        SetGeometryShader(CompileShader(gs_4_0, GS()));
        SetPixelShader(CompileShader(ps_4_0, PS_LineV2()));
    }
}
[/code]

Now the problem with this shader is the following. If I enable alpha blending I get a strange behaviour which seems to be related to the projection / camera view...

This is what a test using the shader looks like with alpha blending enabled and a look direction approximately along the Z axis (X axis -> red, Y axis -> green, Z axis -> blue):

[URL=http://imgur.com/B37K9][IMG]http://i.imgur.com/B37K9.png[/IMG][/URL]

If I rotate the camera to the right a little ...

[URL=http://imgur.com/vxmrG][IMG]http://i.imgur.com/vxmrG.png[/IMG][/URL]

... or to the left ...

[URL=http://imgur.com/I4Bii][IMG]http://i.imgur.com/I4Bii.png[/IMG][/URL]

Here I increased the line width and reduced the number of line elements so it's easier to see the artifacts that produce the issue (I also changed the pixel shader to print the texture coordinates ... output.Color = float4(input.TexCoord.xy, 0, 0.5); ... which are in the range 0,0 to 1,0 for each line segment):

[URL=http://imgur.com/51t5F][IMG]http://i.imgur.com/51t5F.png[/IMG][/URL]

The strange thing is that the error only occurs when alpha blending is enabled; with default blending the line looks fine. My guess is that I screw something up in the geometry shader when converting from non-homogeneous to homogeneous coordinate space or vice versa ... is the math in the geometry shader about right, or total BS? xD

I'd be very thankful for any hints from someone who has more experience with writing (geometry) shaders than me.

Thanks
  9. Hi all,

Currently I'm trying to get view-space reconstruction from the hardware depth buffer working. I'm making use of the snippet by MJP from http://mynameismjp.wordpress.com/2009/03/10/reconstructing-position-from-depth/

In my GBuffer MRT pass shader I do:

[code]
struct PS_INPUT
{
    float4 Position : POSITION0;
    float4 vColor : COLOR0;
    float2 vTexUV : TEXCOORD0;
    float3 Normal : TEXCOORD1;
    float DofBlurFactor : COLOR1;
    float2 DepthZW : TEXCOORD2;
};

struct PS_MRT_OUTPUT
{
    float4 Color : SV_Target0;
    float4 NormalRGB_DofBlurA : SV_Target1;
    float Depth : SV_Depth;
};

PS_INPUT VS(VS_INPUT input, uniform bool bSpecular)
{
    PS_INPUT output;
    ...
    output.Position = mul(input.Position, WorldViewProjection);
    output.DepthZW.xy = output.Position.zw;
    ...
    return output;
}

PS_MRT_OUTPUT PS(PS_INPUT input, uniform bool bTexture)
{
    PS_MRT_OUTPUT output = (PS_MRT_OUTPUT)0;
    ...
    output.Depth = input.DepthZW.x / input.DepthZW.y;
    ...
    return output;
}
[/code]

To reconstruct the view-space position in my fullscreen post-processing shader, I use the following function (which mostly consists of the code found in MJP's article)...

[code]
float3 getPosition(in float2 uv)
{
    // Get the depth value for this pixel
    float z = SampleDepthBuffer(uv);

    float x = uv.x * 2 - 1;
    float y = (1 - uv.y) * 2 - 1;
    float4 vProjectedPos = float4(x, y, z, 1.0f);

    // Transform by the inverse projection matrix
    float4 vPositionVS = mul(vProjectedPos, InverseProjection);

    // Divide by w to get the view-space position
    vPositionVS.z = -vPositionVS.z;
    return vPositionVS.xyz / vPositionVS.w;
}
[/code]

Interestingly, I have to invert the Z component of the reconstructed position, but that should make sense, since the positive Z axis in the application points into the screen, whereas it usually points out of the screen for a default right-handed coordinate system.

Still, the reconstructed view-space position as shown in the screenshot below is wrong...

[IMG]http://i.imgur.com/0sOR0.png[/IMG]

... you can see that the position reconstruction is at least somehow working, but it should look more like this ...

[IMG]http://img3.imageshack.us/img3/6245/nearlycorrect.jpg[/IMG]

The biggest visual difference is that the background isn't black in my version, which suggests there might be something wrong with the depth?

I'd be thankful for any hints if someone has an idea what could be wrong.

[edit]: it looks like for background pixels the term "return vPositionVS.xyz / vPositionVS.w;" performs a division by zero, which is no good obviously, but why does this happen?

Many thanks,
Regards
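Regarding the division-by-zero noted in the edit: background pixels that were never written keep the depth clear value, so one common workaround is simply to skip them before unprojecting. A minimal sketch wrapping the getPosition function above, assuming the depth buffer is cleared to 1.0:

[code]
float3 getPositionSafe(in float2 uv)
{
    float z = SampleDepthBuffer(uv);

    // Background pixels still hold the clear value of 1.0 (assumption);
    // treat them explicitly instead of unprojecting them.
    if (z >= 1.0f)
        return float3(0, 0, 0); // or some far-plane sentinel value

    return getPosition(uv);
}
[/code]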
  10. Thanks for the confirmations ... I hope this thread will help someone in the future, since I've spent quite some time trying to figure out these facts.
  11. Hi,

I didn't manage to find a satisfying answer on the web, so I thought I'd ask here to make it clear for me and for everyone who might be in the same position in the future.

Browsing through several papers/presentations, I found no definite answer to the question of whether it is possible to resolve a multisampled DepthStencil buffer using ResolveSubresource. Some contained hints that this might be possible in D3D10.1 but not in D3D10, but I've yet to find any code snippet / sample on the web that actually does that. I've only found people doing the resolve in HLSL by binding the MSAA DepthStencil buffer as a Texture2DMS in their shaders.

Is this the only way to resolve an MSAA DepthStencil buffer, or can this be done from code somehow as well? At least when I tried to use ResolveSubresource on my MSAA DepthStencil buffer with a D3D10.1 device, the debug log told me that the source parameter of ResolveSubresource can't have the BIND_DEPTH_STENCIL flag set.

[u]Currently I'm under the impression that it is as follows:[/u]

D3D10 ... [b]can't[/b] use ResolveSubresource on the depth buffer / [b]can't[/b] access the MSAA depth buffer directly from HLSL ... i.e. you must render depth values to a texture on your own

D3D10.1 ... [b]still can't[/b] use ResolveSubresource on the depth buffer / [b]CAN[/b] access the MSAA depth buffer directly from HLSL ... via Texture2DMS.Load(...)

D3D11 ... I have no clue if anything has changed from D3D10.1 ... ???

I'd be really thankful if someone could shed some light on the situation and correct / complete the above listing: what are the possible ways to resolve an MSAA DepthStencil buffer in D3D10 / D3D10.1 / D3D11?

Cheers
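For completeness, a minimal sketch of the HLSL-side manual resolve via Texture2DMS.Load mentioned above, as it would look in D3D10.1 / D3D11. The sample count of 4 and the min-resolve policy are assumptions.

[code]
// MSAA depth buffer bound as a shader resource (via a typeless format
// such as R24G8_TYPELESS with an R24_UNORM_X8_TYPELESS view).
Texture2DMS<float, 4> DepthBufferMS;

float PS_ResolveDepth(float4 pos : SV_POSITION) : SV_Target
{
    int2 coord = int2(pos.xy);

    // Pick the sample nearest to the camera; other policies (max, average)
    // are possible, but averaging depth is rarely what you want.
    float depth = 1.0f;
    [unroll]
    for (int i = 0; i < 4; i++)
        depth = min(depth, DepthBufferMS.Load(coord, i));

    return depth;
}
[/code]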