GalacticCrew

Members
  • Content count: 13

Community Reputation

5 Neutral

About GalacticCrew

  • Rank: Member

Personal Information

  • Industry Role: Programmer
  • Interests: Production, Programming
  1. DX11 CreateSwapChain throws an exception

    Unfortunately, the changes had no effect. I use a default value of 60 Hz and then search for supported refresh rates for the current resolution. His console output says that 60 Hz is supported by his screen. I have absolutely no idea what's going wrong, and the player is getting impatient. :-/

        Console.WriteLine("\t\tCreate refresh rate (60).");
        SharpDX.DXGI.Rational refreshRate = new SharpDX.DXGI.Rational(60, 1);

        Console.WriteLine("\t\tSelect supported refresh rate.");
        for (int j = 0; j < adapter.Outputs.Length; j++)
        {
            SharpDX.DXGI.ModeDescription[] modeDescriptions = adapter.Outputs[j].GetDisplayModeList(
                dxgiFormat,
                SharpDX.DXGI.DisplayModeEnumerationFlags.Interlaced | SharpDX.DXGI.DisplayModeEnumerationFlags.Scaling);

            bool isDone = false;
            foreach (var description in modeDescriptions)
            {
                if (description.Width == userInfo.Resolution.Width && description.Height == userInfo.Resolution.Height)
                {
                    refreshRate = description.RefreshRate;
                    Console.WriteLine("\t\t\tSelected refresh rate = {0}/{1} ({2})",
                        description.RefreshRate.Numerator,
                        description.RefreshRate.Denominator,
                        description.RefreshRate.Numerator / description.RefreshRate.Denominator);
                    isDone = true;
                    break;
                }
            }

            if (isDone)
                break;
        }
  2. DX11 CreateSwapChain throws an exception

    I updated the test tool and sent it to the player. Hopefully, he can now create a swap chain. I'll let you know when I get feedback!
  3. DX11 CreateSwapChain throws an exception

    Do you mean my SharpDX version? I use SharpDX 4.0.1. I don't know which GPU drivers the player has installed. Are they relevant?
  4. A player of my game contacted me because the game crashes during start-up. After taking a look at the log file he sent me, calling CreateSwapChain results in the exception shown below.

        HRESULT: [0x887A0001], Module: [SharpDX.DXGI], ApiCode: [DXGI_ERROR_INVALID_CALL/InvalidCall], Message: Unknown
           at SharpDX.Result.CheckError()
           at SharpDX.DXGI.Factory.CreateSwapChain(ComObject deviceRef, SwapChainDescription& descRef, SwapChain swapChainOut)
           at SharpDX.Direct3D11.Device.CreateWithSwapChain(Adapter adapter, DriverType driverType, DeviceCreationFlags flags, FeatureLevel[] featureLevels, SwapChainDescription swapChainDescription, Device& device, SwapChain& swapChain)
           at SharpDX.Direct3D11.Device.CreateWithSwapChain(DriverType driverType, DeviceCreationFlags flags, SwapChainDescription swapChainDescription, Device& device, SwapChain& swapChain)

    In order to investigate this player's problem, I created a test application that looks like this:

        using System;
        using System.Diagnostics;

        class Program
        {
            static void Main(string[] args)
            {
                Helper.UserInfo userInfo = new Helper.UserInfo(true);
                Console.WriteLine("Checking adapters.");

                using (var factory = new SharpDX.DXGI.Factory1())
                {
                    for (int i = 0; i < factory.GetAdapterCount(); i++)
                    {
                        SharpDX.DXGI.Adapter adapter = factory.GetAdapter(i);
                        Console.WriteLine("\tAdapter {0}: {1}", i, adapter.Description.Description);

                        bool supportsLevel10_1 = SharpDX.Direct3D11.Device.IsSupportedFeatureLevel(adapter, SharpDX.Direct3D.FeatureLevel.Level_10_1);
                        Console.WriteLine("\t\tSupport for Level_10_1? {0}!", supportsLevel10_1);

                        Console.WriteLine("\t\tCreate refresh rate (60).");
                        var refreshRate = new SharpDX.DXGI.Rational(60, 1);

                        Console.WriteLine("\t\tCreate mode description.");
                        var modeDescription = new SharpDX.DXGI.ModeDescription(0, 0, refreshRate, SharpDX.DXGI.Format.R8G8B8A8_UNorm);

                        Console.WriteLine("\t\tCreate sample description.");
                        var sampleDescription = new SharpDX.DXGI.SampleDescription(1, 0);

                        Console.WriteLine("\t\tCreate swap chain description.");
                        var desc = new SharpDX.DXGI.SwapChainDescription()
                        {
                            // Number of back buffers to use on the SwapChain
                            BufferCount = 1,
                            ModeDescription = modeDescription,
                            // Do we want to use a windowed mode?
                            IsWindowed = true,
                            Flags = SharpDX.DXGI.SwapChainFlags.None,
                            OutputHandle = Process.GetCurrentProcess().MainWindowHandle,
                            // Count in 'SampleDescription' means the level of anti-aliasing (from 1 to usually 4)
                            SampleDescription = sampleDescription,
                            SwapEffect = SharpDX.DXGI.SwapEffect.Discard,
                            // DXGI_USAGE_RENDER_TARGET_OUTPUT: used when you wish to draw graphics into the back buffer
                            Usage = SharpDX.DXGI.Usage.RenderTargetOutput
                        };

                        try
                        {
                            Console.WriteLine("\t\tCreate device (Run 1).");
                            SharpDX.Direct3D11.Device device = new SharpDX.Direct3D11.Device(adapter, SharpDX.Direct3D11.DeviceCreationFlags.None, new SharpDX.Direct3D.FeatureLevel[] { SharpDX.Direct3D.FeatureLevel.Level_10_1 });
                            Console.WriteLine("\t\tCreate swap chain (Run 1).");
                            SharpDX.DXGI.SwapChain swapChain = new SharpDX.DXGI.SwapChain(factory, device, desc);
                        }
                        catch (Exception e)
                        {
                            Console.WriteLine("EXCEPTION: {0}", e.Message);
                        }

                        try
                        {
                            Console.WriteLine("\t\tCreate device (Run 2).");
                            SharpDX.Direct3D11.Device device = new SharpDX.Direct3D11.Device(adapter, SharpDX.Direct3D11.DeviceCreationFlags.BgraSupport, new SharpDX.Direct3D.FeatureLevel[] { SharpDX.Direct3D.FeatureLevel.Level_10_1 });
                            Console.WriteLine("\t\tCreate swap chain (Run 2).");
                            SharpDX.DXGI.SwapChain swapChain = new SharpDX.DXGI.SwapChain(factory, device, desc);
                        }
                        catch (Exception e)
                        {
                            Console.WriteLine("EXCEPTION: {0}", e.Message);
                        }

                        try
                        {
                            Console.WriteLine("\t\tCreate device (Run 3).");
                            SharpDX.Direct3D11.Device device = new SharpDX.Direct3D11.Device(adapter);
                            Console.WriteLine("\t\tCreate swap chain (Run 3).");
                            SharpDX.DXGI.SwapChain swapChain = new SharpDX.DXGI.SwapChain(factory, device, desc);
                        }
                        catch (Exception e)
                        {
                            Console.WriteLine("EXCEPTION: {0}", e.Message);
                        }
                    }
                }

                Console.WriteLine("FIN.");
                Console.ReadLine();
            }
        }

    At the beginning, I collect information about the computer (processor, GPU, .NET Framework version, etc.). The rest should explain itself. I sent him the application, and in all three cases, creating the swap chain fails with the same exception.

    In this test program, I included all solutions that worked for other users. For example, AlexandreMutel said in this forum thread that device and swap chain need to share the same factory. I did that in my program, so using different factories is not a problem in my case. Laurent Couvidou also gave advice here. The player has Windows 7 with .NET Framework 4.6.1, which is good enough to run my test application and my game, which use .NET Framework 4.5.2. The graphics card (Radeon HD 6700 Series) is also good enough to run the application. In my test application, I also checked whether Feature Level 10_1 is supported, which is the minimum requirement for my game. A refresh rate of 60 Hz should also be no problem. Therefore, I think the parameters are fine. The remaining calls are just three different ways to create a device and a swap chain for the adapter. All of them throw an exception when creating the swap chain.

    There are also Battlefield 3 players who had problems running BF3. As it turned out, BF3 had some issue with Windows region settings, but I believe that's not a problem in my case. There were also compatibility issues in Fallout 4, but my game runs on many other Windows 7 PCs without any problems.

    Do you have any idea what's wrong? I already have a lot of players who play my game without any issues, so it's not a general problem. I really want to help this player, but I currently can't find a solution.
  5. You're welcome! I wrote my game engine based on SharpDX with DirectX 11, and I found those tutorials very useful. The tutorials from Rastertek are very good in my opinion, and someone converted them all to SharpDX. The code is not optimized, but I found it incredibly useful for learning things.
  6. I am sorry for my late response, but I was out of office. I did not have time to read the presentations you linked, but I will read them this weekend. I was able to solve my performance issue. In my game engine, I check all installed graphics adapters and use the one with the highest memory and the highest possible feature level. As it turned out, my testing laptop reported the same result for all graphics adapters, although an Intel HD 630 chip is clearly weaker than my GeForce GTX 1050 Ti. I adjusted the system settings, and now everything runs smoothly. VSync was turned off. In my main menu, I had around 2 ms per frame, because no 3D models were rendered. When I used a more complex scene in my game, the GPU needed 80 ms per frame; therefore, I had lag. Using the correct graphics chip, the GPU time was reduced by roughly a factor of 10: around 1 ms per frame in my default case and around 10 ms in very complex scenes. I will update my FAQ so all players know about this issue. A small sketch of the adapter-selection logic follows below.
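    For reference, this is a minimal sketch of that selection logic, assuming SharpDX 4.x; SelectBestAdapter is an illustrative helper, not my engine's actual code:

        using SharpDX.Direct3D;
        using SharpDX.DXGI;
        using D3D11 = SharpDX.Direct3D11;

        static Adapter SelectBestAdapter(Factory1 factory)
        {
            Adapter bestAdapter = null;
            long bestMemory = -1;
            FeatureLevel bestLevel = 0;

            for (int i = 0; i < factory.GetAdapterCount(); i++)
            {
                Adapter adapter = factory.GetAdapter(i);

                // Highest feature level this adapter supports.
                FeatureLevel level = D3D11.Device.GetSupportedFeatureLevel(adapter);
                long memory = (long)adapter.Description.DedicatedVideoMemory;

                // Prefer the highest feature level; break ties with dedicated video memory.
                if (level > bestLevel || (level == bestLevel && memory > bestMemory))
                {
                    bestAdapter = adapter;
                    bestLevel = level;
                    bestMemory = memory;
                }
            }

            return bestAdapter;
        }

    Note that this is exactly the kind of check that went wrong on my laptop: if the driver reports the same feature level and memory for both chips, the selection cannot distinguish them.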
  7. One of the NPC models has 3,339 vertices, and the other NPC models will have a similar number of vertices. I am running my tests on a laptop with a GeForce GTX 1050 Ti and Intel(R) HD Graphics 630. At the moment, I do not use any kind of post-processing. My engine can handle features like soft shadowing, but they are all disabled at the moment (because of performance problems on older computers).
  8. Hello, I want to improve the performance of my game (engine), and some of you helped me to make a GPU profiler. After creating the GPU profiler, I started to measure the time my GPU needs per frame and refined my GPU time measurements to find my bottleneck.

    Searching for the bottleneck

    Rendering a small scene in an idle state takes around 15.38 ms per frame. 13.54 ms (88.04%) are spent rendering the scene, 1.57 ms (10.22%) are spent during the SwapChain.Present call (no VSync!), and the rest is spent on other tasks like rendering the UI. I investigated the scene rendering further, since it takes over 88% of my GPU frame time. When rendering my scene, most of the time (80.97%) is spent rendering my models. The rest is spent rendering the background/skybox, updating animation data, updating the pixel shader constant buffer, etc. It wasn't really surprising that most of the time is spent on my models, so I refined my measurements further to find the actual bottleneck.

    In my example scene, I have five animated NPCs. When rendering these NPCs, most actions are almost free: setting the proper shaders and the input layout (0.11%), updating vertex shader constant buffers (0.32%), setting textures (0.24%), and setting vertex and index buffers (0.28%). However, the rest of the GPU time (99.05%!) is spent in two function calls: DrawIndexed and DrawIndexedInstanced. I searched this forum and the web for other articles and threads about these functions, but I haven't found much useful information. I use SharpDX and .NET Framework 4.5 to develop my game (engine). The developer of SharpDX said that "The method DrawIndexed in SharpDX is a direct call to DirectX" (Source). Since DirectX 11 is widely used and SharpDX is "only" a wrapper for DirectX functions, I assume the problem is in my code.

    How I render my scene

    When rendering my scene, I render one model after another. Each model has one or more parts and one or more positions. For example, a human model has parts like head, hands, legs, torso, etc., and may be placed in different locations (on the couch, on a street, ...). For static elements like furniture, houses, etc., I use instancing, because their positions never change at run-time. Dynamic models like humans and monsters don't use instancing, because their positions change over time. When rendering a model, I use this workflow (see the sketch at the end of this post):
      • Set vertex and pixel shaders, if they need to be updated (e.g. PBR shaders, simple shaders, depth info shaders, ...)
      • Set animation data as a constant buffer in the vertex shader, if the model is animated
      • Set the generic vertex shader constant buffer (world matrix, etc.)
      • Render all parts of the model. For each part:
        • Set the diffuse, normal, specular and emissive texture shader views
        • Set the vertex buffer
        • Set the index buffer
        • Call DrawIndexedInstanced for instanced models and DrawIndexed for the others

    What's the problem

    After my GPU profiling, I know that over 99% of the rendering time for a single model is spent in the DrawIndexedInstanced and DrawIndexed function calls. But why do they take so long? Do I have to try to optimize my vertex or pixel shaders? I do not use other types of shaders at the moment. "Le Comte du Merde-fou" suggested in this post merging regions of vertices into larger vertex buffers to reduce the number of draw calls. While this makes sense to me, it does not explain why rendering my five (!) animated models takes that much GPU time.

    To make sure I wasn't analyzing something wrong, I made sure not to use the D3D11_CREATE_DEVICE_DEBUG flag and to run a Release build in Visual Studio, as suggested by Hodgman in this forum thread. My engine does its job: multi-texturing, animation, soft shadowing, instancing, etc. are all implemented, but I need to reduce the GPU load for performance reasons. Each frame takes less than 3 ms of CPU time, by the way, so I believe the problem is on the GPU side.
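    For reference, a condensed sketch of this per-model workflow in SharpDX. The Model and ModelPart types and their fields are simplified placeholders to make the sketch self-contained; they are not my engine's actual classes:

        // Simplified placeholder types, only to make the sketch self-contained.
        class ModelPart
        {
            public SharpDX.Direct3D11.ShaderResourceView[] TextureViews;   // diffuse, normal, specular, emissive
            public SharpDX.Direct3D11.VertexBufferBinding VertexBinding;
            public SharpDX.Direct3D11.Buffer IndexBuffer;
            public int IndexCount;
        }

        class Model
        {
            public SharpDX.Direct3D11.VertexShader VertexShader;
            public SharpDX.Direct3D11.PixelShader PixelShader;
            public SharpDX.Direct3D11.Buffer AnimationBuffer;   // bone matrices, if animated
            public SharpDX.Direct3D11.Buffer PerObjectBuffer;   // world matrix, etc.
            public ModelPart[] Parts;
            public bool IsAnimated;
            public bool IsInstanced;
            public int InstanceCount;
        }

        static void RenderModel(SharpDX.Direct3D11.DeviceContext context, Model model)
        {
            // 1. Set vertex and pixel shaders (only when they change, in the real engine).
            context.VertexShader.Set(model.VertexShader);
            context.PixelShader.Set(model.PixelShader);

            // 2. Animation data and the generic per-object constant buffer.
            if (model.IsAnimated)
                context.VertexShader.SetConstantBuffer(1, model.AnimationBuffer);
            context.VertexShader.SetConstantBuffer(0, model.PerObjectBuffer);

            // 3. Render every part of the model.
            foreach (ModelPart part in model.Parts)
            {
                context.PixelShader.SetShaderResources(0, part.TextureViews);
                context.InputAssembler.SetVertexBuffers(0, part.VertexBinding);
                context.InputAssembler.SetIndexBuffer(part.IndexBuffer, SharpDX.DXGI.Format.R32_UInt, 0);

                if (model.IsInstanced)
                    context.DrawIndexedInstanced(part.IndexCount, model.InstanceCount, 0, 0, 0);
                else
                    context.DrawIndexed(part.IndexCount, 0, 0);
            }
        }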
  9. DX11 Using GPU Profiling to find bottlenecks

    I have extended my GPU profiling classes. Now I measure all intervals hierarchically, so I can see the duration of each operation in each function (a rough sketch of the idea follows below). I located my bottleneck and will investigate further how to solve it. If I can't solve it, I will open a new thread. My question here has been answered successfully. Thank you very much!
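    The idea, roughly: pair Begin/End calls on a stack, so every interval records its parent. This is only a sketch of the approach, assuming the GPUInterval class from the post below is extended with a Children list; my actual classes differ:

        using System.Collections.Generic;

        public class HierarchicalGpuProfiler
        {
            // Currently open intervals, innermost on top.
            private readonly Stack<GPUInterval> _openIntervals = new Stack<GPUInterval>();

            // Top-level intervals of the current frame.
            public List<GPUInterval> RootIntervals { get; private set; }

            public HierarchicalGpuProfiler()
            {
                RootIntervals = new List<GPUInterval>();
            }

            public void Begin(GPUInterval interval)
            {
                // Attach the interval to whatever is currently being measured,
                // assuming GPUInterval has been extended with a Children list.
                if (_openIntervals.Count > 0)
                    _openIntervals.Peek().Children.Add(interval);
                else
                    RootIntervals.Add(interval);

                _openIntervals.Push(interval);
                interval.Start();
            }

            public void End()
            {
                // Close the innermost open interval.
                _openIntervals.Pop().Stop();
            }
        }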
  10. DX11 Using GPU Profiling to find bottlenecks

    No problem, Infinisearch. I used the information from the posts above to write my own GPU profiler using the links provided by MJP. I wrote two classes. The class GPUInterval represents an interval you are interested in, e.g. the time used to render a scene. The class GPUProfiler is a container for a set of GPUIntervals and does all the calculations. (A short usage sketch follows at the end of this post.)

        namespace Engine.Game.Profiler
        {
            public class GPUInterval
            {
                private readonly SharpDX.Direct3D11.Device _device;
                private readonly SharpDX.Direct3D11.DeviceContext _deviceContext;

                private SharpDX.Direct3D11.Query _startQuery;
                private SharpDX.Direct3D11.Query _endQuery;

                public string Name { get; private set; }
                public double Duration { get; private set; }

                public GPUInterval(SharpDX.Direct3D11.Device device, SharpDX.Direct3D11.DeviceContext deviceContext, string name)
                {
                    _device = device;
                    _deviceContext = deviceContext;
                    Name = name;

                    // Create timestamp queries.
                    _startQuery = new SharpDX.Direct3D11.Query(_device, new SharpDX.Direct3D11.QueryDescription()
                    {
                        Type = SharpDX.Direct3D11.QueryType.Timestamp,
                        Flags = SharpDX.Direct3D11.QueryFlags.None
                    });
                    _endQuery = new SharpDX.Direct3D11.Query(_device, new SharpDX.Direct3D11.QueryDescription()
                    {
                        Type = SharpDX.Direct3D11.QueryType.Timestamp,
                        Flags = SharpDX.Direct3D11.QueryFlags.None
                    });
                }

                public void Start()
                {
                    _deviceContext.End(_startQuery);
                }

                public void Stop()
                {
                    _deviceContext.End(_endQuery);
                }

                public void Calculate(long frequency)
                {
                    // Block until both timestamps are available.
                    long startTime;
                    while (!_deviceContext.GetData(_startQuery, out startTime))
                        System.Threading.Thread.Sleep(1);

                    long endTime;
                    while (!_deviceContext.GetData(_endQuery, out endTime))
                        System.Threading.Thread.Sleep(1);

                    // Convert GPU ticks to milliseconds.
                    Duration = ((endTime - startTime) * 1000.0) / frequency;
                }
            }
        }

        using System.Collections.Generic;

        namespace Engine.Game.Profiler
        {
            public class GPUProfiler
            {
                private readonly SharpDX.Direct3D11.Device _device;
                private readonly SharpDX.Direct3D11.DeviceContext _deviceContext;

                private SharpDX.Direct3D11.Query _disjointQuery;

                public List<GPUInterval> Intervals { get; private set; }

                public GPUProfiler(SharpDX.Direct3D11.Device device, SharpDX.Direct3D11.DeviceContext deviceContext)
                {
                    _device = device;
                    _deviceContext = deviceContext;

                    // Create disjoint query.
                    _disjointQuery = new SharpDX.Direct3D11.Query(_device, new SharpDX.Direct3D11.QueryDescription()
                    {
                        Type = SharpDX.Direct3D11.QueryType.TimestampDisjoint,
                        Flags = SharpDX.Direct3D11.QueryFlags.None
                    });

                    // Create intervals list.
                    Intervals = new List<GPUInterval>();
                }

                public void StartFrame()
                {
                    _deviceContext.Begin(_disjointQuery);
                }

                public void EndFrame()
                {
                    _deviceContext.End(_disjointQuery);

                    // Retrieve the clock frequency and the disjoint flag.
                    SharpDX.Direct3D11.QueryDataTimestampDisjoint queryDataTimestampDisjoint;
                    while (!_deviceContext.GetData(_disjointQuery, out queryDataTimestampDisjoint))
                        System.Threading.Thread.Sleep(1);

                    // Calculate the duration of all intervals.
                    if (!queryDataTimestampDisjoint.Disjoint)
                    {
                        foreach (var interval in Intervals)
                            interval.Calculate(queryDataTimestampDisjoint.Frequency);
                    }
                }
            }
        }

    I created four GPUIntervals to check the same regions I mentioned in my initial post:
      • The entire render function
      • Rendering the scene (drawing models, setting constant buffers, ...)
      • Rendering the UI
      • Calling SwapChain.Present

    Here are the numbers for a random frame while the game is idling:
      • Entire render function = 16.35 ms
      • Render scene = 15.00 ms
      • Render UI = 0.26 ms
      • SwapChain.Present = 1.08 ms

    These numbers are no big surprise, because rendering the scene does all the work. However, it is interesting that rendering the UI (which has A LOT of elements) and presenting the swap chain are cheap. Tomorrow, I will investigate the different parts of my scene rendering. I will keep you updated!
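    For reference, this is roughly how the two classes are used in a frame; RenderScene and RenderUI are placeholders for my engine's actual render calls:

        var profiler = new GPUProfiler(device, deviceContext);
        var sceneInterval = new GPUInterval(device, deviceContext, "Render scene");
        var uiInterval = new GPUInterval(device, deviceContext, "Render UI");
        profiler.Intervals.Add(sceneInterval);
        profiler.Intervals.Add(uiInterval);

        // Every frame:
        profiler.StartFrame();

        sceneInterval.Start();
        RenderScene();   // placeholder for the scene rendering
        sceneInterval.Stop();

        uiInterval.Start();
        RenderUI();      // placeholder for the UI rendering
        uiInterval.Stop();

        profiler.EndFrame();   // blocks until the GPU timestamps are available

        foreach (var interval in profiler.Intervals)
            Console.WriteLine("{0} = {1:0.00} ms", interval.Name, interval.Duration);

    Because EndFrame and Calculate spin until the query data is available, this stalls the CPU until the GPU has finished the frame. That is fine for profiling, but it should be disabled in shipping builds.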
  11. DX11 Using GPU Profiling to find bottlenecks

    Thank you for your replies. I will try to implement my own profiling mechanism. I also edited my post a little bit; there were many typos in it. I wrote it after finishing my working day... :-/ When writing "ms" I mean milliseconds. I have 60 frames per second, i.e. each frame takes around 16 ms. An accumulated render time for my scene of 170 ms per second means that each frame I spend around 170 / 60 = 2.83 ms rendering the scene.
  12. In some situations, my game starts to "lag" on older computers. I wanted to search for bottlenecks and optimize my game by searching for flaws in the shaders and in the layer between CPU and GPU. My first step was to measure the time my render function needs for its tasks. Every second, I wrote the accumulated times of each task into my console window (a sketch of this accumulation follows at the end of this post). Each second, it takes around:
      • 170 ms to call the render functions for all models (including setting shader resources, updating constant buffers, drawing all indexed and non-indexed vertices, etc.)
      • 40 ms to render the UI
      • 790 ms to call SwapChain.Present
      • <1 ms to do the rest (updating structures, etc.)

    In my swap chain description, I set a frame rate of 60 Hz, if it's supported by the computer. It made sense to me that the Present function waits some time before it starts the next frame. However, I wanted to check whether this might be a problem for me. After a web search, I found articles like this one, which state that outdated drivers can cause such problems. My drivers are up-to-date, so that's no issue.

    I installed Microsoft's PIX, but I was unable to use it. I could configure my game for x64, but PIX is not able to process DirectX 11. After getting only error messages, I installed NVIDIA's Nsight. After adjusting my game and installing all components, I couldn't get a proper result, because my game freezes after a few frames. I haven't figured out why. There is no exception or error message, and other debug mechanisms like log messages and breakpoints tell me the game freezes at the end of the render function after a few frames. So I looked for another profiling tool and found Jeremy's GPUProfiler. However, the information returned by this tool is too basic to get in-depth knowledge about my performance issues.

    Can anyone recommend a GPU profiler or any other tool that might help me to find bottlenecks in my game and/or that is able to indicate performance problems in my shaders? My custom graphics engine can handle features like multi-texturing, instancing, soft shadowing, animation, etc. However, I am pretty sure there are things I can optimize! I am using SharpDX to develop a game (engine) based on DirectX 11 with .NET Framework 4.5. My graphics card is from NVIDIA, and my processor is made by Intel.
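    For reference, a minimal sketch of this kind of per-task accumulation, using System.Diagnostics.Stopwatch; the TaskTimer class and its names are illustrative, not the code I actually use:

        using System;
        using System.Collections.Generic;
        using System.Diagnostics;

        public class TaskTimer
        {
            private readonly Stopwatch _stopwatch = new Stopwatch();
            private readonly Dictionary<string, double> _accumulatedMs = new Dictionary<string, double>();

            // Run a task and add its CPU time to the accumulator for 'name'.
            public void Measure(string name, Action task)
            {
                _stopwatch.Restart();
                task();
                _stopwatch.Stop();

                double ms;
                _accumulatedMs.TryGetValue(name, out ms);
                _accumulatedMs[name] = ms + _stopwatch.Elapsed.TotalMilliseconds;
            }

            // Called once per second: print the accumulated times and start over.
            public void DumpAndReset()
            {
                foreach (var pair in _accumulatedMs)
                    Console.WriteLine("{0}: {1:0.0} ms", pair.Key, pair.Value);
                _accumulatedMs.Clear();
            }
        }

    Each task in the render loop is then wrapped in a call like timer.Measure("Present", () => swapChain.Present(1, SharpDX.DXGI.PresentFlags.None)). Keep in mind that CPU time measured around Present also includes any time the driver blocks waiting for VSync or for the GPU to catch up, which is why so much time can show up there.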
  13. Hello, do you know this site: https://github.com/Dan6040/SharpDX-Rastertek-Tutorials? This guy converted all the tutorials from Rastertek to SharpDX. I found them very useful when learning DirectX. I use SharpDX with DirectX 11, but you might find something useful for you!