About GalacticCrew

  1. GalacticCrew

    Music for your projects

    Hello HORRIBLEPRODUCTION. I am Benjamin, developer of Galactic Crew (see here). I will finish this game this winter. In addition, I am working on a second game. I can't say too much here, because it has not been officially announced yet. However, I can say it will be a retro adventure with a style similar to the old Monkey Island games. There are currently six 2D artists working on assets like backgrounds, sprite animations, etc. However, I need someone for the music part. Are you interested? If so, please send me an e-mail (benjamin.rommel@masabera.com). 🙂
  2. GalacticCrew

    What technology for a game-server

    Hello Angelic Ice! Before I published Galactic Crew as an Early Access title on Steam last year, I ran a Closed Alpha with around a hundred players. In order to analyze player behaviour, I gathered metrics like play time, session length, computer hardware, etc. I needed a way to update all clients automatically and to collect the gathered data (all players agreed to that before downloading the Closed Alpha version!). For this purpose, I rented a vServer for around 9 € per month and wrote my own server in C# that did exactly what I needed. Then I built a custom TCP/IP layer into the Closed Alpha version and I was ready to go. At first I was thinking about SQL databases and other solutions, but I found it more suitable to create my own small console server.
  3. I just want to post a short update about my performance optimizations. As I wrote earlier, I can reduce my GPU load dramatically by using a smaller texture when creating the Depth Map. However, using a fixed light source that lights the entire scene results in very rough shadow shapes, because of the distance between the light source and some models and the small size of the Depth Map. This was the initial reason why I increased its size to 4096x4096 pixels. To improve the quality of my shadows, I had to either increase the size of the Depth Map, which would have decreased my performance, or find a better placement for the light source.

    After some tests, I decided to attach the light source at a fixed offset to my camera position. That means moving the camera also moves my light source. This has so many advantages! For example, when I zoom far out, I do not see any shadows, because the light source is too far away. This is great, because the shadows would be so small they could be mistaken for visual artifacts. The more I zoom in, the more shadows I see. This looks really great in-game! I can also keep my small Depth Map size, which is great for performance! I could also replace some calculations in the shaders with constants, because the camera and light source are always in the same relation to each other.

    Finally, I further improved my Scene Graph, which is based on a quad-tree space partitioning algorithm, so that only visible objects are considered for rendering. This improved my performance on planetary missions dramatically. So far, I render each frame in around 4 ms GPU time in the default view and 9 ms when fully zoomed in. I am happy with that, but I will try to make further improvements based on your great feedback!
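    The quad-tree culling idea can be sketched as follows. This is a minimal, self-contained C++ sketch with hypothetical names, not the engine's actual (C#) implementation: subtrees whose bounds do not intersect the camera's view rectangle are pruned, so only objects in visible regions are considered for rendering.

```cpp
#include <cstddef>
#include <memory>
#include <utility>
#include <vector>

// Axis-aligned rectangle on the ground plane.
struct Rect {
    float x, y, w, h;
    bool intersects(const Rect& o) const {
        return x < o.x + o.w && o.x < x + w && y < o.y + o.h && o.y < y + h;
    }
    bool contains(float px, float py) const {
        return px >= x && px < x + w && py >= y && py < y + h;
    }
};

// Minimal quad-tree: objects are stored as 2D points; a query with the
// camera's visible rectangle skips every non-intersecting subtree.
class QuadTree {
public:
    explicit QuadTree(Rect b) : bounds_(b) {}

    void insert(float px, float py) {
        if (!bounds_.contains(px, py)) return;
        if (children_.empty() && objects_.size() < kCapacity) {
            objects_.push_back({px, py});
            return;
        }
        if (children_.empty()) subdivide();
        for (auto& c : children_) c->insert(px, py);
    }

    // Collect objects inside 'view', pruning invisible subtrees.
    void query(const Rect& view, std::vector<std::pair<float, float>>& out) const {
        if (!bounds_.intersects(view)) return;  // prune this whole subtree
        for (const auto& o : objects_)
            if (view.contains(o.first, o.second)) out.push_back(o);
        for (const auto& c : children_) c->query(view, out);
    }

private:
    static const std::size_t kCapacity = 4;  // objects per leaf before splitting

    void subdivide() {
        float hw = bounds_.w / 2, hh = bounds_.h / 2;
        children_.push_back(std::make_unique<QuadTree>(Rect{bounds_.x,      bounds_.y,      hw, hh}));
        children_.push_back(std::make_unique<QuadTree>(Rect{bounds_.x + hw, bounds_.y,      hw, hh}));
        children_.push_back(std::make_unique<QuadTree>(Rect{bounds_.x,      bounds_.y + hh, hw, hh}));
        children_.push_back(std::make_unique<QuadTree>(Rect{bounds_.x + hw, bounds_.y + hh, hw, hh}));
    }

    Rect bounds_;
    std::vector<std::pair<float, float>> objects_;
    std::vector<std::unique_ptr<QuadTree>> children_;
};
```

    With a structure like this, zooming in shrinks the view rectangle, so fewer objects survive the query and fewer draw calls are issued.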
  4. I found it! First of all, I want to thank everyone who replied to my thread. I will do the optimizations suggested by Armagedon soon, I have some other ideas where I can save time, and I want to experiment with different ways to smooth my shadows.

    I reduced the Pixel Shader to a single line that returns a static color and I got similar performance (11.5 ms instead of 11.7 ms). So I simplified the Vertex Shaders and discarded everything I don't need (because my new test Pixel Shader does no computations at all). As a result, the GPU time was still around 11.5 ms. After that, I was sure the shader code can't be the problem. Besides, why would my PBR Pixel Shader with Shadow Maps take less than half the time of creating a simple Light Map?

    I dug deeper and looked at the other render passes. As it turned out, the render target for my depth shader was too large. I used a 4096x4096 px texture to get depth information. When I reduce the texture size to 1024x1024 px, the GPU time of my Light Map creation drops from 11.7 ms to 1.6 ms - without the optimizations suggested by Armagedon! Now I will spend the next days finding a proper balance between shadow quality, speed and smoothness. Thank you very much for your support!
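    As a sanity check on those numbers: shrinking the depth render target from 4096x4096 to 1024x1024 cuts the number of texels the depth pass has to fill by a factor of 16, which is consistent with the measured drop once some fixed per-pass overhead is assumed. A back-of-the-envelope sketch in C++ (hypothetical helper names; a 32-bit depth format is assumed):

```cpp
#include <cstdint>

// Texels the depth pass must fill for a square shadow map of the given edge length.
uint64_t texelCount(uint64_t edge) { return edge * edge; }

// Memory footprint in bytes, assuming a 32-bit depth format.
uint64_t depthBytes(uint64_t edge) { return texelCount(edge) * 4; }
```

    A 4096x4096 target means 16 times the texels of 1024x1024 and, under the 32-bit assumption, 64 MB of depth data per pass instead of 4 MB.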
  5. Sure! The code is pretty similar to examples you can find on sites like Rastertek. I moved the creation of the light map into its own render pass so that I can run operations like smoothing on the light map. However, I need to make it more efficient first.

    Vertex shader:

    cbuffer ConstantBuffer : register(b0)
    {
        matrix World;
        matrix View;
        matrix Projection;
        float4 Transparency;
        matrix ReflectionView;
        float4 LightPosition;
        matrix LightViewMatrix;
        matrix LightProjectionMatrix;
        float4 Instancing;
    }

    struct VS_IN
    {
        float4 pos : POSITION;
        float3 nor : NORMAL;
        float3 tan : TANGENT;
        float3 bin : BINORMAL;
        float4 col : COLOR0;
        float4 TextureIndices : COLOR1;
        float2 TextureUV : TEXCOORD0;
        matrix TestMatrix : POSITION1;
    };

    struct PS_IN
    {
        float4 pos : SV_POSITION;
        float3 nor : NORMAL;
        float4 lightViewPosition : TEXCOORD1;
    };

    PS_IN VS(VS_IN input)
    {
        PS_IN output = (PS_IN)0;

        matrix worldMatrix = World;
        if (Instancing.r == 1.0f)
            worldMatrix = input.TestMatrix;

        // Calculate the position.
        matrix worldViewProjection = mul(mul(Projection, View), worldMatrix);
        output.pos = mul(worldViewProjection, input.pos);

        // Calculate the position of the vertex as seen by the light source.
        matrix lightViewProjection = mul(mul(LightProjectionMatrix, LightViewMatrix), worldMatrix);
        output.lightViewPosition = mul(lightViewProjection, input.pos);

        // Transform the normal by the world matrix only, then normalize it.
        output.nor = mul((float3x3)worldMatrix, input.nor);
        output.nor = normalize(output.nor);

        return output;
    }

    Pixel shader:

    cbuffer ConstantBuffer : register(b0)
    {
        float4 Settings;
        float4 CameraDir;
        float4 ViewDir;
        float4 LightPos;
        float4 LightDir;
        float4 LightCol;
    }

    Texture2DArray ShadowTextures : register(t4);
    SamplerState SamplerWrap : register(s0);
    SamplerState SamplerClamp : register(s1);

    struct PS_IN
    {
        float4 pos : SV_POSITION;
        float3 nor : NORMAL;
        float4 lightViewPosition : TEXCOORD1;
    };

    float4 PS(PS_IN input) : SV_Target
    {
        // Set the default output color to the ambient light value for all pixels.
        float4 result = float4(0.2f, 0.2f, 0.2f, 1.0f);

        // Calculate the projected texture coordinates.
        float2 projectTexCoord;
        projectTexCoord.x = input.lightViewPosition.x / input.lightViewPosition.w / 2.0f + 0.5f;
        projectTexCoord.y = -input.lightViewPosition.y / input.lightViewPosition.w / 2.0f + 0.5f;

        // If the projected coordinates are in the [0, 1] range, this pixel is in the view of the light.
        if ((saturate(projectTexCoord.x) == projectTexCoord.x) && (saturate(projectTexCoord.y) == projectTexCoord.y))
        {
            // Sample the shadow-map depth value at the projected texture coordinates.
            float shadowValue = ShadowTextures.Sample(SamplerClamp, float3(projectTexCoord, 0)).r;

            // Calculate the depth of this pixel as seen from the light.
            float lightDepthValue = input.lightViewPosition.z / input.lightViewPosition.w;

            // Subtract a bias to fix floating-point precision issues.
            float bias = 0.001f;
            lightDepthValue = lightDepthValue - bias;

            // Compare the shadow-map depth with the light-space depth of this pixel.
            // If the pixel is in front of the occluder stored in the shadow map, light it;
            // otherwise an occluder is casting a shadow on it.
            if (lightDepthValue < shadowValue)
            {
                // Invert the light direction for the calculations.
                float3 lightDir = -LightDir;
                lightDir = normalize(lightDir);

                // Calculate the amount of light on this pixel.
                float lightIntensity = saturate(dot(input.nor, lightDir));

                // Determine the final color based on the light intensity.
                if (lightIntensity > 0.0f)
                    result = float4(1.0f, 1.0f, 1.0f, 1.0f);
            }
        }

        return result;
    }
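    For reference, the shadow test in that pixel shader can be reproduced on the CPU. This is a sketch of the same math in plain C++ (hypothetical names; the shadow-map sample is passed in directly instead of being fetched from a texture):

```cpp
struct Float4 { float x, y, z, w; };

// CPU reference for the shadow test in the pixel shader above:
// project the light-space position into [0,1] shadow-map coordinates,
// then compare the biased fragment depth with the sampled depth.
// 'shadowMapDepth' stands in for ShadowTextures.Sample(...).r.
bool isLit(Float4 lightViewPos, float shadowMapDepth, float bias = 0.001f) {
    float u =  lightViewPos.x / lightViewPos.w / 2.0f + 0.5f;
    float v = -lightViewPos.y / lightViewPos.w / 2.0f + 0.5f;
    if (u < 0.0f || u > 1.0f || v < 0.0f || v > 1.0f)
        return false;  // outside the light's view: ambient only
    float fragmentDepth = lightViewPos.z / lightViewPos.w - bias;
    return fragmentDepth < shadowMapDepth;  // in front of the occluder => lit
}
```

    The bias keeps a surface from shadowing itself when its own depth round-trips through the shadow map with limited precision.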
  6. The term "lag" is older than internet video games. It simply describes a situation in which there is no fluid movement. It can be caused by different things; in my case, rendering a frame simply takes too long.

    GPU Profiler

    I started my game using Microsoft Visual Studio's built-in GPU performance profiler. After the game loaded, I waited in the default view for several seconds before zooming into the scene. After that, I stopped the profiler and checked the numbers. First, I had a look at a 1-second interval while my game was rendered in the default view. I sorted all tasks by their GPU time. The most time-consuming tasks were "GPU Work": the first DrawIndexed and DrawIndexedInstanced calls (the ones that took the most time) used 745,467 ns of GPU time. Then I selected a 1-second interval from the time when I had zoomed in. The most time-consuming calls were DrawIndexed and DrawIndexedInstanced with 4,370,433 ns. So it is safe to say that drawing my models is what makes the game slow when I zoom in.

    Scissor Rectangle

    I must have missed this topic when I created the foundation of my game engine three years ago. I've read several tutorials and guides about it and added support for Scissor Tests to my game engine. As a first test, I used the same rectangle for scissoring as I use for my viewport. When I zoom in, I get basically the same numbers as before I used the Scissor Rectangle. However, when I zoom out into the default view, all numbers are the same except for the SwapChain.Present call: it takes around 11 ms instead of 0.5 ms as usual. I use the Present call like this:

    _swapChain.Present(1, SharpDX.DXGI.PresentFlags.None);

    As a test, I decreased the edge size of the scissor rectangle by 50% and placed it in the center of the screen, so only the central 25% of the screen is drawn. I started the game again and checked the numbers. I get the same result for the default view, which makes sense because almost the entire spaceship is in the central area of the screen. When I zoom in, my shadow map rendering stage takes 3.3 ms instead of 11 ms, which is around 28%. This means that rendering too many 3D pixels is what causes my increased GPU time.

    Conclusion

    After my tests, I know that drawing my (instanced) models uses most of the GPU time. I also know that using more screen space to render 3D models results in increased GPU time. Now I need to figure out how to reduce the GPU time for my draw calls.

    Render steps

    When rendering my scene, I do the following steps:

    Get depth info. I render my scene from the point of view of my primary light source into a depth buffer. I will use this buffer for Shadow Mapping. This step takes around 2.2 ms in both scenarios (zoomed in and zoomed out).

    Get lighting info. I render the scene from the point of view of the player, using the depth buffer texture I obtained in the previous step, to create a light map. This light map indicates which pixels are lit and which are not. This is the step that is causing my troubles: it takes 2.5 ms when zoomed out, but more than 11 ms when zoomed in.

    Render scene. I render the scene again and use the light map to create shadows (Shadow Mapping). It takes 1.9 ms when zoomed out and 4 ms when zoomed in.

    SwapChain.Present. This became more time-consuming since I use Scissor Rectangles, as described before.

    Open question

    When zoomed in, rendering my scene with texturing, lighting, Shadow Mapping, etc. takes twice as much time. However, building the light map (which does not use multi-texturing, lighting, etc.) takes almost five times more time. I need to figure out why! The vertex and pixel shaders are not that complicated.
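    The pass timings above can be put against the 60 FPS budget with a tiny helper (C++ sketch; the numbers are the ones from the post, before Present):

```cpp
#include <numeric>
#include <vector>

// Sum of per-pass GPU timings for one frame, in milliseconds.
double totalGpuMs(const std::vector<double>& passMs) {
    return std::accumulate(passMs.begin(), passMs.end(), 0.0);
}

// 60 FPS leaves roughly 1000 / 60 = 16.7 ms of GPU time per frame.
const double kFrameBudgetMs = 1000.0 / 60.0;
```

    With the zoomed-in timings (2.2 + 11 + 4 ms) the frame already exceeds the budget before Present, while the zoomed-out timings (2.2 + 2.5 + 1.9 ms) leave plenty of headroom.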
  7. I haven't tried the graphics debugger of Visual Studio. I will try it on Thursday (I'll take a day off tomorrow). As it turned out, I had not used Scissor Rectangles so far. I tested them today by setting the scissor rectangle to the same size as my viewport; I did not see any change in the GPU times for the different scenarios. The background is a set of images drawn with Direct2D; everything except the background is in 3D. When I zoom in, I draw more pixels in 3D than when I zoom out. Is this a problem? I thought the calls to the vertex shader are the same, because all models are drawn in both cases (zoomed in and zoomed out). The pixel shader for my tests was set to simply return a static color.
  8. Hi folks, I have a problem and I could really use some ideas from other professionals! I am developing my video game Galactic Crew, including its own game engine. I am currently working on improved graphics, which includes shadows (I use Shadow Mapping for that). I observed that the game lags when I use shadows, so I started profiling my source code. I used DirectX 11 queries to measure the time my GPU spends on different tasks to search for bottlenecks. I found several small issues and solved them. As a result, the GPU needs around 10 ms per frame, which is good enough for 60 FPS (1 s / 60 frames ~ 16 ms/frame). See attachment Scene1 for the default view.

    However, when I zoom into my scene, it starts to lag. See attachment Scene2 for the zoomed view. I compared the times spent on the GPU for both cases: default view and zoomed view. I found out that the render passes in which I render the full scene take much longer (~11 ms instead of ~2 ms). One of these render stages is the conversion of the depth information into the Shadow Map, and the second one is the final draw of the scene. So I added even more GPU profiling to find the exact problem. After several iterations, I found this call to be the bottleneck:

    if (model.UseInstancing)
        _deviceContext.DrawIndexedInstanced(modelPart.NumberOfIndices, model.NumberOfInstances, 0, 0, 0);
    else
        _deviceContext.DrawIndexed(modelPart.NumberOfIndices, 0, 0);

    Whenever I render a scene, I iterate through all visible models in the scene, set the proper vertex and pixel shaders for each model and update the constant buffer of the vertex shader (if required). After that, I iterate through all positions of the model (if it does not use instancing) and through all parts of the model. For each model part, I set the used texture maps (diffuse, normal, ...), set the vertex and index buffers and finally draw the model part by calling the code above.

    In one frame, for example, 11.37 ms were spent drawing all models and their parts when I zoomed in. Of these 11.37 ms, 11.35 ms were spent in the draw calls I posted above. As a test, I simplified my rather complex pixel shader to a simple function that returns a fixed color, to make sure the pixel shader is not responsible for my performance problem. As it turned out, the GPU time wasn't reduced. Does any of you have an idea what causes my lag, i.e. my long GPU time in the draw calls? I don't use LOD or anything comparable, and I also don't use my BSP scene graph in this scene. It is exactly the same content, just with a different zoom. Maybe I missed something very basic. I am grateful for any help!
  9. GalacticCrew

    Exception when creating a 2D render target

    Dear SoldierOfLight, thank you very much! My player installed this update manually and he could play the game afterwards! Thank you!!
  10. GalacticCrew

    Exception when creating a 2D render target

    No, I am not. He told me he has all Windows updates. I can't spy on my players' computers and I don't want to :-) I will ask him.
  11. A new player of my game reported an issue: when he starts the game, it crashes immediately, before he can even see the main menu. He sent me a log file of my game, and it turns out that the game crashes when it creates a 2D render target. Here is the full "Interface not supported" error message:

    HRESULT: [0x80004002], Module: [General], ApiCode: [E_NOINTERFACE/No such interface supported], Message: Schnittstelle nicht unterstützt
       bei SharpDX.Result.CheckError()
       bei SharpDX.Direct2D1.Factory.CreateDxgiSurfaceRenderTarget(Surface dxgiSurface, RenderTargetProperties& renderTargetProperties, RenderTarget renderTarget)
       bei SharpDX.Direct2D1.RenderTarget..ctor(Factory factory, Surface dxgiSurface, RenderTargetProperties properties)
       bei Game.AGame.Initialize()

    Because of the log file's content, I know exactly where the game crashes:

    Factory2D = new SharpDX.Direct2D1.Factory();
    _surface = backBuffer.QueryInterface<SharpDX.DXGI.Surface>();

    // It crashes when calling the following line!
    RenderTarget2D = new SharpDX.Direct2D1.RenderTarget(Factory2D, _surface,
        new SharpDX.Direct2D1.RenderTargetProperties(
            new SharpDX.Direct2D1.PixelFormat(_dxgiFormat, SharpDX.Direct2D1.AlphaMode.Premultiplied)));
    RenderTarget2D.AntialiasMode = SharpDX.Direct2D1.AntialiasMode.Aliased;

    I did some research on this error message, and all similar problems I found were around six to seven years old, when people tried to combine DirectX 11 3D graphics with DirectX 10.1 2D graphics. However, I am using DirectX 11 for all visuals. The game runs very well on the computers of all other 2,500 players, so I am trying to figure out why the code crashes on this player's computer. He uses Windows 7 with all Windows updates, 17179 MB of memory and an NVIDIA GeForce GTX 870M graphics card. This is more than enough to run my game.

    Below, you can see the code I use for creating the 3D device and the swap chain. I made sure to request BGRA support when creating the device, because it is required when using Direct2D in a DirectX 11 game. The same DXGI format is used for creating 2D and 3D content. The refresh rate is read from the used adapter.

    // Set swap chain flags, DXGI format and default refresh rate.
    _swapChainFlags = SharpDX.DXGI.SwapChainFlags.None;
    _dxgiFormat = SharpDX.DXGI.Format.B8G8R8A8_UNorm;
    SharpDX.DXGI.Rational refreshRate = new SharpDX.DXGI.Rational(60, 1);

    // Get the proper video adapter and create the device and swap chain.
    using (var factory = new SharpDX.DXGI.Factory1())
    {
        SharpDX.DXGI.Adapter adapter = GetAdapter(factory);
        if (adapter != null)
        {
            // Get refresh rate.
            refreshRate = GetRefreshRate(adapter, _dxgiFormat, refreshRate);

            // Create device and swap chain.
            _device = new SharpDX.Direct3D11.Device(adapter, SharpDX.Direct3D11.DeviceCreationFlags.BgraSupport, new SharpDX.Direct3D.FeatureLevel[] { SharpDX.Direct3D.FeatureLevel.Level_10_1 });
            _swapChain = new SharpDX.DXGI.SwapChain(factory, _device, GetSwapChainDescription(clientSize, outputHandle, refreshRate));
            _deviceContext = _device.ImmediateContext;
        }
    }
  12. GalacticCrew

    CreateSwapChain throws an exception

    Unfortunately, the changes had no effect. I use a default value of 60 Hz, then I search the supported refresh rates for the current resolution. His console output says that 60 Hz is supported by his screen. I have absolutely no idea what's going wrong, and the player is getting impatient. :-/

    Console.WriteLine("\t\tCreate refresh rate (60).");
    SharpDX.DXGI.Rational refreshRate = new SharpDX.DXGI.Rational(60, 1);

    Console.WriteLine("\t\tSelect supported refresh rate.");
    for (int j = 0; j < adapter.Outputs.Length; j++)
    {
        SharpDX.DXGI.ModeDescription[] modeDescriptions = adapter.Outputs[j].GetDisplayModeList(dxgiFormat, SharpDX.DXGI.DisplayModeEnumerationFlags.Interlaced | SharpDX.DXGI.DisplayModeEnumerationFlags.Scaling);
        bool isDone = false;
        foreach (var description in modeDescriptions)
        {
            if (description.Width == userInfo.Resolution.Width && description.Height == userInfo.Resolution.Height)
            {
                refreshRate = description.RefreshRate;
                Console.WriteLine("\t\t\tSelected refresh rate = {0}/{1} ({2})", description.RefreshRate.Numerator, description.RefreshRate.Denominator, description.RefreshRate.Numerator / description.RefreshRate.Denominator);
                isDone = true;
                break;
            }
        }
        if (isDone)
            break;
    }
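    Stripped of the DXGI types, the selection logic above boils down to this (C++ sketch with hypothetical stand-in structs): walk the supported display modes, take the refresh rate of the first mode matching the player's resolution, and otherwise keep the 60 Hz default.

```cpp
#include <vector>

// Stand-ins for SharpDX.DXGI.Rational and SharpDX.DXGI.ModeDescription.
struct Rational { int numerator, denominator; };
struct ModeDescription { int width, height; Rational refreshRate; };

// Pick the refresh rate of the first supported mode that matches the
// requested resolution; fall back to 60 Hz if none matches.
Rational pickRefreshRate(const std::vector<ModeDescription>& modes,
                         int width, int height) {
    Rational fallback{60, 1};
    for (const auto& m : modes)
        if (m.width == width && m.height == height)
            return m.refreshRate;
    return fallback;
}
```

    Note that this takes the first match; if an output reports several refresh rates for the same resolution, one could also prefer the highest.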
  13. GalacticCrew

    CreateSwapChain throws an exception

    I updated the test tool and sent it to the player. Hopefully, he can now create a swap chain. I'll let you know when I get feedback!
  14. GalacticCrew

    CreateSwapChain throws an exception

    Do you mean my SharpDX version? I use SharpDX 4.0.1. I don't know which GPU drivers the player has. Is that relevant?
  15. A player of my game contacted me because the game crashes during start-up. According to the log file he sent me, calling CreateSwapChain results in the exception shown below.

    HRESULT: [0x887A0001], Module: [SharpDX.DXGI], ApiCode: [DXGI_ERROR_INVALID_CALL/InvalidCall], Message: Unknown
       at SharpDX.Result.CheckError()
       at SharpDX.DXGI.Factory.CreateSwapChain(ComObject deviceRef, SwapChainDescription& descRef, SwapChain swapChainOut)
       at SharpDX.Direct3D11.Device.CreateWithSwapChain(Adapter adapter, DriverType driverType, DeviceCreationFlags flags, FeatureLevel[] featureLevels, SwapChainDescription swapChainDescription, Device& device, SwapChain& swapChain)
       at SharpDX.Direct3D11.Device.CreateWithSwapChain(DriverType driverType, DeviceCreationFlags flags, SwapChainDescription swapChainDescription, Device& device, SwapChain& swapChain)

    In order to investigate this player's problem, I created a test application that looks like this:

    class Program
    {
        static void Main(string[] args)
        {
            Helper.UserInfo userInfo = new Helper.UserInfo(true);
            Console.WriteLine("Checking adapters.");
            using (var factory = new SharpDX.DXGI.Factory1())
            {
                for (int i = 0; i < factory.GetAdapterCount(); i++)
                {
                    SharpDX.DXGI.Adapter adapter = factory.GetAdapter(i);
                    Console.WriteLine("\tAdapter {0}: {1}", i, adapter.Description.Description);
                    bool supportsLevel10_1 = SharpDX.Direct3D11.Device.IsSupportedFeatureLevel(adapter, SharpDX.Direct3D.FeatureLevel.Level_10_1);
                    Console.WriteLine("\t\tSupport for Level_10_1? {0}!", supportsLevel10_1);

                    Console.WriteLine("\t\tCreate refresh rate (60).");
                    var refreshRate = new SharpDX.DXGI.Rational(60, 1);

                    Console.WriteLine("\t\tCreate mode description.");
                    var modeDescription = new SharpDX.DXGI.ModeDescription(0, 0, refreshRate, SharpDX.DXGI.Format.R8G8B8A8_UNorm);

                    Console.WriteLine("\t\tCreate sample description.");
                    var sampleDescription = new SharpDX.DXGI.SampleDescription(1, 0);

                    Console.WriteLine("\t\tCreate swap chain description.");
                    var desc = new SharpDX.DXGI.SwapChainDescription()
                    {
                        // Number of back buffers to use in the SwapChain.
                        BufferCount = 1,
                        ModeDescription = modeDescription,
                        // Do we want to use a windowed mode?
                        IsWindowed = true,
                        Flags = SharpDX.DXGI.SwapChainFlags.None,
                        OutputHandle = Process.GetCurrentProcess().MainWindowHandle,
                        // Count in 'SampleDescription' is the level of anti-aliasing (from 1 to usually 4).
                        SampleDescription = sampleDescription,
                        SwapEffect = SharpDX.DXGI.SwapEffect.Discard,
                        // DXGI_USAGE_RENDER_TARGET_OUTPUT: used when you wish to draw graphics into the back buffer.
                        Usage = SharpDX.DXGI.Usage.RenderTargetOutput
                    };

                    try
                    {
                        Console.WriteLine("\t\tCreate device (Run 1).");
                        SharpDX.Direct3D11.Device device = new SharpDX.Direct3D11.Device(adapter, SharpDX.Direct3D11.DeviceCreationFlags.None, new SharpDX.Direct3D.FeatureLevel[] { SharpDX.Direct3D.FeatureLevel.Level_10_1 });
                        Console.WriteLine("\t\tCreate swap chain (Run 1).");
                        SharpDX.DXGI.SwapChain swapChain = new SharpDX.DXGI.SwapChain(factory, device, desc);
                    }
                    catch (Exception e)
                    {
                        Console.WriteLine("EXCEPTION: {0}", e.Message);
                    }

                    try
                    {
                        Console.WriteLine("\t\tCreate device (Run 2).");
                        SharpDX.Direct3D11.Device device = new SharpDX.Direct3D11.Device(adapter, SharpDX.Direct3D11.DeviceCreationFlags.BgraSupport, new SharpDX.Direct3D.FeatureLevel[] { SharpDX.Direct3D.FeatureLevel.Level_10_1 });
                        Console.WriteLine("\t\tCreate swap chain (Run 2).");
                        SharpDX.DXGI.SwapChain swapChain = new SharpDX.DXGI.SwapChain(factory, device, desc);
                    }
                    catch (Exception e)
                    {
                        Console.WriteLine("EXCEPTION: {0}", e.Message);
                    }

                    try
                    {
                        Console.WriteLine("\t\tCreate device (Run 3).");
                        SharpDX.Direct3D11.Device device = new SharpDX.Direct3D11.Device(adapter);
                        Console.WriteLine("\t\tCreate swap chain (Run 3).");
                        SharpDX.DXGI.SwapChain swapChain = new SharpDX.DXGI.SwapChain(factory, device, desc);
                    }
                    catch (Exception e)
                    {
                        Console.WriteLine("EXCEPTION: {0}", e.Message);
                    }
                }
            }
            Console.WriteLine("FIN.");
            Console.ReadLine();
        }
    }

    In the beginning, I collect information about the computer (processor, GPU, .NET Framework version, etc.). The rest should explain itself. I sent him the application, and in all three cases creating the swap chain fails with the same exception.

    In this test program, I included all solutions that worked for other users. For example, AlexandreMutel said in this forum thread that the device and the swap chain need to share the same factory. I did that in my program, so using different factories is not the problem in my case. Laurent Couvidou said here:

    The player has Windows 7 with .NET Framework 4.6.1, which is good enough to run my test application and my game, which use .NET Framework 4.5.2. The graphics card (Radeon HD 6700 Series) is also good enough to run the application. In my test application, I also checked whether Feature Level 10_1 is supported, which is the minimum requirement for my game. A refresh rate of 60 Hz should also be no problem. Therefore, I think the parameters are fine. The remaining calls are just three different ways to create a device and a swap chain for the adapter; all of them throw an exception when creating the swap chain.

    There are also Battlefield 3 players who had problems running BF3. As it turned out, BF3 had an issue with Windows region settings, but I believe that's not the problem in my case. There were also compatibility issues in Fallout 4, but my game runs on many other Windows 7 PCs without any problems. Do you have any idea what's wrong? I already have a lot of players who play my game without any issues, so it's not a general problem. I really want to help this player, but I currently can't find a solution.