Hexaa

Member
  • Content count: 9
  • Joined
  • Last visited

Community Reputation: 0 Neutral

About Hexaa
  • Rank: Newbie

Personal Information
  • Interests: Programming
  1. I use a device per window and create my objects from it. When I close a window I dispose all of its objects (I hope) and then dispose the device. However, my memory usage keeps growing. I check it with SharpDX.Diagnostics.ObjectTracker.FindActiveObjects() and it reports Count = 0. Is there a method or something I can call to clean everything up and flush the pending commands for this device? I want a clean memory state back, or at least no leak that grows endlessly every time I reopen the window until the memory usage crashes the application. Do you have any hints for checking or fixing this? Of course it could still be that I am missing a Dispose, but I checked with [...]pDevice.QueryInterface<DeviceDebug>().ReportLiveDeviceObjects(ReportingLevel.Detail) and it shows almost no objects, or only objects that should be released right after I dispose the device. I have no clue why it grows so much; it looks like the managed memory is growing too. (A sketch of the teardown I mean is after this list.) Thanks in advance!
  2. I use Direct3D11 via SharpDX. My windows don't share any content, only logic. This way every window has its own context, so it doesn't get corrupted. I want to improve that structure though, because it feels a bit wrong.
  3. Hey, I have a rather general question. I have several controls that I need to draw scenes to. So far I have set up my object structure so that each control has its own renderer and objects: its own device, swap chain and therefore context, shaders, and so on. Now I wonder whether I could optimize this setup and reuse objects, for example compiling the shaders only once (which takes a lot of time), because all the window instances need the same shaders. The problem I ran into is that I cannot make the shader objects static and reuse them from another instance, since that instance has its own Device and the rule is that objects must be created from the same Device they are used with, which makes sense to me. So I am asking whether my approach is wrong in the first place and I should restructure things to use only one device, so that I can reuse objects, avoid doing work twice, and save memory. Or is the current approach valid? I am new to Windows and GPU programming, so I apologize. When I use a single static device I run into strange behavior, probably because the context state gets corrupted. I am not sure though: I thought only the GUI thread would touch it, but maybe I can't assume it always finishes one control before it handles the next, so the states would get mixed up, which would explain the issues I see now. Would the solution be some kind of deferred contexts, scheduling them to draw once their state is finished? (A sketch of what I mean with a shared device and deferred contexts is after this list.) I don't want to optimize runtime performance, that seems to be fine even with the multi-device approach, but maybe I can improve memory usage and setup time. I have a hard time finding any good advice online, so thanks for any suggestions or hints :-)
  4. I updated to the latest Nvidia driver and it's still the same. Resources? I don't know, maybe; I haven't figured out how to check that yet. I found another, more stable algorithm that is even simpler and produces variable line width without gaps on both devices, although it is less clever and uses more primitives overall. The curves actually look even better. It is also a geometry shader that I tweaked a bit. I suspect the original problem is some kind of imprecision, but I am not sure why or at what point exactly. If anyone else has a clue, I might still give it a try to compare its performance against the new one I use. Thank you though :-)
  5. I am trying to draw lines with different thicknesses using the geometry shader approach from here: https://forum.libcinder.org/topic/smooth-thick-lines-using-geometry-shader It works great on my development machine (some Intel HD). However, if I try it on my target (Nvidia NVS 300, yes it's old) I get different results. See the attached images: there are gaps in the sine signal that the NVS 300 produces, while the Intel does what I want and expect in the other picture. It's a shame, because I just can't figure out why; I expect it to be the same. I get no error in the debug output with native debugging enabled. I disabled culling with CullMode.None. Could it be some z-fighting? I have little clue about that, but I played around with the RasterizerStateDescription and DepthBias properties with no success, no change at all (the state I tried is sketched after this list). Maybe I am missing something there? I develop the application with SharpDX, btw. Any clues or help are very welcome.
  6. Solved by disabling the Aero theme. I noticed the performance suffers a lot once the window overlaps the start button. I switched to the Windows Classic theme as a test and it worked. I still learned something from the tool you showed me, so thanks, day saved.
  7. Okay, I'm trying to install it now. Hopefully it can help me, thanks. I will report back if I have trouble or find an issue with it. Thanks so far.
  8. Haha, yes, it's pretty old hardware now, but sadly supporting it is mandatory. Okay, I had the present sync interval set to 1, so it's more like a jump from ~16 ms to 40 ms, but you know. If I set the sync interval to 0 and create the graph window at something like 1600x900, I get my expected 2-3 ms or lower, even on that old hardware. I can even comment out my drawing completely and still, if I drag the window up to a certain resolution (from 1600x900 towards the maximum), the frame time jumps from 2-3 ms to ~40 ms. Staying in the lower resolution range isn't an option though. There is no gradual change either: at a certain resolution the increase is immediate. It is not fullscreen, and there is also a control panel on one side, so even maximized it won't reach 1920x1080. It's confusing me a lot. I read that Present() can block if frames are queued; I know too little about that to say whether it is the issue, but Present is called in an idle loop, so that might hint at it (the sync-interval and frame-latency settings I mean are sketched after this list). But again, can GPU performance really suffer this much from a tiny resolution increase when I drag the window larger? Thanks for the help.
  9. I am developing a test application with DirectX 11 and feature level 10.1. Everything works as expected, but when I maximize the window with my graphics in it, the time per frame increases drastically, like from 1 ms to 40 ms. If it stays in a lower resolution range it works perfectly, but I need to support it maximized as well. Target hardware and software: NVS 300 graphics card, Windows 7 32-bit, resolution 1920x1080. The application draws a few sine curves with Direct3D, in C# via SharpDX, using a Windows Forms control with a SharpDX-initialized swap chain that recreates the backbuffer on the resize event (the problem occurs without that too, though). I used a System.Diagnostics.Stopwatch to narrow the issue down to this line: mSwapChain.Present(1, PresentFlags.None); where the time it takes suddenly increases a lot when maximized (the way I time it is sketched after this list). If I drag and resize the window manually, the frame time jumps at some resolution, which seems weird. If I comment out the drawing code, I get the same behavior. On my local development machine with an HD 4400 I don't have this issue; there the frame time isn't affected by resizing at all. Any help is appreciated! I am fairly new to programming in C#, Windows, and DirectX, so please be kind.
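
Regarding the per-window cleanup in post 1, here is a minimal teardown sketch in C#/SharpDX, assuming a hypothetical PerWindowRenderer class; the fields and the order of calls are illustrative, not taken from my actual code.

    using SharpDX;
    using SharpDX.Direct3D11;
    using SharpDX.DXGI;

    // Hypothetical per-window renderer; the fields stand in for whatever the window owns.
    class PerWindowRenderer : System.IDisposable
    {
        public SharpDX.Direct3D11.Device Device;
        public SwapChain SwapChain;
        public RenderTargetView BackBufferView;
        public VertexShader VertexShader;
        public PixelShader PixelShader;
        public SharpDX.Direct3D11.Buffer VertexBuffer;

        public void Dispose()
        {
            // Unbind everything and flush pending commands first, otherwise the
            // immediate context can keep references alive after the Dispose calls.
            var context = Device.ImmediateContext;
            context.ClearState();
            context.Flush();

            // Utilities.Dispose releases the object and sets the field to null.
            Utilities.Dispose(ref VertexBuffer);
            Utilities.Dispose(ref PixelShader);
            Utilities.Dispose(ref VertexShader);
            Utilities.Dispose(ref BackBufferView);
            Utilities.Dispose(ref SwapChain);

            // Only works if the device was created with DeviceCreationFlags.Debug.
            using (var debug = Device.QueryInterface<DeviceDebug>())
                debug.ReportLiveDeviceObjects(ReportingLevel.Detail);

            Utilities.Dispose(ref Device);
        }
    }

If the ReportLiveDeviceObjects dump comes back empty but the process still grows, the growth is probably in managed allocations rather than in D3D objects.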
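On the single-device idea from post 3, a minimal sketch of what I mean by sharing one device (and one compiled shader set) across controls while giving each control its own deferred context; the shader file name, entry points and class names are made up for illustration.

    using SharpDX.D3DCompiler;
    using SharpDX.Direct3D;
    using SharpDX.Direct3D11;

    // One device and one compiled shader set, shared by every control.
    static class SharedGpu
    {
        public static readonly Device Device =
            new Device(DriverType.Hardware, DeviceCreationFlags.BgraSupport);

        // Compiled exactly once; "Line.hlsl" and the entry points are placeholders.
        public static readonly VertexShader LineVertexShader =
            new VertexShader(Device, ShaderBytecode.CompileFromFile("Line.hlsl", "VS", "vs_4_0"));
        public static readonly PixelShader LinePixelShader =
            new PixelShader(Device, ShaderBytecode.CompileFromFile("Line.hlsl", "PS", "ps_4_0"));
    }

    // Hypothetical per-control part: its own swap chain target and deferred context,
    // but no device and no shader compilation of its own.
    class ControlView
    {
        readonly DeviceContext deferred = new DeviceContext(SharedGpu.Device); // deferred context

        public void Render(RenderTargetView target, SharpDX.Color4 clearColor)
        {
            // Record commands with per-control state; nothing here touches the
            // immediate context, so other controls cannot corrupt its state.
            deferred.ClearRenderTargetView(target, clearColor);
            deferred.VertexShader.Set(SharedGpu.LineVertexShader);
            deferred.PixelShader.Set(SharedGpu.LinePixelShader);
            // ... bind buffers, draw ...

            using (var commandList = deferred.FinishCommandList(false))
            {
                // Only this call has to be serialized on the immediate context.
                SharedGpu.Device.ImmediateContext.ExecuteCommandList(commandList, false);
            }
        }
    }

Whether the deferred-context indirection is needed at all depends on whether more than one thread really touches the device; if everything stays on the GUI thread, sharing the immediate context with a well-defined draw order may already be enough.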
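For the rasterizer experiments in post 5, this is roughly the state I tried (culling off, depth bias exposed); a minimal sketch with illustrative values and a hypothetical helper name.

    using SharpDX.Direct3D11;

    static class LineRendering
    {
        // Roughly the rasterizer state described above: no culling, depth bias knobs exposed.
        public static RasterizerState CreateLineRasterizerState(Device device)
        {
            var description = new RasterizerStateDescription
            {
                FillMode = FillMode.Solid,
                CullMode = CullMode.None,        // expanded line quads must not be back-face culled
                IsDepthClipEnabled = true,
                DepthBias = 0,                   // values I experimented with; changing them had no effect
                DepthBiasClamp = 0.0f,
                SlopeScaledDepthBias = 0.0f,
                IsMultisampleEnabled = false,
                IsAntialiasedLineEnabled = false
            };
            return new RasterizerState(device, description);
        }
    }

    // Usage: device.ImmediateContext.Rasterizer.State = LineRendering.CreateLineRasterizerState(device);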
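On the Present blocking question in post 8, a minimal sketch of the two knobs involved: the sync interval passed to Present and the DXGI frame-latency limit. The Device1 QueryInterface is my assumption about how to set the latency from SharpDX, not something confirmed in the thread.

    using SharpDX.DXGI;
    using D3D11 = SharpDX.Direct3D11;

    static class PresentSettings
    {
        // Limit how many frames DXGI may queue ahead of the GPU. With a deep queue the
        // wait surfaces inside Present(), which shows up as a sudden jump in frame time.
        public static void LimitFrameLatency(D3D11.Device device, int maxFrames = 1)
        {
            using (var dxgiDevice = device.QueryInterface<Device1>())
                dxgiDevice.MaximumFrameLatency = maxFrames;
        }

        // Sync interval 1 waits for vblank (~16 ms at 60 Hz); 0 presents immediately.
        public static void PresentWithoutVsync(SwapChain swapChain)
        {
            swapChain.Present(0, PresentFlags.None);
        }
    }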
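And for post 9, the way I narrow the cost down to the Present call, timing just that line with a Stopwatch; the render-loop shape and the drawScene delegate are placeholders.

    using System;
    using System.Diagnostics;
    using SharpDX.DXGI;

    static class FrameTiming
    {
        // Hypothetical render-loop step: times only the Present call, which is where
        // the jump from ~1 ms to ~40 ms shows up when the window is maximized.
        public static void RenderFrame(SwapChain swapChain, Action drawScene)
        {
            drawScene();                              // issue the draw calls (can even be empty)

            var stopwatch = Stopwatch.StartNew();
            swapChain.Present(1, PresentFlags.None);  // sync interval 1, as in the original test
            stopwatch.Stop();

            Debug.WriteLine($"Present took {stopwatch.Elapsed.TotalMilliseconds:F2} ms");
        }
    }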