About Dr_Asik
  1. Microsoft is pouring a lot more money into .NET than into C++, and you will have a lot more trouble compiling a legacy Visual C++ 6.0 application in the latest Visual Studio than a C# 1.0 application, that's for sure. C# doesn't technically need bindings to call OS functions; you just import them with P/Invoke, and it's well documented how to do so for just about any function you might want (see e.g. http://www.pinvoke.net/). I have worked on various large-scale Windows software in .NET for many years and this has never been an issue. Speaking of bindings, C# has such good bindings for Direct3D that I'd much rather use the bindings than the raw API!
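A minimal P/Invoke sketch, assuming Windows and using the user32 MessageBox function purely as an illustrative import:

```csharp
using System;
using System.Runtime.InteropServices;

static class NativeMethods
{
    // Import MessageBox from user32.dll; the runtime marshals the
    // string arguments to native wide strings automatically.
    [DllImport("user32.dll", CharSet = CharSet.Unicode)]
    public static extern int MessageBox(IntPtr hWnd, string text, string caption, uint type);
}

class Program
{
    static void Main()
    {
        // No wrapper library needed; this calls the Win32 API directly.
        NativeMethods.MessageBox(IntPtr.Zero, "Hello from P/Invoke", "Demo", 0);
    }
}
```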
  2. All you need to do is ensure the user has the correct version of the .NET Framework installed. Your project properties show which version of .NET you are targeting; if your app targets .NET 4.5, for instance, you need to ensure .NET 4.5 is installed. There's no need to go tracking down every single DLL. The only DLLs you need to ship are your own and those of any libraries you use outside the .NET Framework (common examples are Newtonsoft.Json, SharpDX, etc.). The laziest approach is to assume the framework is there (lots of people already have it, either through Windows Update or because another program required it) and, if it fails, point the user to a download link where they can install it. That's not a great user experience, though. If you can include the redist and run it automatically on install, that's probably the best way; Steam games all do this AFAIK.
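The "check and point to a download link" approach can be sketched like this, assuming Windows and the documented registry value for .NET 4.5 detection (a "Release" DWORD of 378389 or higher under the NDP\v4\Full key):

```csharp
using Microsoft.Win32;

static class NetFxCheck
{
    // .NET 4.5 and later record a "Release" DWORD under this key;
    // 378389 is the documented minimum value for 4.5 itself.
    public static bool IsNet45OrLater()
    {
        using (var key = Registry.LocalMachine.OpenSubKey(
            @"SOFTWARE\Microsoft\NET Framework Setup\NDP\v4\Full"))
        {
            object release = key?.GetValue("Release");
            return release is int value && value >= 378389;
        }
    }
}
```

If the check fails, you would show the user the download page (or launch the bundled redist) instead of crashing with a missing-assembly error.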
  3. It's interesting to see how mentalities have evolved. 10 years ago when I started, people would generally agree C++ was the way to go as a first language for game development, and I was the weird guy recommending C# or Python instead. :P
  4. unique_ptr is one of many C++-specific concepts you will have to learn. It's definitely a difficult language to start with; you'd have a far easier time with Python for example. And it's not like Python's limitations would be an issue for you anytime soon; you can write great games and even emulators in it if you want. By the time you're advanced enough to run into some of its limitations, you'll be in a great position to decide where to go from there and whether that's C++ or something else.
  5. So instead of defining a destination point, you define a destination zone? That opens up more questions: determining the shape of the zone in advance, taking into account static obstacles within it and the number of units going there; deciding which unit goes where inside the zone, preferably as close as possible to the center... I'm not sure it's a good idea to try to compute these things in advance. I think a system where each unit tries to get to the destination, pushing idle units out of the way (SC2-style), will adapt to obstacles dynamically and occupy close-to-optimal space without having to compute it explicitly; however, the rules governing who pushes whom could be complex. Obviously, if every unit has to reach the point before stopping, large groups will be shuffling for a very long time. How does a unit decide it's made a good enough effort if it can't exactly reach the point? When does it stop trying?
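One possible stopping rule (my own sketch, not what SC2 actually does): let a unit give up once it is within a tolerance radius that grows with the number of units already settled near the goal, so large groups settle in rings instead of shuffling forever.

```csharp
using System;

static class Arrival
{
    // Tolerance grows with the square root of the settled count, because
    // n circles of radius r packed around a point cover an area
    // proportional to the square of the ring radius.
    public static float ToleranceRadius(int settledUnits, float unitRadius)
    {
        return unitRadius * (1f + (float)Math.Sqrt(settledUnits));
    }

    // A unit "arrives" when it gets inside the current tolerance ring.
    public static bool HasArrived(float distanceToGoal, int settledUnits, float unitRadius)
    {
        return distanceToGoal <= ToleranceRadius(settledUnits, unitRadius);
    }
}
```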
  6. In an RTS you can select a group of units and tell them to go to a point on the map. Obviously not every unit can actually reach that exact point, since units occupy space. One easy but not entirely general approach is to define a group formation, interpret the destination as the middle of that formation, and send each unit to a position within it. Some games work almost entirely like this, e.g. Age of Empires 2. Forcing units to always hold a formation is restrictive, though, and sometimes it's impossible to get into the right formation due to obstacles and other units, so we need a fallback strategy.

  In other words, a unit must have a definition of what it means to have "reached" the destination, even though it may not be possible to reach the exact destination point, and it may not be possible to occupy a pre-defined position within a formation. Age of Empires 2 doesn't handle this well; units go to the "nearest" legal position even though that may cause them to clip with other units.

  I've looked a lot at what Starcraft 2 does when you select a group that's too spread out to keep formation (or just click in the middle of the space it occupies). Units all seem to converge towards the destination point, but then some push the others to some extent, some let themselves be pushed, and the result is unpredictable, although generally acceptable. It is assumed players will just keep clicking the same point if they want units to group more closely. It's really hard to tell what the exact rules are, though. There seems to be a lot of literature about group movement, but not much about how the pathfinding actually terminates. Any good references or insight?
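The formation approach described above can be sketched as follows: interpret the click as the formation's center and lay units out on a grid around it. This is a simplified illustration (a real game would also filter out blocked tiles and assign units to slots by proximity):

```csharp
using System;
using System.Collections.Generic;

static class Formation
{
    // Returns one grid slot per unit, centered on the destination point.
    public static List<(float X, float Y)> GridSlots(
        float destX, float destY, int unitCount, float spacing)
    {
        // Pick a roughly square grid large enough to hold every unit.
        int columns = (int)Math.Ceiling(Math.Sqrt(unitCount));
        int rows = (int)Math.Ceiling(unitCount / (double)columns);
        var slots = new List<(float, float)>();
        for (int i = 0; i < unitCount; i++)
        {
            int col = i % columns;
            int row = i / columns;
            // Offset so the whole grid is centered on (destX, destY).
            float x = destX + (col - (columns - 1) / 2f) * spacing;
            float y = destY + (row - (rows - 1) / 2f) * spacing;
            slots.Add((x, y));
        }
        return slots;
    }
}
```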
  7. If your tickrate is 60fps then you don't even need interpolation on most monitors. Just present the latest state of your simulation on each vsync, bingo. You can use interpolation to support higher refresh rates, and then being one frame behind at 120hz and up really isn't a big deal.
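The interpolation fallback for higher refresh rates is just a lerp between the two most recent simulation states, weighted by how far into the current tick the frame falls (a generic sketch, not tied to any particular engine):

```csharp
static class Interp
{
    // alpha = timeSinceLastTick / tickDuration, in [0, 1].
    // With a 60 Hz simulation on a 60 Hz monitor, alpha is ~1 at each
    // vsync, so presenting the latest state directly is equivalent.
    public static float Lerp(float previous, float current, float alpha)
    {
        return previous + (current - previous) * alpha;
    }
}
```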
  8. SharpDX has been around for a long time and is very mature. It's also a bit nicer to use than the C++ API because you get code completion for enums, exceptions when things fail, stronger typing, etc. The catch is that all the documentation for D3D is written for the C++ API, so you have to translate back and forth (which is pretty trivial, but still). The bigger benefit is programming in C#, of course.
  9. I found the post in question and implemented it with frame statistics. It does work with just 1 frame of latency when the window covers the screen and there's nothing on top! This is promising. But having the framerate drop by half whenever these conditions aren't met is really nasty... Imagine someone using any kind of overlay while they are playing, how can we know? I guess I could keep track of the framerate and switch to a higher-latency method if it seems we're not rendering at a correct speed but this is getting complicated. I hope one of these two bugs (frame statistics in composed mode or extra latency in waitable swap chain) is fixed so it's possible to implement this relatively simply.
  10. So using a waitable swap chain with a maximum frame latency of 1, I'm seeing 2 frames of latency (~20ms with a 100hz refresh rate). This is an improvement over 3 as previously. When the window covers the screen, the present method changes from "Composed: Flip" to "Hardware: Independent Flip"; however, the latency remains ~20ms. Maybe I'm running into the aforementioned issue with frame latency? Is there a way for me to check?
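For reference, the waitable-swap-chain setup I'm describing looks roughly like this in SharpDX. This is a sketch: the member names (SwapChain2.MaximumFrameLatency, FrameLatencyWaitableObject, SwapChainFlags.FrameLatencyWaitableObject) are my reading of SharpDX's DXGI wrappers and may differ slightly between versions.

```csharp
using System;
using System.Runtime.InteropServices;
using SharpDX.DXGI;

static class WaitableLoop
{
    [DllImport("kernel32.dll")]
    static extern uint WaitForSingleObjectEx(IntPtr handle, uint milliseconds, bool alertable);

    // The swap chain must have been created with
    // SwapChainFlags.FrameLatencyWaitableObject for the handle to exist.
    // MaximumFrameLatency is typically set once, right after creation.
    public static void Setup(SwapChain2 swapChain)
    {
        swapChain.MaximumFrameLatency = 1;
    }

    public static void RunFrame(SwapChain2 swapChain, Action renderAndPresent)
    {
        // Block until DXGI is ready to accept a new frame, *then* sample
        // input and render, so the presented frame is as fresh as possible.
        WaitForSingleObjectEx(swapChain.FrameLatencyWaitableObject, 1000, true);
        renderAndPresent();
    }
}
```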
  11. Is that issue just for D3D12 though? I'm using D3D11.
  12. So I was experimenting with PresentMon and different modes in Direct3D 11, using a basic game loop (SharpDX.Desktop.Windows.RenderLoop) and just rendering a rectangle at the cursor's location so I can get a good feel for latency just by moving the mouse. This is what PresentMon says, and it definitely feels like it:

  Fullscreen | SyncInterval | PresentFlags | SwapEffect | Latency (frames) | Tearing
  true       | 0            | None         | Discard    | 0                | yes
  true       | 1            | None         | Discard    | 2                | no
  false      | 0            | None         | Discard    | 1                | no
  false      | 1            | None         | Discard    | 3                | no

  I have many questions, sorry:
  • The only way I can get less than 2 frames of latency is by uncapping the framerate. Is that something I could address with other settings, maybe a waitable swap chain? Ideally I'd want windowed + capped framerate + 1 frame of latency.
  • Why does going from 0 to 1 SyncInterval add 2 frames of latency in fullscreen? I can understand 1, but why 2?
  • Changing MaximumFrameLatency doesn't affect these numbers in any way. I suppose this is because my game loop does practically nothing, but I don't really understand how the mechanics work.
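For context, the MaximumFrameLatency setting I mention lives on the DXGI device rather than the D3D11 device. Setting it in SharpDX looks roughly like this (a sketch; the Device1 interface name is my assumption from SharpDX's DXGI wrapper):

```csharp
using SharpDX.Direct3D11;

static class LatencySetup
{
    public static void SetMaxFrameLatency(Device d3dDevice)
    {
        // Query the DXGI device interface from the D3D11 device; the
        // default queue depth is 3 frames, which is where extra latency
        // can come from when the CPU runs far ahead of the GPU.
        using (var dxgiDevice = d3dDevice.QueryInterface<SharpDX.DXGI.Device1>())
        {
            dxgiDevice.MaximumFrameLatency = 1;
        }
    }
}
```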
  13. I'm wondering if the cost of splitting the data into 3 textures won't cancel out the benefit of simplifying the pixel shader. I will also need shaders that convert packed YUV formats (i.e. Y0 U0 Y1 V0, etc.), and it'll be impossible to cost-effectively split the data before conversion; I might as well do the conversion in software in that case. As a more general solution, I'm thinking of rendering the texture into a render target of exactly the same size, so as to avoid any artifacts related to point filtering, and then drawing that render target onto the backbuffer, which will apply filtering to the final result.
  14. Someone on Stack Overflow said I should use Texture.Load instead, which doesn't perform any filtering. That indeed sounds more like what I really need for this kind of processing; what do you think?
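The Load call in question fetches a single texel by integer coordinates with no filtering and no sampler state, unlike Sample. A minimal HLSL fragment (register and variable names are illustrative):

```hlsl
Texture2D<float4> yuvTex : register(t0);

float4 main(float4 pos : SV_Position) : SV_Target
{
    // Load takes integer texel coordinates (x, y, mip) and performs no
    // filtering, so each output pixel reads exactly one source texel.
    return yuvTex.Load(int3(int2(pos.xy), 0));
}
```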
  15. You're my hero! That was it and it's a logical explanation of the phenomenon.