
kuroioranda

Member Since 16 Apr 2007
Offline Last Active Nov 06 2013 07:59 PM

Posts I've Made

In Topic: SlimDX Matrix class broken in Windows 8.1?

24 October 2013 - 02:03 PM

This is just a side effect of how Microsoft chose to package the D3DX libraries. Each release of the SDK ships its own revision, and the revisions aren't interchangeable. This is why Steam always reinstalls DirectX even if you have the latest version: the version-specific libraries might not have been installed previously if no other program on your machine was linked against that exact revision. You pretty much have to either have the user install the version of D3DX that SlimDX was linked against, or else remove the D3DX dependency.

More info:
http://forums.steampowered.com/forums/showpost.php?p=23759166&postcount=47
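
If you keep the dependency, one option is to check for the exact D3DX DLL at startup and point the user at the DirectX end-user runtime installer if it's missing. A minimal sketch; the DLL name d3dx9_43.dll is an assumption for illustration, so use whichever revision your SlimDX build actually links against:

```cpp
#include <windows.h>
#include <iostream>

int main()
{
    // The DLL name is an assumption for illustration; check whichever
    // D3DX revision your SlimDX build was actually linked against.
    HMODULE d3dx = LoadLibraryW(L"d3dx9_43.dll");
    if (!d3dx)
    {
        std::cerr << "Required D3DX runtime not found. Please run the "
                     "DirectX end-user runtime installer.\n";
        return 1;
    }
    FreeLibrary(d3dx);
    // ... continue normal startup ...
    return 0;
}
```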


In Topic: Direct3D 11 Present makes up 98% of FPS ?

24 October 2013 - 01:46 PM

Okay, but this is very slow, isn't it?? I have the cascaded shadow demo from the sample browser and get about 400 fps there with shadows and lighting, a very big scene, and textures. And here there is only a little mesh with one texture which takes so long ....

 

No, because it's proportional. If you are just issuing the calls to render a single mesh, that work is very, very fast. Present may take longer, but it only seems expensive because it's slower than doing a very fast thing.

For a real-world comparison, this is like saying that a 50-cent donut is expensive because a stick of gum only costs 5 cents. The donut is not expensive; the gum is just insanely cheap.
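
You can see this for yourself by timing the two pieces separately. A minimal sketch, assuming an already-initialized IDXGISwapChain (the actual draw calls are elided):

```cpp
#include <windows.h>
#include <d3d11.h>
#include <cstdio>

// Time the draw calls and Present separately. With one small mesh the
// draw portion is microseconds, so Present dominates the percentage
// even though the frame as a whole is extremely fast.
void RenderFrame(IDXGISwapChain* swapChain)
{
    LARGE_INTEGER freq, t0, t1, t2;
    QueryPerformanceFrequency(&freq);

    QueryPerformanceCounter(&t0);
    // ... issue the draw calls for the mesh here ...
    QueryPerformanceCounter(&t1);

    swapChain->Present(1, 0); // with interval 1 this blocks until vsync
    QueryPerformanceCounter(&t2);

    double drawMs    = 1000.0 * (t1.QuadPart - t0.QuadPart) / freq.QuadPart;
    double presentMs = 1000.0 * (t2.QuadPart - t1.QuadPart) / freq.QuadPart;
    printf("draw: %.3f ms, present: %.3f ms\n", drawMs, presentMs);
    // draw may be ~0.05 ms while present is ~16 ms: "98% in Present",
    // yet the frame as a whole is nowhere near slow.
}
```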


In Topic: Main loop timing

24 October 2013 - 01:36 PM


I used sleep() and on a high-end CPU it worked fine but on an other machine it produced weird effects. It was because it always woke up much later as it was expected. I also heard others say that sleep is unreliable. ....so I dont use it anymore, except when the game is paused.


Was the high-end machine running Windows 8? Windows 8 waits (within certain limits) exactly the amount of time the next sleeping thread needs before waking it, so Sleep isn't dependent on the system timer resolution like it is on previous versions.
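
You can measure this yourself. A minimal sketch that times how long Sleep(1) actually takes; on Windows 8+ it should wake near 1 ms, while earlier versions typically wake at the system timer resolution (~15.6 ms by default, unless raised with timeBeginPeriod):

```cpp
#include <windows.h>
#include <cstdio>

int main()
{
    LARGE_INTEGER freq, t0, t1;
    QueryPerformanceFrequency(&freq);
    for (int i = 0; i < 5; ++i)
    {
        QueryPerformanceCounter(&t0);
        Sleep(1);
        QueryPerformanceCounter(&t1);
        printf("Sleep(1) took %.3f ms\n",
               1000.0 * (t1.QuadPart - t0.QuadPart) / freq.QuadPart);
    }
    return 0;
}
```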


("PerformanceCounter". I tried this: on one machine it returned negative values, on an other one it was going back and forth between two values. So I use it only for profiling and I use timeGetTime for the game time.)


This is pretty normal. QueryPerformanceCounter isn't reliable on all CPUs, so even if you do use it for loop timing you need to back it up with some other, less precise timer (like timeGetTime()) that can let you know when QueryPerformanceCounter is grossly incorrect.
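
A minimal sketch of that cross-check (the 100 ms tolerance is an assumption; tune it to taste):

```cpp
#include <windows.h>
#include <cmath>
#pragma comment(lib, "winmm.lib") // timeGetTime

// Returns the frame delta in milliseconds. Trust QPC unless it disagrees
// badly with the coarse timer (negative delta or a wild jump), in which
// case fall back to timeGetTime for this frame.
double FrameDeltaMs()
{
    static LARGE_INTEGER freq = {};
    static LARGE_INTEGER lastQpc = {};
    static DWORD lastTgt = 0;
    if (freq.QuadPart == 0) // first call: initialize and report no delta
    {
        QueryPerformanceFrequency(&freq);
        QueryPerformanceCounter(&lastQpc);
        lastTgt = timeGetTime();
        return 0.0;
    }

    LARGE_INTEGER qpc;
    QueryPerformanceCounter(&qpc);
    DWORD tgt = timeGetTime();

    double qpcMs = 1000.0 * (qpc.QuadPart - lastQpc.QuadPart) / freq.QuadPart;
    double tgtMs = (double)(tgt - lastTgt); // coarse but sane

    lastQpc = qpc;
    lastTgt = tgt;

    if (qpcMs < 0.0 || std::fabs(qpcMs - tgtMs) > 100.0)
        return tgtMs; // QPC glitched this frame: use the coarse value
    return qpcMs;
}
```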



Back to L. Spiro.


I am an R&D programmer at tri-Ace, where I work on this engine:


I know, I've seen you around. I wasn't trying to make an argument from authority; I was just starting to feel that maybe you thought I was a hobbyist spouting off. Which probably backfired, because then you probably thought that that was what I thought. I also do heavy-lifting engine work, btw, although not on a dedicated R&D team.


Keeping this in mind, I can hardly believe you even said the following:

After all my efforts, if I were to then discover that some idiot wasted a whole millisecond of a frame sleeping for no reason, I kid you not I would punch him or her in the face. Twice.


I didn't say that. I said it in the case where you already have oodles of spare CPU time. That's not "no reason", and it doesn't happen when that spare millisecond would hurt your mesh skinning, because in that case there are no oodles of spare CPU cycles.

Your linked mesh skinning example, btw, is one where I absolutely would never sleep: you're on a console and know you're already using every available cycle, and always will be. But this is apples and oranges; you're countering an argument I made for games on Windows or phones with a console game. They are different environments, and they have different engineering considerations.

I'm not sure if I'm not explaining things very well or what, but please try to understand what I'm saying. I am not talking about consoles. I am not talking about systems that are always just squeaking by. I am not talking about sleeping when you have no time to spare. I'm saying that battery life and heat are really, really important for usability on platforms that are inherently backwards compatible and likely to be portable. As I have said before, many times, everything you have said is absolutely, one hundred percent true, *under certain circumstances*. Console development is one of them. But I think you do hobbyists and students a great disservice by dismissing concerns that actually do apply to the platforms they are most likely to be developing on.

If FTL or the Duck Tales remake were to peg my CPU at 100% while running in a window (no vsync), even if they only needed 25-30% of a core to be smooth, I would be very, very sad, and I would not consider that a good engineering decision. Likewise, as cutting-edge titles age, they take a smaller and smaller amount of CPU time. I don't want Quake 2 pegging my modern laptop's CPU at 100% when it now only needs 10% to be smooth. It's a no-brainer to get the best of both worlds by simply not calling sleep when you don't have cycles to spare. If you want to add a checkbox to toggle this, go for it, but as a usability thing I'd leave it limited by default.

Anyway, I think I've explained my reasons about as well as I can at this point.

In Topic: Main loop timing

23 October 2013 - 04:54 PM


I don’t know what you mean by the limit of your updates. It may help if we all use a consistent set of terms: Render Update, Logical Update, and Game Loop. I assume you mean a logical update.

 

Yes, logical update. Rendering more frames than your logical update rate gives the user no additional information. Sure, you can make it smoother (up to the limits of the refresh rate and human perception, unless vsync is off and then you are racing tears), but if your logical update rate is 30 Hz and you are rendering at 60, the user can't actually influence the world, and the actors won't update themselves, any faster than 30.
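
For concreteness, a minimal sketch of that decoupling; Update(), Render(), GetTimeSeconds(), and running are hypothetical placeholders, not anyone's actual engine code:

```cpp
// Fixed 30 Hz logical update decoupled from rendering: logic ticks at a
// fixed rate, rendering runs as often as it likes, but extra rendered
// frames carry no new simulation state.
const double LOGIC_DT = 1.0 / 30.0;   // fixed logical timestep

double previous = GetTimeSeconds();   // hypothetical monotonic clock
double accumulator = 0.0;

while (running)
{
    double now = GetTimeSeconds();
    accumulator += now - previous;
    previous = now;

    while (accumulator >= LOGIC_DT)   // zero or more logic ticks per frame
    {
        Update(LOGIC_DT);             // the only place game state changes
        accumulator -= LOGIC_DT;
    }

    Render(); // may run at 60+ fps, but never shows state newer than
              // the last Update(), which is the point made above
}
```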

 


#1: I disagree with “strongly disagreeing to it”. I agree with “giving the player options”, and what you suggest is “taking away options”. In fact, game engineers typically try to max out the CPU’s performance, using all available resources for whatever they can. And if you are playing a game, the performance of the rest of the applications on your machine doesn’t really matter unless the game is minimized, in which case yes, I do wait 10 milliseconds between game loops to give back the CPU power. As far as real-time caps go there are 2 things to consider:
-> #a: Too much power can overheat the system and fry parts. So the motivation for a cap is not related to refresh rates or starving other applications etc.; it is about not frying the system.
-> #b: Therefore any cap at all should be based on getting the maximum performance out of the CPU without physically killing it. Which is extremely rare these days, and there are often system-wide settings the user can enable to prevent this. Do not force a cap on the user unless it is in the multiple hundreds of FPS such that no human eye can detect the difference. There are plenty of things people can do themselves, if and only if necessary, without you forcing it on them.

 

All valid points; the problem is that I think you are forgetting that the OP's intent is to lower his CPU usage. He's already told us he has plenty of CPU time left over between updates, and he doesn't want to peg the CPU. For bleeding-edge, AAA, barely-running-on-the-platform games, everything you suggested applies in force. I don't think that's what we're dealing with here, though. For smaller, less CPU-intensive games of the usual hobbyist or indie flavor (which I assume is what we are looking at here), getting things looking good while not destroying the laptop batteries of your casual audience is very important to the user experience.

I also love choices, but a lot of users don't understand the implications of those choices. In this case, I see very little benefit to allowing hundreds of fps if the updates are fixed and the display caps what it can show the user anyway. It doesn't matter how fast your eye is if the transmitting medium only feeds it so fast. Tweaking your settings to get the best possible performance out of your latest big game is a great and important tool, but when your game already fits nicely on modern machines with cycles to spare, those tweaks matter far less.

(I am myself a gameplay and systems engineer professionally, btw, not a hobbyist).


Your question is deceptively broad, so there are many things to say in reply.


My apologies, I asked it poorly, then. I did intend to ask "Is there a better way to sleep for a more accurate time?", as you hit on.


With that made clear, and then to restate your question as, “Is there a better way to sleep for a more accurate time?”, the answer is No. Which makes it easy to draw the wrong conclusion—it would be easy to misunderstand and decide, “Then I guess that’s that—increase the timer resolution and sleep.” To draw the correct conclusion, we need to keep deducing.


And here I must disagree again, because your conclusion is correct, but for the wrong circumstances. For contemporary AAA games, yes, you probably have the CPU saturated, sleeping is a moot point, and waiting is much better. For smaller games, however, as I said above, this is not the case, and other factors become equally important to consider, if not more so, than raw FPS.

However, I would argue this counterpoint even in the AAA case. Today's big-budget, CPU-hogging games are tomorrow's throw-on-your-portable-and-go titles. I would always code a game to sleep when it has oodles of spare time and just wait otherwise. This way you get the best of both worlds: today you get performance, but tomorrow you get to take Quake 2 with you on the plane and catch a few games without destroying your battery at a rapid rate.
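
A minimal sketch of that loop tail; the 2 ms margin is an assumption sized to typical Sleep jitter, and QueryPerformanceFrequency could of course be cached:

```cpp
#include <windows.h>

// Give back the CPU with Sleep when there's lots of frame time left,
// then spin only for the final sliver so pacing stays tight.
void WaitForFrameEnd(LARGE_INTEGER frameStart, double targetMs)
{
    LARGE_INTEGER freq, now;
    QueryPerformanceFrequency(&freq);
    for (;;)
    {
        QueryPerformanceCounter(&now);
        double elapsed = 1000.0 * (now.QuadPart - frameStart.QuadPart)
                       / freq.QuadPart;
        double remaining = targetMs - elapsed;
        if (remaining <= 0.0)
            break;                               // deadline hit: next frame
        if (remaining > 2.0)
            Sleep((DWORD)(remaining - 2.0));     // oodles of spare time: yield
        // else: inside the 2 ms margin, busy-wait for accuracy
    }
}
```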


The game loop should require much finer granularity and reliability; thus waiting is the correct solution.


Current consoles run around 30 fps in most games overall, but frame variability is insane. Sometimes your frames come in under 16 ms because nothing interesting is happening on screen; sometimes you need to calculate a path or initialize an expensive AI behavior and you'll spike to 150 ms+ for a frame. This is also true on PC, although the numbers tend to be tighter because the hardware is so much better. That is neither granular nor terribly reliable, but it is the reality. Really, the only thing that matters is that any variability is imperceptible to the user, and a tick granularity of 1 ms is well, well below that. If you get a few ms ahead here or a few ms behind there, none of it is noticeable by the user unless you are running at hundreds of fps in a super twitchy game with a monitor that can actually display those frames.
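
One common way to keep those spikes from cascading (a sketch of a general pattern, not any specific engine's code; MAX_DT_MS is an assumption) is to clamp the measured delta before feeding the update loop:

```cpp
// Clamp the measured frame delta so a single 150 ms+ hitch produces a
// momentary slowdown instead of a burst of catch-up logic ticks.
double dt = FrameDeltaMs();        // any frame timer; see the sketch above
const double MAX_DT_MS = 100.0;    // assumption: tune for your game
if (dt > MAX_DT_MS)
    dt = MAX_DT_MS;
accumulator += dt / 1000.0;        // feed the fixed-timestep loop
```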

In Topic: Main loop timing

23 October 2013 - 03:06 PM

I read somewhere that you should use WaitForSingleObject (or similar) and not Sleep, as that signals to the scheduler that you do indeed want to do some processing in a short moment even though you are waiting for something, whereas with Sleep it would assume it's some very non-time-critical program and could put the CPU into a very low power state, screwing your timing up?

 

I hadn't heard of that being used for game timing (although I've seen it in call stacks before), but sadly WaitForSingleObject also appears to be dependent on the timer resolution specified by timeBeginPeriod().
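
For reference, a minimal sketch of the approach the quoted post describes: blocking on a waitable timer via WaitForSingleObject. As noted, the wake-up granularity is still tied to the system timer resolution, so it isn't inherently more precise than Sleep:

```cpp
#include <windows.h>

// Wait roughly `ms` milliseconds by blocking on a waitable timer.
void TimerWaitMs(DWORD ms)
{
    HANDLE timer = CreateWaitableTimerW(nullptr, TRUE, nullptr); // manual reset
    LARGE_INTEGER due;
    due.QuadPart = -10000LL * (LONGLONG)ms;  // negative = relative, 100 ns units
    SetWaitableTimer(timer, &due, 0, nullptr, nullptr, FALSE);
    WaitForSingleObject(timer, INFINITE);
    CloseHandle(timer);
}
```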

