

Extreme Framerates



#1 3TATUK2   Members   -  Reputation: 730


Posted 17 September 2013 - 06:21 PM

Am I correct in thinking that a framerate higher than the eye can actually discern (even if it's in the thousands) can still feel smoother? The reasoning being that the frame you register in your sight is closer in time to the moment you register it than it would be if the framerate were at or below the eye's own "registration" rate.
 
OK, let's assume, even if it's not realistic, that "the eye has a fixed framerate",
 
because the same reasoning can be applied a different way:
 
 
Doing rendering in a separate thread from event handling would mean the same thing: since the interval between the two may be smaller than either individual interval, the latest registered event will be closer in time to the rendering?
 

Attached Thumbnails

  • eh.png



#2 Hodgman   Moderators   -  Reputation: 31117


Posted 17 September 2013 - 11:08 PM

1) Yes, rendering at extreme framerates can make an animation appear smoother (up to some really high limit, around ~200fps, beyond which you get severely diminishing returns).

 

You have to keep in mind that your monitor has a limited refresh rate though. Most monitors are only capable of displaying 60 frames per second anyway!

If vsync is enabled, then it's not possible to display more than 60 frames per second.

If vsync is disabled, then at higher frame-rates, one frame will be cut off at some vertical height and the next frame spliced in. This is called "tearing", and it can be very jarring to the viewer and actually make things appear less smooth!

 

2) No, having a dedicated event processing thread is a complete waste of time.

In your diagram, you show rendering as a line. You need to show it as a box, as it takes time. At the start of the box, a certain set of events are consumed to determine what will be rendered. At the end of the box, this rendering is sent to the screen.

If any events arrive in the duration of that box, then they have to be queued up and consumed by the next render...

 

The OS already does this internally. As soon as an event arrives, the OS puts it into a queue. Right before rendering, you typically poll the OS for any events that have arrived since last time you asked, and you use these to influence what to draw next. Adding your own extra thread to reinvent this same functionality will only add overhead.
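For illustration, here is a minimal single-threaded sketch of that pattern, assuming a Win32 message pump (UpdateGame and RenderFrame are hypothetical stubs, not code from this thread): drain the OS message queue right before updating and drawing, so every event that has arrived so far influences the very next frame.

#include <windows.h>

// Hypothetical stand-ins for whatever the engine provides.
void UpdateGame(double /*dtSeconds*/) { /* stub: advance the simulation here */ }
void RenderFrame()                    { /* stub: draw and present the frame here */ }

void RunFrameLoop()
{
    bool running = true;
    while (running)
    {
        MSG msg;
        // Drain every event the OS has queued up since the last iteration.
        while (PeekMessage(&msg, nullptr, 0, 0, PM_REMOVE))
        {
            if (msg.message == WM_QUIT)
                running = false;
            TranslateMessage(&msg);
            DispatchMessage(&msg);   // the window procedure records key/mouse state here
        }

        // Everything drained above influences this frame...
        UpdateGame(1.0 / 60.0);
        // ...and anything arriving from here on is queued up for the next frame.
        RenderFrame();
    }
}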

[Attached image: unCwhRN.png]

The red lines show the time elapsed from when an event is generated to when it is sent to the screen -- i.e. the latency. In both systems (single threaded and event/render threads), the latency will be the same.

 

Reducing the size of the green boxes == increasing the frame-rate == reducing latency.

 

So if you want it to be more responsive and smoother, then yes, you need to increase the frame-rate. However, current displays put a hard limit on how much you can do this.



#3 NickW   Members   -  Reputation: 313


Posted 18 September 2013 - 12:36 AM

Increasing bandwidth can lead to a decrease in latency due to the exact effect you're talking about.  Decreasing latency will almost always make your game feel better and more responsive, no matter how it's done.  Another way you can go about it is, for example, sampling the player input as close to the end of the frame as possible, or just making sure that nothing in your game is causing input processing to get delayed.

 

Using a separate rendering thread doesn't usually decrease latency because the rendering thread usually has to run a frame behind.  You can't render and update the same frame at the same time, otherwise you end up with temporal inconsistencies between frames (i.e. jitter).  Rendering on a separate thread is mainly a performance benefit to take advantage of multiple cores.



#4 tonemgub   Members   -  Reputation: 1143


Posted 18 September 2013 - 01:32 AM

Why doesn't anyone mention the GetMessageTime function in these kinds of discussions? I think the event-handling routine should always use GetMessageTime as the time when an event happened, instead of a frame-relative time. It takes a bit more work ordering the events (only if they arrive out of order from the message queue - I don't know) and processing them in that order, but the end result should be no latency at any framerate (at least no input-related latency).

I'm also sure DirectInput has something similar, or if you're coding for a platform other than Windows, there's bound to be something similar... It's the first thing I would look for.

EDIT: Searched the forums for "GetMessageTime" - this is the first post mentioning it. :)

#5 Hodgman   Moderators   -  Reputation: 31117


Posted 18 September 2013 - 02:13 AM

Why doesn't anyone mention the GetMessageTime function in these kinds of discussions? I think the event-handling routine should always use GetMessageTime as the time when an event happened, instead of a frame-relative time. It takes a bit more work ordering the events (only if they arrive out of order from the message queue - I don't know) and processing them in that order, but the end result should be no latency at any framerate (at least no input-related latency).

I'm also sure DirectInput has something similar, or if you're coding for a platform other than Windows, there's bound to be something similar... It's the first thing I would look for.

I know L.Spiro has mentioned this, having gone through the effort to implement it himself.

[edit] I was mistaken, he's implemented an alternate way of time-stamping inputs [/edit] 

It is tricky to implement though, as the message timer wraps around and probably isn't consistent with your game's actual timer.

 

 

However, this doesn't result in having no input latency. There's a lot of unavoidable latency:

  1. You press a key, the driver takes the input and passes it on to the OS.
  2. The game requests inputs from the OS, and uses them to update the game-state.
  3. The game renders a new frame based on the new game-state. This produces a stream of commands to the GPU.
  4. The GPU eventually starts executing these commands after having buffered them to maximize its internal bandwidth.
  5. The GPU finishes executing these commands, and queues up the final image to be displayed.
  6. The monitor decodes the signal and displays the image.

Assuming a 60Hz game and display:

Event #1 is almost instant.

Event #2 (updating a game frame) and #3 (rendering a frame) depend on the game. Let's say they're together right on the 60Hz limit of 16.6ms, split between the Update and Draw functions.

Event #4 depends on the game and the graphics driver. Typically a driver will buffer at least one whole frame's worth of GPU commands, but it may be more -- say 16.6 - 50ms here.

Event #5 depends on the game, but let's say it's right on the 60Hz limit of 16.6ms again, spending this time performing all the GPU-side commands.

Event #6 depends on the monitor. On a CRT this would be 0ms, but on many modern monitors it's 1-2 frames -- another 16.6 - 33.3ms.

 

That's a total of between 50 - 116.6ms (typically it's around 80ms) between pressing a key and seeing any changes on the screen, for a game that's running perfectly at 60Hz with "zero" internal input latency.

i.e. even in a perfect situation where you capture an event as soon as it occurs, and you immediately process a new frame using that data, you've still got a lot of unavoidable latency. The key objective is to not add any more than is necessary! Having "three frames" of input latency is a best case that many console games strive for.
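To make the arithmetic explicit, here is a tiny standalone sketch that just adds up the illustrative per-stage figures above (the numbers are the same estimates from this post, not measurements):

#include <cstdio>

int main()
{
    // Illustrative per-stage latency estimates, in milliseconds.
    struct Stage { const char* name; double minMs, maxMs; };
    const Stage stages[] = {
        { "OS receives input",             0.0,  0.0 },
        { "Update + Draw (CPU)",          16.6, 16.6 },
        { "Driver buffers GPU commands",  16.6, 50.0 },
        { "GPU executes commands",        16.6, 16.6 },
        { "Monitor processing (0 on CRT)", 0.0, 33.3 },
    };

    double minTotal = 0.0, maxTotal = 0.0;
    for (const Stage& s : stages) { minTotal += s.minMs; maxTotal += s.maxMs; }

    // Prints roughly 49.8 - 116.5 ms, i.e. the ~50 - 116.6ms range quoted above.
    std::printf("Best case: %.1f ms, worst case: %.1f ms\n", minTotal, maxTotal);
    return 0;
}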

 

 

Back to the message timer though:

Typically, at the start of a frame-update, you fetch all the messages from the OS and process them as if they arrived just now.

With the use of this message timer, you fetch them at the start of a frame, but process them as if they arrived a bit earlier than that.

 

In a 60Hz game, for example, this might mean that an event arrived at the start of this frame, but actually occurred n% * 16.6ms ago (where n > 0% and n < 100%). If it's a movement command, etc, then you can move the player 100%+n% (e.g. 101%-199%) of the normal per-frame distance to compensate. This would have the benefit of slightly reducing the perceived latency. e.g. if your objectively measured latency is 80ms, your perceptual latency might be around 63 - 80ms.
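As a rough sketch of that compensation (the function and parameter names are made up for illustration, and both timestamps are assumed to come from the same clock): if an event fetched at the start of the frame is stamped ageMs milliseconds in the past, scale that frame's movement by (frameMs + ageMs) / frameMs, i.e. 100% + n%.

// Minimal sketch of timestamp compensation, assuming a 60Hz frame of 16.6ms.
double CompensatedMoveDistance(double perFrameDistance,
                               double eventTimestampMs,
                               double frameStartMs,
                               double frameMs = 16.6)
{
    double ageMs = frameStartMs - eventTimestampMs;   // how long ago the input occurred
    if (ageMs < 0.0)     ageMs = 0.0;                 // clamp: event can't be from the future
    if (ageMs > frameMs) ageMs = frameMs;             // clamp: at most one frame old
    double scale = (frameMs + ageMs) / frameMs;       // 1.0 .. 2.0, i.e. 100% .. 200%
    return perFrameDistance * scale;
}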

 

Personally, I'd only bother with this technique if you'd already done everything else possible to reduce latency first, but YMMV.


Edited by Hodgman, 18 September 2013 - 02:53 AM.


#6 L. Spiro   Crossbones+   -  Reputation: 14026


Posted 18 September 2013 - 02:44 AM

EDIT: Searched the forums for "GetMessageTime" - this is the first post mentioning it. :)

http://www.gamedev.net/topic/630735-multithreading-in-games/#entry4977300

The main problem being that it is not synchronized with your in-game timer.


L. Spiro

#7 3TATUK2   Members   -  Reputation: 730


Posted 18 September 2013 - 02:58 AM


The OS already does this internally. As soon as an event arrives, the OS puts it into a queue. Right before rendering, you typically poll the OS for any events that have arrived since last time you asked, and you use these to influence what to draw next. Adding your own extra thread to reinvent this same functionality will only add overhead.

 

Does what NickW said make sense?

 

 

 


Another way you can go about it is, for example, sampling the player input as close to the end of the frame as possible

 

Does it mean that it's possible to sample the input closer to the start of the render function with separate threads - or is this pointless since, like you mentioned, the OS queues the stuff up so sampling directly before rendering all in a single thread is just as good as it gets? Is this still valid if both these steps (processing input & rendering) are done in a framerate-restrictive loop? Like say you setFPS( 120 ) - then, should *only* the rendering be restricted to 120, with events running in a faster unrestricted parent loop - or should they both be inside the time restrictor?


Edited by 3TATUK2, 18 September 2013 - 04:45 AM.


#8 tonemgub   Members   -  Reputation: 1143


Posted 18 September 2013 - 06:10 AM

  • The GPU eventually starts executing these commands after having buffered them to maximize its internal bandwidth.
  • The GPU finishes executing these commands, and queues up the final image to be displayed.
  • The monitor decodes the signal and displays the image.
AFAIK, all of these are done in a separate driver-thread, and the driver always does them as fast as possible. They don't influence the game's internal latency - which is what I was talking about. This is what I meant by "input-related latency", i.e., latency caused by delayed processing of mouse/keyboard events.

The latency introduced by GPU processing is unavoidable, and I don't think it's worth the effort to go that far as to account for it in a video game? However, if you do code your event handler to use frame-relative times instead of the actual event times, then yes - your input latency will also be affected by your framerate, which depends on how fast the GPU renders your frame, and also by your game-state updating code.

IMHO, when it comes to the GPU's role in all of this, it's not that important whether it renders your frame immediately or after some milliseconds; what is important is that the frame represents the exact game state your game is in at the time you asked it to present the frame. Anyway, the numbers you mentioned seem a bit exaggerated to me. :)

Best example would be a game that renders everything at only 1 FPS - for whatever reason (who knows, it might be a nice-looking effect :) ). If you only compute the game state every second and you don't use the event's real time, you'll notice that objects on the screen are affected too much by a keypress that only lasted 1-5 milliseconds, because you'll be computing the objects' velocities/positions etc. at one-second intervals. So instead of moving for only those 5 milliseconds' worth of time, your objects will move for one whole second. It would be more accurate (and frame-rate independent) to use the event's real time.
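A sketch of the difference (hypothetical helpers; all timestamps are assumed to come from the same clock): with real event times, an object moves for the time the key was actually held, not for the whole update interval.

// Sketch: distance moved during one update, with and without event timestamps.
// All times in seconds; 'speed' in units per second.

// Frame-time based: assumes the key was held for the entire update interval.
double MoveAssumingWholeFrame(double speed, double updateInterval /* e.g. 1.0 at 1 FPS */)
{
    return speed * updateInterval;          // a 5ms tap still moves the object for one full second
}

// Event-time based: uses the real key-down / key-up timestamps.
double MoveUsingEventTimes(double speed, double keyDownTime, double keyUpTime)
{
    double held = keyUpTime - keyDownTime;  // e.g. 0.005s for a 5ms tap
    return speed * held;                    // frame-rate independent result
}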

And no matter how much you time things trying to account for different hardware latencies, you will not be able to always get the constant frame rate that you plan for, so basing your event handler on frame-times is always a bad idea IMO.

#9 Hodgman   Moderators   -  Reputation: 31117


Posted 18 September 2013 - 08:02 AM

 


Does what NickW said make sense?

 

Yes.

Does it mean that it's possible to sample the input closer to the start of the render function with separate threads

If the user-input is required before rendering can begin, it makes no difference which thread grabs it from the OS. No matter how many threads there are, the rendering can't commence until you've decided on the user-input to process for that frame.

No matter how many threads you're using, you should make sure to get the user input as close to rendering as you can. E.g. with one thread, you would want to change the left-hand order of processes to the right-hand one (assuming that the AI doesn't depend on the user input).

GetUserInput    ProcessAI
ProcessAI    -> GetUserInput
Draw            Draw

AFAIK, all of these are done in a separate driver-thread, and the driver always does them as fast as possible. They don't influence the game's internal latency - which is what I was talking about.

The driver doesn't necessarily do them as fast (as in "soon") as possible (to optimize for latency). It may (and often will) purposely delay GPU commands in order to optimize for overall bandwidth / frame-rate. By sacrificing latency, it opens up more avenues for parallelism.
 
N.B. I didn't contradict you, or say that measuring input timestamps was useless. It's just a small figure compared to the overall input latency -- the time from the user pressing a key to the user seeing a result.

Anyway, the numbers you mentioned seem a bit exaggerated to me

You can test them yourself. Get a high-speed video camera, ideally twice as fast as your monitor (for a 60Hz monitor you'd want a 120Hz camera), but equal speed to the monitor will do if you don't have a high-speed one (e.g. just 60Hz), though that will be less accurate. Then film yourself pressing an input in front of the screen, and count the recorded frames from when it was pressed to when the screen changes. Ideally you'd want to use a kind of input device that has an LED on it that lights up when pressed, so it's obvious on the recording which frame to start counting from.
If you get a delay of 3 frames or less, you're on par. Try it on a CRT, a cheap HDTV and an expensive LCD monitor, and you'll get different results too.
 

The latency introduced by GPU processing is unavoidable, and I don't think it's worth the effort to go that far as to account for it in a video game?

It's unavoidable that there will be some inherent latency, but you do have an influence over how much there is. The way you submit your commands, the amount of commands, the dependencies that you create from GPU->CPU, the way you lock/map GPU resources, whether/when you flush the command stream, how you 'present'/'swap' the back-buffer, etc, all have an impact on the GPU latency.
By optimizing there, you could shave a whole frame or two off your overall input latency, which is why I'd personally make those optimizations first, before worrying about message time-stamps, which can only shave less than 1 frame off your input latency.
 
If it's worth measuring input timestamps to reduce latency for a video game, why wouldn't other methods also be worth the effort?
 

And no matter how much you time things trying to account for different hardware latencies, you will not be able to always get the constant frame rate that you plan for, so basing your event handler on frame-times is always a bad idea IMO.

I'm not suggesting that you take GPU latency into account in your input handler.
You shouldn't code your input handler to account for this latency (unless you're making a game like Guitar Hero, where you absolutely require perfect input/display synchronisation), but if input-latency is important to you, then you should code your GPU components in such a way to reduce GPU latency as a high priority task (n.b. guitar hero is also designed to run at 60fps with minimal GPU latency).
 

Best example would be a game that renders everything at only 1 FPS

Yeah, at that time-scale, message time-stamps have a much greater effect -- especially in the case where a key is pressed for less than one frame.
Most games are designed to run at either 30fps or 60fps, not 1fps, though.
 
At 1fps, if someone taps a key for 1ms, then the error in not using timestamps will be huge -- you'll either assume they tapped it for 0ms, or 1000ms!
At 1fps, if someone presses a key and holds it for 30 seconds, then using timestamps or not makes a much smaller difference (the error of ignoring timestamps is lower) -- you'll assume they held it for either 29 or 30 seconds.
However, at 30fps, the difference is less extreme.
If someone taps a key for 1ms, and you assume they tapped it for 33ms, that's still a 33x difference, but it's imperceptible in most games.
And if they hold a key for 30 seconds, but you assume they held it for 29.967 seconds, that's almost certainly imperceptible.
 
Even pro-gamers struggle to provide more than 10 inputs per second (100ms per input), so a 33ms error isn't at the top of most people's priority lists. Not that it shouldn't be dealt with, though!

 


The main problem being that it is not synchronized with your in-game timer.
Out of interest (I've never done this), how do you deal with this issue, tonemgub?

Is it possible to measure how far in the past an event occurred? From what I can tell, you can only measure the elapsed time between two different inputs.

That's great for cases where a key is pressed and released on the same frame (which happens and would be very important at 1fps, but is rare at 60fps), but when rendering, there doesn't seem to be a way to determine that, e.g. "this key was pressed 10ms before the render function"...

 

AFAIK, L.Spiro has dealt with this by deciding to make use of a dedicated input processing thread, which attaches its own timestamps?



#10 Krypt0n   Crossbones+   -  Reputation: 2606


Posted 18 September 2013 - 09:00 AM

 


Another way you can go about it is, for example, sampling the player input as close to the end of the frame as possible

 

Does it mean that it's possible to sample the input closer to the start of the render function with separate threads - or is this pointless since, like you mentioned, the OS queues the stuff up so sampling directly before rendering all in a single thread is just as good as it gets? Is this still valid if both these steps (processing input & rendering) are done in a framerate-restrictive loop? Like say you setFPS( 120 ) - then, should *only* the rendering be restricted to 120, with events running in a faster unrestricted parent loop - or should they both be inside the time restrictor?

 

it doesn't make sense to render with a higher frequency than your monitor shows, so that's the practical limit. NickW is right, you should focus on reducing the latency. e.g. if you could render at 1000fps but you only show 60Hz, estimate when the next 'flip' will happen, estimate how long you're going to need to render the frame, and start processing the input + rendering so that you'll be done with it right before your hardware allows you the next flip.

(the common way is to process everything and then issue the flip, which then stalls for 15ms)
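Here is a minimal sketch of that kind of frame pacing in portable C++ (the 60Hz refresh, the initial cost estimate and the Sample/Update/Present hooks are all assumptions for illustration, not code from this thread): instead of rendering early and stalling in the flip, sleep until just before the estimated flip, then sample input and render.

#include <chrono>
#include <thread>

// Hypothetical engine hooks -- placeholders only.
void SampleInput()     { /* stub: read the freshest input state here */ }
void UpdateAndRender() { /* stub: simulate and build the frame here */ }
void PresentFrame()    { /* stub: the actual flip / buffer swap would go here */ }

void PacedFrameLoop()
{
    using clock = std::chrono::steady_clock;
    const auto refresh = std::chrono::microseconds(16667);   // assume a 60Hz display
    auto estimatedCost = std::chrono::microseconds(3000);    // initial guess of our frame cost
    auto nextFlip      = clock::now() + refresh;

    for (;;)
    {
        // Sleep until just before the estimated flip instead of rendering early
        // and stalling in the flip -- input sampled after this is as fresh as possible.
        std::this_thread::sleep_until(nextFlip - estimatedCost);

        const auto start = clock::now();
        SampleInput();
        UpdateAndRender();
        PresentFrame();

        // Crude running estimate of how long we need to produce a frame.
        estimatedCost = std::chrono::duration_cast<std::chrono::microseconds>(clock::now() - start);
        nextFlip += refresh;
    }
}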

 

 

also, a lot of people make the mistake of seeing the frame processing as one big black box where you set the input at the beginning and get the output on the other side after a lot of magic happens in between. The fact is that most of the information processed in that time is not relevant to the subjective perception of "lagginess". Some simple things you can do to shorten the subjective latency:

1. trigger effects based on input after rendering the frame, before post-processing. e.g. a muzzle flash could be triggered with minor impact on the processing of the whole frame, yet the player would notice it. (btw. it's not just about visuals; hearing a sound immediately is just as important).

2. trigger the right animation right before you render an object. From a software architecture point of view it looks like a hack to bypass all pipelines and read from some place in memory whether a button is pressed in order to decide on the animation, but it gives the player good feedback, e.g. the recoil of your pistol.

3. decide on post effects right before you process them; if the player moved the stick to rotate the camera, involve that information in your post-process motion blur. (afaik OnLive is doing those kinds of tricks in their streaming boxes).

 

one of the best-known tricks of that kind dates back to when we had to choose between a nice, animated, colorful cursor rendered in software and the ugly black-and-white hardware cursor that was refreshed directly by the mouse driver, triggered by an interrupt every time you moved the mouse; the video hardware overlaid that cursor on every refresh of the screen. Playing an RTS at 15fps (which was not uncommon back then) with a software cursor was often very annoying.



#11 3TATUK2   Members   -  Reputation: 730


Posted 18 September 2013 - 12:58 PM

So I sort of just figured something out... Well, earlier I was being a bit ambiguous when I said "events & rendering"... what I really meant was "(events + processing/updating) & rendering".

 

I just figured out that the "old" way, where I have processing and rendering in the same thread and, say, restrict to 5 fps, means movement only happens if the key is held down at the 5 FPS update moment. If you tap the key a bunch of times it basically only registers once or twice - when you hit it at the perfect moment. But if I put "events + processing" into a separate thread from "rendering", I can still render at 5 FPS, except movement happens regardless. I can tap the key a bunch of times and it registers and moves forward every time. So - yes, separating "events" from "processing + rendering" is pointless because the events are queued. For the same reason, separating "events" from "processing" is also pointless. But separating "events + processing" from "rendering" is definitely worth it. :)



#12 wintertime   Members   -  Reputation: 1800


Posted 18 September 2013 - 01:57 PM

 

 


The main problem being that it is not synchronized with your in-game timer.
Out of interest (I've never done this) how do you deal with this issue tonemgub?

Is it possible to measure how far in the past an event occurred? From what I can tell, you can only measure the elapsed time between two different inputs.

That's great for cases where a key is pressed and released on the same frame (which happens and would be very important at 1fps, but is rare at 60fps), but when rendering, there doesn't seem to be a way to determine that, e.g. "this key was pressed 10ms before the render function"...

 

A few years ago I experimented a bit and looked at what times I got from Windows XP for the window messages, and IIRC they were nearly identical to what I got from GetTickCount(), with a little variance.



#13 L. Spiro   Crossbones+   -  Reputation: 14026


Posted 18 September 2013 - 05:43 PM

It would be more accurate (and frame-rate independent) to use the event's real time.

And no matter how much you time things trying to account for different hardware latencies, you will not be able to always get the constant frame rate that you plan for, so basing your event handler on frame-times is always a bad idea IMO.

While it is correct not to work based on frame times, two points: you are not actually trying to account for hardware latencies, and the event's real time is not as important as the time between events.
For point #1, your input system’s goal is to account for latency more on a psychological level. If you are playing Guitar Hero and for whatever reason a frame lasts 1 second, the player has already mentally timed the next few button presses and can still hear the music to aid in his or her timing.
Your goal is to make sure that as long as the player hits the buttons at the correct times in real life, the game handles it as the correct timing of the buttons.
This is almost what you said, but the problem is in how you have implemented it, which I will explain below.

For point #2, the only thing that really matters is how much time has elapsed since each event, so it doesn't matter whether it is the event's "real" time or a time based on your own arbitrary system, as long as the same delta times can be derived. This is important for understanding why GetMessageTime() is not the best solution inside a more fully matured game architecture, as will be explained below.
 
 

A few years ago I experimented a bit and looked at what times I got from Windows XP for the window messages, and IIRC they were nearly identical to what I got from GetTickCount(), with a little variance.

GetTickCount() is extremely unsuitable for games for a myriad of reasons. For one, it has a very large range of drift (up to 55 milliseconds). Google will give you many other reasons.
http://www.garagegames.com/community/forums/viewthread/11901
http://www.mvps.org/directx/articles/selecting_timer_functions.htm
http://randomascii.wordpress.com/2013/05/09/timegettime-versus-gettickcount/ (note that he starts off with consistent results due to problems in the test code, which he explains later and ends up once again showing heavy inconsistencies with GetTickCount()).

Microsoft® even recommends never using GetTickCount() and instead using GetTickCount64() to avoid wrap-around issues.  Then there is the fact that it is at best accurate to a millisecond, when it is fairly standard to use microseconds in game timers these days.

If GetMessageTime() returns values similar to GetTickCount() then it is even more major reason not to use GetMessageTime().

Ultimately, we never want any of these timer functions anywhere near our game code.



So what is the solution?

I already explained it in detail here, but I will try explaining it again in context more directly related to this thread.
 
Firstly, we have already established that we don’t want to touch the above-mentioned timers.
You will always want to use QueryPerformanceCounter() for accuracy, and in modern games you want microsecond resolution.
You will want to make your own timer class and it should always count up from 0 to avoid wrap-around issues.  Again, I have gone into great detail on this as well.  Although that was for iOS, it is easy to port to use QueryPerformanceCounter() instead.
 
You will notice that the first thing that class does is get the real time. It uses this to determine delta times between calls to Update(), starting at delta 0.
By copying that delta time to another timer object, the two are effectively synchronized.
 
This means you keep the main game timer, updated once per logical tick (a second timer is used for rendering, updated once every frame, and acting as a slave to the main game timer), on the game thread, while on the window thread (where you catch input events from the operating system) you have a synchronized timer to use for time-stamping input events with microsecond resolution.
That thread can avoid hogging resources by sitting in WaitMessage() the whole time.
Even if there is some latency before the thread is activated to catch the message and time-stamp it, it is unlikely to be worse than the value returned by GetMessageTime()—it should be accurate to less than a millisecond.
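As a minimal sketch of the kind of timer class being described - assuming Windows and QueryPerformanceCounter(), and not the actual L. Spiro Engine implementation - something like this counts microseconds up from 0, starting at the moment it is created:

#include <windows.h>
#include <cstdint>

// Sketch of a game timer that counts up from 0 in microseconds.
class GameTimer
{
public:
    GameTimer()
    {
        LARGE_INTEGER f, c;
        QueryPerformanceFrequency(&f);
        QueryPerformanceCounter(&c);
        m_frequency = static_cast<uint64_t>(f.QuadPart);
        m_start     = static_cast<uint64_t>(c.QuadPart);
    }

    // Microseconds elapsed since the timer was created (starts at 0).
    uint64_t Microseconds() const
    {
        LARGE_INTEGER c;
        QueryPerformanceCounter(&c);
        uint64_t ticks = static_cast<uint64_t>(c.QuadPart) - m_start;
        // Fine for a sketch; a production timer would guard this multiply against overflow.
        return ticks * 1000000ull / m_frequency;
    }

    // Delta since the previous Update() call, also in microseconds.
    uint64_t Update()
    {
        uint64_t now   = Microseconds();
        uint64_t delta = now - m_lastMicros;
        m_lastMicros   = now;
        return delta;
    }

private:
    uint64_t m_frequency  = 1;
    uint64_t m_start      = 0;
    uint64_t m_lastMicros = 0;
};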
 
The inputs are time-stamped via your custom timer and put into a thread-safe queue waiting to be read by the game.
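And a sketch of the thread-safe, time-stamped queue that would sit between the window thread and the game thread (the types and field names are made up for illustration):

#include <cstdint>
#include <deque>
#include <mutex>
#include <vector>

// A single time-stamped input event (fields are illustrative).
struct InputEvent
{
    uint64_t timeMicros;   // stamped on the window thread with the synchronized timer
    int      key;          // which key or button
    bool     pressed;      // down or up
};

class InputQueue
{
public:
    // Called on the window thread as soon as the OS delivers the event.
    void Push(const InputEvent& e)
    {
        std::lock_guard<std::mutex> lock(m_mutex);
        m_events.push_back(e);
    }

    // Called on the game thread: removes and returns every event stamped at or
    // before 'upToMicros', so each logical tick consumes only its own slice of input.
    std::vector<InputEvent> PopUpTo(uint64_t upToMicros)
    {
        std::lock_guard<std::mutex> lock(m_mutex);
        std::vector<InputEvent> out;
        while (!m_events.empty() && m_events.front().timeMicros <= upToMicros)
        {
            out.push_back(m_events.front());
            m_events.pop_front();
        }
        return out;
    }

private:
    std::mutex m_mutex;
    std::deque<InputEvent> m_events;
};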
 
 
Now you have inputs time-stamped more accurately and synchronized to a time system that is more appropriate for games.
 
 
 
Now your game enters the extreme worst case of 1 FPS.
If the logical update rate is 30 FPS, after 1 second of lag, it will perform 30 logical updates, each accounting for 33.3333 milliseconds of game time.
 
Each logical update should then read and process only 33.3333 milliseconds of input.
In this way, no matter what the situation is in the game world, input is read and handled consistently regardless of the FPS.
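Tying those two sketches together, a hypothetical fixed-timestep loop along these lines might look like the following (ApplyInput and UpdateLogic are placeholders for the game's own functions, not part of any real engine):

void ApplyInput(const InputEvent& e);   // hypothetical: feed one event into the game state
void UpdateLogic(uint64_t dtMicros);    // hypothetical: advance the simulation by one fixed tick

// Logical updates at 30Hz, each consuming exactly its own 33.333ms slice of input.
void RunLogicUpdates(GameTimer& gameTimer, InputQueue& inputs, uint64_t& logicalTime)
{
    const uint64_t tickMicros = 33333;                   // one logical tick at 30 updates per second
    const uint64_t realTime   = gameTimer.Microseconds();

    // Even after a 1-second stall this performs ~30 catch-up ticks, and each tick
    // reads only the input events whose timestamps fall inside its own window.
    while (logicalTime + tickMicros <= realTime)
    {
        logicalTime += tickMicros;
        for (const InputEvent& e : inputs.PopUpTo(logicalTime))
            ApplyInput(e);
        UpdateLogic(tickMicros);
    }
}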
 
 
The difference between this and what tonemgub described is in the foundation.
You can’t use the above-mentioned Windows® timer functions for games at all, and for other reasons mature frameworks will need a custom timer system anyway.
Once you have that in place you have to create your own method for time-stamping events because you can’t synchronize your timers to the Windows® timers reliably (not that you would want to).

And once you stop using the Windows® timers and start time-stamping manually, your time-stamps can only be as accurate as the time at which you catch MK_LBUTTON, so you have to move your game to another thread and let the main thread do nothing but catch events as quickly as possible.
You can keep the main thread at a high priority so that when it is signaled to catch an event it will not be blocked by your game threads. While it is waiting for events it will be in a waiting state, so its high priority will not interfere with the game thread.


The accuracy of the results of the player's inputs is far more important to the player than visible latency, although of course we prefer to keep that down if possible.


L. Spiro


Edited by L. Spiro, 01 November 2013 - 01:53 AM.


#14 fir   Members   -  Reputation: -460


Posted 18 September 2013 - 10:53 PM

it doesn't make sense to render with a higher frequency than your monitor shows,

 

Imagine that you render a pong ball moving across the screen from left to right (say 1000 pixels) in 1/10 of a second (this is fast, but I think a quite realistic physical speed).

Say you render it at 100 Hz; then you render it ten times as it moves from left to right, with 100-pixel gaps. So physically it is not a movement but a series of jumps / teleportations. I doubt that the eye interpolates that, so I think this popular belief that 30 fps or something like that is sufficient is not quite true (not precise).

If you physically render it at 1000 Hz, IMO it should be better. You could also maybe render not a white dot but the result of some optical summing of the light in motion between one frame and the next (that is, along its 100-pixel path), but this is, I think, somewhat hard to do correctly (because it should probably take into account some physical characteristics of the display, as this is precise physical/optical work).

What do you think about that? (What I wrote here is the result of thinking I did about it a couple of years ago.)


Edited by fir, 19 September 2013 - 12:34 AM.


#15 3TATUK2   Members   -  Reputation: 730


Posted 18 September 2013 - 11:27 PM

fir, yes but Krypt0n is talking about the physical limit of the actual monitor hardware. Rendering unsynced to refresh rate will result in screen tearing... If your monitor had a refresh rate of 1000 Hz, then by all means - render at 1000 Hz. But, it doesn't.



#16 fir   Members   -  Reputation: -460


Posted 19 September 2013 - 12:11 AM

fir, yes but Krypt0n is talking about the physical limit of the actual monitor hardware. Rendering unsynced to refresh rate will result in screen tearing... If your monitor had a refresh rate of 1000 Hz, then by all means - render at 1000 Hz. But, it doesn't.

 

Sure, but I was talking about whether 30 or 60 Hz is sufficient or not (I did not read the rest of the thread, I'm tired).

In this example I of course forgot to add that it will be worse if you render at 50 Hz; then you get only five frames and 200-pixel teleportation jumps.

Did you ever try to render something like a line, or a fat line, something like a clock hand or a radar line, rotating quickly through 0-360 degrees in a loop? How does it look for you? I experimented with that and it looked like a 5- or 6- (in general n-) armed unstable spider whose arms were somewhat fluorescent. I am not sure of the cause; I used a CRT monitor set at 85 Hz and WinAPI for rendering, and the rendering was probably at 500-1000 Hz, I'm not sure now.

How do you render such a thing correctly (without getting a fluorescent unstable spider)? In real life you would not get a fluorescent unstable spider - I think it would probably look like a nicely glowing propeller (I was using a blue line).

An interesting thing is also the mouse cursor (indeed, as I said already somewhere, on my Win XP the mouse cursor is the only element that responds okay in this whole laggy system).

The input delay remarks are also interesting (I have not read them all). The sad thing is that I cannot measure it precisely, but when I press arrow-down to scroll webpage content in Opera I see some lag - I would say it is about 100 ms maybe - that is internet-lag scale, and that is sad.


Edited by fir, 19 September 2013 - 12:31 AM.


#17 tonemgub   Members   -  Reputation: 1143


Posted 23 September 2013 - 02:51 AM

@Hodgman: L. Spiro seems to have answered the question you asked.

 

I'll only add that it is possible to synchronise GetMessageTime and QueryPerformanceCounter on a single thread, and I do it by using the MsgWaitForMultipleObjects function inside the message loop.

 

I don't have the code in front of me now, and I can't remember the exact math behind it, but I currently use MsgWaitForMultipleObjects to time both frames and inputs. First, I compute the dwMilliseconds parameter so that MsgWaitForMultipleObjects waits until the next frame needs to be drawn, minus one millisecond (or was it 15? IIRC, I use the smallest value that timeBeginPeriod accepts, minus 1). timeBeginPeriod affects the resolution of all the Windows timing APIs except the performance timers - it even affects the resolution of MsgWaitForMultipleObjects.

 

Anyway, when MsgWaitForMultipleObjects returns because it reached this wait-timeout limit, I use QueryPerformanceCounter in a loop to synchronize the thread to the current frame time (which I compute using QueryPerformanceCounter - this is my main game timer) - this consumes the remaining 1 millisecond (or 15) that I subtracted from the call to MsgWaitForMultipleObjects. After that, all I do is Draw and Present the scene using the currently computed game state.

 

If, however, MsgWaitForMultipleObjects returns because it detected a message being added to the message queue, then I do the regular GetMessage (or PeekMessage) / TranslateMessage / DispatchMessage stuff, and if there are any input messages, I re-compute the game state based on GetMessageTime. Here I also check that GetMessageTime is behind my main QueryPerformanceCounter-based timer - if it's not, I just use the value of QueryPerformanceCounter instead.
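A rough skeleton of the loop described above, assuming plain Win32 (this is a simplified sketch only; SpinUntilFrameTime, DrawAndPresent and RecomputeStateFromInput are hypothetical placeholders, not code from this project):

#include <windows.h>
#pragma comment(lib, "winmm.lib")   // for timeBeginPeriod / timeEndPeriod

// Hypothetical placeholders for the steps described in this post.
void SpinUntilFrameTime();                    // busy-wait on QueryPerformanceCounter until the frame time
void DrawAndPresent();                        // draw and present the currently computed game state
void RecomputeStateFromInput(DWORD msgTime);  // re-evaluate input-driven state using GetMessageTime

void MessageLoopWithFrameTiming(DWORD frameMs)
{
    timeBeginPeriod(1);                       // raise the timer resolution while the loop runs
    bool running = true;
    while (running)
    {
        // Wait until roughly the next frame is due, or until a message arrives.
        DWORD result = MsgWaitForMultipleObjects(0, nullptr, FALSE, frameMs - 1, QS_ALLINPUT);

        if (result == WAIT_TIMEOUT)
        {
            SpinUntilFrameTime();             // consume the last ~1ms precisely on the QPC-based timer
            DrawAndPresent();
        }
        else                                  // WAIT_OBJECT_0: something landed in the message queue
        {
            MSG msg;
            while (PeekMessage(&msg, nullptr, 0, 0, PM_REMOVE))
            {
                if (msg.message == WM_QUIT) { running = false; break; }
                if (msg.message == WM_KEYDOWN || msg.message == WM_KEYUP)
                    RecomputeStateFromInput(GetMessageTime());
                TranslateMessage(&msg);
                DispatchMessage(&msg);
            }
        }
    }
    timeEndPeriod(1);                         // restore the previous timer resolution
}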

 

Now, the reason Microsoft doesn't recommend using GetTickCount (or other GetMessageTime-like APIs) is because of timeBeginPeriod - once called, it affects all running processes. By default the period is ~15 ms, but if another process calls timeBeginPeriod(1), that will be used everywhere in the system, even for all of the thread-synchronisation APIs. If I use timeBeginPeriod myself, then I can be sure that its precision is the one I set - and if another process changes it, it will probably be a video game that changes it to the same value I need (the smallest period reported by timeGetDevCaps). To make sure, I could also just call it every time I enter my message loop, before MsgWaitForMultipleObjects, and then call timeEndPeriod after MsgWaitForMultipleObjects, or at the end of the message loop (since Microsoft recommends this) - this keeps the call to MsgWaitForMultipleObjects in high-precision mode while not affecting the rest of the system (too much - OK, maybe it does affect it, but I don't care :) ).

 

Now, about using a separate thread for timing input events - even then you have to implement some kind of synchronisation for accessing that "input queue" that L. Spiro mentioned, and you are going to be doing this using one of the thread-sync APIs or objects (critical sections, events, mutexes, etc.) - but as I mentioned, unless you use timeBeginPeriod, these APIs will all be in low-precision mode (or whatever precision was set by other processes), so you are still basically affected by the GetMessageTime "delay effect" when switching between the input thread and the rendering thread... And I think the basic message-queue processing APIs GetMessage/PeekMessage are also affected, so even if you do use QueryPerformanceCounter, your input timer is still being delayed by the GetMessageTime "delay effect".

 

And of course, if you use DirectInput/XInput with Event objects, the same low precision affects the time when your Event gets signaled (it will only get signaled at a 15ms boundary). But if you use DirectInput/XInput by polling (maybe in a separate thread), then you're not affected.

 

NOTE: As expected, I still have an issue when using VSYNC with this method (but then again, VSYNC would delay ANY timing method that runs in the same thread), since I'm also doing my own frame syncing, and VSYNC interferes with my own timing. I'm currently looking for a way to get the VSYNC time so I can plug it into my method, but if there isn't one, I think I can still use this method by always passing a 1ms timeout to MsgWaitForMultipleObjects, and moving the scene-Draw part such that I can rely on DirectX's Present method to either "wait for the next vsync and Present" or "discard the current Present if nothing new was drawn since the last Present". I've already tested this, and it adds at most a 15% CPU-use overhead, whereas the original frame-sync method uses near-0% CPU (as shown in Task Manager :) ). Ideally, the timeout value passed to MsgWaitForMultipleObjects should be the smallest common denominator between the VSYNC (monitor refresh) rate and the "minimum period" returned by timeGetDevCaps.

 

Note also that by "re-calculating the game state" above, I mean simply re-calculating all of the movement vectors and positions of objects affected by user input, as if the input happened at the time returned by GetMessageTime. My project doesn't have that many user-controlled objects currently, so this can be done without exceeding the frame-time limit, so it doesn't cause problems like spikes in FPS or anything. For objects that move by themselves, without user input, I still calculate their movement every frame (right after MsgWaitForMultipleObjects returns WAIT_TIMEOUT and just before starting the QueryPerformanceCounter frame-sync loop). Calculations for collision detection and other object-to-object interactions are done whenever the state of any type of object changes (optimised based on what type of object it is), so I can't use PhysX or other physics engines that rely on fixed timesteps, but if I had to, I would probably just plug in the PhysX timestep somewhere right after the QueryPerformanceCounter frame-sync loop.


Edited by tonemgub, 23 September 2013 - 07:38 AM.




