

Member Since 25 Jul 2004
Offline Last Active Jan 03 2014 04:10 PM

Topics I've Started

Idle Time: Profiler vs Task Manager

27 October 2010 - 09:37 AM

Hello all,

Forgive the long post, but there is simply a lot that needs to be explained. If you are looking for my question, skip to the last paragraph. :)

For a project I recently started working on, I wanted to handle timing in the rendering/game loop properly. I was looking for a clean and efficient way to retain optimal performance, compared to a simple while-loop polling a high-precision clock, while multi-threading the rendering and the game logic and letting the CPU idle as much as possible.

The solution I found is to use high-precision timers. While looking for a cross-platform implementation that can also be used in a multi-threaded environment, I found the deadline_timer of boost::asio. This timer appears to be built on socket time-outs internally, but that is hardly noticeable in use, especially when wrapped in a class. The idea is that it simply stalls for as long as you tell it to; nothing periodic with a constant timeout. That seems ideal for a graphics loop, where you are constantly waiting a variable amount of time for the next vsync.
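For illustration, a minimal sketch of such a wrapper. I'm using std::this_thread::sleep_for as a portable stand-in for the blocking wait() of a boost::asio::deadline_timer, so the example compiles without boost; the class name is my own, not from the project:

```cpp
#include <chrono>
#include <thread>

// Minimal stand-in for a wrapped boost::asio::deadline_timer: the caller
// asks for a stall of a given length and the thread blocks, letting the
// CPU idle, until the deadline passes. With boost::asio, calling wait()
// on a deadline_timer gives the same blocking behaviour.
class StallTimer {
public:
    // Block the calling thread for roughly `ms` milliseconds.
    void stall(int ms) {
        std::this_thread::sleep_for(std::chrono::milliseconds(ms));
    }
};
```

The boost version would simply set expires_from_now on the timer and call wait() inside stall().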

I did some benchmarks with the deadline timer, and found that, if I request a change in the OS's scheduling time (to about 1 or 2 ms), the performance of the deadline timer is excellent and consistent. My CPU is practically completely asleep and the time intervals are as requested.

Then I implemented the timer in my graphics loop. Basically, my project creates a separate thread for OpenGL to work in. This thread then runs a loop which does the following in order:
- Lock access to the scene data
- Render the scene data
- Unlock access to the scene data
- Wait for the next vsync (expected after 1/60 s, or a multiple if rendering took a long time)
- Swap the buffers
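The waiting step above can be sketched as a small helper (names are mine, not from the project): it computes the deadline as the next multiple of the frame period, so a slow frame naturally waits for a later vsync instead of an already-missed one.

```cpp
#include <chrono>

using Clock = std::chrono::steady_clock;

// Deadline for the "wait for the next vsync" step: the first multiple of
// `frame` strictly after `now`, measured from `start`. If rendering
// overran one or more intervals, this lands on the next upcoming vsync.
Clock::time_point nextVsync(Clock::time_point start,
                            Clock::time_point now,
                            Clock::duration frame) {
    auto elapsed   = now - start;
    auto intervals = elapsed / frame + 1;  // integer division rounds down
    return start + intervals * frame;
}
```

In the render thread, the loop would lock the scene mutex, render, unlock, then sleep_until(nextVsync(...)) before swapping the buffers.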

I then had some trouble with the Desktop Window Manager, but that's a different story. I actively disabled it and things started to work as expected again. :)

The result works absolutely fine. I tried moving a vertical line across the screen to check for tearing and frame skipping, and found neither was present at all. The main problem was that Task Manager showed my CPU at 50% (dual core), while further testing showed the timer was certainly doing a lot of stalling.

I tried running a profiler on my program (the one from Visual Studio 2010) to see what my CPU was so busy with. The profiler showed that most of my time was indeed spent inside sleep(), and it reported a lot of idle time. Compared to skipping the timer's wait, which makes waiting for vsync consume the full CPU, the difference was quite clear.

This finally brings me to my question: how can I tell whether my program really allows my CPU to sleep, or whether it occupies it completely? I suspect the profiler is right, but why can't Task Manager see this?
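One way to answer this independently of both tools is to compare the process's CPU time with wall-clock time across an idle wait. A sketch, assuming a POSIX system where std::clock() reports CPU time (on Windows, clock() reports wall time, so GetProcessTimes would be needed there instead):

```cpp
#include <chrono>
#include <ctime>
#include <thread>
#include <utility>

// Compare process CPU time against wall-clock time across an idle wait.
// A thread that truly sleeps accumulates far less CPU time than wall
// time; a busy-wait accumulates roughly as much CPU time as wall time.
std::pair<double, double> cpuVsWall(int sleepMs) {
    std::clock_t c0 = std::clock();
    auto w0 = std::chrono::steady_clock::now();

    std::this_thread::sleep_for(std::chrono::milliseconds(sleepMs));

    double cpu  = double(std::clock() - c0) / CLOCKS_PER_SEC;
    double wall = std::chrono::duration<double>(
                      std::chrono::steady_clock::now() - w0).count();
    return {cpu, wall};  // {CPU seconds, wall seconds}
}
```

If the first number stays near zero while the second matches the wait, the CPU really is sleeping, whatever Task Manager claims.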

I'm also open to any suggestions regarding my approach. If you have any feedback or questions, let me know.

Thanks in advance, even if just for reading.

[Edited by - Ignifex on October 31, 2010 12:26:05 PM]

Feedback on RPG styled music

26 February 2010 - 04:17 AM

Hi all, I'm posting some of my recent musical work here for you to enjoy and, hopefully, to receive some feedback. Criticism is very welcome, but I would also be happy simply to hear what you think or whether you liked it. This is just a selection of my recent work, most of them related to games in their own way. Most of the pieces aren't exactly finished, mainly since they just end at some point, but they should provide enough to satisfy most of you.
  • The Duke's Command
  • Into The Arena
  • Foreseeing The Future
  • The Greatest Trial To Face

Let me know what you think. I hope you enjoy it. :)

Regards,
Ignifex

[Edited by - Ignifex on March 4, 2010 9:42:41 AM]

gameloop timing with SwapBuffers

12 October 2008 - 05:00 AM

Hello,

I recently started looking into timers for my game loop, using the WinAPI performance counters and the multimedia timer together with OpenGL. Using timers seems to be the neat way to build a game loop. My timer posts a message to my window every 1 ms. With every message, which I call a tick, I update my game logic and perform rendering or buffer swapping as needed.

The problem I am currently running into is finding the right time to swap my buffers when VSync is enabled. As I understand it so far, when VSync is enabled, SwapBuffers blocks until the hardware has done the swap, which is when the display device is ready to receive the new video information. Therefore, I should be able to record the time right after my last call to SwapBuffers and approximate the next swap time by adding 1/60 s. I then subtract about 1-2 ms to ensure no frame is missed and use that value to decide when to call SwapBuffers again.

This runs just fine for about 2 seconds, after which my loop starts spending more and more time in SwapBuffers, even though I recompute the time with every call. After a while, every tick ends up calling SwapBuffers, dropping the number of ticks per second from 1000 to 60.

Is there something I don't know about performance counters, multimedia timers or the SwapBuffers function that might cause this behaviour? I am quite sure my implementation of the idea is correct. Increasing the 1-2 ms margin to 3 ms makes it slow down more quickly, probably since more time is spent on the swap call. I have also tried approximating the time of the next buffer swap by simply incrementing by 1/60 s, instead of reading the time after the swap and adding 1/60 s. The behaviour is the same, although it stays stable for longer.

Any ideas are appreciated. Thank you in advance and sorry for the long post.

Ignifex

[Edited by - Ignifex on October 13, 2008 7:07:55 AM]
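The scheduling rule described above can be written out as a small helper (a sketch with assumed constants, not the poster's actual code): predict the next vsync from the last completed swap, then aim a margin early.

```cpp
#include <chrono>

using Clock = std::chrono::steady_clock;

// The scheme described above: predict the next vsync as the time of the
// last completed SwapBuffers call plus one frame period, and schedule
// the next SwapBuffers call a small margin early so no frame is missed.
Clock::time_point nextSwapCallTime(Clock::time_point lastSwap) {
    constexpr auto frame  = std::chrono::microseconds(16667); // ~1/60 s
    constexpr auto margin = std::chrono::milliseconds(2);     // call early
    return lastSwap + frame - margin;
}
```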

Frustum deprecated

17 August 2008 - 02:09 AM

Hi,

I've been playing around with OpenGL 3.0 a bit since its release, trying to work my way around the deprecated features. I understand that the fixed pipeline is practically deprecated, so everything should be drawn using shaders. One thing I stumbled upon is the deprecation of the projection and modelview matrices, including the Frustum and Translate/Rotate calls.

What is now the best way to get the view transform matrix (modelview * projection) into the vertex shader? I know how to compute it myself, but having to do so wouldn't be very user-friendly.

Regards,
Ignifex
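For reference, one common approach: with the matrix stack deprecated, the matrices are built on the CPU and their product uploaded as a uniform with glUniformMatrix4fv. The matrix that glFrustum used to produce is easy to reproduce by hand; a sketch, with parameter names following the glFrustum documentation:

```cpp
#include <array>

// Rebuilds the matrix that the deprecated glFrustum call produced,
// stored column-major as glUniformMatrix4fv expects by default.
// l/r/b/t are the left/right/bottom/top clip planes at the near plane,
// n/f the near and far distances.
std::array<float, 16> frustum(float l, float r, float b, float t,
                              float n, float f) {
    std::array<float, 16> m{};  // zero-initialised
    m[0]  = 2 * n / (r - l);
    m[5]  = 2 * n / (t - b);
    m[8]  = (r + l) / (r - l);
    m[9]  = (t + b) / (t - b);
    m[10] = -(f + n) / (f - n);
    m[11] = -1.0f;
    m[14] = -2 * f * n / (f - n);
    return m;
}
```

After multiplying with a hand-built modelview matrix, something like glUniformMatrix4fv(mvpLocation, 1, GL_FALSE, m.data()) uploads the result (mvpLocation being a hypothetical uniform location).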

Music for RPG

01 September 2005 - 07:19 AM

Hi,

I'm writing the music for a singleplayer RPG and would like to know what people think. My aim is mainly orchestral or simpler songs. So here's the list:
  • Bordura (village theme)
  • Battle Theme (updated)
  • Different (sad theme)
  • The Council (working on this one)
  • Travel (theme)
  • Travel 2 (military travelling theme)
  • Ending Theme (FMV, sad)
  • Under Siege (FMV war music)
  • Ancient War (FMV music, possible for intro)

So, how is it?

[Edited by - Ignifex on October 29, 2005 4:00:28 PM]