
sleep c++


Well, I have looked up the sleep() function on the net but have not found an answer. All I want to know is which #include directive I need in order to use the sleep function. I have tried dos.h and thread, but they do not work. I know this is a very simple question, but like I said, I have already done some research, with no luck.

What platform are you on? sleep is not a standard C++ function.

If you're on Windows, you could try reading the MSDN page for the Win32 Sleep function, which explicitly tells you which header to #include and which Win32 library you need to link against: https://msdn.microsoft.com/en-us/library/windows/desktop/ms686298(v=vs.85).aspx
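
Roughly, a minimal sketch of what that page describes, assuming a Windows build and a one-second pause as the goal:

// Win32 Sleep, declared in windows.h (note the capital S).
#include <windows.h>

int main()
{
    Sleep(1000);   // suspend the calling thread for roughly 1000 milliseconds
    return 0;
}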


The sleep() function is not very accurate.  Whatever you're doing, you're probably better off using OpenGL's timer functions.


OpenGL itself is purely a graphics library; it doesn't have timing functionality. Are you referring to glutTimerFunc, which, like GLUT itself, is deprecated?


Are you referring to glutTimerFunc, which like glut itself is deprecated?

I was. I didn't realize it had been deprecated, as I never work with OpenGL. I guess some other cross-platform, high-precision timer library then.

There is no standard solution, and as above, a high-precision wait is not part of C++. The language offers a low-precision wait whose behaviour is OS-specific and compiler-specific, but there are no high-precision guarantees.

Sleep, or stopping execution for a brief time, is an operating-system-specific action. The granularity is fairly rough, with the best results being on a specialized "real-time operating system", which you are almost certainly not running. On most OSs it is easy to ask to be suspended and then resumed at the next scheduling cycle, which may be 10, 20, 50, or more milliseconds away. On Windows specifically, Sleep() might not wake your process at the next scheduler update; it may be two, three, or ten updates later, or even longer, depending on what is going on with the rest of the system.

One of the most reliable ways to get a short-term delay is by "spinning": doing an operation over and over very quickly until the time passes. This is generally not recommended: the faster the system, the more compute cycles are blocked and the more energy is wasted (particularly bad on laptops), and the process is more likely to be demoted in priority because the task scheduler will detect that it is slowing down other processes. Very reliable, but with many negative side effects.
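
To make the idea concrete, here is a minimal sketch of a spin wait using the standard clock; it is shown only for illustration, not as a recommendation, for the reasons above:

// Busy-wait ("spin") until the requested duration has elapsed.
// Burns a full CPU core the entire time, e.g. spin_wait(std::chrono::microseconds(320)).
#include <chrono>

void spin_wait(std::chrono::microseconds duration)
{
    const auto deadline = std::chrono::steady_clock::now() + duration;
    while (std::chrono::steady_clock::now() < deadline)
    {
        // do nothing; just keep checking the clock
    }
}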

The next most reliable approach is to wait for an event from something else. It can be done by firing off an operation and then waiting for the system to signal completion, a "blocking operation". Games do this all the time. One of the easiest in games is to wait for the graphics frame buffer to be flipped. Since the game can't really draw anything else until the frame is done, it works well: draw the next frame, then wait. Other blocking operations are network and disk commands. Sometimes games will launch multiple tasks and have each one block on a different thing. A server might monitor a pool of perhaps 50 network connections and wake either when the regular scheduler wakes it up or when network data is ready on any of the connections. However, since we're not talking about real-time operating systems, even these operations tend to slip toward suspending too long. You might have a reason to sleep 320 microseconds but not wake up until 15,000 microseconds have passed.
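
As a generic illustration of "wait for an event from something else" (not any specific game loop or OS primitive), here is a minimal sketch using a standard condition variable: the waiting thread blocks until another thread signals it or a timeout expires. The names, the 100 ms of simulated work, and the one-second timeout are placeholders.

// One thread blocks on a condition variable; another thread signals it.
// The waiter wakes when signalled or when the timeout expires, whichever comes first.
#include <chrono>
#include <condition_variable>
#include <cstdio>
#include <mutex>
#include <thread>

std::mutex m;
std::condition_variable cv;
bool data_ready = false;

void producer()
{
    std::this_thread::sleep_for(std::chrono::milliseconds(100)); // simulate work
    {
        std::lock_guard<std::mutex> lock(m);
        data_ready = true;
    }
    cv.notify_one();   // wake the waiting thread
}

int main()
{
    std::thread t(producer);

    std::unique_lock<std::mutex> lock(m);
    const bool woken = cv.wait_for(lock, std::chrono::seconds(1), [] { return data_ready; });
    std::printf("%s\n", woken ? "event received" : "timed out");

    t.join();
    return 0;
}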

The exact commands and methods for waiting on events depend on the operating system you are using and how precise you plan to be. For games, graphics page flipping and certain network events are the typical go-to devices.
(Sorry for the accidental downvote lennylen, mouse slipped. Upvoted the other reply, which is good, and hopefully the two will cancel out. I hope the upcoming site update will allow cancelling accidental votes.)


I am going to take lenny's advice and use glutTimerFunc(). I know it is deprecated, but it might still work for me. I tried the windows.h and synchapi.h include files, but the sleep() function still does not work. Thanks for all the help.


I am going to take lenny's advice and use glutTimerFunc(). I know it is deprecated, but it might still work for me. I tried the windows.h and synchapi.h include files, but the sleep() function still does not work. Thanks for all the help.


If you're using C++, then you should be using what Ryan_001 said: std::this_thread::sleep_for has been part of the standard library since C++11. You don't need an outside API for this.
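
A minimal sketch of that, assuming a C++11 (or later) compiler; the only headers involved are <thread> and <chrono>:

// Standard C++11 sleep: no OS-specific headers required.
#include <chrono>
#include <thread>

int main()
{
    std::this_thread::sleep_for(std::chrono::milliseconds(1000));
    return 0;
}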

Posted (edited)
For Windows:
#include <windows.h>
VOID WINAPI Sleep(DWORD Milliseconds);


For Linux:
#include <unistd.h>
unsigned int sleep(unsigned int Seconds);

Notice, though, that they take different arguments: on Windows it's in milliseconds, and on Linux it's in whole seconds.
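
To illustrate that difference in units, a minimal cross-platform wrapper might look like this; it is only a sketch, and the name sleep_seconds is made up for the example:

// Hypothetical helper: pause for a whole number of seconds on either platform.
#ifdef _WIN32
#include <windows.h>
void sleep_seconds(unsigned int s) { Sleep(s * 1000); }  // Windows Sleep takes milliseconds
#else
#include <unistd.h>
void sleep_seconds(unsigned int s) { sleep(s); }         // POSIX sleep takes whole seconds
#endif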

 

In general, on Windows, if you just need to keep the thread from blocking other processes, use Sleep(0), which returns control to the calling thread immediately if no other threads or processes are waiting. Any value higher than 0 guarantees that control does not return to the calling thread PRIOR to that time elapsing, but there is no guarantee of it happening at exactly that time. So, for example, Sleep(1000); would not return before 1000 milliseconds had elapsed, but it could very well not return for 5 seconds, or 5 days. If you actually want to wait a specified time, e.g. to be laptop-battery friendly, you should instead use timer functions such as

UINT_PTR WINAPI SetTimer(...);

https://msdn.microsoft.com/en-us/library/windows/desktop/ms644906(v=vs.85).aspx

 

These can have better resolution, and callback timers in particular are virtually guaranteed to execute at the specified interval except on the most heavily overloaded systems.
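
For illustration, here is a minimal sketch of a callback timer via SetTimer in a console program that pumps a message loop; the 100 ms interval and the ten-shot limit are arbitrary choices for the example:

// Windows callback timer: SetTimer posts WM_TIMER to this thread's queue,
// and DispatchMessage invokes OnTimer for each one.
#include <windows.h>
#include <cstdio>

static int g_fired = 0;

static void CALLBACK OnTimer(HWND, UINT, UINT_PTR, DWORD tickCount)
{
    std::printf("timer fired, tick count %lu\n", static_cast<unsigned long>(tickCount));
    if (++g_fired == 10)
        PostQuitMessage(0);        // makes GetMessage return 0, ending the loop
}

int main()
{
    UINT_PTR timerId = SetTimer(nullptr, 0, 100, OnTimer);   // ~100 ms interval

    MSG msg;
    while (GetMessage(&msg, nullptr, 0, 0) > 0)
    {
        TranslateMessage(&msg);
        DispatchMessage(&msg);
    }

    KillTimer(nullptr, timerId);
    return 0;
}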

Edited by Cwhizard

Posted (edited)

POSIX has usleep, which can sleep for microseconds, although if the sleep is very short it's probably just a pause loop.
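
A minimal sketch of usleep, for reference (the 500-microsecond value is arbitrary):

// POSIX microsecond sleep, declared in unistd.h.
#include <unistd.h>

int main()
{
    usleep(500);   // request a 500-microsecond sleep; the actual time may be longer
    return 0;
}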

If you are using a non-decade-old compiler you'll also have access to:

std::this_thread::sleep_for(std::chrono::microseconds(usec));

in C++, which will be portable, if that means anything to you.

Edited by Kaptein

1 hour ago, phil67rpg said:

actually I used windows.h and it works just fine

For future reference, when you're asking a question like this, it's best to tell us what you're trying to do. You'll get much better answers that way.

12 hours ago, phil67rpg said:

is there a difference between how sleep() and this_thread::sleep_for(chrono::microseconds(1000)) works?

In theory, they could share the same low-level sleeping mechanism and behave in exactly the same way, they could be completely unrelated, or they could be similar except for some details or special cases that may or may not be relevant to you.

In practice, try both and measure actual sleeping time accuracy, on every platform you care about: sleeping is important enough to deserve some testing effort.
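
For example, a minimal sketch of such a measurement using std::chrono::steady_clock; the 1 ms request and the ten iterations are arbitrary:

// Time how long sleep_for(1 ms) actually blocks. Results will vary with the OS,
// the scheduler settings, and background load.
#include <chrono>
#include <cstdio>
#include <thread>

int main()
{
    using namespace std::chrono;

    for (int i = 0; i < 10; ++i)
    {
        const auto start = steady_clock::now();
        std::this_thread::sleep_for(milliseconds(1));
        const auto elapsed = duration_cast<microseconds>(steady_clock::now() - start);
        std::printf("requested 1000 us, slept %lld us\n",
                    static_cast<long long>(elapsed.count()));
    }
    return 0;
}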

14 hours ago, phil67rpg said:

is there a difference between how sleep() and this_thread::sleep_for(chrono::microseconds(1000)) works?

https://msdn.microsoft.com/en-us/library/hh874757.aspx says: 

Quote

The implementation of steady_clock has changed to meet the C++ Standard requirements for steadiness and monotonicity. steady_clock is now based on QueryPerformanceCounter() and high_resolution_clock is now a typedef for steady_clock. As a result, in Visual C++ steady_clock::time_point is now a typedef for chrono::time_point<steady_clock>; however, this is not necessarily the case for other implementations.

Which is good, because QueryPerformanceCounter (QPC) is the best timer on a Windows system: it is quite stable and has sub-microsecond precision. If you're curious about the details, see this link: https://msdn.microsoft.com/en-us/library/windows/desktop/dn553408(v=vs.85).aspx. How exactly sleep_for() and sleep_until() are implemented, I'm not sure.
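
For reference, a minimal sketch of timing an interval with QueryPerformanceCounter/QueryPerformanceFrequency directly (the Sleep(1) call is just something to measure):

// Windows-only: measure an interval with the high-resolution performance counter.
#include <windows.h>
#include <cstdio>

int main()
{
    LARGE_INTEGER freq, start, end;
    QueryPerformanceFrequency(&freq);   // counts per second, fixed at boot
    QueryPerformanceCounter(&start);

    Sleep(1);                           // the interval being measured

    QueryPerformanceCounter(&end);
    const double us = (end.QuadPart - start.QuadPart) * 1000000.0 / freq.QuadPart;
    std::printf("Sleep(1) actually took %.1f microseconds\n", us);
    return 0;
}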

Sleep(), we do know, uses the internal system clock interrupt, which fires every few milliseconds (7-15 ms on most systems, from what I understand).

Without some benchmarking I can't say for sure, but I'm fairly confident that sleep_for() and sleep_until() will be at least as accurate as Sleep(), and possibly better.


So I threw together a little test benchmark.  On my system (Windows 7, Intel i7) both sleep_for() and Sleep() were identical.

When VS (and a few other programs in the background) were open, I was getting 1 ms accuracy for both (+/- 10 us or so), so both performed well. With everything closed, though, that number jumped up and would vary between 5 ms and 15 ms; but whatever the number was for sleep_for(), Sleep() was identical. So either VS or some other program in the background was calling timeBeginPeriod(1). Either way, under the hood, sleep_for() appears to be identical to Sleep() on Windows 7 with VS 2017.


On Windows, the task scheduler wakes processes up from Sleep at 15.6 ms intervals. Every 15.6 milliseconds it runs through the list and wakes them in order of priority. This has been the case for decades, and an enormous body of programs relies on that.

On 6/25/2017 at 10:16 AM, Ryan_001 said:

On my system (Windows 7, Intel i7) both sleep_for() and Sleep() were identical. ... I was getting 1ms accuracy for both (+/- 10us or so).

The only way that should be true is if you happened to sleep for exactly the amount remaining until the next wake interval, or if you (or some other process) altered the system's scheduling frequency. While it is possible to adjust the scheduler's frequency to 1 ms, doing so has serious negative implications, such as greatly increasing power consumption and CPU load since other processes are run 15x more often, and it can break applications that expect the standard frequency.
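
The usual mechanism for that adjustment, shown here only to make the discussion concrete and not as a recommendation, is timeBeginPeriod/timeEndPeriod from the multimedia timer API:

// Raising the system timer resolution to 1 ms for part of a program.
// Link against winmm.lib; every timeBeginPeriod must be paired with a timeEndPeriod.
#include <windows.h>
#include <mmsystem.h>
#pragma comment(lib, "winmm.lib")

int main()
{
    timeBeginPeriod(1);   // request 1 ms timer granularity (affects the whole system)

    Sleep(1);             // now tends to wake after ~1 ms instead of ~15.6 ms

    timeEndPeriod(1);     // restore the previous granularity
    return 0;
}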

Google Chrome did that for a time, and it had some enormous backlash due to how it broke other systems. Back in late 2012 a creative Chrome developer realized that if he dropped the system's scheduler frequency to 1ms then Chrome would respond much faster to certain events. It was listed as a low-priority bug, until someone did some research that hit global media. Turns out the bug cost about 10 megawatts continuously, hurt laptop battery drain rates by 10%-40% depending on battery type, reduced available CPU processing generally by 2.5% to 5%, and broke a long list of applications.  After enormous backlash they realized that they needed to fix the bug, and finally did so.

If your Sleep on Windows is not waking at a 15.6 millisecond interval, then you need to fix your code so it works with that value.

On 6/25/2017 at 11:16 AM, Ryan_001 said:

With everything closed though that number jumped up and would vary between 5ms and 15ms; but whatever the number was for sleep_for(), Sleep() was identical.  So either VS or some other program in the background was calling timeBeginPeriod(1).

The benchmark wasn't to verify the task scheduler on Windows; it was to determine whether sleep_for() did the same thing as Sleep() or something different. I found that at all times, with background programs open or closed, and regardless of the sleep amount, sleep_for() and Sleep() returned identical results.

