# sleep c++

## Recommended Posts

well I have looked up the sleep() command on the net but have not found an answer. All I want to know is which #include directive I need in order to use the sleep command. I have tried dos.h and thread but they do not work. I know this is a very simple question, but like I said, I have already done some research with no luck.

##### Share on other sites
What platform are you on? sleep is not a standard C++ function.

If you're on Windows, you could try reading the MSDN page for the Win32 Sleep function, which explicitly tells you which header to #include and which Win32 library you need to link to: https://msdn.microsoft.com/en-us/library/windows/desktop/ms686298(v=vs.85).aspx

##### Share on other sites

The sleep() function is not very accurate.  Whatever you're doing, you're probably better off using OpenGL's timer functions.

##### Share on other sites

The sleep() function is not very accurate.  Whatever you're doing, you're probably better off using OpenGL's timer functions.

OpenGL itself is purely a graphics library, it doesn't have timing functionality. Are you referring to glutTimerFunc, which like glut itself is deprecated?

##### Share on other sites

Are you referring to glutTimerFunc, which like glut itself is deprecated?

I was. I didn't realize it had been deprecated as I never work with OpenGL. I guess some other cross-platform high precision timer library then.

##### Share on other sites
There is no standard solution, and as above, a high-precision wait is not part of C++. The language offers a low-precision wait whose behavior is OS-specific and compiler-specific, but there are no high-precision guarantees.

Sleep, or stopping execution for a brief time, is an operating-system-specific action. The granularity is fairly coarse, with the best results coming on a special "real-time operating system", which yours is not. On most OSes it is easy to ask to be suspended and then resumed at the next scheduling cycle, which may be 10, 20, 50, or more milliseconds away. On Windows specifically, Sleep() might not even wake your process at the specified update; it may take two or three or ten updates, or maybe even longer depending on what is going on with the rest of the system.

One of the most reliable ways to get a short-term delay is by "spinning": doing an operation over and over very quickly until the time passes. This is generally not recommended, since the faster the system, the more compute cycles are blocked and the more energy is wasted (particularly bad on laptops); the process is also more likely to be demoted in priority once the task scheduler detects it is slowing down other processes. Very reliable, but with many negative side effects.

The next most reliable approach is to wait for an event from something else. This can be done by firing off a delayed operation and then waiting for the system to return, called a "blocking operation". Games do this all the time. One of the easiest in games is to wait for the graphics frame buffer to be flipped. Since the game can't really draw anything else until the frame is done, it works well: draw the next frame, then wait. Other blocking operations are network and disk commands. Sometimes games will launch multiple tasks and have each one block on different things. A server might be monitoring a network pool of perhaps 50 connections, and wake either when the regular scheduler wakes it up or as an event when network data is ready on any of the connections. However, since we're not talking about real-time operating systems, even these operations tend to slip toward suspending too long. You might have a reason to sleep 320 microseconds but not wake up until 15,000 microseconds have passed.

The exact commands and methods for waiting on events depend on the operating system you are using and how precise you plan on being. For games, graphics page flipping and certain network events are the typical go-to devices.
(Sorry for the accidental downvote lennylen, mouse slipped. Upvoted the other reply, which is good, and hopefully the two will cancel out. I hope the upcoming site update will allow cancelling accidental votes.)

##### Share on other sites

I am going to take lenny's advice and use glutTimerFunc(). I know it is deprecated, but it might still work for me. I tried the windows.h and synchapi.h include files, but it still does not work with the sleep() function. Thanks for all the help.

##### Share on other sites

I am going to take lenny's advice and use glutTimerFunc(). I know it is deprecated, but it might still work for me. I tried the windows.h and synchapi.h include files, but it still does not work with the sleep() function. Thanks for all the help.

If you're using C++, then you should use what Ryan_001 said: std::this_thread::sleep_for has been part of the standard library since C++11. You don't need an outside API for this.

##### Share on other sites

Actually, since I am working with OpenGL and C++, I am going to use glutTimerFunc instead of sleep(). Thanks for the input.

##### Share on other sites
Posted (edited)
for windows -
#include <windows.h>
VOID WINAPI Sleep(DWORD Milliseconds);

for linux
#include <unistd.h>
unsigned int sleep(unsigned int Seconds);


Notice, though, that they take different arguments. On Windows it's in milliseconds, and on Linux it's in whole seconds.

In general, on Windows, if you just need to keep the thread from blocking other processes, use Sleep(0); this immediately returns control to the calling thread if no other threads or processes are waiting. Any value higher than 0 guarantees that control does not return to the calling thread PRIOR to that time elapsing, but there is no guarantee of it happening exactly at that time. So, e.g., Sleep(1000); would not return before 1000 milliseconds had elapsed, but it could very well not return for 5 seconds, or 5 days. If you want to actually wait a specified time, e.g. to be laptop-battery friendly, you should instead use timer functions such as

UINT_PTR WINAPI SetTimer(...);

https://msdn.microsoft.com/en-us/library/windows/desktop/ms644906(v=vs.85).aspx

These can have better resolution, and callback timers in particular are virtually guaranteed to execute at the specified interval except on the most heavily overloaded systems.

Edited by Cwhizard

##### Share on other sites
Posted (edited)

POSIX has usleep(), which can sleep for microseconds, although if the sleep is very short it's probably just a pause loop.

std::this_thread::sleep_for(std::chrono::microseconds(usec));

in C++, which is portable... if that means anything to you.

Edited by Kaptein

##### Share on other sites

If you want to use Sleep(), you need to include WinBase.h.

##### Share on other sites

Actually, I used windows.h and it works just fine.

##### Share on other sites
1 hour ago, phil67rpg said:

actually I used windows.h and it works just fine

For future reference, when you're asking a question like this, it's best to tell us what you're trying to do. You'll get much better answers that way.

##### Share on other sites

Is there a difference between how sleep() and this_thread::sleep_for(chrono::microseconds(1000)) work?

##### Share on other sites
12 hours ago, phil67rpg said:

Is there a difference between how sleep() and this_thread::sleep_for(chrono::microseconds(1000)) work?

In theory, they could share the same low-level sleeping mechanism and behave in exactly the same way, they could be completely unrelated, or they could be similar except for some details or special cases that might or might not be relevant to you.

In practice, try both and measure actual sleeping time accuracy, on every platform you care about: sleeping is important enough to deserve some testing effort.

##### Share on other sites
14 hours ago, phil67rpg said:

Is there a difference between how sleep() and this_thread::sleep_for(chrono::microseconds(1000)) work?

Quote

The implementation of steady_clock has changed to meet the C++ Standard requirements for steadiness and monotonicity. steady_clock is now based on QueryPerformanceCounter() and high_resolution_clock is now a typedef for steady_clock. As a result, in Visual C++ steady_clock::time_point is now a typedef for chrono::time_point<steady_clock>; however, this is not necessarily the case for other implementations.

Which is good, because QPC is the best timer on a Windows system; it is quite stable and has sub-microsecond precision. If you're curious as to the details, see this link: https://msdn.microsoft.com/en-us/library/windows/desktop/dn553408(v=vs.85).aspx. How exactly sleep_for() and sleep_until() are implemented, I'm not sure.

Sleep(), we do know, uses the internal system clock interrupt, which fires every few milliseconds (7-15 ms on most systems, from what I understand).

Without some benchmarking I can't say for sure, but I expect sleep_for() and sleep_until() to be at least as accurate as Sleep(), and possibly better.

##### Share on other sites

So I threw together a little benchmark. On my system (Windows 7, Intel i7), sleep_for() and Sleep() were identical.

With VS (and a few other programs in the background) open, I was getting 1 ms accuracy for both (+/- 10 us or so), so both performed well. With everything closed, though, that number jumped up and would vary between 5 ms and 15 ms; but whatever the number was for sleep_for(), Sleep() was identical. So either VS or some other program in the background was calling timeBeginPeriod(1). Either way, it seems that under the hood, sleep_for() is identical to Sleep() on Windows 7 with VS 2017.

##### Share on other sites

I learnt about sleep() in class too. If I remember correctly, you just plug in the time in ms.

You need #include <time.h>, if I remember correctly, lol, been a while.

##### Share on other sites

For Windows, the task scheduler wakes processes up from Sleep at 15.6 ms intervals: every 15.6 milliseconds it runs through the list and wakes them in order of priority. This has been the case for decades, and an enormous body of programs relies on that.

On 6/25/2017 at 10:16 AM, Ryan_001 said:

On my system (Windows 7, Intel i7) both sleep_for() and Sleep() were identical. ... I was getting 1ms accuracy for both (+/- 10us or so).

The only way that should be true is if you happened to sleep for exactly the time remaining before the next wake interval, or if you (or some other process) altered the system's scheduling frequency. While it is possible to adjust the scheduler's frequency to 1 ms, doing so has serious negative implications, such as greatly increased power consumption and CPU load as other processes run 15x more often, and it can break a number of other applications that expect the standard frequency.

Google Chrome did that for a time, and it drew enormous backlash due to how it broke other systems. Back in late 2012, a creative Chrome developer realized that if he raised the system's scheduler frequency to 1 ms, Chrome would respond much faster to certain events. It was listed as a low-priority bug until someone did some research that hit global media. It turned out the bug cost about 10 megawatts continuously, worsened laptop battery drain rates by 10%-40% depending on battery type, reduced available CPU processing generally by 2.5% to 5%, and broke a long list of applications. After the enormous backlash they realized that they needed to fix the bug, and finally did so.

If your Sleep on Windows is not waking at a 15.6 millisecond interval, then you need to fix your code so it works with that value.

##### Share on other sites
On 6/25/2017 at 11:16 AM, Ryan_001 said:

With everything closed though that number jumped up and would vary between 5ms and 15ms; but whatever the number was for sleep_for(), Sleep() was identical.  So either VS or some other program in the background was calling timeBeginPeriod(1).

The benchmark wasn't to verify the task scheduler on Windows; it was to determine whether sleep_for() did the same thing as Sleep() or something different. I found that at all times, with background programs open or closed, and regardless of the sleep amount, sleep_for() and Sleep() returned identical results.
