phil67rpg

sleep c++


phil67rpg    443

Well, I have looked up the sleep() function on the net but have not found an answer. All I want to know is which #include directive I need in order to use sleep(). I have tried dos.h and thread but they do not work. I know this is a very simple question, but like I said, I have already done some research, with no luck.

Oberon_Command    6075
What platform are you on? sleep is not a standard C++ function.

If you're on Windows, you could try reading the MSDN page for the Win32 Sleep function, which explicitly tells you which header to #include and which Win32 library you need to link against: https://msdn.microsoft.com/en-us/library/windows/desktop/ms686298(v=vs.85).aspx

Oberon_Command    6075

The sleep() function is not very accurate.  Whatever you're doing, you're probably better off using OpenGL's timer functions.


OpenGL itself is purely a graphics library; it doesn't have timing functionality. Are you referring to glutTimerFunc, which, like GLUT itself, is deprecated?

LennyLen    5715

Are you referring to glutTimerFunc, which, like GLUT itself, is deprecated?

I was. I didn't realize it had been deprecated, as I never work with OpenGL. I guess some other cross-platform high-precision timer library, then.

frob    44904
There is no standard solution, and as noted above, a high-precision wait is not part of C++. The language offers a low-precision wait whose behavior is OS-specific and compiler-specific, with no high-precision guarantees.

Sleep, or stopping execution for a brief time, is an operating-system-specific action. The granularity is fairly rough; the best results are on a special "real-time operating system", which you are almost certainly not using. On most OSes it is easy to ask to be suspended and then resumed at the next scheduling cycle, which may be 10, 20, 50, or more milliseconds away. On Windows specifically, Sleep() might not even wake your process at the next scheduler update; it may take two, three, or ten updates, or even longer, depending on what is going on with the rest of the system.

One of the most reliable ways to get a short-term delay is by "spinning": doing an operation over and over very quickly until the time passes. This is generally not recommended, since the faster the system, the more compute cycles are burned and the more energy is wasted (particularly bad on laptops); in addition, the process is more likely to be demoted in priority, since the task scheduler will detect that it is slowing down other processes. Very reliable, but with many negative side effects.
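A minimal sketch of such a spin loop in portable C++ (the helper name spin_wait is just for illustration; a real one would usually also issue a CPU pause hint inside the loop):

```cpp
#include <chrono>

// Busy-wait ("spin") until the requested duration has elapsed.
// Burns CPU cycles the entire time -- reliable, but wasteful,
// exactly as described above.
void spin_wait(std::chrono::microseconds d) {
    const auto deadline = std::chrono::steady_clock::now() + d;
    while (std::chrono::steady_clock::now() < deadline) {
        // intentionally empty: keep polling the clock until the deadline
    }
}
```

In practice a spin loop often mixes in a pause or yield hint (such as std::this_thread::yield) to be slightly friendlier to hyperthread siblings and the scheduler.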

The next most reliable approach is to wait for an event from something else. This can be done by firing off a delayed operation and then waiting for the system to return, called a "blocking operation". Games do this all the time. One of the easiest in games is to wait for the graphics frame buffer to be flipped. Since the game can't really draw anything else until the frame is done, this works well: draw the next frame, then wait. Other blocking operations are network and disk commands. Sometimes games will launch multiple tasks and have each one block on a different thing. A server might monitor a pool of perhaps 50 network connections and wake either when the regular scheduler wakes it up or when network data is ready on any of the connections. However, since we're not talking about real-time operating systems, even these operations tend to slip toward suspending too long. You might have a reason to sleep 320 microseconds but not wake up until 15,000 microseconds have passed.

The exact commands and methods for waiting on events depend on the operating system you are using and how precise you need to be. For games, graphics page flipping and certain network events are the typical go-to devices.
(Sorry for the accidental downvote lennylen, mouse slipped. Upvoted the other reply, which is good, and hopefully the two will cancel out. I hope the upcoming site update will allow cancelling accidental votes.)

phil67rpg    443

I am going to take lenny's advice and use glutTimerFunc(). I know it is deprecated, but it might still work for me. I tried the windows.h and synchapi.h include files, but it still does not work with the sleep() function. Thanks for all the help.

slicer4ever    6760

I am going to take lenny's advice and use glutTimerFunc(). I know it is deprecated, but it might still work for me. I tried the windows.h and synchapi.h include files, but it still does not work with the sleep() function. Thanks for all the help.


If you're using C++, then you should be using what Ryan_001 said: std::this_thread::sleep_for has been part of the standard library since C++11. You don't need an outside API for this.
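A minimal portable sketch of that (the wrapper name sleep_ms is hypothetical; the std::this_thread::sleep_for call is the whole point):

```cpp
#include <chrono>
#include <thread>

// Standard C++11 sleep: suspends the calling thread for at least the
// requested duration. Like Sleep(), the OS may wake it later than asked,
// but never earlier.
void sleep_ms(int ms) {
    std::this_thread::sleep_for(std::chrono::milliseconds(ms));
}
```

No platform headers, no linker flags: just <thread> and <chrono>, on any conforming compiler.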

Cwhizard    100
For Windows:
#include <windows.h>
VOID WINAPI Sleep(DWORD Milliseconds);


For Linux:
#include <unistd.h>
unsigned int sleep(unsigned int seconds);

Notice, though, that they take different arguments: on Windows it's in milliseconds, and on Linux it's in whole seconds.

 

In general, on Windows, if you just need to keep the thread from starving other threads, use Sleep(0), which yields the remainder of the time slice and returns immediately if no other threads of equal priority are ready to run. Any value higher than 0 guarantees that control does not return to the calling thread PRIOR to that time elapsing, but there is no guarantee of it happening at exactly that time. So e.g. Sleep(1000) would not return before 1000 milliseconds had elapsed, but it could very well not return for 5 seconds, or 5 days. If you want to actually wait a specified time, e.g. to be laptop-battery friendly, you should instead use timer functions such as

UINT_PTR WINAPI SetTimer(...);

https://msdn.microsoft.com/en-us/library/windows/desktop/ms644906(v=vs.85).aspx

 

These can have better resolution, and callback timers in particular are likely to fire close to the specified interval, except on the most heavily loaded systems.

Kaptein    2224

POSIX has usleep(), which can sleep for microseconds, although if the sleep is very short it's probably implemented as a pause loop.

If you are using a compiler that isn't a decade old, you'll also have access to:

std::this_thread::sleep_for(std::chrono::microseconds(usec));

in C++, which is portable, if that means anything to you.

LennyLen    5715
1 hour ago, phil67rpg said:

actually I used windows.h and it works just fine

For future reference, when you're asking a question like this, it's best to tell us what you're trying to do. You'll get much better answers that way.

LorenzoGatti    4442
12 hours ago, phil67rpg said:

is there a difference between how sleep() and this_thread::sleep_for(chrono::microseconds(1000)) works?

In theory, they could share the same low-level sleeping mechanism and behave in exactly the same way, they could be completely unrelated, or they could be similar except for some details or special cases that may or may not be relevant to you.

In practice, try both and measure the actual sleeping-time accuracy on every platform you care about: sleeping is important enough to deserve some testing effort.
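A quick sketch of such a measurement with the standard library (measure_sleep_us is an illustrative helper, not a library function; swap the sleep_for call for Sleep(), usleep(), or whatever else you want to compare):

```cpp
#include <chrono>
#include <thread>

// Measure how long a requested sleep actually takes, in microseconds.
// Run this for each sleep mechanism and each platform you care about,
// and compare the returned value against requested_us.
long long measure_sleep_us(long long requested_us) {
    const auto t0 = std::chrono::steady_clock::now();
    std::this_thread::sleep_for(std::chrono::microseconds(requested_us));
    const auto t1 = std::chrono::steady_clock::now();
    return std::chrono::duration_cast<std::chrono::microseconds>(t1 - t0).count();
}
```

Expect the measured value to be at or above the request, often by several milliseconds at default scheduler granularity.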

Ryan_001    3475
14 hours ago, phil67rpg said:

is there a difference between how sleep() and this_thread::sleep_for(chrono::microseconds(1000)) works?

https://msdn.microsoft.com/en-us/library/hh874757.aspx says: 


The implementation of steady_clock has changed to meet the C++ Standard requirements for steadiness and monotonicity. steady_clock is now based on QueryPerformanceCounter() and high_resolution_clock is now a typedef for steady_clock. As a result, in Visual C++ steady_clock::time_point is now a typedef for chrono::time_point<steady_clock>; however, this is not necessarily the case for other implementations.

Which is good, because QPC is the best timer on a Windows system; it is quite stable and has sub-microsecond precision. If you're curious about the details, see this link: https://msdn.microsoft.com/en-us/library/windows/desktop/dn553408(v=vs.85).aspx. How exactly sleep_for() and sleep_until() are implemented, I'm not sure.
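For reference, a tiny sketch of using std::chrono::steady_clock directly for interval timing (elapsed_seconds is just an illustrative helper):

```cpp
#include <chrono>

// steady_clock is monotonic: successive readings never go backwards,
// which is what you want for measuring intervals. Per the quote above,
// MSVC backs it with QueryPerformanceCounter.
double elapsed_seconds(std::chrono::steady_clock::time_point start) {
    return std::chrono::duration<double>(
        std::chrono::steady_clock::now() - start).count();
}
```

Use steady_clock (not system_clock) for benchmarking sleeps, since wall-clock adjustments would otherwise skew the measurement.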

We do know that Sleep() uses the internal system clock interrupt, which fires every few milliseconds (7-15 ms on most systems, from what I understand).

Without some benchmarking I can't say for sure, but I would expect sleep_for() and sleep_until() to be at least as accurate as Sleep(), and possibly better.

Ryan_001    3475

So I threw together a little test benchmark.  On my system (Windows 7, Intel i7) both sleep_for() and Sleep() were identical.

When VS (and a few other programs in the background) were open, I was getting 1 ms accuracy for both (+/- 10 us or so), so both performed well. With everything closed, though, that number jumped up and would vary between 5 ms and 15 ms; but whatever the number was for sleep_for(), Sleep() was identical. So either VS or some other program in the background was calling timeBeginPeriod(1). It seems that either way, under the hood, sleep_for() is identical to Sleep() on Windows 7 with VS 2017.

frob    44904

For Windows the task scheduler wakes processes up from Sleep at 15.6 ms intervals. Every 15.6 milliseconds it runs through the list and wakes them up in order of priority.  This has been the case for decades, and an enormous body of programs rely on that.

On 6/25/2017 at 10:16 AM, Ryan_001 said:

On my system (Windows 7, Intel i7) both sleep_for() and Sleep() were identical. ... I was getting 1ms accuracy for both (+/- 10us or so).

The only way that should be true is if you happened to sleep for exactly the time remaining until the next wake interval, or if you (or some other process) altered the system's scheduling frequency. While it is possible to adjust the scheduler's frequency to 1 ms, doing so has serious negative implications, such as greatly increasing power consumption and CPU load since processes are scheduled 15x more often, and it can break other applications that expect the standard frequency.

Google Chrome did that for a time, and there was enormous backlash over how it broke other systems. Back in late 2012 a creative Chrome developer realized that if he dropped the system's timer interval to 1 ms, Chrome would respond much faster to certain events. It was listed as a low-priority bug until someone did some research that hit global media. It turned out the bug cost about 10 megawatts continuously, hurt laptop battery drain rates by 10%-40% depending on battery type, reduced available CPU processing by roughly 2.5% to 5%, and broke a long list of applications. After the enormous backlash they accepted that they needed to fix the bug, and finally did so.

If your Sleep on Windows is not waking at a 15.6 millisecond interval, then you need to fix your code so it works with that value.

Ryan_001    3475
On 6/25/2017 at 11:16 AM, Ryan_001 said:

With everything closed though that number jumped up and would vary between 5ms and 15ms; but whatever the number was for sleep_for(), Sleep() was identical.  So either VS or some other program in the background was calling timeBeginPeriod(1).

The benchmark wasn't to verify the task scheduler on Windows; it was to determine whether sleep_for() does the same thing as Sleep() or something different. I found that at all times, background programs open or closed, regardless of the sleep amount, sleep_for() and Sleep() returned identical results.

