
### martinis_shaken

Posted 07 November 2013 - 06:50 AM

> How are you even measuring those 16-18 ms? I saw no timer calls in the code. If you use a timer with a granularity of only 1 ms, the start time can fall 1 µs before the timer ticks over and appear to add a whole millisecond, and the same can happen at the end.

I have edited the post above to show how long each frame takes, and yes, my timer granularity was not sufficient.

Below is the timer code I added. I also used `std::this_thread::sleep_for(std::chrono::nanoseconds(.....))` to sleep.

```cpp
#include <chrono>
#include <iostream>

std::chrono::time_point<std::chrono::system_clock> start, end;

start = std::chrono::system_clock::now();
// Do work
end = std::chrono::system_clock::now();

std::chrono::duration<double> elapsed_seconds = end - start;

// Convert seconds to milliseconds for display
double timer = elapsed_seconds.count() * 1000.0;
std::cout << "Frame Time: " << timer << " ms\n";
```



The frame time occasionally exceeds 16.66 ms, and that is with VSync enabled and nothing else throttling the loop.

When I disable VSync and instead sleep manually (using the aforementioned `this_thread::sleep_for`) for 16 ms, 16.66 ms, or 16.66666666 ms, I get the same jitter problem. Below is my sleep code:

```cpp
#include <chrono>
#include <iostream>
#include <thread>

if (timer < 16.66)
{
    // Remaining frame budget, converted from milliseconds to nanoseconds
    double t = (16.66666666 - timer) * 1000000.0;
    std::cout << "Sleeping for: " << (16.66666666 - timer) << " ms" << std::endl;
    std::this_thread::sleep_for(std::chrono::nanoseconds(static_cast<long long>(t)));
}
```



Here is the link to the source code, updated to include the frame timers:

https://github.com/martinisshaken/Sample-SDL2-OpenGL-Program

Again, though: whether VSync is enabled or not, whether it runs at 800 fps or 60 fps, and whether or not I add timers to control the pacing, every variant shows the same stuttering problem.
