
# OpenGL Stutter / Micro Stutter Even w/ VSync

## 35 posts in this topic

My game has an issue with micro stuttering. Every second or two the game "jumps" a little, as if a few frames are missed. There is no tearing of the screen, just a small pause and then the jump forward. The issue occurs whether the character is moving or not, scrolling or not (just more noticeable when scrolling), etc. Without fail, every second or two, the game will just jerk/jump/stutter.

The issue is similar to this post, though VSync does not fix the problem.

----------------------------------------
60 Frames displayed in 1 second
Longest logic time: 1ms
Longest render time: 3ms
Longest frame time: 18ms
----------------------------------------

This is a sample debug output - it prints, every second, the number of frames and how long each part took.
The logic handles the events, collisions, etc.
The render time measures how long it takes to draw all the objects on the screen.

----------------------------------------
783 Frames displayed in 1 second
Longest logic time: 1ms
Longest render time: 3ms
Longest frame time: 4ms
----------------------------------------

The two outputs above show VSync enabled and disabled, respectively. I have a very high frame rate and the game never has a spike in processing. The rendering is a steady 3-5ms and the logic is always 1ms. The longest frame never exceeds 5ms unless VSync is enabled, in which case each one is 16-18ms.

I started out with SDL 1.2 and everything ran fine. I decided to implement OpenGL for more control and a better frame rate, and then the stuttering began. I thought it might be an issue with SDL, so I upgraded to SDL2 - still no change in the stutter.

The code I use to start SDL, init GL, and load PNGs into textures I have rewritten 2-3 times each. Anything that displays to the screen I have rewritten at least twice.

I have taken all the code that is responsible for setting up OpenGL and SDL, loading an image from a PNG into a GLuint texture, and displaying it on the screen, and yanked it out. I have posted it on GitHub here: https://github.com/martinisshaken/Sample-SDL2-OpenGL-Program

This code takes a background tile, sticks it in the top left corner, and moves it to the bottom left corner. During the image's journey from corner to corner, you should be able to see the stutter occur a couple of times. Even this very basic example has the same stuttering problem.

I really, really appreciate your guys' help in this! It's the last hurdle to my engine working!

--------------------------------------------------------------------------
Systems:
Laptop with 2nd-generation Intel integrated graphics
Laptop with 1st-generation Intel integrated graphics
Desktop with an i7 920 and a Radeon 6870

OS:
Linux - Ubuntu 13.10, 12.04
Windows 7 (MinGW, but I have also tried VC++ and the issue persists)

Each box has a Ubuntu and a Windows 7 installation. Libraries and build environments are all synced.
All drivers are up to date, and all other OpenGL games run just fine.
Edited by martinis_shaken

##### Share on other sites

Found the exact same article. That guy's code was riddled with SDL_GetTicks() calls to cap the framerate, and he doesn't use frame-independent movement.

My code uses deltas to move the character and environment, and I have tried the framerate both capped by VSync and uncapped. Neither of these is responsible for the problem.
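By deltas I mean movement scaled by the frame time, roughly like this (a sketch - the names are placeholders, not my actual code):

float dt = frameTimeMs / 1000.0f;  // seconds elapsed since the last frame
player.x += player.velocityX * dt; // velocity is in px/s, so speed does not depend on framerate
player.y += player.velocityY * dt;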

I just tore out all the code for blitting an image to the screen and compiled it independently of my game. I set up a single image (a 320 x 320 background tile) and moved it across the screen at 1 px per frame @ 60 fps.

EXACT same problem.

The code literally loads an image, draws it with the above function, and just moves it 1px at a time... STILL stutters. It doesn't get simpler than that, and I don't understand it.


##### Share on other sites

Maybe the problem is with your timer source. Try running your code on a single CPU core (SetProcessAffinityMask on Windows) - if it still stutters, then it must be that the timer you're using is having sync problems across core switches. As you said, other games run fine, so the problem is probably not with OpenGL - it must be somewhere in your time-keeping code.
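On Windows that's a one-liner (a sketch; bit 0 of the mask selects the first core):

#include <windows.h>

SetProcessAffinityMask(GetCurrentProcess(), 0x1); // restrict the whole process to core 0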

Also, try to make sure that what you're seeing isn't tearing. If you enable VSync, you can eliminate that possibility.

And with VSync, you said that your "frame time" is somewhere between 16-18ms? On a 60Hz monitor, any frame that lasts longer than 1000/60 ≈ 16.7ms will be dropped or delayed, so I would also look into what is causing that.

In your second post, it's not clear what you mean by "1px at a time" - are you still using your timer in this case, or just drawing the image continuously? If you're just drawing continuously (no timer delays in between) then with VSync enabled, you shouldn't be getting any stuttering.

Also, are you loading the image every time you draw it, or just once?

If you try all this and it still doesn't work, then the problem must be external to your program - try to find out what other (background) programs are causing CPU spikes every 1-2 seconds.

Edited by tonemgub

##### Share on other sites

Can you post your program's main loop (i.e. where your timing functions run and where you draw the frame from)?

Also - can you try putting a glFinish before your SwapBuffers call and see if that resolves anything?
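Something like this (a sketch, assuming SDL2's SDL_GL_SwapWindow):

glFinish();                // block until the GPU has executed every queued command
SDL_GL_SwapWindow(window); // so the swap itself isn't stalled by late GPU work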


##### Share on other sites

Thank you for your post and for the help!

I too thought it might be a timer issue, which is why I ripped all the timers out in my sample program. The sample program simply takes the image and each frame moves its xPosition and yPosition by +1. Since it's capped by VSync at 60fps, it moves 60 pixels per second across the screen. There are no timers that cap the framerate; it relies solely on VSync.
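The whole loop is essentially this (paraphrased - render() and handleEvents() stand in for the real functions):

while (running)
{
    handleEvents();            // just the ESC check
    xPosition += 1;            // 1 px per frame, so 60 px/s under 60fps VSync
    yPosition += 1;
    render();
    SDL_GL_SwapWindow(window); // VSync blocks here - the only frame cap
}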

As for the 16-18ms, it usually shows 18ms as the time per frame. I don't understand why, as the only thing controlling this is VSync (and my monitor has a 60Hz refresh rate). So 16.6ms would seem right to me, but each frame seems to take 18ms. And this is with no timers, no frame limiting beyond VSync.

And when I do enable timers to delay, pause, nanosleep, etc., I wind up with the problem being amplified.

Also: the loading of the image occurs only once; I just tried setting the processor affinity to run only on the first core and the problem still persisted; and lastly, I reformatted yesterday with a fresh install of 13.10, killed all other running processes, and it still stutters.

https://github.com/martinisshaken/Sample-SDL2-OpenGL-Program

Here is the code I took out that just scrolls the background image across the screen. You may need to tweak the sconstruct's paths for it to build, as I made it for my systems' environments.

Thank you!

Edit:

I have run the program with high-precision timers using std::chrono from C++11 and I get the following output during jitter times:

Frame Time: 16.6016ms
Frame Time: 19.5782ms
Frame Time: 13.6319ms
Frame Time: 16.6807ms
Frame Time: 16.5073ms

The above is an extreme example, but there are definitely moments where the frame time goes above 16.66ms (see image):


Edited by martinis_shaken

##### Share on other sites

I have in fact tried glFinish(), as well as glFlush(), and neither has made a difference =/

Also, the GitHub link to all the code is posted above. Thank you!


Edited by martinis_shaken

##### Share on other sites

How are you even measuring those 16-18ms? I saw no timer calls in the code. If you use something with a granularity of only 1ms, the start time can easily fall 1µs before the timer updates and appear to add a whole millisecond, and the same at the end.


##### Share on other sites

How are you even measuring those 16-18ms? I saw no timer calls in the code. If you use something with a granularity of only 1ms, the start time can easily fall 1µs before the timer updates and appear to add a whole millisecond, and the same at the end.

Just edited the above post to show the amount of time the frames are taking, and yes, my timer granularity was not sufficient.

Below is the timer code I have added. I also used std::this_thread::sleep_for(std::chrono::nanoseconds(.....)) to sleep.

std::chrono::time_point<std::chrono::system_clock> start, end;

start = std::chrono::system_clock::now();
//Do work
end = std::chrono::system_clock::now();

std::chrono::duration<double> elapsed_seconds = end-start;

float timer = elapsed_seconds.count() * 1000;
std::cout<< "Frame Time: " << timer << "ms\n";



The frame-time is going higher than 16.66ms on occasion, and this is with just VSync turned on.

When I disable VSync and manually force the program to sleep (using the aforementioned this_thread::sleep_for) for 16ms, 16.66ms, or 16.66666666ms, I get the same jitter problem. Below is my sleep code:

if (timer < 16.66)
{
    // remaining frame budget, converted from milliseconds to nanoseconds
    float t = (16.66666666 - timer) * 1000000;
    std::cout << "Sleeping for: " << 16.66 - timer << " ms" << std::endl;
    std::this_thread::sleep_for(std::chrono::nanoseconds((long long)t));
}



https://github.com/martinisshaken/Sample-SDL2-OpenGL-Program

Here is the link to the source code - updated to have the frame timers

Again though: whether I have VSync enabled or not, whether it's 800fps or 60fps, and whether or not I implement timers to try to control the flow, they ALL have the same stuttering problem.

Edited by martinis_shaken

##### Share on other sites

If you skip the timer, use VSync, and lock the frame-time you use in your simulation to always be exactly 16.666666666667ms, does it still stutter?
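In other words, something like this (a sketch - update() and render() stand in for your logic and drawing):

const double dt = 1000.0 / 60.0; // assume exactly one vsync interval per frame
update(dt);                      // feed the simulation a constant step, never a measured one
render();
SDL_GL_SwapWindow(window);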


##### Share on other sites

If you skip the timer, use VSync, and lock the frame-time you use in your simulation to always be exactly 16.666666666667ms, does it still stutter?

There's the rub though. When I disable all the sleep code and just have VSync enabled, it caps to 60fps. But it does not run at 16.66666666ms per frame.

The image I posted a couple of posts above shows how long each frame takes, and it is erratic. Sometimes the render takes 16ms, sometimes 16.7ms, sometimes 17ms.

When I disable the VSync and let it run at 800fps, the difference between each frame is even more noticeable:

As you can see, sometimes the image takes 5ms to render and sometimes 0.6ms.

There is NOTHING different that happens from frame to frame, as you can see in the github.


##### Share on other sites

That's not what I mean. If you stop measuring the time and just assume it to always be 16.66666667ms, do you still notice any visible stuttering?


##### Share on other sites

There's the rub though. When I disable all the sleep code and just have VSync enabled, it caps to 60fps. But it does not run at 16.66666666ms per frame.

The image I posted a couple of posts above shows how long each frame takes, and it is erratic. Sometimes the render takes 16ms, sometimes 16.7ms, sometimes 17ms.

This is to be expected (especially if Triple Buffering is enabled, which you could check in your driver settings).
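On the application side you can at least request plain double buffering and vsync through SDL2 (a sketch - driver settings can still override it):

SDL_GL_SetAttribute(SDL_GL_DOUBLEBUFFER, 1); // must be set before the window is created
SDL_GL_SetSwapInterval(1);                   // 1 = swap synchronized with the vertical retrace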


##### Share on other sites

That's not what I mean. If you stop measuring the time and just assume it to always be 16.66666667ms, do you still notice any visible stuttering?

Yes, the stuttering persists even if I don't measure it or output it.


##### Share on other sites

There's the rub though. When I disable all the sleep code and just have VSync enabled, it caps to 60fps. But it does not run at 16.66666666ms per frame.

The image I posted a couple of posts above shows how long each frame takes, and it is erratic. Sometimes the render takes 16ms, sometimes 16.7ms, sometimes 17ms.

This is to be expected (especially if Triple Buffering is enabled, which you could check in your driver settings).

Is it also expected to have a frame sometimes take 5ms and sometimes 0.3ms? I am checking now whether triple buffering is enabled, but I do know that double buffering is, as that is the SDL_SwapBuffers() command.

Edit: Yes, triple buffering is enabled. I tried disabling Intel SpeedStep and virtualization, and disabled triple buffering and VSync via the ~/.drirc file, all to no avail.

Edited by martinis_shaken

##### Share on other sites

start = std::chrono::system_clock::now();
//Do work
end = std::chrono::system_clock::now();

You're just timing your "work" here, not the time each frame takes - the time between frames, which is what's important.

Try this:

static std::chrono::time_point<std::chrono::system_clock> last = std::chrono::system_clock::now();

//Do work

std::chrono::time_point<std::chrono::system_clock> current = std::chrono::system_clock::now();
std::chrono::duration<float> elapsed_seconds = current - last;
last = current;

float timer = elapsed_seconds.count() * 1000.0f;
std::cout<< "Frame Time: " << timer << "ms\n";

This should give you a better idea of how much time each frame takes.

Also, since the timer precision might not be reliable, you could try looking at the average duration of all frames - just count the frames, then divide the total time by that count. It should stay somewhere around 16.6ms. If not, then something is causing some of your frames to be dropped.
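For example (a sketch, using the same std::chrono types as above):

static int frames = 0;
static std::chrono::time_point<std::chrono::system_clock> first = std::chrono::system_clock::now();
++frames;
std::chrono::duration<double, std::milli> total = std::chrono::system_clock::now() - first;
std::cout << "Average frame time: " << total.count() / frames << "ms\n";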

I think I saw the same kind of stuttering with Direct3D once - in my case it was because I was doing some double-precision calculations, and Direct3D kept putting the FPU into single-precision mode every frame - and the FPU precision switch is apparently very costly. But AFAIK, OpenGL shouldn't suffer from this.

EDIT: I just looked at your code on github. I don't know much SDL, but I noticed you're using SDL_PollEvent to get the ESC keypress - you might want to remove that, just to be sure it's not what's causing the stutter.

Also: why are you doing glClear AFTER SwapBuffers?

Edited by tonemgub

##### Share on other sites

Is it also expected to have a frame sometimes take 5ms and sometimes 0.3ms? I am checking now whether triple buffering is enabled, but I do know that double buffering is, as that is the SDL_SwapBuffers() command.

Absolutely. There might be a context switch, and scheduler time slices are often quite long. Also, as tonemgub points out, you should really only be calling now() once per frame (directly after swap is usually a good choice).


##### Share on other sites

You're just timing your "work" here, not the time each frame takes - the time between frames, which is what's important.

Try this:

static std::chrono::time_point<std::chrono::system_clock> last = std::chrono::system_clock::now();

//Do work

std::chrono::time_point<std::chrono::system_clock> current = std::chrono::system_clock::now();
std::chrono::duration<float> elapsed_seconds = current - last;
last = current;

float timer = elapsed_seconds.count() * 1000.0f;
std::cout<< "Frame Time: " << timer << "ms\n";

This should give you a better idea of how much time each frame takes.

Also, since the timer precision might not be reliable, you could try looking at the average duration of all frames - just count the frames, then divide the total time by that count. It should stay somewhere around 16.6ms. If not, then something is causing some of your frames to be dropped.

I think I saw the same kind of stuttering with Direct3D once - in my case it was because I was doing some double-precision calculations, and Direct3D kept putting the FPU into single-precision mode every frame - and the FPU precision switch is apparently very costly. But AFAIK, OpenGL shouldn't suffer from this.

EDIT: I just looked at your code on github. I don't know much SDL, but I noticed you're using SDL_PollEvent to get the ESC keypress - you might want to remove that, just to be sure it's not what's causing the stutter.

Also: why are you doing glClear AFTER SwapBuffers?

Tonemgub, thanks for the response! I really appreciate everybody helping out.

Firstly, I switched up the timers as you suggested but saw no big difference. I will keep the change in though, as I'm sure it is at least a small improvement.

As for the double precision - I don't think I have a single double in the code, just floats (and as you said, it's probably not an OpenGL problem, but thank you for mentioning it and covering all the possible solutions).

Regarding SDL - I took out the SDL_Event polling and the stuttering continued. As for doing glClear() after swap_buffers() - at that point it's the same as doing it at the beginning of the loop. If I do it right before swap_buffers, it will just erase all the work that render() has done and display a blank screen.
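In other words, the loop is effectively this (paraphrasing my code):

SDL_GL_SwapWindow(window);    // present the frame render() just finished
glClear(GL_COLOR_BUFFER_BIT); // then clear the new back buffer for the next frame
// ...logic and render() then draw into the cleared back buffer...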


##### Share on other sites

Another update:

This code essentially loads a '.png' file and displays it on the screen.

I modified it to scroll the PNG from the top left to the bottom right (same as in the sample program on GitHub), and the problem occurs in FreeGLUT too...

This is awful. It is the EXACT same problem, so I know it can't be the SDL code. This leaves the OpenGL code or a driver issue from hell.

One thing that makes no sense, though, is that I have compiled and run the source code of the game Gish: https://github.com/blinry/gish

This runs smoothly on my screen and it uses SDL 1.2 and OpenGL.

And just to re-emphasize: this problem occurs on Windows and Linux, on two laptops with different Intel drivers and on a desktop with an Intel CPU and an AMD video card.

Can anybody confirm that the code posted to GitHub also stutters? (If you do install sdl2 from apt-get, sdl2_ttf is not there; you can just remove the linking from the sconstruct, as it is not used in the example anyway.) The commands below should be all you need to install:

https://github.com/martinisshaken/Sample-SDL2-OpenGL-Program

sudo apt-get install libsdl2-dev
sudo apt-get install libsdl2-image-dev


If you are unfamiliar with scons, all you need to do is call "scons" in the root of the directory (same spot as the sconstruct), just as you would call "make".

If this same code does not stutter on anybody else's machine then I am in awe of this problem.

Thank you very much for everybody's continuing help. If anybody can solve this, it's the gamedev community.

Edited by martinis_shaken

##### Share on other sites

I ran your sample (on Windows) and I don't see any stuttering. The timings in the console log vary a lot - some are 1ms and some are 15 etc. - but the actual animation seems smooth.

Not that I would be likely to notice anything wrong, as the image moves so slowly and quickly goes off screen.

Windowed mode usually doesn't have perfect vsync, and often can't have it (as it shares the sync with other apps). You have to go to exclusive fullscreen.

I have no idea how Linux drivers work, but as long as other games work correctly and you swap in fullscreen mode with time to spare until vsync, I don't see why you would get stuttering.


##### Share on other sites

I remember something like this from many years ago.

Do you notice the stutter if you look away from the screen and use your peripheral vision? I seem to recall that some people are more sensitive to it than others, and that it tended to disappear as scene complexity grew.

Also, are you sure your monitor is at 60Hz? Many run at 59 *or* 60 (my Dell can be set to 29, 30, 59 or 60) - perhaps there is a hardware mismatch somewhere.


##### Share on other sites

I ran your sample (on Windows) and I don't see any stuttering. The timings in the console log vary a lot - some are 1ms and some are 15 etc. - but the actual animation seems smooth.

Not that I would be likely to notice anything wrong, as the image moves so slowly and quickly goes off screen.

Windowed mode usually doesn't have perfect vsync, and often can't have it (as it shares the sync with other apps). You have to go to exclusive fullscreen.

I have no idea how Linux drivers work, but as long as other games work correctly and you swap in fullscreen mode with time to spare until vsync, I don't see why you would get stuttering.

Erik, thank you very much for running it on your system. I really appreciate the help.

I am definitely running in windowed mode, and I noticed that when I went fullscreen on Windows the problem almost completely disappears. In windowed mode, though, the stutter can still be seen, and this problem does not occur with other games. Your explanation of windowed mode not being able to have perfect VSync makes sense though - thank you.

As for Linux though, that still does not explain why the stutter is so prevalent - and this is with glut or with SDL. It's like it takes the Windows problem and amplifies it. And I have the latest drivers on 12.04 and on 13.04, but for 13.10 the driver is just rolled into the OS. I wonder if this happens on CentOS or another distro...

It still doesn't make sense to me why other games, such as Gish, don't have the same problem. Same environment, no other apps running, etc.

Edited by martinis_shaken

##### Share on other sites

I remember something like this from many years ago.

Do you notice the stutter if you look away from the screen and use your peripheral vision? I seem to recall that some people are more sensitive to it than others, and that it tended to disappear as scene complexity grew.

Also, are you sure your monitor is at 60Hz? Many run at 59 *or* 60 (my Dell can be set to 29, 30, 59 or 60) - perhaps there is a hardware mismatch somewhere.

Yep, it definitely happens if I look away and use my peripheral vision. I didn't know that could happen though - that's cool. I will check now to ensure that the monitor is in fact at 60Hz, but if it isn't, I would hope VSync could figure that out. There is something dropping frames though, some way, somehow.

Edit:

god@god-laptop:~$ xrandr
Screen 0: minimum 320 x 200, current 1366 x 768, maximum 32767 x 32767
LVDS1 connected primary 1366x768+0+0 (normal left inverted right x axis y axis) 309mm x 174mm
1366x768       60.0*+   40.0
1360x768       59.8     60.0
1024x768       60.0
800x600        60.3     56.2
640x480        59.9
VGA1 disconnected (normal left inverted right x axis y axis)
HDMI1 disconnected (normal left inverted right x axis y axis)
DP1 disconnected (normal left inverted right x axis y axis)
VIRTUAL1 disconnected (normal left inverted right x axis y axis)

Yes, it does run at 60Hz, and when I try setting it to 40 it all flickers like crazy.
Edited by martinis_shaken

##### Share on other sites

I had stuttering issues myself, but only in windowed mode on Linux, and in both windowed and fullscreen on Windows.

My imperfect solution was to interpolate the player camera rotation and movement (separately).

I didn't interpolate movement or rotation before, so when I did so with rotation it became really smooth.

After that I just added weight to the player position (a very stupid 'fix'), but it actually works OK.

like

player.xyz = oldPlayer.xyz * weight  +  newPlayer.xyz * (1.0 - weight);

where old and new are only updated each time the physics thread updates.
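Something like this (a sketch - simulate() and the names are placeholders for my own code):

// on every physics tick:
oldPlayer = newPlayer;
newPlayer = simulate(newPlayer, physicsDt);

// on every rendered frame:
player.x = oldPlayer.x * weight + newPlayer.x * (1.0 - weight);
player.y = oldPlayer.y * weight + newPlayer.y * (1.0 - weight);
player.z = oldPlayer.z * weight + newPlayer.z * (1.0 - weight);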

It's not a solution, but if it makes things smooth for you like it did for me, at least we both know the reason:

the physics thread just didn't update regularly enough, because of the variable amount of background work it does and the irregularities in the update frequency.

Also, for rotation I just interpolated pitch/yaw/roll, because that made things simpler (no need for slerp).

Edited by Kaptein

##### Share on other sites

Do you have a virus scanner active?

