
OpenGL Stutter / Micro Stutter Even w/ VSync



My game has an issue with micro stuttering. Every second or two the game "jumps" a little, as if a few frames are missed. There is no tearing of the screen, just a small pause and then the jump forward. The issue occurs whether the character is moving or not, scrolling or not (just more noticeable when scrolling), etc. Without fail, every second or two, the game will just jerk/jump/stutter.
 
The issue is similar to this post, though VSync does not fix the problem.
 
----------------------------------------
60 Frames displayed in 1 second 
Longest logic time: 1ms
Longest render time: 3ms
Longest frame time: 18ms
----------------------------------------
 
This is a sample debug output. It displays, every second, the number of frames and how long each part took.
The logic handles the events, collisions, etc.
The render time calculates how long it takes to draw all the objects on the screen.
 
----------------------------------------
783 Frames displayed in 1 second 
Longest logic time: 1ms
Longest render time: 3ms
Longest frame time: 4ms
----------------------------------------
 
The two outputs above show the difference between VSync enabled (first) and disabled (second). I have a very high frame rate and the game never has a spike in processing. The rendering is a steady 3-5ms and the logic is always 1ms. The longest frame never exceeds 5ms unless VSync is enabled, in which case each one is 16-18ms.
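For context, the stats block above can be produced with a std::chrono sketch along these lines (illustrative only; the commented-out calls stand in for the engine's real logic and rendering):

#include <algorithm>
#include <chrono>
#include <iostream>

int main()
{
    using clock = std::chrono::steady_clock;
    auto toMs = [](clock::duration d) {
        return std::chrono::duration<double, std::milli>(d).count();
    };

    auto secondStart = clock::now();
    int frames = 0;
    double longestLogic = 0, longestRender = 0, longestFrame = 0;

    for (;;)  // stands in for the real game loop
    {
        auto frameStart = clock::now();
        // updateLogic();                  // events, collisions, etc. (hypothetical)
        auto afterLogic = clock::now();
        // renderScene(); SwapBuffers();   // draw everything, then present (hypothetical)
        auto afterRender = clock::now();

        longestLogic  = std::max(longestLogic,  toMs(afterLogic  - frameStart));
        longestRender = std::max(longestRender, toMs(afterRender - afterLogic));
        longestFrame  = std::max(longestFrame,  toMs(afterRender - frameStart));
        ++frames;

        if (toMs(clock::now() - secondStart) >= 1000.0)
        {
            std::cout << frames << " Frames displayed in 1 second\n"
                      << "Longest logic time: "  << longestLogic  << "ms\n"
                      << "Longest render time: " << longestRender << "ms\n"
                      << "Longest frame time: "  << longestFrame  << "ms\n";
            frames = 0;
            longestLogic = longestRender = longestFrame = 0;
            secondStart = clock::now();
        }
    }
}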
 
I started out with SDL 1.2 and everything ran fine. I decided to implement OpenGL for more control and a better frame rate, and then the stuttering began. I thought it might be an issue with SDL, so I upgraded to SDL2; still no change in the stutter.
 
 
The code I use to start SDL, init GL, and load PNGs into textures I have rewritten two or three times each. Anything that displays to the screen I have rewritten at least twice.
 
 
I have taken all the code responsible for setting up OpenGL and SDL, loading an image from a PNG into a GLuint texture, and displaying it on the screen, and yanked it out. I have posted it on GitHub here: https://github.com/martinisshaken/Sample-SDL2-OpenGL-Program
 
 
This code takes a background tile, places it in the top left corner, and moves it to the bottom left corner. During the image's journey from corner to corner, you should be able to see the stutter occur a couple of times. Even this very basic example has the same stuttering problem.
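For readers who don't want to pull the repo, a stripped-down approximation of the same test looks roughly like this (a sketch, not the actual repo code): a single untextured quad stands in for the tile, is moved one pixel per frame, and VSync is the only frame limiter.

#include <SDL2/SDL.h>
#include <SDL2/SDL_opengl.h>

int main(int argc, char* argv[])
{
    SDL_Init(SDL_INIT_VIDEO);
    SDL_Window* window = SDL_CreateWindow("Stutter test",
        SDL_WINDOWPOS_CENTERED, SDL_WINDOWPOS_CENTERED,
        800, 600, SDL_WINDOW_OPENGL);
    SDL_GLContext ctx = SDL_GL_CreateContext(window);
    SDL_GL_SetSwapInterval(1);                 // VSync on

    glMatrixMode(GL_PROJECTION);
    glLoadIdentity();
    glOrtho(0, 800, 600, 0, -1, 1);            // pixel coordinates, origin top-left
    glMatrixMode(GL_MODELVIEW);

    float x = 0.0f, y = 0.0f;                  // quad position, advanced 1 px per frame
    bool running = true;
    while (running)
    {
        SDL_Event e;
        while (SDL_PollEvent(&e))
            if (e.type == SDL_QUIT) running = false;

        glClear(GL_COLOR_BUFFER_BIT);
        glLoadIdentity();
        glTranslatef(x, y, 0.0f);
        glBegin(GL_QUADS);                     // 320x320 "tile" stand-in
            glVertex2f(0.0f, 0.0f);
            glVertex2f(320.0f, 0.0f);
            glVertex2f(320.0f, 320.0f);
            glVertex2f(0.0f, 320.0f);
        glEnd();
        SDL_GL_SwapWindow(window);

        x += 1.0f;                             // fixed 1 px per frame; smooth only if every vsync is hit
        y += 1.0f;
    }

    SDL_GL_DeleteContext(ctx);
    SDL_DestroyWindow(window);
    SDL_Quit();
    return 0;
}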
 
 

If you have any questions or need additional info, please just ask.

 

I really, really appreciate your guys' help in this! It's the last hurdle to my engine working!

 
--------------------------------------------------------------------------
Systems:
Laptop with 2nd generation Intel integrated graphics
Laptop with 1st generation Intel integrated graphics
Desktop with an i7 920 and a Radeon 6870 card
 
OS:
Linux - Ubuntu 13.10, 12.04
Windows 7 (MinGW, but I have also tried VC++ and the issue persists)
 
Each box has an Ubuntu and a Windows 7 installation. Libraries and build environments are all synced.
All drivers are up to date, and all other games that use OpenGL work just fine.
Edited by martinis_shaken


Found the exact same article. The guy's code was riddled with SDL_GetTicks() calls to cap the framerate, and he doesn't use frame-rate-independent movement.

 

My code uses deltas to move the character and environment, and I have tried the framerate both capped by VSync and uncapped. Neither of these is responsible for the problem.
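"Deltas" here means frame-rate-independent movement, roughly like the following (a sketch with illustrative names, not the actual engine code):

#include <chrono>

struct Player { float x = 0.0f; };

int main()
{
    Player player;
    const float speedPxPerSec = 60.0f;
    auto last = std::chrono::steady_clock::now();

    for (int frame = 0; frame < 1000; ++frame)   // stands in for the real game loop
    {
        auto now = std::chrono::steady_clock::now();
        float dt = std::chrono::duration<float>(now - last).count(); // seconds since last frame
        last = now;

        player.x += speedPxPerSec * dt;          // same on-screen speed at 60 fps or 800 fps
        // render(player); SwapBuffers();        // drawing omitted in this sketch
    }
}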

 

I just tore out all the code for blitting an image on the screen and compiled it independent of my code. I set up a single image (background tile of 320 x 320) and moved it across the screen an increment of 1 px per frame @ 60 fps. 

 

EXACT same problem.

 

The code literally loads an image, draws it with the above function, and just moves it 1px at a time.... STILL stutters. It doesn't get simpler, and I don't understand it.


Maybe the problem is with your timer source. Try running your code on a single CPU core (SetProcessAffinityMask on Windows) - if it still stutters, then it must be that the timer you're using is having sync problems across core switches. As you said, other games run fine, so the problem is probably not with OpenGL - it must be somewhere in your time-keeping code.
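On Windows that test can be a one-liner at startup (a sketch; the mask value 1 pins the process to the first core):

#include <windows.h>

int main()
{
    // Pin the whole process to CPU 0 for the duration of the test. If the stutter
    // persists, timer desync across cores can be ruled out.
    SetProcessAffinityMask(GetCurrentProcess(), 1);

    // ... run the game loop as usual ...
    return 0;
}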

 

Also, try to make sure that what you're seeing isn't tearing. If you enable VSync, you can eliminate that possibility.

 

And with VSync, you said that your "frame time" is somewhere between 16 and 18 ms? On a 60 Hz monitor, any frame that lasts longer than 1000/60 = 16.6 ms will be dropped or delayed, so I would also look into what is causing that.

 

In your second post, it's not clear what you mean by "1px at a time" - are you still using your timer in this case, or just drawing the image continuously? If you're just drawing continuously (no timer delays in between) then with VSync enabled, you shouldn't be getting any stuttering.

 

Also, are you loading the image every time you draw it, or just once?

 

If you try all this and it still doesn't work, then the problem must be external to your program - try to find out what other (background) programs are causing CPU spikes every 1-2 seconds.

Edited by tonemgub


Can you post your program's main loop (i.e where your timing functions run and where you draw the frame from)?

 

Also - can you try putting a glFinish before your SwapBuffers call and see if that resolves anything?
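That is, something like this at the end of each frame (a sketch, assuming the SDL_Window* the app already created):

#include <SDL2/SDL.h>
#include <SDL2/SDL_opengl.h>

// Drain the GL pipeline before presenting, so driver queuing can't smear one
// frame's work into the next.
void presentFrame(SDL_Window* window)
{
    glFinish();                 // block until the GPU has completed all queued commands
    SDL_GL_SwapWindow(window);  // then swap
}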


Thank you for your post and for the help!

 

I too thought it might be a timer issue, which is why I ripped all the timers out of my sample program. The sample program simply takes the image and, each frame, moves its xPosition and yPosition by +1. Since it's capped by VSync at 60 fps, it moves 60 pixels per second across the screen. There are no timers that cap the framerate; it relies solely on VSync.

 

As for the 16-18ms, it usually shows 18ms as the amount of time per frame. I don't understand why, though, as the only thing controlling this is VSync (and my monitor has a 60 Hz refresh rate). So 16.6ms would seem right to me, but each frame seems to just take 18ms. And this is with no timers and no frame limiting beyond VSync.

 

And when I do enable timers to delay, pause, nanosleep, etc., the problem only gets amplified.

 

Also, the loading of the image occurs only once. I just tried setting the processor affinity to run on only the first core and the problem still persisted. Lastly, I reformatted yesterday with a fresh install of 13.10 and killed all other running processes, and it still stutters.

 

https://github.com/martinisshaken/Sample-SDL2-OpenGL-Program

 

Here is the code I took out; it just scrolls the background image across the screen. You may need to tweak the SConstruct's paths for it to build, as I made it for my systems' environments.

 

Thank you!

 

 

Edit:

I have run the program with high-precision timers using chrono from C++0x, and I get the following output during the jitter:

 

Frame Time: 16.6016ms
Frame Time: 19.5782ms
Frame Time: 13.6319ms
Frame Time: 16.6807ms
Frame Time: 16.5073ms
 
The above is an extreme example, but there are definitely moments where the frame time goes above 16.66 ms (see image below):
 
[screenshot: console log of per-frame times, with some frames above 16.66 ms]
 
 

 

 


Edited by martinis_shaken


I have in fact tried glFinish(), as well as glFlush(), and neither has had any impact. =/

Also, the GitHub link to all the code is posted above. Thank you!

 

 


Edited by martinis_shaken


How are you even measuring those 16-18ms? I saw no timer calls in the code. If you use something with a granularity of only 1 ms, the start time can fall 1 µs before the timer ticks over and appear to add a whole millisecond, and the same at the end.



Just edited the above post to show the amount of time the frames are taking, and yes my timer granularity was not sufficient.

 

 

Below is the timer code I have added. I also used this_thread::sleep_for(nanoseconds(.....)) to sleep.

#include <chrono>
#include <iostream>

std::chrono::time_point<std::chrono::system_clock> start, end;

start = std::chrono::system_clock::now();
     //Do work
end = std::chrono::system_clock::now();

std::chrono::duration<double> elapsed_seconds = end - start;

float timer = elapsed_seconds.count() * 1000; // seconds -> milliseconds
std::cout << "Frame Time: " << timer << "ms\n";


The frame-time is going higher than 16.66ms on occasion, and this is with just VSync turned on.

When I disable VSync and manually force the time to sleep (using the aforementioned this_thread::sleep_for) for 16ms, 16.66ms, or 16.66666666ms I get the same jitter problem. Below is my sleep code:

if(timer < 16.66)
{
   float t = (16.66666666 - timer) * 1000000; // remaining frame time, converted from ms to ns
   cout << "Sleeping for :" << 16.66 - timer << " ms" << endl;
   this_thread::sleep_for(nanoseconds((long)t));
}

https://github.com/martinisshaken/Sample-SDL2-OpenGL-Program

Here is the link to the source code - updated to have the frame timers

 

 

Again though, whether I have VSync enabled or not, whether it's 800 fps or 60 fps, and whether or not I implement timers to try to control the flow, they ALL have the same stuttering problem.

Edited by martinis_shaken


If you skip the timer, and use VSync, and lock the frame-time you use in your simulation to always be exactly 16.666666666667 ms, does it still stutter?
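That is, something along these lines (a sketch; update() and render() are stand-ins for the sample's existing logic and drawing, and VSync is assumed to be enabled already):

#include <SDL2/SDL.h>

void update(float dt);   // stand-in for the sample's logic
void render();           // stand-in for the sample's drawing

void runLoop(SDL_Window* window)
{
    const float fixedDt = 1.0f / 60.0f;   // 16.666... ms, hard-coded, never measured
    bool running = true;
    while (running)
    {
        SDL_Event e;
        while (SDL_PollEvent(&e))
            if (e.type == SDL_QUIT) running = false;

        update(fixedDt);                  // simulation always steps by exactly one frame
        render();
        SDL_GL_SwapWindow(window);
    }
}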


If you skip the timer, and use VSync, and lock the frame-time you use in your simulation to always be exactly 16.666666666667 ms, does it still stutter?

 

There's the rub though. When I disable all the sleep code, and just have VSync enabled, it caps to 60fps. But it does not run at 16.66666666ms per frame.

The image I posted a couple above shows how long each frame takes, and it is erratic. Sometimes the render takes 16ms, sometimes 16.7ms, sometimes 17ms.

 

When I disable the VSync and let it run at 800fps, the difference between each frame is even more noticeable:

 

[screenshot: console log of per-frame times with VSync disabled]

 

 

 

As you can see, sometimes the image takes 5ms to render, sometimes it takes .6ms. 

 

There is NOTHING different that happens from frame to frame, as you can see in the github.


That's not what I mean. If you stop measuring the time and just assume it to always be 16.66666667ms, do you still notice any visible stuttering?



There's the rub though. When I disable all the sleep code, and just have VSync enabled, it caps to 60fps. But it does not run at 16.66666666ms per frame.

The image I posted a couple above shows how long each frame takes, and it is erratic. Sometimes the render takes 16ms, sometimes 16.7ms, sometimes 17ms.

 

This is to be expected (especially if Triple Buffering is enabled, which you could check in your driver settings).


That's not what I mean. If you stop measuring the time and just assume it to always be 16.66666667ms, do you still notice any visible stuttering?

 

Yes, the stuttering persists even if I don't measure it or output it. 


 


There's the rub though. When I disable all the sleep code, and just have VSync enabled, it caps to 60fps. But it does not run at 16.66666666ms per frame.

The image I posted a couple above shows how long each frame takes, and it is erratic. Sometimes the render takes 16ms, sometimes 16.7ms, sometimes 17ms.

 

This is to be expected (especially if Triple Buffering is enabled, which you could check in your driver settings).

 

 

Is it also expected to have the frame sometimes take 5ms and sometimes .3ms?  I am checking now if triple buffering is enabled, but I do know that double is, as that is the SDL_SwapBuffers() command.

 

Edit: Yes, triple buffering is enabled. I tried disabling Intel SpeedStep and virtualization, and disabled triple buffering and VSync via the ~/.drirc file, all to no avail.

Edited by martinis_shaken


start = std::chrono::system_clock::now();
     //Do work
end = std::chrono::system_clock::now();

You're just timing your "work" here, not the time each frame takes - the time between frames, which is what's important.

Try this:

static std::chrono::time_point<std::chrono::system_clock> last = std::chrono::system_clock::now();

//Do work

std::chrono::time_point<std::chrono::system_clock> current = std::chrono::system_clock::now();
std::chrono::duration<float> elapsed_seconds = current - last;
last = current;

float timer = elapsed_seconds.count() * 1000.0f;
std::cout<< "Frame Time: " << timer << "ms\n";

This should give you a better idea of how much time each frame takes.

 

Also, since the timer precision might not be reliable, you could try looking at the average duration of all frames - just count the frames, then divide the total time by that count. It should stay somewhere around 16.6 ms. If not, then something is causing some of your frames to be dropped.
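Something along these lines would compute that average (a sketch; the empty loop body stands in for the sample's real frame):

#include <chrono>
#include <iostream>

int main()
{
    using clock = std::chrono::steady_clock;
    const int frameCount = 600;          // roughly 10 seconds at 60 fps
    auto start = clock::now();

    for (int i = 0; i < frameCount; ++i)
    {
        // ... update, render, SwapBuffers ...
    }

    double totalMs = std::chrono::duration<double, std::milli>(clock::now() - start).count();
    std::cout << "Average frame time: " << totalMs / frameCount << " ms\n";
}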

 

I think I saw the same kind of stuttering with Direct3D once - in my case it was because I was doing some double-precision calculations, and Direct3D kept putting the FPU into single-precision mode every frame - and the FPU precision switch is apparently very costly. But AFAIK, OpenGL shouldn't suffer from this.

 

 

EDIT: I just looked at your code on github. I don't know much SDL, but I noticed you're using SDL_PollEvent to get the ESC keypress - you might want to remove that, just to be sure it's not what's causing the stutter.

 

Also: why are you doing glClear AFTER SwapBuffers?

Edited by tonemgub



Is it also expected to have the frame sometimes take 5ms and sometimes .3ms? I am checking now if triple buffering is enabled, but I do know that double is, as that is the SDL_SwapBuffers() command.

 

Absolutely. There might be a context switch and scheduler time slices are often quite long. Also, as tonemgub points out, you should really only be calling now() once per frame. (directly after swap is usually a good choice)



Tonemgub, thanks for the response! I really appreciate everybody helping out.

 

Firstly, I switched up the timers as you suggested, but saw no big difference. I will keep it in there though as I'm sure it is at least a tiny improvement.

 

As for the double precision, I don't think I have a single double in the code, just all floats (and as you said probably not an OpenGL problem, but thank you for mentioning it and covering all possible solutions).

 

Regarding SDL - I took out the SDL_Event polling and the stuttering continued. As for doing the glClear() after swap_buffers() - I do it because at that point it's the same thing as doing it at the beginning of the loop. If I do it right before swap_buffers, it will just erase all the work that render() has done and display a blank screen.
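For anyone following along, the two orderings being compared are roughly these (a sketch; render() and quitRequested() are hypothetical stand-ins). After the first frame they touch the back buffer in the same order: clear, draw, swap.

#include <SDL2/SDL.h>
#include <SDL2/SDL_opengl.h>

bool quitRequested();   // stand-ins for the sample's own code
void render();

void loopClearFirst(SDL_Window* window)      // clear at the top of the loop
{
    while (!quitRequested()) {
        glClear(GL_COLOR_BUFFER_BIT);
        render();
        SDL_GL_SwapWindow(window);
    }
}

void loopClearAfterSwap(SDL_Window* window)  // what the sample does
{
    while (!quitRequested()) {
        render();
        SDL_GL_SwapWindow(window);
        glClear(GL_COLOR_BUFFER_BIT);        // clears the new back buffer for the next frame
    }
}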


Another update:

 

I have compiled and executed http://lazyfoo.net/tutorials/OpenGL/06_loading_a_texture/index.php

 

This code essentially loads a '.png' file and displays it on the screen.

I modified it to then scroll the PNG from the top left to the bottom right (same as in the sample program on GitHub), and the problem occurs in FreeGLUT too.

 

This is awful. It is the EXACT same problem, so I know it can't be the SDL code. This leaves the OpenGL code or a driver issue from hell.

 

One thing that makes no sense though, is that I have compiled and run the source code from the game Gish. https://github.com/blinry/gish

This runs smoothly on my screen and it uses SDL 1.2 and OpenGL.

 

And just to re-emphasize, this problem occurs on Windows and Linux, on two laptops with different Intel drivers and on a desktop with an Intel CPU and an AMD video card.

 

 

Can anybody confirm that the code posted to GitHub also stutters? (If you install SDL2 via apt-get, SDL2_ttf is not there; you can just remove the linking from the SConstruct, as it is not used in the example anyway.) The below should be all you need to install:

https://github.com/martinisshaken/Sample-SDL2-OpenGL-Program

sudo apt-get install libsdl2-dev 
sudo apt-get install libsdl2-image-dev 

If you are unfamiliar with scons, all you need to do is call "scons" in the root of the directory (same spot as the SConstruct), the same as you would "make".

 

 

If this same code does not stutter on anybody else's machine then I am in awe of this problem.

 

Thank you very much for everybody's continuing help. If anybody can solve this, it's the gamedev community.

Edited by martinis_shaken


I ran your sample (on Windows) and I don't see any stuttering. The timings in the console log vary a lot (some are 1 ms and some are 15 ms, etc.), but the actual animation seems smooth.

Not that I would be likely to notice anything wrong as the image moves so slowly and quickly goes off screen.

 

Window mode usually doesn't have perfect vsync, and often can't have perfect vsync (as it shares the sync with other apps). You have to go to exclusive fullscreen.
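With SDL2 that switch would look roughly like this (a sketch; SDL_WINDOW_FULLSCREEN_DESKTOP would give a borderless window instead of true exclusive fullscreen):

#include <SDL2/SDL.h>

void goFullscreen(SDL_Window* window)
{
    // 0 means success; on failure, log why the exclusive mode switch was refused.
    if (SDL_SetWindowFullscreen(window, SDL_WINDOW_FULLSCREEN) != 0)
        SDL_Log("Fullscreen switch failed: %s", SDL_GetError());
}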

 

I have no idea how Linux drivers work, but as long as other games work correctly and you swap in fullscreen mode with time to spare until vsync I don't see why you would get stuttering.


I remember something like this from many years ago.

 

Do you notice the stutter if you look away from the screen and use your peripheral vision? I seem to recall that some people are more sensitive to it than others, and that it tended to disappear as scene complexity grew.

 

Also, are you sure your monitor is at 60hz? Many run at 59 *or* 60 (my Dell can be set to 29, 30, 59 or 60) - perhaps there is a hardware mismatch somewhere.



 

Erik, thank you very much for running it on your system. I really appreciate the help.

 

I am definitely running in windowed mode, and I noticed when I went to fullscreen in Windows the problem almost completely disappears. In windowed mode though, the stutter can still be seen and this problem does not occur with other games. Your explanation of not being able to have perfect VSync makes sense though, thank you.

As for Linux, though, it still does not explain why the stutter is so prevalent - and this is with GLUT or with SDL. It's like it takes the Windows problem and enhances it. And I have the latest drivers on 12.04 and on 13.04, but for 13.10 the driver is just rolled into the OS. I wonder if this happens on CentOS or another distro....

 

Still doesn't make sense to me why other games don't have the same problem, such as the Gish game. Same environment, with no apps running, etc.

Edited by martinis_shaken


I remember something like this from many years ago.

 

Do you notice the stutter if you look away from the screen and use your peripheral vision? I seem to recall that some people are more sensitive to it than others, and that it tended to disappear as scene complexity grew.

 

Also, are you sure your monitor is at 60hz? Many run at 59 *or* 60 (my Dell can be set to 29, 30, 59 or 60) - perhaps there is a hardware mismatch somewhere.

 

Yep, it definitely happens if I look away and use peripheral vision. Didn't know that could happen though, that's cool. I will check now to ensure that the monitor is in fact at 60 Hz, but if it isn't, then VSync should be able to figure that out, I would hope. There is something causing frames to be dropped though, some way, somehow.

 

Edit:

 

god@god-laptop:~$ xrandr
Screen 0: minimum 320 x 200, current 1366 x 768, maximum 32767 x 32767
LVDS1 connected primary 1366x768+0+0 (normal left inverted right x axis y axis) 309mm x 174mm
   1366x768       60.0*+   40.0  
   1360x768       59.8     60.0  
   1024x768       60.0  
   800x600        60.3     56.2  
   640x480        59.9  
VGA1 disconnected (normal left inverted right x axis y axis)
HDMI1 disconnected (normal left inverted right x axis y axis)
DP1 disconnected (normal left inverted right x axis y axis)
VIRTUAL1 disconnected (normal left inverted right x axis y axis)
 
 
Yes, it does run at 60 Hz, and when I try setting it to 40 it all flickers like crazy.
Edited by martinis_shaken


I had stuttering issues myself, but it only happened in windowed mode on Linux, and in both windowed and fullscreen modes on Windows.

My imperfect solution was to interpolate player camera rotation and movement (separately).

I didn't interpolate movement or rotation before, so when I did it with rotation it became really smooth.

After that I just added a weight to the player position (a very crude "fix"), but it actually works OK.

 

like

player.xyz = oldPlayer.xyz * weight  +  newPlayer.xyz * (1.0 - weight);

 

where old and new are only updated each time the physics thread is updated

It's not a solution, but if it makes things smooth for you, like it did for me, at least we both know the reason. :)

The physics thread just didn't update regularly enough because of the variable amount of background work it does and the irregularities in its update frequency.

 

Also, for rotation I just interpolated pitch/yaw/roll, because that made things simpler (no need for slerp).
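Spelled out, the idea is a render-time blend between the last two physics states (a sketch with illustrative names):

struct State { float x, y, z; };

// weight = 1.0 shows the old state, 0.0 the newest; values in between smooth out
// irregular physics updates at render time.
State interpolate(const State& oldState, const State& newState, float weight)
{
    return { oldState.x * weight + newState.x * (1.0f - weight),
             oldState.y * weight + newState.y * (1.0f - weight),
             oldState.z * weight + newState.z * (1.0f - weight) };
}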

Edited by Kaptein

