martinis_shaken

OpenGL Stutter / Micro Stutter Even w/ VSync


My game has an issue with micro stuttering. Every second or two the game "jumps" a little, as if a few frames are missed. There is no tearing of the screen, just a small pause and then the jump forward. The issue occurs whether the character is moving or not, scrolling or not (just more noticeable when scrolling), etc. Without fail, every second or two, the game will just jerk/jump/stutter.
 
The issue is similar to this post, though VSync does not fix the problem.
 
----------------------------------------
60 Frames displayed in 1 second 
Longest logic time: 1ms
Longest render time: 3ms
Longest frame time: 18ms
----------------------------------------
 
This is a sample of the debug output - it displays, every second, the number of frames and how long each part took.
The logic handles the events, collisions, etc.
The render time calculates how long it takes to draw all the objects on the screen.
 
----------------------------------------
783 Frames displayed in 1 second 
Longest logic time: 1ms
Longest render time: 3ms
Longest frame time: 4ms
----------------------------------------
 
The two outputs above show VSync disabled and enabled, respectively. I have a very high frame rate and the game never has a spike in processing. The rendering is a steady 3-5ms and the logic is always 1ms. The longest frame never exceeds 5ms unless VSync is enabled, in which case each one is 16-18ms.
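For context, stats like the above can be gathered with a simple per-second accumulator along these lines (an illustrative sketch only - the names are not from the actual engine):

#include <algorithm>
#include <chrono>
#include <iostream>

using Clock = std::chrono::steady_clock;

static Clock::time_point secondStart = Clock::now();
static int frames = 0;
static double longestFrameMs = 0.0;   // same idea for the logic and render times

void recordFrame(double frameMs)
{
    ++frames;
    longestFrameMs = std::max(longestFrameMs, frameMs);

    if (Clock::now() - secondStart >= std::chrono::seconds(1))
    {
        std::cout << frames << " Frames displayed in 1 second\n"
                  << "Longest frame time: " << longestFrameMs << "ms\n";
        frames = 0;
        longestFrameMs = 0.0;
        secondStart = Clock::now();
    }
}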
 
I started out with SDL 1.2 and everything ran fine. I decided to implement OpenGL for more control and a better frame rate, and then the stuttering began. I thought it might be an issue with SDL, so I upgraded to SDL2 - still no change in the stutter.
 
 
The code I use to start SDL, init GL, and load PNGs into textures I have rewritten 2-3 times each. Anything that displays to the screen I have rewritten at least twice.
 
 
I have taken all the code responsible for setting up OpenGL and SDL, loading an image from a PNG into a GLuint texture, and displaying it on the screen, and yanked it out. I have posted it on GitHub here:
 
 
This code takes a background tile, sticks it in the top left corner, and moves it to the bottom left corner. During the image's journey from corner to corner, you should be able to see the stutter that occurs a couple of times. Even this very basic example has the same problem of stuttering.
 
 

If you have any questions or need additional info, please just ask.

 

I really, really appreciate your guys' help in this! It's the last hurdle to my engine working!

 
--------------------------------------------------------------------------
Systems:
Laptop with 2nd-generation Intel integrated graphics
Laptop with 1st-generation Intel integrated graphics
Desktop with an i7 920 and a Radeon 6870
 
OS:
Linux - Ubuntu 13.10, 12.04
Windows 7 (MinGW, but I have also tried VC++ and the issue persists)
 
Each box has an Ubuntu and a Windows 7 installation. Libraries and build environments are all synced.
All drivers are up to date, and all other games that use OpenGL work just fine.

Found the exact same article. The guy's code was riddled with SDL_GetTicks() calls to cap the framerate, and he doesn't use framerate-independent movement.

 

My code uses deltas to move the character and environment, and I have tried the framerate both capped by VSync and uncapped. Neither of these is responsible for the problem.

 

I just tore out all the code for blitting an image on the screen and compiled it independently of my code. I set up a single image (background tile of 320 x 320) and moved it across the screen in increments of 1 px per frame @ 60 fps.

 

EXACT same problem.

 

The code literally loads an image, draws it with the above function, and just moves it 1px at a time.... STILL stutters. It doesn't get simpler, and I don't understand it.

tonemgub

Maybe the problem is with your timer source. Try running your code on a single CPU core (SetProcessAffinityMask on Windows) - if it still stutters, then it must be that the timer you're using is having sync problems across core switches. As you said, other games run fine, so the problem is probably not with OpenGL - it must be somewhere in your time-keeping code.
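(For reference, pinning the process to one core on Windows is a one-liner; a minimal sketch using the Win32 API mentioned above - on Linux, sched_setaffinity is the rough equivalent:)

#include <windows.h>

void pinToFirstCore()
{
    // Affinity mask with only bit 0 set = run on CPU core 0 only.
    SetProcessAffinityMask(GetCurrentProcess(), 1);
}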

 

Also, try to make sure that what you're seeing isn't tearing. If you enable VSync, you can eliminate that possibility.

 

And with VSync, you said that your "frame time" is somewhere between 16 and 18 ms? On a 60Hz monitor, any frame that lasts longer than 1000/60 = 16.6 ms will be dropped or delayed, so I would also look into what is causing that.

 

In your second post, it's not clear what you mean by "1px at a time" - are you still using your timer in this case, or just drawing the image continuously? If you're just drawing continuously (no timer delays in between) then with VSync enabled, you shouldn't be getting any stuttering.

 

Also, are you loading the image every time you draw it, or just once?

 

If you try all this and it still doesn't work, then the problem must be external to your program - try to find out what other (background) programs are causing CPU spikes every 1-2 seconds.

mhagain

Can you post your program's main loop (i.e. where your timing functions run and where you draw the frame from)?

 

Also - can you try putting a glFinish before your SwapBuffers call and see if that resolves anything?
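(A minimal sketch of that suggestion, assuming SDL2 - here 'render' and 'window' are placeholders, not the sample's actual names:)

render();                    // issue all GL draw calls for the frame
glFinish();                  // block until the GPU has actually completed them
SDL_GL_SwapWindow(window);   // then present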


Thank you for your post and for the help!

 

I too thought it might be a timer issue, which is why I ripped all the timers out of my sample program. The sample program simply takes the image and, each frame, moves its xPosition and yPosition by +1. Since it's capped by VSync at 60fps, it moves 60 pixels per second across the screen. There are no timers that cap the framerate; it relies solely on VSync.
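(For illustration, roughly the loop structure described, assuming SDL2 - 'window' is the SDL_Window the sample creates and drawTile() stands in for the GL drawing code; neither name is from the actual GitHub sample:)

#include <SDL2/SDL.h>

extern SDL_Window* window;                // created elsewhere by the sample's init code
extern void drawTile(float x, float y);   // hypothetical: draws the 320x320 background tile

void runLoop()
{
    SDL_GL_SetSwapInterval(1);            // enable VSync; SwapWindow then blocks until the next refresh
    float x = 0.0f, y = 0.0f;
    bool running = true;                  // (event handling / exit condition omitted)
    while (running)
    {
        x += 1.0f;                        // 1 px per frame -> 60 px per second at 60 Hz
        y += 1.0f;
        drawTile(x, y);
        SDL_GL_SwapWindow(window);        // no timers; VSync is the only frame cap
    }
}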

 

As for the 16-18ms, it usually shows 18ms as the amount of time per frame. I don't understand why though, as the only thing controlling this is VSync (and my monitor is 60hz refresh rate). So 16.6ms would seem right to me, but each frame seems to just take 18ms. And this is with no timers, no frame limiting beyond VSync.

 

And when I do enable timers to delay, pause, nanosleep, etc. I wind up with the problem being amplified.

 

Also: the loading of the image occurs only once; I just tried setting the processor affinity to run on only the first core and the problem persisted; and lastly, I reformatted yesterday with a fresh install of 13.10, killed all other running processes, and it still stutters.

 

https://github.com/martinisshaken/Sample-SDL2-OpenGL-Program

 

Here is the code I took out that just scrolls the background image across the screen. You may need to tweak the SConstruct's paths for it to build, as I made it for my systems' environments.

 

Thank you!

 

 

Edit:

I have run the program with high-precision timers using std::chrono from C++0x and I get the following output during the jitter:

 

Frame Time: 16.6016ms
Frame Time: 19.5782ms
Frame Time: 13.6319ms
Frame Time: 16.6807ms
Frame Time: 16.5073ms
 
The above is an extreme example, but there are definitely moments where the frame time goes above 16.66 ms (see image):
 
[screenshot: per-frame timings, with occasional frames exceeding 16.66 ms]
 
 

 

 



I have in fact tried glFinish(), as well as glFlush(), and neither has had any impact =/

Also, a GitHub link to all the code is posted above. Thank you!

 

 


wintertime

How are you even measuring those 16-18ms? I saw no timer calls in the code. If you use something with a granularity of only 1ms, the start time can fall 1µs before the timer updates and appear to add a whole millisecond, and the same can happen at the end.



Just edited the above post to show the amount of time the frames are taking, and yes my timer granularity was not sufficient.

 

 

Below is the timer code I have added. I also used this_thread::sleep_for(nanoseconds(.....))   to sleep 

std::chrono::time_point<std::chrono::system_clock> start, end;

start = std::chrono::system_clock::now();
     //Do work
end = std::chrono::system_clock::now();

// elapsed_seconds is in seconds; multiply by 1000 to get milliseconds
std::chrono::duration<double> elapsed_seconds = end - start;

float timer = elapsed_seconds.count() * 1000;
std::cout << "Frame Time: " << timer << "ms\n";


The frame-time is going higher than 16.66ms on occasion, and this is with just VSync turned on.

When I disable VSync and manually force the time to sleep (using the aforementioned this_thread::sleep_for) for 16ms, 16.66ms, or 16.66666666ms I get the same jitter problem. Below is my sleep code:

if(timer < 16.66)
{
   // remaining time in this frame, converted from milliseconds to nanoseconds for sleep_for
   float t = (16.66666666 - timer) * 1000000;
   cout << "Sleeping for: " << 16.66 - timer << " ms" << endl;
   this_thread::sleep_for(nanoseconds((long)t));
}

https://github.com/martinisshaken/Sample-SDL2-OpenGL-Program

Here is the link to the source code - updated to have the frame timers

 

 

Again though, whether I have VSync enabled or not, whether it's 800fps or 60fps, and whether or not I implement timers to try to control the flow, they ALL have the same stuttering problem.


If you skip the timer, and use VSync, and lock the frame-time you use in your simulation to always be exactly 16.666666666667 ms, does it still stutter?

 

There's the rub though. When I disable all the sleep code, and just have VSync enabled, it caps to 60fps. But it does not run at 16.66666666ms per frame.

The image I posted a couple of posts above shows how long each frame takes, and it is erratic. Sometimes the render takes 16ms, sometimes 16.7ms, sometimes 17ms.

 

When I disable the VSync and let it run at 800fps, the difference between each frame is even more noticeable:

 

[screenshot: per-frame timings with VSync disabled, varying from under 1ms to several ms]

 

 

 

As you can see, sometimes the image takes 5ms to render, sometimes it takes .6ms. 

 

There is NOTHING different that happens from frame to frame, as you can see in the github.

l0calh05t



 

Frame times bouncing between 16ms and 17ms like that are to be expected (especially if triple buffering is enabled, which you could check in your driver settings).


That's not what I mean. If you stop measuring the time and just assume it to always be 16.66666667ms, do you still notice any visible stuttering?

 

Yes, the stuttering persists even if I don't measure it or output it. 
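(For concreteness, here is a minimal sketch of what "just assume the frame time is always 16.6667 ms" would look like in the sample's loop - same placeholder names as the earlier sketch, not the actual GitHub code; positions advance by a fixed step and no measured delta is used at all:)

const float FIXED_DT_MS = 1000.0f / 60.0f;        // 16.666... ms per frame
const float SPEED_PX_PER_MS = 60.0f / 1000.0f;    // 60 px per second

while (running)
{
    x += SPEED_PX_PER_MS * FIXED_DT_MS;           // always exactly 1 px per displayed frame
    y += SPEED_PX_PER_MS * FIXED_DT_MS;
    drawTile(x, y);
    SDL_GL_SwapWindow(window);                    // VSync paces the loop
}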


 



Is it also expected to have the frame sometimes take 5ms and sometimes .3ms? I am checking now whether triple buffering is enabled, but I do know that double buffering is, since that's what the SDL swap-buffers call implies.

 

Edit: Yes, triple buffering is enabled. I tried disabling Intel SpeedStep and virtualization, and disabled triple buffering and VSync via the ~/.drirc file, all to no avail.

tonemgub

start = std::chrono::system_clock::now();
//Do work
end = std::chrono::system_clock::now();

You're just timing your "work" here, not the time each frame takes - the time between frames, which is what's important.

Try this:

static std::chrono::time_point<std::chrono::system_clock> last = std::chrono::system_clock::now();

//Do work

std::chrono::time_point<std::chrono::system_clock> current = std::chrono::system_clock::now();
std::chrono::duration<float> elapsed_seconds = current - last;
last = current;

float timer = elapsed_seconds.count() * 1000.0f;
std::cout << "Frame Time: " << timer << "ms\n";

This should give you a better idea of how much time each frame takes.

 

Also, since the timer precision might not be reliable, you could try looking at the average duration of all frames - just count the frames, then divide the total time by that count. It should stay somewhere around 16.6 ms. If not, then something is causing some of your frames to be dropped.
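(A simple way to do that, sketched with the same std::chrono clock as above - illustrative only:)

static int frameCount = 0;
static auto firstFrame = std::chrono::system_clock::now();

++frameCount;
auto now = std::chrono::system_clock::now();
double totalMs = std::chrono::duration<double, std::milli>(now - firstFrame).count();
std::cout << "Average frame time: " << totalMs / frameCount << "ms\n";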

 

I think I saw the same kind of stuttering with Direct3D once - in my case it was because I was doing some double-precision calculations, and Direct3D kept putting the FPU into single-precision mode every frame - and the FPU precision switch is apparently very costly. But AFAIK, OpenGL shouldn't suffer from this.

 

 

EDIT: I just looked at your code on github. I don't know much SDL, but I noticed you're using SDL_PollEvent to get the ESC keypress - you might want to remove that, just to be sure it's not what's causing the stutter.

 

Also: why are you doing glClear AFTER SwapBuffers?

l0calh05t


Is it also expected to have the frame sometimes take 5ms and sometimes .3ms?

 

Absolutely. There might be a context switch and scheduler time slices are often quite long. Also, as tonemgub points out, you should really only be calling now() once per frame. (directly after swap is usually a good choice)



Tonemgub, thanks for the response! I really appreciate everybody helping out.

 

Firstly, I switched up the timers as you suggested, but saw no big difference. I will keep it in there though as I'm sure it is at least a tiny improvement.

 

As for the double precision, I don't think I have a single double in the code, just all floats (and as you said probably not an OpenGL problem, but thank you for mentioning it and covering all possible solutions).

 

Regarding SDL - I took out the SDL_Event polling and the stuttering continued. As for doing glClear() after swap_buffers() - I do it because, at that point, it's the same thing as doing it at the beginning of the loop. If I do it right before swap_buffers it will just erase all the work that "render()" has done and display a blank screen.
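(The ordering being discussed, as a minimal sketch - 'render' and 'window' are placeholders; clearing right after the swap is effectively the same as clearing at the top of the next iteration:)

while (running)
{
    render();                     // draw everything into the back buffer
    SDL_GL_SwapWindow(window);    // present the finished frame
    glClear(GL_COLOR_BUFFER_BIT); // clear the (new) back buffer for the next frame
}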


Another update:

 

I have compiled and executed http://lazyfoo.net/tutorials/OpenGL/06_loading_a_texture/index.php

 

This code essentially loads a '.png' file and displays it on the screen.

I modified it to then start scrolling the PNG from the top left to the bottom right (same as in the sample program on GitHub), and the problem occurs in FreeGLUT too.

 

This is awful. It is the EXACT same problem, so I know it can't be the SDL code. This leaves the OpenGL code or a driver issue from hell.

 

One thing that makes no sense though, is that I have compiled and run the source code from the game Gish. https://github.com/blinry/gish

This runs smoothly on my screen and it uses SDL 1.2 and OpenGL.

 

And just to re-emphasize, this problem occurs on Windows and Linux, on two laptops with different Intel drivers, and on a desktop with an Intel CPU and an AMD video card.

 

 

Can anybody confirm that the code posted to GitHub also stutters? (If you install SDL2 from apt-get, SDL2_ttf is not there; you can just remove that linking from the SConstruct, as it is not used in the example anyway.) The below should be all you need to install:

https://github.com/martinisshaken/Sample-SDL2-OpenGL-Program

sudo apt-get install libsdl2-dev 
sudo apt-get install libsdl2-image-dev 

If you are unfamiliar with scons, all you need to do is call "scons" in the root of the directory (same spot as the SConstruct), same as you would "make".

 

 

If this same code does not stutter on anybody else's machine then I am in awe of this problem.

 

Thank you very much for everybody's continuing help. If anybody can solve this, it's the gamedev community.

Erik Rufelt

I ran your sample (on Windows) and I don't see any stuttering, though the timings in the console log vary a lot - some are 1ms and some are 15ms, etc. - but the actual animation seems smooth.

Not that I would be likely to notice anything wrong as the image moves so slowly and quickly goes off screen.

 

Window mode usually doesn't have perfect vsync, and often can't have perfect vsync (as it shares the sync with other apps). You have to go to exclusive fullscreen.

 

I have no idea how Linux drivers work, but as long as other games work correctly and you swap in fullscreen mode with time to spare until vsync I don't see why you would get stuttering.
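(For anyone who wants to test the fullscreen case, SDL2 can request it like this - a sketch, with 'window' being the SDL_Window the sample creates:)

SDL_SetWindowFullscreen(window, SDL_WINDOW_FULLSCREEN);  // exclusive fullscreen
// SDL_WINDOW_FULLSCREEN_DESKTOP gives borderless fullscreen instead, which typically
// still shares vsync with the desktop compositor like a regular window does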

mark ds

I remember something like this from many years ago.

 

Do you notice the stutter if you look away from the screen and use your peripheral vision? I seem to recall that some people are more sensitive to it than others, and that it tended to disappear as scene complexity grew.

 

Also, are you sure your monitor is at 60hz? Many run at 59 *or* 60 (my Dell can be set to 29, 30, 59 or 60) - perhaps there is a hardware mismatch somewhere.



Erik, thank you very much for running it on your system. I really appreciate the help.

 

I am definitely running in windowed mode, and I noticed when I went to fullscreen in Windows the problem almost completely disappears. In windowed mode though, the stutter can still be seen and this problem does not occur with other games. Your explanation of not being able to have perfect VSync makes sense though, thank you.

As for Linux though, it still does not explain why the stutter is so prevalent - and this is with glut or with SDL. It's like it takes the Windows problem and enhances it. And I have the latest drivers on 12.04 and on 13.04, but for 13.10 it is just rolled into the OS. I wonder if this happens on CentOS or another distro....

 

Still doesn't make sense to me why other games don't have the same problem, such as the Gish game. Same environment, with no apps running, etc.



Yep, it definitely happens if I look away and use peripheral vision. Didn't know that could happen though - that's cool. I will check now to ensure that the monitor is in fact at 60Hz, but if it isn't, I would hope VSync could figure that out. There is something dropping frames though, some way, somehow.

 

Edit:

 

god@god-laptop:~$ xrandr
Screen 0: minimum 320 x 200, current 1366 x 768, maximum 32767 x 32767
LVDS1 connected primary 1366x768+0+0 (normal left inverted right x axis y axis) 309mm x 174mm
   1366x768       60.0*+   40.0  
   1360x768       59.8     60.0  
   1024x768       60.0  
   800x600        60.3     56.2  
   640x480        59.9  
VGA1 disconnected (normal left inverted right x axis y axis)
HDMI1 disconnected (normal left inverted right x axis y axis)
DP1 disconnected (normal left inverted right x axis y axis)
VIRTUAL1 disconnected (normal left inverted right x axis y axis)
 
 
Yes, it does run at 60Hz, and when I try setting it to 40Hz it all flickers like crazy.
Kaptein

I had stuttering issues myself, but it only happened in windowed mode on Linux, and in both windowed AND fullscreen on Windows.

My imperfect solution was to interpolate player camera rotation and movement (separately).

I didn't interpolate movement or rotation before, so when I did it with rotation it became really smooth.

After that I just added weight to the player position (a very stupid 'fix'), but it actually works OK.

 

like

player.xyz = oldPlayer.xyz * weight  +  newPlayer.xyz * (1.0 - weight);

 

where old and new are only updated each time the physics thread is updated.

It's not a solution, but if it makes things smooth for you, like it did for me, at least we both know the reason :)

The physics thread just didn't update regularly enough, because of the variable amount of background work it does and the irregularities in its update frequency.



Also, for rotation I just interpolated pitch/yaw/roll, because that made things simpler (no need for slerp).
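(A rough sketch of the interpolation described above - fixed-rate physics, with the render blending between the last two physics states; the struct and names here are illustrative, not from Kaptein's engine:)

struct State { float x, y, z; };
State oldPlayer, newPlayer;     // written only when the physics thread ticks

// weight = how far behind the newest physics state we render (0 = newest, 1 = previous)
State interpolate(float weight)
{
    State p;
    p.x = oldPlayer.x * weight + newPlayer.x * (1.0f - weight);
    p.y = oldPlayer.y * weight + newPlayer.y * (1.0f - weight);
    p.z = oldPlayer.z * weight + newPlayer.z * (1.0f - weight);
    return p;
}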

