martinis_shaken

OpenGL
Stutter / Micro Stutter Even w/ VSync


My game has an issue with micro stuttering. Every second or two the game "jumps" a little, as if a few frames are missed. There is no tearing of the screen, just a small pause and then the jump forward. The issue occurs whether the character is moving or not, scrolling or not (just more noticeable when scrolling), etc. Without fail, every second or two, the game will just jerk/jump/stutter.
 
The issue is similar to this post, though VSync does not fix the problem.
 
----------------------------------------
60 Frames displayed in 1 second 
Longest logic time: 1ms
Longest render time: 3ms
Longest frame time: 18ms
----------------------------------------
 
This is a sample of the debug output. Every second it displays the number of frames and how long each part took.
The logic time covers handling the events, collisions, etc.
The render time measures how long it takes to draw all the objects on the screen.
 
----------------------------------------
783 Frames displayed in 1 second 
Longest logic time: 1ms
Longest render time: 3ms
Longest frame time: 4ms
----------------------------------------
 
The two outputs above show VSync enabled and disabled, respectively. I have a very high frame rate and the game never has a spike in processing. The rendering is a steady 3-5ms and the logic is always 1ms. The longest frame never exceeds 5ms unless VSync is enabled, in which case each one is 16-18ms.
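(For reference, a per-second stats counter like the one producing this output can be built with std::chrono. The following is a minimal sketch under that assumption, not the engine's actual code:)

#include <chrono>
#include <iostream>

// Minimal per-second frame statistics, assuming it is called once per frame
// from the main loop. Illustrative only - not the engine's actual code.
void frameStats()
{
    using clock = std::chrono::steady_clock;
    static clock::time_point windowStart = clock::now();
    static clock::time_point lastFrame   = windowStart;
    static int    frames         = 0;
    static double longestFrameMs = 0.0;

    clock::time_point now = clock::now();
    double frameMs = std::chrono::duration<double, std::milli>(now - lastFrame).count();
    lastFrame = now;
    ++frames;
    if (frameMs > longestFrameMs) longestFrameMs = frameMs;

    if (now - windowStart >= std::chrono::seconds(1)) {
        std::cout << frames << " Frames displayed in 1 second\n"
                  << "Longest frame time: " << longestFrameMs << "ms\n";
        frames = 0;
        longestFrameMs = 0.0;
        windowStart = now;
    }
}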
 
I started out with SDL 1.2, everything ran fine. Decided to implement OpenGL for more control and better frame rate, then the stuttering began. I thought it may be an issue with SDL so I upgraded to SDL2, still no change in the stutter.
 
 
The code I use to start SDL, init GL, and load PNGs into textures I have rewritten 2-3 times each. Anything that displays to the screen I have rewritten at least twice. 
 
 
I have taken all the code responsible for setting up OpenGL and SDL, loading an image from a PNG into a GLuint texture, and displaying it on the screen, and pulled it out into a standalone program. I have posted it on GitHub here: https://github.com/martinisshaken/Sample-SDL2-OpenGL-Program
 
 
This code takes a background tile, sticks it in the top left corner, and moves it to the bottom left corner. During the image's journey from corner to corner, you should be able to see the stutter that occurs a couple of times. Even this very basic example has the same problem of stuttering.
 
 

If you have any questions or need additional info, please just ask.

 

I really, really appreciate your guys' help in this! It's the last hurdle to my engine working!

 
--------------------------------------------------------------------------
Systems:
Laptop with 2nd-generation Intel integrated graphics
Laptop with 1st-generation Intel integrated graphics
Desktop with an i7 920 and a Radeon 6870 card
 
OS:
Linux - Ubuntu 13.10, 12.04
Windows 7 (mingw, but have also tried vc++ and the issue persists)
 
Each box has an Ubuntu and a Windows 7 installation. Libraries and build environments are all sync'd. 
All drivers are up to date; all other games that use OpenGL work just fine.
Edited by martinis_shaken

Found the exact same article. The guy's code was riddled with SDL_GetTicks() calls to cap the framerate, and he doesn't use frame-independent movement.

 

My code uses deltas to move the character and environment, and I have tried the framerate both capped by VSync and uncapped. Neither of these is responsible for the problem.
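(Delta-based movement here means scaling each move by the measured frame time, so speed is independent of fps. A minimal sketch, with running, player, and render() as illustrative placeholders rather than the actual engine code:)

#include <chrono>

// Frame-rate-independent movement: position advances by speed * elapsed
// time, so the result is the same at 60 fps or 800 fps.
const float speedPxPerSec = 60.0f;
auto last = std::chrono::steady_clock::now();
while (running) {
    auto now = std::chrono::steady_clock::now();
    float dt = std::chrono::duration<float>(now - last).count(); // seconds
    last = now;

    player.x += speedPxPerSec * dt;
    render();
}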

 

I just tore out all the code for blitting an image to the screen and compiled it independently of my engine. I set up a single image (a background tile of 320 x 320) and moved it across the screen at an increment of 1 px per frame @ 60 fps. 

 

EXACT same problem.

 

The code literally loads an image, draws it with the above function, and just moves it 1px at a time... STILL stutters. It doesn't get simpler, and I don't understand it.


Maybe the problem is with your timer source. Try running your code on a single CPU core (SetProcessAffinityMask on Windows) - if it still stutters, then it must be that the timer you're using is having sync problems across core switches. As you said, other games run fine, so the problem is probably not with OpenGL - it must be somewhere in your time-keeping code.
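(On Windows, the single-core test suggested above is one call early in main; a minimal sketch:)

#include <windows.h>

// Pin the process to CPU core 0 so timer reads cannot drift across cores.
// Purely a diagnostic, not a fix.
SetProcessAffinityMask(GetCurrentProcess(), 1); // bit 0 = first core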

 

Also, try to make sure that what you're seeing isn't tearing. If you enable VSync, you can eliminate that possibility.

 

And with VSync, you said that your "frame time" is somewhere between 16-18? On a 60Hz monitor, any frame that lasts longer than 1000/60=16.6 ms will be dropped or delayed, so I would also look into what is causing that.

 

In your second post, it's not clear what you mean by "1px at a time" - are you still using your timer in this case, or just drawing the image continuously? If you're just drawing continuously (no timer delays in between) then with VSync enabled, you shouldn't be getting any stuttering.

 

Also, are you loading the image every time you draw it, or just once?

 

If you try all this and it still doesn't work, then the problem must be external to your program - try to find out what other (background) programs are causing CPU spikes every 1-2 seconds.

Edited by tonemgub

Can you post your program's main loop (i.e. where your timing functions run and where you draw the frame from)?

 

Also - can you try putting a glFinish before your SwapBuffers call and see if that resolves anything?
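(The glFinish test looks like this; SDL_GL_SwapWindow is assumed here as the swap call - adjust to whatever the program actually uses:)

// Diagnostic: drain the GL command queue before swapping. If this smooths
// the animation, the driver was buffering several frames ahead.
render();
glFinish();                 // block until all submitted GL commands complete
SDL_GL_SwapWindow(window);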


Thank you for your post and for the help!

 

I too thought it might be a timer issue, which is why I ripped all the timers out in my sample program. The sample program simply takes the image and each frame moves its xPosition and yPosition by +1. Since it's capped by VSync at 60fps, it moves 60 pixels per second across the screen. There are no timers that cap the framerate; it relies solely on VSync. 

 

As for the 16-18ms, it usually shows 18ms as the amount of time per frame. I don't understand why, as the only thing controlling this is VSync (and my monitor has a 60 Hz refresh rate). So 16.6ms would seem right to me, but each frame seems to just take 18ms. And this is with no timers, no frame limiting beyond VSync.

 

And when I do enable timers to delay, pause, nanosleep, etc., the problem is amplified.

 

Also, the loading of the image occurs only once. I just tried setting the processor affinity to run on only the first core, and the problem still persisted. Lastly, I reformatted yesterday with a fresh install of 13.10 and killed all other running processes, and it still stutters.

 

https://github.com/martinisshaken/Sample-SDL2-OpenGL-Program

 

Here is the code I took out that just scrolls the background image across the screen. You may need to tweak the SConstruct's paths for it to build, as I wrote it for my own systems' environments.

 

Thank you!

 

 

Edit:

I have run the program with high-precision timers using std::chrono from C++0x (C++11), and I get the following output around the times the jitter occurs:

 

Frame Time: 16.6016ms
Frame Time: 19.5782ms
Frame Time: 13.6319ms
Frame Time: 16.6807ms
Frame Time: 16.5073ms
 
The above is an extreme example, but there are definitely moments where the frame time goes above 16.66ms (see image):
 
[Image: screenshot of frame-time log]
 
 

 

 


Edited by martinis_shaken

I have in fact tried glFinish(), as well as glFlush(), and neither has had any impact =/ 

Also, the GitHub link to all the code is posted above. Thank you!

 

 


Edited by martinis_shaken

How are you even measuring those 16-18ms? I saw no timer calls in the code. If you use a timer with a granularity of only 1ms, a reading taken 1µs before the timer ticks over can appear to add a whole millisecond, and the same can happen at the end of the interval.



 

Just edited the above post to show the amount of time the frames are taking, and yes my timer granularity was not sufficient.

 

 

Below is the timer code I have added. I also used this_thread::sleep_for(nanoseconds(.....)) to sleep.

std::chrono::time_point<std::chrono::system_clock> start, end;

start = std::chrono::system_clock::now();
     //Do work
end = std::chrono::system_clock::now();

std::chrono::duration<double> elapsed_seconds = end-start;

float timer = elapsed_seconds.count() * 1000;
std::cout<< "Frame Time: " << timer << "ms\n";


The frame-time is going higher than 16.66ms on occasion, and this is with just VSync turned on.

When I disable VSync and manually force the time to sleep (using the aforementioned this_thread::sleep_for) for 16ms, 16.66ms, or 16.66666666ms I get the same jitter problem. Below is my sleep code:

if(timer < 16.66)
{
   float t = (16.66666666 - timer) * 1000000;
   cout<<"Sleeping for :"<<16.66 - timer<<" ms"<<endl;
   this_thread::sleep_for(nanoseconds((long)t));
}
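(Side note: sleeping for "16.66ms minus the measured work" with sleep_for accumulates drift, since each sleep can overshoot by a scheduler quantum. A sleep_until pacer against a fixed schedule is usually steadier; a minimal sketch, with updateAndRender() as a placeholder:)

#include <chrono>
#include <thread>

// Pace the loop to ~60 Hz against absolute deadlines, so oversleeping on
// one frame does not push every later frame back.
using frame_clock = std::chrono::steady_clock;
const auto frameDuration = std::chrono::duration_cast<frame_clock::duration>(
    std::chrono::duration<double>(1.0 / 60.0));

auto next = frame_clock::now() + frameDuration;
while (running) {
    updateAndRender();
    std::this_thread::sleep_until(next);
    next += frameDuration;
}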

https://github.com/martinisshaken/Sample-SDL2-OpenGL-Program

Here is the link to the source code - updated to have the frame timers

 

 

Again though, whether I have VSync enabled or not, whether it's 800fps or 60fps, and whether or not I implement timers to try to control the flow, they ALL have the same stuttering problem.

Edited by martinis_shaken

If you skip the timer, and use VSync, and lock the frame-time you use in your simulation to always be exactly 16.666666666667 ms, does it still stutter?
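(In code terms, "lock the frame-time" means feeding the simulation a constant step instead of a measured one and letting VSync pace the loop; a minimal sketch, with update(), render(), and window as placeholders:)

// With VSync on, the swap blocks at 60 Hz, and the simulation always
// advances by exactly one ideal frame - no timers involved.
const float dt = 1.0f / 60.0f;   // assumed 60 Hz display
while (running) {
    update(dt);
    render();
    SDL_GL_SwapWindow(window);
}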


There's the rub though. When I disable all the sleep code, and just have VSync enabled, it caps to 60fps. But it does not run at 16.66666666ms per frame.

The image I posted a couple of posts above shows how long each frame takes, and it is erratic. Sometimes the render takes 16ms, sometimes 16.7ms, sometimes 17ms.

 

When I disable the VSync and let it run at 800fps, the difference between each frame is even more noticeable:

 

[Image: screenshot of frame-time log with VSync disabled]

 

 

 

As you can see, sometimes the image takes 5ms to render, sometimes it takes .6ms. 

 

There is NOTHING different that happens from frame to frame, as you can see on GitHub.


That's not what I mean. If you stop measuring the time and just assume it to always be 16.66666667ms, do you still notice any visible stuttering?



There's the rub though. When I disable all the sleep code, and just have VSync enabled, it caps to 60fps. But it does not run at 16.66666666ms per frame.

The image I posted a couple above shows how long each frame takes, and it is erratic. Sometimes the render takes 16ms, sometimes 16.7ms, sometimes 17ms.

 

This is to be expected (especially if Triple Buffering is enabled, which you could check in your driver settings).


Yes, the stuttering persists even if I don't measure it or output it. 


 



 

This is to be expected (especially if Triple Buffering is enabled, which you could check in your driver settings).

 

 

Is it also expected to have the frame sometimes take 5ms and sometimes .3ms?  I am checking now if triple buffering is enabled, but I do know that double is, as that is the SDL_SwapBuffers() command.

 

Edit: Yes, triple buffering is enabled. I tried disabling Intel SpeedStep and virtualization, and disabled triple buffering and VSync via the ~/.drirc file, all to no avail.

Edited by martinis_shaken

start = std::chrono::system_clock::now();
     //Do work
end = std::chrono::system_clock::now();

You're just timing your "work" here, not the time each frame takes. The time between frames is what's important.

Try this:

static std::chrono::time_point<std::chrono::system_clock> last = std::chrono::system_clock::now();

//Do work

std::chrono::time_point<std::chrono::system_clock> current = std::chrono::system_clock::now();
std::chrono::duration<float> elapsed_seconds = current - last;
last = current;

float timer = elapsed_seconds.count() * 1000.0f;
std::cout<< "Frame Time: " << timer << "ms\n";

This should give you a better idea of how much time each frame takes.

 

Also, since the timer precision might not be reliable, you could try looking at the average duration of all frames - just count the frames, then divide the total time by that count. It should stay somewhere around 16.6ms. If not, then something is causing some of your frames to be dropped.
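(The averaging idea in code, continuing the timing sketch above; frameMs is the per-frame reading:)

// Averaging over all frames smooths out timer granularity: with VSync on a
// 60 Hz display the average should hover near 16.6ms.
static long   frameCount = 0;
static double totalMs    = 0.0;
++frameCount;
totalMs += frameMs;                       // frameMs: per-frame reading above
double averageMs = totalMs / frameCount;
std::cout << "Average frame time: " << averageMs << "ms\n";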

 

I think I saw the same kind of stuttering with Direct3D once - in my case it was because I was doing some double-precision calculations, and Direct3D kept putting the FPU into single-precision mode every frame - and the FPU precision switch is apparently very costly. But AFAIK, OpenGL shouldn't suffer from this.

 

 

EDIT: I just looked at your code on github. I don't know much SDL, but I noticed you're using SDL_PollEvent to get the ESC keypress - you might want to remove that, just to be sure it's not what's causing the stutter.

 

Also: why are you doing glClear AFTER SwapBuffers?

Edited by tonemgub


Is it also expected to have the frame sometimes take 5ms and sometimes .3ms? I am checking now if triple buffering is enabled, but I do know that double is, as that is the SDL_SwapBuffers() command.

 

Absolutely. There might be a context switch, and scheduler time slices are often quite long. Also, as tonemgub points out, you should really only be calling now() once per frame (directly after the swap is usually a good choice).
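(Measuring directly after the swap looks like this; a minimal sketch, assuming an SDL2 window:)

// One now() per frame, taken right after the buffer swap, so the reading
// covers the whole frame including the VSync wait.
static auto last = std::chrono::steady_clock::now();
SDL_GL_SwapWindow(window);
auto now = std::chrono::steady_clock::now();
float frameMs = std::chrono::duration<float, std::milli>(now - last).count();
last = now;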



Tonemgub, thanks for the response! I really appreciate everybody helping out.

 

Firstly, I switched up the timers as you suggested, but saw no big difference. I will keep it in there though as I'm sure it is at least a tiny improvement.

 

As for the double precision, I don't think I have a single double in the code, just all floats (and as you said probably not an OpenGL problem, but thank you for mentioning it and covering all possible solutions).

 

Regarding SDL - I took out the SDL_Event polling and the stuttering continued. As for doing the glClear() after swap_buffers() - I do it because at that point it's the same thing as doing it at the beginning of the loop. If I did it right before swap_buffers, it would just erase all the work that render() has done and display a blank screen.
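(The two loop orderings being discussed are equivalent per iteration; a sketch:)

// Clearing at the top of the loop...
while (running) {
    glClear(GL_COLOR_BUFFER_BIT);
    render();
    SDL_GL_SwapWindow(window);
}

// ...is the same per frame as clearing right after the swap: either way the
// back buffer is cleared before the next render() writes to it.
while (running) {
    render();
    SDL_GL_SwapWindow(window);
    glClear(GL_COLOR_BUFFER_BIT);
}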


Another update:

 

I have compiled and executed http://lazyfoo.net/tutorials/OpenGL/06_loading_a_texture/index.php

 

This code essentially loads a '.png' file and displays it on the screen.

I modified it to scroll the png from the top left to the bottom right (same as in the sample program on GitHub), and the problem occurs with FreeGLUT too.

 

This is awful. It is the EXACT same problem, so I know it can't be the SDL code. This leaves the OpenGL code or a driver issue from hell.

 

One thing that makes no sense, though, is that I have compiled and run the source code of the game Gish: https://github.com/blinry/gish

This runs smoothly on my screen and it uses SDL 1.2 and OpenGL.

 

And just to re-emphasize, this problem occurs on Windows and Linux, on two laptops with different Intel drivers and on a desktop with an Intel CPU and an AMD video card. 

 

 

Can anybody confirm that the code posted to GitHub also stutters? (If you install SDL2 from apt-get, SDL2_ttf is not there; you can just remove it from the linking in the SConstruct, as it is not used in the example anyway.) The packages below should be all you need to install:

https://github.com/martinisshaken/Sample-SDL2-OpenGL-Program

sudo apt-get install libsdl2-dev 
sudo apt-get install libsdl2-image-dev 

If you are unfamiliar with scons, all you need to do is call "scons" in the root of the directory (the same spot as the SConstruct), the same as you would "make".

 

 

If this same code does not stutter on anybody else's machine then I am in awe of this problem.

 

Thank you very much for everybody's continuing help. If anybody can solve this, it's the gamedev community.

Edited by martinis_shaken

I ran your sample (on Windows) and I don't see any stuttering. The timings in the console log vary a lot (some are 1ms and some are 15, etc.), but the actual animation seems smooth.

Not that I would be likely to notice anything wrong, as the image moves so slowly and quickly goes off screen.

 

Window mode usually doesn't have perfect vsync, and often can't have perfect vsync (as it shares the sync with other apps). You have to go to exclusive fullscreen.

 

I have no idea how Linux drivers work, but as long as other games work correctly and you swap in fullscreen mode with time to spare until vsync I don't see why you would get stuttering.
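(For the fullscreen test, switching an SDL2 window to exclusive fullscreen is one call; a minimal sketch:)

// In exclusive fullscreen the swap chain owns the display, so VSync is
// typically much cleaner than in windowed mode.
if (SDL_SetWindowFullscreen(window, SDL_WINDOW_FULLSCREEN) != 0) {
    printf("Fullscreen failed: %s\n", SDL_GetError());
}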


I remember something like this from many years ago.

 

Do you notice the stutter if you look away from the screen and use your peripheral vision? I seem to recall that some people are more sensitive to it than others, and that it tended to disappear as scene complexity grew.

 

Also, are you sure your monitor is at 60 Hz? Many run at 59 *or* 60 (my Dell can be set to 29, 30, 59 or 60) - perhaps there is a hardware mismatch somewhere.
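(SDL2 can report what the display actually claims; a minimal sketch for checking the refresh rate:)

// Query the current mode of display 0 - refresh_rate sometimes comes back
// as 59, not 60, which would explain a periodic dropped frame.
SDL_DisplayMode mode;
if (SDL_GetCurrentDisplayMode(0, &mode) == 0) {
    printf("Display 0: %dx%d @ %d Hz\n", mode.w, mode.h, mode.refresh_rate);
}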


Erik, thank you very much for running it on your system. I really appreciate the help.

 

I am definitely running in windowed mode, and I noticed when I went to fullscreen in Windows the problem almost completely disappears. In windowed mode though, the stutter can still be seen and this problem does not occur with other games. Your explanation of not being able to have perfect VSync makes sense though, thank you.

As for Linux though, it still does not explain why the stutter is so prevalent - and this is with GLUT or with SDL. It's like it takes the Windows problem and amplifies it. And I have the latest drivers on 12.04 and on 13.04, but for 13.10 the driver is just rolled into the OS. I wonder if this happens on CentOS or another distro....

 

Still doesn't make sense to me why other games don't have the same problem, such as the Gish game. Same environment, with no apps running, etc.

Edited by martinis_shaken

Yep, it definitely happens if I look away and use my peripheral vision. I didn't know that could happen, though - that's cool. I will check now to ensure that the monitor is in fact at 60 Hz, but if it isn't, I would hope VSync could figure that out. One way or another, though, frames are being dropped somehow.

 

Edit:

 

god@god-laptop:~$ xrandr
Screen 0: minimum 320 x 200, current 1366 x 768, maximum 32767 x 32767
LVDS1 connected primary 1366x768+0+0 (normal left inverted right x axis y axis) 309mm x 174mm
   1366x768       60.0*+   40.0  
   1360x768       59.8     60.0  
   1024x768       60.0  
   800x600        60.3     56.2  
   640x480        59.9  
VGA1 disconnected (normal left inverted right x axis y axis)
HDMI1 disconnected (normal left inverted right x axis y axis)
DP1 disconnected (normal left inverted right x axis y axis)
VIRTUAL1 disconnected (normal left inverted right x axis y axis)
 
 
Yes, it does run at 60 Hz, and when I try setting it to 40 everything flickers like crazy.
Edited by martinis_shaken

I had stuttering issues myself, but it only happened in windowed mode on Linux, and in both windowed and fullscreen mode on Windows.

My imperfect solution was to interpolate player camera rotation and movement (separately).

I didn't interpolate movement or rotation before, so when I did it for rotation it became really smooth.

After that I just added weight to the player position (a very stupid 'fix'), but it actually works OK.

 

like

player.xyz = oldPlayer.xyz * weight  +  newPlayer.xyz * (1.0 - weight);

 

where old and new are only updated each time the physics thread is updated

It's not a solution, but if it makes things smooth for you, like it did for me, at least we both know the reason :)

The physics thread just didn't update regularly enough, because of the variable amount of background work it does and the irregularities in the update frequency.

 

Also, for rotation I just interpolated pitch/yaw/roll, because that made things simpler (no need for slerp).

Edited by Kaptein
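(What Kaptein describes is essentially the classic fixed-timestep-plus-interpolation pattern. A minimal sketch, with Player, step(), and render() as illustrative placeholders:)

#include <chrono>

// Physics advances in fixed steps; rendering blends the last two physics
// states by how far we are into the next step, hiding update irregularity.
const float dt = 1.0f / 60.0f;
float accumulator = 0.0f;
auto last = std::chrono::steady_clock::now();

while (running) {
    auto now = std::chrono::steady_clock::now();
    accumulator += std::chrono::duration<float>(now - last).count();
    last = now;

    while (accumulator >= dt) {
        oldPlayer = newPlayer;            // keep the previous physics state
        newPlayer = step(newPlayer, dt);  // advance physics by one fixed step
        accumulator -= dt;
    }

    float alpha = accumulator / dt;       // 0..1, progress into the next step
    Player drawn;
    drawn.xyz = oldPlayer.xyz * (1.0f - alpha) + newPlayer.xyz * alpha;
    render(drawn);
}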