OpenGL Hybrid vsync

Posted by Prune


From http://www.opengl.org/discussion_boards/ubbthreads.php?ubb=showflat&Number=196709&page=3
Quote:
Just for the record - for my personal, immediate purposes, the following setup seems to work fine :)
if (dt > 24) {              // last frame was slow: turn VSYNC off
    if (vsync) {
        wglSwapIntervalEXT(0);
        vsync = false;
    }
}
else if (dt < 12) {         // comfortably fast again: turn VSYNC back on
    if (!vsync) {
        wglSwapIntervalEXT(1);
        vsync = true;
    }
}
SwapBuffers();
dt being the time between the start of the previous render and the start of the current one - i.e., one frame of lag.
Is this still helpful if triple-buffering is enabled?

Quote:
Original post by Prune
Is this still helpful if triple-buffering is enabled?


Define "helpful". In your application when you try it do you have better visual results? If so, yes, if not, no.

-me

What is that code supposed to do? (Fix the frame rate?) And why do you think it works? Depending on the driver settings, wglSwapIntervalEXT may or may not affect v-sync.
Also, as an end user I would not be happy to see an application messing with v-sync like that, since I often want it enabled to avoid screen tearing.

I don't think that snippet is helpful at all, and it never has been. At least, not without a lot of "ifs" and "whens".

The problem with VSYNC is that it will only allow refresh/N frames per second (so, in the case of a 60Hz refresh, it will allow for 60, 30, 20, 15, 12, 8, ... fps). On the positive side, it avoids tearing artefacts, and for many games 1/2 the refresh rate is still quite acceptable.

Now, it may happen that your rendering is a bit too slow, maybe running at just 59 fps, which would be acceptable, but due to VSYNC it will drop to 30 fps, which is not acceptable (in a fast-paced first-person shooter, say).

This code snippet tries to deal with the problem by collecting some more or less reliable data (the last frame time), comparing it to a hardcoded value, and turning VSYNC on/off on the fly.

Here is the first flaw already. Not only is 24 ms a really weird frame time -- it corresponds to roughly 42 fps, and few if any devices have a refresh rate like that -- but you also don't really know the real refresh rate, so hardcoding it is a kind of stupid idea. Typical monitors can have refresh rates from 50 to 120 Hz, with maybe a dozen steps in between. So... 24 may be good for one particular monitor, but that's about it.
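For illustration, on Windows one could at least ask the driver for the current display mode instead of hardcoding a number. A minimal sketch (assuming the primary monitor; a multi-monitor setup would need to query each device):

#include <windows.h>

// Returns the refresh period in milliseconds, with a fallback when the
// driver reports 0 or 1 (which mean "hardware default").
double GetRefreshPeriodMs()
{
    DEVMODE dm;
    ZeroMemory(&dm, sizeof(dm));
    dm.dmSize = sizeof(dm);
    if (EnumDisplaySettings(NULL, ENUM_CURRENT_SETTINGS, &dm) &&
        dm.dmDisplayFrequency > 1)
        return 1000.0 / dm.dmDisplayFrequency;
    return 1000.0 / 60.0;  // fallback: assume 60 Hz
}

Thresholds derived from that (say, 1.5x the refresh period before switching VSYNC off) would at least track the actual monitor.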
Also, different games have different fps needs, so regardless of the actual hardware, this number may not be the one you want. For a top-down strategy game, 15 fps may be perfectly acceptable. For a fast first-person shooter, 60 fps may not be enough.
Plus, measuring the frame time in a reliable and precise way is not quite trivial, either. And, whatever you measure is (obviously) after the fact. It may be better to avoid the problem in the first place.
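If you measure anyway, at least use a high-resolution timer. A minimal Windows sketch using QueryPerformanceCounter (timeGetTime/GetTickCount are too coarse for ~16 ms frames; note that QPC can drift between cores on some older multi-core systems):

#include <windows.h>

// Milliseconds since the previous call; 0.0 on the first call.
double FrameTimeMs()
{
    static LARGE_INTEGER freq;  // zero-initialized (static storage)
    static LARGE_INTEGER last;
    LARGE_INTEGER now;

    if (freq.QuadPart == 0) {   // first call: set up the timer
        QueryPerformanceFrequency(&freq);
        QueryPerformanceCounter(&last);
        return 0.0;
    }
    QueryPerformanceCounter(&now);
    double dt = 1000.0 * (double)(now.QuadPart - last.QuadPart)
                       / (double)freq.QuadPart;
    last = now;
    return dt;
}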

I think the right solution here is to let the user choose. Usually I hate it when people just play the "let the user choose" card, but in this case, I think it is really justified.
Not only is this much less work and trouble, but it also is a lot more reliable. And, it accounts for the fact that different people are... well... different. Some people maybe don't mind a bit of tearing but really hate low frame rates. And then, tearing might annoy the hell out of others. You don't know.
Lastly, you don't even know if you can change VSYNC at all. Most drivers allow the user to turn VSYNC on/off globally, so whatever you do might not be good for anything after all.
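At the very least, fetch the entry point at runtime and treat failure as "VSYNC is not controllable here". A sketch:

#include <windows.h>

typedef BOOL (WINAPI *PFNWGLSWAPINTERVALEXTPROC)(int interval);
static PFNWGLSWAPINTERVALEXTPROC pwglSwapIntervalEXT = NULL;

// Call once with a current GL context. Strictly, one should also check
// the WGL extension string (wglGetExtensionsStringARB) for
// WGL_EXT_swap_control before trusting the pointer.
bool InitSwapControl()
{
    pwglSwapIntervalEXT = (PFNWGLSWAPINTERVALEXTPROC)
        wglGetProcAddress("wglSwapIntervalEXT");
    return pwglSwapIntervalEXT != NULL;
}

And even when the pointer is valid, a global driver override can silently win, so don't promise the user anything.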

Quote:
Just for the record - for my personal, immediate purposes, the following setup seems to work fine :)

if (dt > 24) {
    if (vsync) {
        wglSwapIntervalEXT(0);
        vsync = false;
    }
}
else if (dt < 12) {
    if (!vsync) {
        wglSwapIntervalEXT(1);
        vsync = true;
    }
}
SwapBuffers();

dt being the time between the start of the previous render and the start of the current one - i.e., one frame of lag.
That is very interesting. I had the same question, with no answers, a few months ago, and I came up with almost the same idea, but for some reason I could never get it to work. V-sync never got switched on, or never got switched off, no matter what values I used in the else-if conditions (I also had two different values to introduce some kind of hysteresis); it just didn't work.
So I decided to go the user-selection way: the user can switch to a lower resolution, decide how many particles are drawn, etc.

Quote:
Original post by samoth
This code snippet tries to deal with the problem by collecting some more or less reliable data (the last frame time), comparing it to a hardcoded value, and turning VSYNC on/off on the fly.

Of course, it is trivial to extend this to a statistical measure over the last few frames, rather than a single frame.
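For instance (just a sketch, reusing the placeholder thresholds from the snippet), an exponential moving average keeps a single hitch from toggling vsync back and forth:

static double avg_dt = 0.0;
static bool   vsync  = true;

void UpdateVsync(double dt)  // dt = last frame time in ms
{
    const double alpha = 0.1;             // smoothing factor
    avg_dt = (1.0 - alpha) * avg_dt + alpha * dt;

    if (avg_dt > 24.0 && vsync) {         // consistently too slow
        wglSwapIntervalEXT(0);
        vsync = false;
    }
    else if (avg_dt < 12.0 && !vsync) {   // consistently fast enough
        wglSwapIntervalEXT(1);
        vsync = true;
    }
}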

Quote:
Here is the first flaw already. Not only is 24 ms a really weird frame time -- it corresponds to roughly 42 fps, and few if any devices have a refresh rate like that -- but you also don't really know the real refresh rate, so hardcoding it is a kind of stupid idea.

I am myself very curious about this choice. As I wrote in the first post, I copied this snippet from elsewhere. The following comment is one of the replies on that forum:
Quote:
Mikkel Gjoel: you are right, your snippet works very well!
This is really nice to the eye, even with black+white vertical stripes in variable high-speed horizontal scroll, my worst case as far as display refresh is concerned.
I don't get why your first threshold value works so well, but I could not get better results with a smaller value. However, an 85 Hz monitor would mean around 11-12 milliseconds, not 24, right?

There was no explanation in the replies, though; I posted in the thread myself but got no answer of any sort.
I cannot begin to imagine the reason for the choice, whether latencies in wglSwapIntervalEXT taking effect come into play, or what... I'd love to hear some suggestions. Perhaps the choice is simply empirical, but given that a different random user also found this to be the smallest value that made a difference, it's unlikely that the writer of the snippet had some unusual refresh rate.

Quote:
And, whatever you measure is (obviously) after the fact.

That's hardly specific to this issue. Consider a physics simulation where you're determining the delta-t: there is at least a one-frame lag, and some smoothness of the dt variation over most frames is always assumed, since you cannot predict discontinuous events that would cause a sudden change (they usually result from user action or arise from system complexity). This is a general aspect of any feedback-based controller, where the controlled variable is not necessarily time but could be temperature or whatever.

Quote:
I think the right solution here is to let the user choose. Usually I hate it when people just play the "let the user choose" card, but in this case, I think it is really justified.
Not only is this much less work and trouble, but it also is a lot more reliable. And, it accounts for the fact that different people are... well... different. Some people maybe don't mind a bit of tearing but really hate low frame rates. And then, tearing might annoy the hell out of others. You don't know.
Lastly, you don't even know if you can change VSYNC at all. Most drivers allow the user to turn VSYNC on/off globally, so whatever you do might not be good for anything after all.

I'm lucky to be writing for a hardware and software platform that I specify (touchscreen kiosks), which removes these concerns. But even someone without that privilege can provide the user with such an option in addition to having vsync always on or always off, maybe even with a tunable threshold dt. (By the way, the NVIDIA driver lets one force vsync on and off, but also has a setting that lets the application select it.) So I would really like to dig a bit deeper into this issue in order to get the best results.

This algorithm is, as you explained, geared to prevent the halving of frame rate in cases where some portion of frames slightly exceeds one refresh interval. The problem, of course, is that those frames will appear with tearing, so it's a specific compromise between the lag until the next frame is displayed and visual artifacts.
This got me thinking about the other solution, triple buffering, which presents a different compromise: adding a lag of one extra frame to act as a safety margin for the cases where some frames take longer than a vertical retrace. But wouldn't it make sense, in cases where the draw takes less than half a vertical refresh, to dynamically switch back to double buffering and thus improve latency? This would be somewhat analogous to dynamically toggling vsync as in the snippet above, and the two could be combined. But how can one turn triple buffering on/off from the application, given that there seems to be no WGL/GLX extension for it as there is for vsync? Does NVIDIA have any way to ask the driver for it programmatically? Worst case, I'd imagine one could manually handle two back buffers in the application and turn triple buffering off in the driver...
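To sketch that worst case (an illustration only, assuming framebuffer-object blits are available through GL 3.0 or EXT_framebuffer_blit; the fbo[] setup, DrawScene and hdc are hypothetical placeholders): present the frame finished last iteration, then render the new one offscreen:

GLuint fbo[2];   // two offscreen "back buffers", created at window size
int    cur = 0;  // the one being rendered into this frame

void RenderAndPresent(HDC hdc, int width, int height)
{
    // Present the previous frame: blit it to the real back buffer.
    glBindFramebuffer(GL_READ_FRAMEBUFFER, fbo[1 - cur]);
    glBindFramebuffer(GL_DRAW_FRAMEBUFFER, 0);
    glBlitFramebuffer(0, 0, width, height, 0, 0, width, height,
                      GL_COLOR_BUFFER_BIT, GL_NEAREST);
    SwapBuffers(hdc);

    // Then draw the new frame offscreen, behind the presented one.
    glBindFramebuffer(GL_DRAW_FRAMEBUFFER, fbo[cur]);
    DrawScene();     // hypothetical application render code

    cur = 1 - cur;   // ping-pong (clear both FBOs up front so the very
                     // first present isn't uninitialized garbage)
}

Note this is not quite what the driver does: SwapBuffers still blocks on the vertical retrace here, so it buys a one-frame safety margin but not the driver's full triple-buffer scheduling.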

Then there is the question of how NVIDIA's driver setting of "Maximum pre-rendered frames" affects the above considerations...
