Thinking in OpenGL


Hi folks, I'm a relatively experienced programmer and I'm trying to get my head around some OpenGL concepts. I have a moderate math background and I think I understand the basics of the principles at work. I'd really appreciate a short conversation with some of the OpenGL gurus here to clarify a few points.

Briefly, I'm trying to understand how to implement relative movement. I've seen a number of tutorials that use the familiar sin and cos operations to calculate coordinates after a movement along a specified angle. However, it was my understanding that, due to the way the coordinate system works, it is not necessary to do this manually in OpenGL: if the object is rotated around its own axes, then merely translating along the desired axis is all that is necessary. Is this correct, or am I barking up the wrong tree here?

I have been able to load a model, rotate it on its own axes, and display it. The problem comes when I try to move the object "forward" in the scene, followed by further rotations:

1. Rotate the object to "point" in a specified direction in the scene. [works]
2. Move the object "forward". [works]
3. Rotate again - this time the rotation occurs around the center of the screen and not the model space.

I am thinking that somehow I have to translate to the center of the model before rotating, and that the first rotations work by coincidence because, before any movement, the center of the model coincides with the center of the screen. However, regardless of the order in which I issue the transformations, I just can't seem to get it right: when the rotation is always around the model axes, the translation is not in the direction the model is facing, and vice versa.

Here's my draw function:
    //  this sets the model position for the new frame
    model->Update();

    glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);
    glPushMatrix();
    glLoadIdentity();   
    
    //  rotate the model to orientate correctly
    glRotatef(-90, 1, 0, 0);
    glRotatef(180, 0, 0, 1);

    //  rotations derived from user input
    glRotatef(model->rotation.x, 1, 0, 0);
    glRotatef(model->rotation.y, 0, 1, 0);
    glRotatef(model->rotation.z, 0, 0, 1);

    //  move the object to its current position
    glTranslatef(model->position.x, model->position.y, model->position.z);

    // draw the vertices
    model->Draw();

    glPopMatrix();
    glutSwapBuffers();


Any pointers would be greatly appreciated! FYI the model is a spacecraft and I am trying to get it to fly in the direction it is pointing.

Try this:


// this sets the model position for the new frame
model->Update();

glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);
glPushMatrix();
glLoadIdentity();

// move the object to its current position (this call was moved up from below)
glTranslatef(model->position.x, model->position.y, model->position.z);

// rotate the model to orientate correctly
glRotatef(-90, 1, 0, 0);
glRotatef(180, 0, 0, 1);

// rotations derived from user input
glRotatef(model->rotation.x, 1, 0, 0);
glRotatef(model->rotation.y, 0, 1, 0);
glRotatef(model->rotation.z, 0, 0, 1);



// draw the vertices
model->Draw();

glPopMatrix();
glutSwapBuffers();







The order of matrix multiplications matters. You want to rotate about the origin first, but in your original code the translation is applied to the vertices first (it's the call closest to the draw), which moves the model off the origin; the rotation then spins it around the origin rather than around itself.

Matrix multiplication is associative, e.g. (A*B)*C = A*(B*C), or A*(B*(C*D)) = ((A*B)*C)*D - but it is not commutative, so A*B is generally not the same as B*A.

You should look up these properties if you're not familiar with them, as they will help you understand how the transforms combine.

Edit:

To fly in the direction the model is facing, it would be easier to keep a vector pointing in that direction and just add that to model->position each frame.
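
For example, a minimal sketch (mine, not from the original post) assuming the only user rotation is a yaw of model->rotation.z degrees about the z axis, and that the unrotated model points along its +y axis:

#include <cmath>

// Hypothetical helper: advance the model along the direction it faces.
void MoveForward(Model *model, float distance)
{
    float yaw = model->rotation.z * 3.14159265f / 180.0f; // degrees -> radians
    // Rotating the local +y axis by yaw gives (-sin(yaw), cos(yaw)) in world space.
    model->position.x += -std::sin(yaw) * distance;
    model->position.y +=  std::cos(yaw) * distance;
}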

Also, think of the lowest matrix calls (the ones nearest the draw call) as acting on the model first, whether they rotate, translate, etc.

[Edited by - andrew111 on April 30, 2010 11:10:32 AM]

Hey, thanks for such a quick reply. That is exactly what I had tried. With the translation called first, yes, the model always rotates correctly on its axes - but then it always moves "down" the screen regardless of its rotation! (I guess because the translation is now relative to the world coordinate system and not the model.)

I feel a "d'oh!" moment coming on. I'm sure I must be doing something stupid!

Here are a few bits of information that might be helpful...

First of all, the problems you're talking about aren't really OpenGL-specific. It's understandable why one might think they are, but this is really just math that we're talking about, which is essentially the same regardless of what API you're using (OpenGL, Direct3D, etc.).

There are some (sort of) API-specific things that can hang people up though, and one of these is the way in which OpenGL's 'transform' functions operate. Before I continue though, let me point out that the functions you're using are actually deprecated and are more or less irrelevant in modern OpenGL code (typically, you would instead build these transforms yourself and then pass them in as shader parameters).

In any case, one of the most common causes of confusion when working with OpenGL is transform order. As I'm sure you know, matrix multiplication is not commutative, and the order in which a sequence of transforms is applied matters. For example, a rotation followed by a translation will generally have a different result than a translation followed by a rotation. Anytime someone says that they're expecting a model to rotate about its own origin but it instead appears to be rotating about some other point, you can bet that they're applying the transforms in the wrong order, and looking at the bit of code you posted, that appears to be the case here.

I think if you move the glTranslatef() call to before the calls to glRotatef(), you'll find that the behavior is correct (or at least closer to what you're expecting).

I'm sure someone's already pointed that out by now (in fact, after refreshing, I see that they have :), but I'll go ahead and elaborate a bit on this.

You may already know this, but the order in which matrices must be multiplied to yield a given effect depends on whether row or column vectors are being used. In short, when row vectors are used, the matrix product A*B applies the transforms in the order A->B, and when column vectors are used, the transforms are applied in the order B->A.

OpenGL (or the OpenGL fixed-function pipeline at least) has traditionally been thought of as using column vectors, although I don't know whether this really has any practical significance. What is significant though is that the most recent transform command issued is 'closest to' the vector to which it is to be applied, so to speak. This means that transforms are actually applied in the reverse of the order in which they appear in your code, and is why the translation actually needs to come first in your example in order to have the effect of a rotation followed by a translation.
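
To make that concrete, here's a minimal sketch (mine, not from the thread) of how call order relates to mathematical order:

// Conceptually this builds v' = T * R * v: the rotation is 'closest to'
// the vertex and acts on the model first, even though glTranslatef()
// appears first in the code.
glTranslatef(tx, ty, tz);    // outermost transform, applied second
glRotatef(angle, 0, 0, 1);   // innermost transform, applied first
drawModel();                 // hypothetical draw call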

As mentioned previously though, this really isn't the way things are done anymore. In modern OpenGL, you would typically build these transforms yourself, and then simply upload them to OpenGL as shader parameters. On your side of things, you can use whatever convention you want (e.g. row or column vectors), so long as the matrix data ends up in a form that OpenGL can understand.
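
For what it's worth, here's a minimal sketch of that modern approach, assuming the GLM math library (an arbitrary choice; your own math code works just as well) and a hypothetical uniform location modelLoc:

#include <glm/glm.hpp>
#include <glm/gtc/matrix_transform.hpp>
#include <glm/gtc/type_ptr.hpp>

// Build rotation-then-translation (M = T * R) on the CPU. GLM uses the
// column-vector convention, and each call right-multiplies the matrix,
// so the rotation ends up closest to the vertex and is applied first.
glm::mat4 model(1.0f);
model = glm::translate(model, glm::vec3(px, py, pz)); // px/py/pz: object position
model = glm::rotate(model, glm::radians(angle), glm::vec3(0.0f, 0.0f, 1.0f));

// Upload the result as a shader parameter.
glUniformMatrix4fv(modelLoc, 1, GL_FALSE, glm::value_ptr(model));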

As for your follow-up post, can you post your revised rendering code, along with the code you're using to update the object's position?

Hi, thanks for the info. I'm certainly no math expert, but I do understand the concepts you are raising. And I understand that the gl functions are merely helpers to construct a matrix, which is then applied to the vertices. I had read that they are now deprecated and that the modern approach is to construct your own transformation matrices; however, I thought it could still be a good approach for learning the basics.

I altered my drawScene function precisely as suggested by the poster above, and my code to update the model position merely increases its "position.y". The reason for that is because in the local model coordinate system, it "points" along its y axis and therefore I concluded that "forward" for the model is along the y axis. This is probably my problem, right? Although I can't quite get my head around why.

I'm assuming then that I have to do something a little more sophisticated to the model position vector?

Many thanks for the help, greatly appreciated!

Quote:
Original post by hig
Hi, thanks for the info. I'm certainly no math expert, but I do understand the concepts you are raising. And I understand that the gl functions are merely helpers to construct a matrix, which is then applied to the vertices. I had read that they are now deprecated and that the modern approach is to construct your own transformation matrices; however, I thought it could still be a good approach for learning the basics.
Oops - well I guess that post was mostly redundant then :)
Quote:
I altered my drawScene function precisely as suggested by the poster above
I suggested that as well.
Quote:
and my code to update the model position merely increases its "position.y". The reason for that is because in the local model coordinate system, it "points" along its y axis and therefore I concluded that "forward" for the model is along the y axis. This is probably my problem, right? Although I can't quite get my head around why.

I'm assuming then that I have to do something a little more sophisticated to the model position vector?
Right, that won't do what you're expecting. Why it won't do what you're expecting is actually a little tricky to explain, but here goes.

To keep things simple, let's assume that your model transform only incorporates rotation and translation (which is fairly common, and appears to be the case here, judging from your code).

These two transforms (rotation and translation) are typically applied in the order rotation->translation. If you hold some object in your hand (e.g. a toy car, spaceship, or whatever), you can visualize this by starting at a 'default' position and orientation ('identity' in matrix terms), then rotating it to the orientation that you want, and then translating it to the position that you want.

Note that because the rotation occurs first, the translation is unaffected by the rotation. If the translation is +5 along the y axis, then the model will be first rotated to the desired orientation, and then moved 5 units along the (world) y axis.

In order to get the model positioned where you want, your position variable (model->position in your code) needs to represent the object's actual position in world space. If all you ever do is move this position vector along the y axis, then your model will always be positioned somewhere along the (world) y axis (it won't, in general, move 'forward' based on the object's orientation).

To get the kind of motion you want, you need to translate the position vector not along the y axis (or whatever), but rather along the object's current direction vector(s).

There are a few ways to get these direction vectors, but the easiest way is probably to build a transform matrix for the object and then extract the direction vectors from the first three rows or columns of the matrix. (This can be done using OpenGL functions, but it's more straightforward if you use your own math code or a third-party math library for this.)
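
As a minimal sketch of that last suggestion (using the fixed-function matrix stack; it assumes the current matrix mode is GL_MODELVIEW, that speed and dt are your movement speed and frame time, and that the ship points along its local y axis as described earlier):

float m[16];

// Build just the object's rotation on the matrix stack, then read it back.
glPushMatrix();
glLoadIdentity();
glRotatef(model->rotation.x, 1, 0, 0);
glRotatef(model->rotation.y, 0, 1, 0);
glRotatef(model->rotation.z, 0, 0, 1);
glGetFloatv(GL_MODELVIEW_MATRIX, m); // column-major float[16]
glPopMatrix();

// The columns of the upper-left 3x3 are the rotated basis vectors:
// m[0..2] = local x axis, m[4..6] = local y axis, m[8..10] = local z axis.
// The local y axis is the ship's forward direction, so move along it.
model->position.x += m[4] * speed * dt;
model->position.y += m[5] * speed * dt;
model->position.z += m[6] * speed * dt;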

That helps a lot, thanks. My main misconception was that OpenGL inherently handles this "relative" movement internally, knowing the "local" orientation of the vertices being drawn.

Is it then analogous to the operations required in 2D space to translate at an angle, i.e. sine and cosine calculations? I assume it must be, only performed directly with matrices?

You're right, though - the discussion is moving further away from OpenGL and more towards independent math topics!

But I now know that the specific area I need to investigate further is the general math behind it, not the OpenGL API specifically.

Thanks again.

Quote:
My main misconception was that OpenGL inherently handles this "relative" movement internally, knowing the "local" orientation of the vertices being drawn.
Yup, no such luck :) OpenGL neither knows nor cares about that stuff - it has no concept of 'object motion', 'local orientation', or anything of that sort. In the fixed-function pipeline, it just applies a series of transforms in matrix form to the input data (the programmable pipeline doesn't even do that much - with the PP, transforming of the input data is entirely up to you).
Quote:
Is it then analogous to the operations required in 2D space to translate at an angle, i.e. sine and cosine calculations?
Yes, it can be. In 2-d, you typically see something like this (or a vector version thereof):
x += cos(angle) * speed * dt;
y += sin(angle) * speed * dt;
If you're dealing with Euler or spherical angles, you can do something similar in 3-d as well.
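For instance, a sketch under assumed conventions (y up, forward along -z at zero angles, pitch applied about the local x axis before yaw about the world y axis):

// Forward vector from Euler angles, applied to the default forward (0, 0, -1):
x += -cos(pitch) * sin(yaw) * speed * dt;
y +=  sin(pitch) * speed * dt;
z += -cos(pitch) * cos(yaw) * speed * dt;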
Quote:
I assume it must be, only performed directly with matrices?
You can use either matrices or angles in both 2-d and 3-d. In 3-d especially, using matrices is more straightforward, IMO, and is the approach I'd recommend. (What I'm referring to here is building a transform matrix for the object, and then extracting the direction vectors from the matrix. Note that once you have the transform matrix, it can be uploaded directly to OpenGL via glLoad/MultMatrix*(), allowing you to bypass convenience functions such as glRotate*() entirely.)
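
As a sketch of that parenthetical (assuming m is your own column-major float[16], with the rotation in the upper-left 3x3 and the translation in elements 12-14):

glPushMatrix();
glMultMatrixf(m); // apply the hand-built transform in one call
model->Draw();
glPopMatrix();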
