I'm following some newer OpenGL tutorials, since the ones I had worked through before were badly outdated (circa 2000), and I have some questions about the relationship between shaders and VBOs.
My vertex shader code is this:
#version 330 core
layout(location = 0) in vec3 vertexPosition_modelspace;
uniform mat4 MVP;
void main()
{
    // option 1: apply the MVP matrix
    vec4 v = vec4(vertexPosition_modelspace, 1.0);
    gl_Position = MVP * v;

    // option 2: pass the position through unchanged
    gl_Position.xyz = vertexPosition_modelspace;
    gl_Position.w = 1.0;
}
(Only one of the options is enabled at a time; I put both here to save space.)
And the drawing code is this:
glBindBuffer(GL_ARRAY_BUFFER, object->_VertexID);
glBindBuffer(GL_ELEMENT_ARRAY_BUFFER, object->_VertexIndexID);
//glEnableClientState(GL_VERTEX_ARRAY); // legacy path, pairs with glVertexPointer()
//glVertexPointer(3, GL_FLOAT, 0, 0);   // I was using this originally
glEnableVertexAttribArray(0);
glVertexAttribPointer(0, 3, GL_FLOAT, GL_FALSE, 0, 0);
glDrawElements(GL_TRIANGLES, object->GetTotalIndexes(), GL_UNSIGNED_INT, 0);
glDisableVertexAttribArray(0);
Most VBO tutorials I find use the glVertexPointer() function, but it seems that when the shader has more than one input array (such as vertices, colors and normals), I need to use glVertexAttribPointer() instead, matching its first argument to the layout(location = X) qualifier in the shader code.
I don't fully understand this relationship: does it mean the buffer currently bound in the C++ code is fed to the layout(location) input in my shader?
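As far as I can tell, the mapping works roughly like this. A sketch (not runnable on its own, it needs a live GL context; the interleaved layout, the attribute indices 0/1/2 and the shader input names are my assumptions, chosen to match the layout(location) qualifiers):

```
// Shader side (assumed):
//   layout(location = 0) in vec3 position;
//   layout(location = 1) in vec3 color;
//   layout(location = 2) in vec3 normal;
GLsizei stride = 9 * sizeof(GLfloat); // 3 pos + 3 color + 3 normal floats per vertex
glBindBuffer(GL_ARRAY_BUFFER, vbo);   // source buffer captured by the calls below
glEnableVertexAttribArray(0);
glVertexAttribPointer(0, 3, GL_FLOAT, GL_FALSE, stride, (void*)0);
glEnableVertexAttribArray(1);
glVertexAttribPointer(1, 3, GL_FLOAT, GL_FALSE, stride, (void*)(3 * sizeof(GLfloat)));
glEnableVertexAttribArray(2);
glVertexAttribPointer(2, 3, GL_FLOAT, GL_FALSE, stride, (void*)(6 * sizeof(GLfloat)));
```

So each glVertexAttribPointer() call records the currently bound GL_ARRAY_BUFFER plus an offset/stride, and its first argument is the attribute index that layout(location = X) refers to, which is how the bound data ends up in a specific shader input.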
Also, I've read a bunch of OpenGL tutorials and discussions, and plenty of them talk about matrix calculations, but I never bothered to understand what they meant: I had always reached the results I wanted without them. Since I keep finding more and more of this in newer tutorials, I'm curious. For example, one tutorial shows how to calculate the "view" of each object with matrices (using GLM):
mat4 mProjection = glm::perspective(45.0f, 4.0f / 3.0f, 0.1f, 100.0f);
mat4 mView = glm::lookAt(
    glm::vec3(0, 0, 5), // camera position (must differ from the target)
    glm::vec3(0, 0, 0), // position to look at
    glm::vec3(0, 1, 0)  // head-up vector
);
mat4 mModel = glm::mat4(1.0f); // model transform (identity here)
mat4 mMVP = mProjection * mView * mModel;
Then I send mMVP to the shader and, using option 1, I get the correct view of the object(s) on screen.
However, I was achieving the same result by doing this:
glLoadIdentity();
glTranslatef(-_camera.targetX, -_camera.targetY, -_camera.targetZ); //Looking to Position
glRotatef(_camera.RotationX(), 1.0f, 0.0f, 0.0f); //0-360 values for rotations
glRotatef(_camera.RotationY(), 0.0f, 1.0f, 0.0f);
glTranslatef(-_camera.posX, -_camera.posY, -_camera.posZ); //camera position
//Draw Objects
And with this, I used option 2 in the shader.
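For reference, the deprecated calls above seem to map almost one-to-one onto GLM, since each gl* call right-multiplies the current matrix. A sketch (needs GLM; the variable names are mine, and I'm assuming a GLM version whose rotate overload takes radians, hence glm::radians):

```
glm::mat4 mView = glm::mat4(1.0f);                                     // glLoadIdentity()
mView = glm::translate(mView, glm::vec3(-targetX, -targetY, -targetZ)); // glTranslatef(-targetX, ...)
mView = glm::rotate(mView, glm::radians(rotX), glm::vec3(1, 0, 0));     // glRotatef(rotX, 1, 0, 0)
mView = glm::rotate(mView, glm::radians(rotY), glm::vec3(0, 1, 0));     // glRotatef(rotY, 0, 1, 0)
mView = glm::translate(mView, glm::vec3(-posX, -posY, -posZ));          // glTranslatef(-posX, ...)
```

Keeping the calls in the same order matters, because matrix multiplication is not commutative: translating then rotating is not the same as rotating then translating.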
On my machine, option 2 was about 25% faster.
But I don't know whether the shader calculations are done on the graphics card (meaning my processor was faster at computing them) or not (in which case it's the reverse).
Another option I was considering was not calculating the matrices in the C++ code at all, but sending them to the shader and letting it compute the final product, though I'm guessing that would be even slower.
I've read that I shouldn't use FPS as a performance metric because it isn't linear, but I don't know any other way to compare two methods and figure out which is better.
So, what's the generally accepted method?
If you have arguments for why one should use matrices instead of the gl* functions (although I've read that glRotatef() is deprecated), I'd love to hear them as well. Most tutorials and books don't say which is better; it's usually "do this and you get the results you want".
I plan on developing games for mobile, so I'm really concerned about performance, and out of curiosity I'd also really like to understand this better.