OpenGL and reuse of vboarray


Hello.
I don't understand the use in OpenGL of
glVertexAttribPointer(glGetAttribLocation(shaderProgram1, "VertexPosition"), 3, GL_FLOAT, GL_FALSE, sizeof(Vertexes), BUFFER_OFFSET(sizeof(float) * 3));
and of
glEnableVertexAttribArray(glGetAttribLocation(shaderProgram1, "VertexPosition"));
What do they do, precisely?
Are they relative to the shader or to the VBO?
I have a vboarray and I wish to use it for many render calls, so I would do:
[code]
//1) first use of the vboarray with shaderProgram1
glUseProgram(shaderProgram1);
glBindVertexArray(vboarray);
glDrawArrays(GL_TRIANGLES, 0, TOTAL_VERTICES);
//2) second use of the vboarray with shaderProgram2
glUseProgram(shaderProgram2);
glBindVertexArray(vboarray);
glDrawArrays(GL_TRIANGLES, 0, TOTAL_VERTICES);
[/code]
Must I call glVertexAttribPointer and glEnableVertexAttribArray for each different shader?
Thanks.

no, vertex attribs are part of vertex array objects. you call it a VBOarray, which it isn't really
it's an encapsulated thingamajig that contains everything that's necessary to render your data, save for uniforms (and matrices)

to create a VAO:
generate vao, vbo
bindvertexarray vao
bind vbo, upload data
enable vertex attribs
set vertex attrib pointers
unbind vao

now VBO is gone, unless you are in compatibility mode, in which case only the vertex attribs are part of the object (VAO)
VAO used to be a state object to encapsulate vertex attributes, in modern openGL it is the recipe to render the model
so, if you are in core 3.x and later: only VAO remains!
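
here's a minimal sketch of that setup (hypothetical names; assumes an interleaved position + normal vertex and a vertices array already filled):
[code]
// hypothetical interleaved vertex: position (3 floats) followed by normal (3 floats)
typedef struct { GLfloat px, py, pz, nx, ny, nz; } MyVertex;

GLuint vao, vbo;
glGenVertexArrays(1, &vao);
glGenBuffers(1, &vbo);

glBindVertexArray(vao);                              // start recording vertex state into the VAO
glBindBuffer(GL_ARRAY_BUFFER, vbo);
glBufferData(GL_ARRAY_BUFFER, sizeof(vertices), vertices, GL_STATIC_DRAW);

glEnableVertexAttribArray(0);                        // position at index 0
glVertexAttribPointer(0, 3, GL_FLOAT, GL_FALSE, sizeof(MyVertex), (GLvoid*)0);
glEnableVertexAttribArray(1);                        // normal at index 1
glVertexAttribPointer(1, 3, GL_FLOAT, GL_FALSE, sizeof(MyVertex), (GLvoid*)(3 * sizeof(GLfloat)));

glBindVertexArray(0);                                // unbind: the VAO remembers all of the above
[/code]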

to render:
startProgram(myshader)
bindvertexarray
gldrawarrays / gldrawelements

vertex attributes (glVertexAttrib*) are what let you describe your own vertex structure to openGL for use in modern shaders
you specify a slot number (index), component count, type, normalization, stride, and an offset into the data of the CURRENTLY bound VBO
this is because you can have 10 VBOs in a single VAO, each feeding different attributes in your shader (see the sketch below)
usually you have only 1 VBO with all attributes interleaved, still feeding the different attributes in your shader
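
for example, the multiple-VBO case could look like this (a sketch with hypothetical names positionVbo / colorVbo; both end up captured by the same VAO):
[code]
glBindVertexArray(vao);

glBindBuffer(GL_ARRAY_BUFFER, positionVbo);          // tightly packed vec3 positions
glEnableVertexAttribArray(0);
glVertexAttribPointer(0, 3, GL_FLOAT, GL_FALSE, 0, (GLvoid*)0);

glBindBuffer(GL_ARRAY_BUFFER, colorVbo);             // tightly packed vec3 colors in a second VBO
glEnableVertexAttribArray(1);
glVertexAttribPointer(1, 3, GL_FLOAT, GL_FALSE, 0, (GLvoid*)0);

glBindVertexArray(0);
[/code]
each glVertexAttribPointer call captures whichever VBO is bound to GL_ARRAY_BUFFER at that moment, which is why several VBOs can live in one VAO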

let's look at your glVertexAttribPointer call:
for one, you are saying that your vertex positions start at byte offset 3 * sizeof(GLfloat), which i don't believe
i think they start at byte 0 in your vertex structure, which is the most common position for the vertex position

let's say they are located at index 0 in your shader, and they start at byte 0 offset from your vertex structure,
then location = 0, and offset = 0

glVertexAttribPointer(location, 3, GL_FLOAT, GL_FALSE, sizeof(Vertexes), (GLvoid*) offset );

if that's the case, then the above line is saying:
i have bound a VAO, and i have bound a VBO. this VBO has an attribute that starts at the first byte of my data, data whose layout openGL otherwise has no clue about
it's located at slot: "location" because that's what openGL told me, or i told openGL (see below)
it has 3 components (a vec3), and is of type GL_FLOAT, which is GLfloat on your side of the interface, meaning a 32-bit floating point variable of size 4 bytes

openGL now knows that to access this data in your shader, it needs to read 4 * 3 bytes at a stride of sizeof(Vertexes) for each vertex
however, the data isn't fully defined yet, so openGL needs to know whether or not it's normalized
imagine that you were passing a 4-byte color (R, G, B, A): when you accessed it in your shader you'd get values from 0 to 255 (8 bits per channel), or 0 to 65535 (16 bits per channel), and you'd have to convert that data yourself to work with it properly, since normalized values (0.0 to 1.0, or -1.0 to 1.0) are much easier to do calculations with!

normals are another example of something that can be normalized, unless you are using GL_FLOAT (if you are using floating point precision you might as well normalize them properly yourself!)
if you were using GL_BYTE instead, then normalize should be GL_TRUE
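
a small sketch of those two cases (hypothetical index numbers and hypothetical colorOffset / normalOffset byte offsets into an interleaved vertex struct):
[code]
// 4 unsigned bytes of RGBA color, normalized to 0.0 .. 1.0 when read in the shader
glEnableVertexAttribArray(2);
glVertexAttribPointer(2, 4, GL_UNSIGNED_BYTE, GL_TRUE, sizeof(MyVertex), (GLvoid*)colorOffset);

// 3 signed bytes for a normal, normalized to -1.0 .. 1.0 when read in the shader
glEnableVertexAttribArray(1);
glVertexAttribPointer(1, 3, GL_BYTE, GL_TRUE, sizeof(MyVertex), (GLvoid*)normalOffset);
[/code]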

back to your own code example:
since you specified GL_FALSE for your vertex position in the normalize parameter, openGL will not attempt to normalize your vertex position, and instead pass it as-is
however, there are many situations when you'd want to do that!

hope this helps!

as for the location of the attribs in your shader, you have 3 options:
1. use layout(location = index) in vec4 in_vertex; in the shader to hard-code the index
OR
2. use glGetAttribLocation to get index from shader, [s]which requires you to have the shader bound when doing this[/s] (not recommended)
OR
3. use glBindAttribLocation(program, index, attrib_name) just before glLinkProgram(program)
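
for example, option 3 might look like this (a sketch; program and in_vertex are hypothetical names):
[code]
// fix the index BEFORE linking, so every program you do this for agrees on it
glBindAttribLocation(program, 0, "in_vertex");
glLinkProgram(program);
[/code]
and option 1 is the same idea expressed in the shader itself: layout(location = 0) in vec4 in_vertex;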

i recommend the last one :) Edited by Kaptein

glEnableVertexAttribArray tells OpenGL to turn on a particular generic vertex attribute array. glVertexAttribPointer tells OpenGL where to start, and how to traverse memory when reading in vertex data (either by way of a CPU memory pointer, or, if a buffer is bound to GL_ARRAY_BUFFER, an offset into that buffer).


For an example, let's assume you have the points of a triangle in a buffer that is laid out like so:
position = {[1.0,2.0,3.0 ][2.0,3.0,3.0 ][3.0,2.0,3.0 ]};

and a vertex shader with one of the following declared:

attribute vec3 VertexPosition; //OpenGL 2
in vec3 VertexPosition; //OpenGL 3

First off, you've got the glGetAttribLocation(shaderProgram1, "VertexPosition"); call. This is giving you the index of the generic array used to supply VertexPosition with data. Initially this array has no data bound to it, and is disabled. Suppose we do this first:

GLint positionLoc = glGetAttribLocation(shaderProgram1, "VertexPosition");

By calling glEnableVertexAttribArray(positionLoc), you're telling openGL to start pulling data from that generic attribute array. Currently there is no data bound to it though, so if you try to draw now you'll crash.

"glVertexAttribPointer(positionLoc, 3, GL_FLOAT, GL_FALSE, 3, 0);" fixes this, by telling OpenGL where to start in the buffer object bound to GL_ARRAY_BUFFER (at offset 0), how to interpret the values there (as floats), whether to normalize them (in this case, no), and finally how to traverse the data (as 3 sets of 3 elements).

As the post above me points out, a VAO will encapsulate all of your enable / bind calls into one nice glBindVertexArray(vaoName); call. I recommend using it, as core profile contexts in later versions of OpenGL will not draw without a VAO bound. Edited by Koehler

Thanks very much, but a question:
1) Is it possible to reuse a VAO with another shader?
2) If I reuse the VAO with another shader, must I call glVertexAttribPointer and glEnableVertexAttribArray again for the other shader?
I read about uniform buffers that can be bound to n shaders, that is their strength.
Is it the same thing with a VAO?

a VAO isn't bound to a specific shader, but the VAO does specify the indexes or "slots" that it will ask openGL to use when rendering
that means that the indexes specified must be the same for all the shaders (see my post above for how to achieve this)
but it doesn't mean that the attributes have to be the same, because you can cheat and turn on and off certain attributes that you don't want to use with different shaders
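
a sketch of that idea (assume the VAO has position at index 0 and normal at index 1, and the hypothetical second shader only reads position):
[code]
glBindVertexArray(vao);

glUseProgram(shaderProgram1);                 // reads position (0) and normal (1)
glDrawArrays(GL_TRIANGLES, 0, TOTAL_VERTICES);

glDisableVertexAttribArray(1);                // second shader doesn't need normals
glUseProgram(shaderProgram2);                 // reads only position (0)
glDrawArrays(GL_TRIANGLES, 0, TOTAL_VERTICES);
glEnableVertexAttribArray(1);                 // restore the VAO's state for next time

glBindVertexArray(0);
[/code]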

Thanks.
If I use the same shader program (the same GLSL source) but compiled and linked as another shader, the attributes (layout) are the same (I use OpenGL 3.2). Can I rebind the VAO and draw with the second shader without problems?
Bye.

That is correct, giugio. As long as you've either used glBindAttribLocation or layout specifiers in your shaders to make sure attributes in different shaders are bound to the same attribute indices, you can reuse a VAO without any problem. Remember that glBindAttribLocation must be used BEFORE linking your shader. Edited by Koehler
