Drawing one GL_TRIANGLES element with glDrawElements makes a quad


Hey everyone, I'm basically slowly losing my mind :(

I have the following code:


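// (Assumed from context: vertices.type() is GL_ARRAY_BUFFER and indices.type() is GL_ELEMENT_ARRAY_BUFFER.)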
glBindBuffer(vertices.type(), vertices.id());
  glBindBuffer(indices.type(), indices.id());
    glEnableVertexAttribArray(0);
    glVertexAttribPointer(0, 4, GL_FLOAT, GL_FALSE, 0, nullptr);
    glDrawElements(GL_TRIANGLES, indices.size() * 3, GL_UNSIGNED_INT, 0);
  glBindBuffer(indices.type(), 0);
glBindBuffer(vertices.type(), 0);

The above code outputs the image below. vertices holds a number of xyzw-format vertices, and indices holds three-tuples of vertex indices (one per triangle). In this concrete case, indices.size() is 1, so the call to glDrawElements is


glDrawElements(GL_TRIANGLES, 3, GL_UNSIGNED_INT, 0);

This is confirmed by glslDevil, which shows exactly the above call. How can this possibly yield more than one triangle?
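To make the data layout concrete, here is roughly what ends up in the index buffer for the single active face (a simplified sketch, not my actual deserializer code - indexBufferId is just an illustrative name, and w is filled in as 1 when deserializing):


// Sketch of the assumed index buffer contents for the single active face (f 4 3 7)
// in the model below, already converted to 0-based indices.
std::vector<GLuint> indexData = { 3, 2, 6 };

glBindBuffer(GL_ELEMENT_ARRAY_BUFFER, indexBufferId);
glBufferData(GL_ELEMENT_ARRAY_BUFFER,
             indexData.size() * sizeof(GLuint),   // 3 indices in total
             indexData.data(), GL_STATIC_DRAW);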

This is the cube model I am using (I account for the 1-based indexing used by OBJ when deserializing):


#
# object Box001
#

v  -0.2500 -0.2500 -0.7500
v  -0.2500 0.2500 -0.7500
v  0.2500 0.2500 -0.7500
v  0.2500 -0.2500 -0.7500
v  -0.2500 -0.2500 0.7500
v  0.2500 -0.2500 0.7500
v  0.2500 0.2500 0.7500
v  -0.2500 0.2500 0.7500
# 8 vertices

g Box001
#f 1 2 3
#f 3 4 1
#f 5 6 7
#f 7 8 5
#f 1 4 6
#f 6 5 1
f 4 3 7
#f 7 6 4
#f 3 2 8
#f 8 7 3
#f 2 1 5
#f 5 8 2
# 12 faces

I have a vertex shader doing perspective projection that I've cobbled together without really understanding the math. Here's the shader:


#version 430

layout(location = 0) in vec4 position;

uniform mat4 perspective;
void main()
{
    vec4 offsetPos = position + vec4(0.5f, 0.5f, 0, 0);
    gl_Position = perspective * offsetPos;
}

The perspective uniform above is given the following value (computed on the CPU side) when bound:


matrix<4, 4> perspective(float frustumScale, float z_near, float z_far)
{
  matrix<4, 4> mat = { 0 };

  mat[0][0] = frustumScale;
  mat[1][1] = frustumScale;
  mat[2][2] = (z_far + z_near) / (z_near - z_far);
  mat[3][2] = (2 * z_far * z_near) / (z_near - z_far);
  mat[2][3] = -1;

  return  mat;
}
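
For reference, the matrix ends up in the shader's perspective uniform roughly like this (a sketch - it assumes matrix<4, 4> stores its 16 floats contiguously in column order, and program and the parameter values are illustrative):


// Sketch of the upload. No transpose is needed if matrix<4, 4> stores its
// floats contiguously, column by column - hence GL_FALSE.
matrix<4, 4> proj = perspective(1.0f, 0.5f, 3.0f);          // illustrative values
GLint location = glGetUniformLocation(program, "perspective");

glUseProgram(program);
glUniformMatrix4fv(location, 1, GL_FALSE, &proj[0][0]);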

As far as I know, it should be impossible for the perspective shader to make 3 vertices appear as 4? If I just render all 12 faces of the cube above, it looks alright, but the individual faces (the ones affected by the perspective projection, which wouldn't be visible in an orthographic projection) all look skewed. What am I doing wrong? Do you need more information? Thanks for any help; I'm really a newbie to graphics programming in general.

[attached image: triangle_is_quad.png]


I'm not sure what you're using to load the OBJ, but are you absolutely sure that indices.size() returns a 1 there? It seems like there should be 36 indices in the list. Also, there should be no need to multiply this number by three. The count parameter of glDrawElements should be the total number of indices that you want to draw. If you want to draw three, just put 3. If you want to draw all of them, use indices.size(). You shouldn't ever have to use indices.size()*3, though.
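In other words, if indices held the individual index values rather than triples, the call would look something like this (a sketch - loadObjIndices is a made-up name):


// count is the total number of indices to read from the bound element array
// buffer, not the number of triangles: 3 per triangle, 36 for the whole cube.
std::vector<GLuint> indices = loadObjIndices("box001.obj");   // hypothetical loader

glDrawElements(GL_TRIANGLES,
               static_cast<GLsizei>(indices.size()),          // total index count
               GL_UNSIGNED_INT,
               nullptr);                                       // offset 0 into the index buffer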

Thanks for the reply!

I am using a self-made OBJ deserializer. It handles comments, so all lines starting with # are excluded - there is currently only one face, from vertex 4 to 3 to 7 (which is 3, 2, 6 with 0-based indexing).

The reason I do indices.size() * 3 is that the size is actually the number of triangles (OBJ faces), and as far as I know glDrawElements takes the total number of indices (3 in this case).

I have verified that indices.size() is 1 and that the call matches the one in my first post. If I use, for example, a face that directly faces the 'camera', it gets rendered correctly as a triangle (which is why I suspected my perspective shader). Any ideas?

Oh yes, sorry. I missed the commented-out faces, and also the fact that you already checked the call with glslDevil. My apologies.

I haven't taken the time to go over your frustum math yet, but your intuition seems correct. I don't know how to get 4 corners out of something just by changing the projection. Is it possible that this is a long triangle that is being clipped by the near and/or far plane? That's a shot in the dark.

I think your projection matrix is wrong; maybe it turns out as a "quad" because of some weird clipping or something? See if this helps for creating the projection matrix: http://www.songho.ca/opengl/gl_projectionmatrix.html

Or, you could just use an existing math library before implementing your own. I'd suggest glm: http://glm.g-truc.net/
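For example, building and uploading the projection with glm looks roughly like this (a sketch - projectionLocation is just an illustrative uniform location, and recent glm versions take the field of view in radians):


#include <glm/glm.hpp>
#include <glm/gtc/matrix_transform.hpp>   // glm::perspective
#include <glm/gtc/type_ptr.hpp>           // glm::value_ptr

// perspective(fovy, aspect, near, far); glm matrices are column-major,
// which is what OpenGL expects, so no transpose on upload.
glm::mat4 projection = glm::perspective(glm::radians(90.0f), 1.0f, 0.1f, 100.0f);
glUniformMatrix4fv(projectionLocation, 1, GL_FALSE, glm::value_ptr(projection));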

Also, you probably won't see anything if you're just using a perspective matrix to transform the vertices. You could try translating them along the z-axis by -5 or something; -z is into the screen.

This is how I would do it on the C++ side:


// perspective(fov, aspect, near, far)
modelViewProjection = perspective(90.0f, 1.0f, 0.1f, 100.0f) * translation(0, 0, -5);
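
If you keep rolling your own math, a translation() in the same style as your perspective() could look like this (a sketch - it assumes the same matrix<4, 4> type with mat[column][row] indexing, i.e. column-major):


matrix<4, 4> translation(float x, float y, float z)
{
  matrix<4, 4> mat = { 0 };

  mat[0][0] = 1;
  mat[1][1] = 1;
  mat[2][2] = 1;
  mat[3][3] = 1;

  mat[3][0] = x;   // the translation lives in the last column
  mat[3][1] = y;
  mat[3][2] = z;

  return mat;
}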

And something like this in the vertex shader:


uniform mat4 ModelViewProjection;

in vec4 position;

void main() {
	gl_Position = ModelViewProjection * position;
}

And make sure the w is 1 in positions.

Derp

Thanks everyone, you were both right: I had defined the near clipping plane too far away from the origin, and the triangle was very long and narrow, so the near plane clipped a corner off it - clipping a corner off a triangle leaves a quad, which here looked like the side of an almost-cube.

The amount of time I spent trying different things infuriates me (4-5 hours), but at this stage every type of fiddling around teaches me something...

For example, I realized that OpenGL expects matrices in column-major order. I also found out that transposing both matrices reverses the order of a matrix multiplication [aT * bT = (b * a)T]. Yep, being a noob equals having a great time!

Anyway, I did as you (Sponji) suggested and implemented the translation via the input matrix instead. Now it's just on to learning about quaternions, Euler angles, rotation matrices and that stuff... Or is that not the right way to achieve rotation? I was thinking of storing translation/rotation with my (as of yet) static models - an (x, y, z) translation and an (x, y, z, w) quaternion - and applying them like perspective(...) * translation(v) * rotation(q). Does that make sense?

To rotate around, say, the origin as the pivot, you apply the rotation matrix last (rightmost), which means that operation is applied to the vertices first?
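For illustration, a sketch of that composition (rotation(q) stands for a hypothetical quaternion-to-matrix helper): with column vectors the rightmost matrix is applied to the vertices first, so this rotates about the origin, then translates, then projects.


// Illustrative only - the rightmost matrix acts on the vertex first:
// clipPos = perspective * (translation * (rotation * position))
matrix<4, 4> mvp = perspective(1.0f, 0.5f, 3.0f)     // illustrative parameters
                 * translation(0.0f, 0.0f, -5.0f)
                 * rotation(q);                       // q: the model's stored (x, y, z, w) quaternion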

