# OpenGL glDrawElements is drawing all of my vertices with one vertex at the origin *solved*


## Recommended Posts

Ok, so I've written an OBJ model loader. All seems to be well; I've tested to make sure that the arrays holding the vertex and index information are 100% spot on, and they are. Yet for some reason, when I call glDrawElements, all of my triangles share one point at the origin, so instead of rendering a sphere or a cube, it ends up looking like a bloomin' onion instead:

(Note: I'm calling glDrawElements twice, once with triangles and once with lines; this is how I saw that they all shared one point at the origin.)
Here is what my render method looks like (note: I am using OpenGL ES):
gl.glEnableClientState(GL10.GL_VERTEX_ARRAY);
gl.glVertexPointer(3, GL10.GL_FLOAT, 0, fbVertices);
gl.glDrawElements(GL10.GL_TRIANGLES, ((fbVertices.array().length - 1) * 3), GL10.GL_UNSIGNED_SHORT, ibIndices);
gl.glDisableClientState(GL10.GL_VERTEX_ARRAY);

You can see the specific model class here (it says it's C, but it's actually Java; codepad doesn't support Java yet).

(I'm using Eclipse, the Android SDK 2.1, and OpenGL ES.)

Thanks in advance for any help. I'm really stuck, and it's quite frustrating :)

Edit: I believe that my problem is where I am calling glDrawElements and defining the size. I assumed that since my FloatBuffer was holding an array of each vertex, I could just pass the size of that, but then it renders about 1/3 of the points. So I tried passing fbVertices.array().length * 3, and that seems to render all but about 2 or 3 of the points, but no matter how much I play with it, adding one or subtracting one, it still renders funky ???

[Edited by - Silv3rLogic on September 28, 2010 7:20:16 PM]

##### Share on other sites
Are you properly adjusting your indices to start from 0 instead of from 1? As I recall, in OBJ the first element is indexed as '1' instead of '0'; this screws up a lot of people who just treat OBJ indices as glDrawElements indices.

##### Share on other sites
I'm pretty sure they're indexed correctly, but I could be wrong. Both the vertex and index data are packed into buffers the same way, so they both look something like this internally:
float[] vertices = { 1.0f, 0.0f, 0.0f ... };
int[] indices = { 1, 2, 3 ... };

##### Share on other sites
Well I don't really follow what you were trying to say with your last post, but let me elaborate.

In your OBJ file you have a bunch of vertices:

v (first)
v (second)
v (third)
v (fourth)
v (fifth)

To draw a triangle with the first three vertices, you get a face entry something like this:

f 1// 2// 3//

If you grab those face indices and throw them into OpenGL, you're getting something totally different. You upload your array of five vertices, and then you say "hey, draw a triangle with indices 1, 2, and 3".

However, vertex[1] in OpenGL represents the SECOND vertex in your list, when you really meant the first vertex (which is index 0). So if you send the indices from OBJ straight to OpenGL, all your vertices will be off by one, and you'll get the kind of jumbled polygon mush that you see in your images.

Basically, when you read indices from OBJ, subtract 1 from them.
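
For illustration, here is a minimal sketch of that conversion in Java (the method name and face handling are just placeholders, not code from the actual loader):

```java
// Hypothetical helper: turn the 1-based vertex indices of an OBJ "f" line
// into the 0-based indices that glDrawElements expects.
static short[] parseFace(String line) {
    String[] parts = line.trim().split("\\s+");              // e.g. ["f", "1//3", "2//5", "3//7"]
    short[] triangle = new short[3];
    for (int i = 0; i < 3; i++) {
        String vertexIndex = parts[i + 1].split("/")[0];      // keep only the vertex index before the first '/'
        triangle[i] = (short) (Short.parseShort(vertexIndex) - 1); // OBJ counts from 1, OpenGL from 0
    }
    return triangle;                                          // "f 1//3 2//5 3//7" -> { 0, 1, 2 }
}
```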

##### Share on other sites
I was just trying to describe how I parse the OBJ file and store the coordinates in arrays. I understand what you are saying, but I just can't get it to work right. I tried offsetting the indices by -1, and now my cube looks like a paper airplane. The strange thing is, it renders the first 2 or 3 triangles just fine, then the rest are all jumbled.

This is basically what my engine is doing:

> The OBJ loader opens the OBJ file and parses it line by line.
> When a line beginning with "v" is found, it adds a float[] containing the 3 vertex values to an ArrayList.
> The same thing happens with the indices: when a line beginning with "f" is found, it parses the first value (the index) of each group, so if I had
f 1//3 2//5 3//7
it would add { 1, 2, 3 } to the ArrayList.
> When the file is done being parsed, a FloatBuffer is built by iterating through the ArrayList and adding each individual value (so essentially it takes the ArrayList and flattens it into one array containing all of the values in order). The same thing happens with the indices (except it's an IntBuffer).

These two buffers (essentially just arrays containing all of the values in order) are passed to OpenGL, roughly as in the sketch below.
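
Here is a rough sketch of that packing step (the class and method names are placeholders, not the actual model class, and the index buffer is shown as a ShortBuffer so that its type matches GL_UNSIGNED_SHORT):

```java
import java.nio.ByteBuffer;
import java.nio.ByteOrder;
import java.nio.FloatBuffer;
import java.nio.ShortBuffer;
import java.util.ArrayList;

// Hypothetical helper: packs the parsed ArrayLists into the direct buffers
// that glVertexPointer and glDrawElements read from.
class ObjBuffers {
    static FloatBuffer buildVertexBuffer(ArrayList<float[]> vertexList) {
        FloatBuffer fb = ByteBuffer.allocateDirect(vertexList.size() * 3 * 4) // 4 bytes per float
                .order(ByteOrder.nativeOrder()).asFloatBuffer();
        for (float[] v : vertexList) fb.put(v);   // append x, y, z of every "v" line in order
        fb.position(0);                           // rewind so GL reads from the start
        return fb;
    }

    static ShortBuffer buildIndexBuffer(ArrayList<short[]> faceList) {
        ShortBuffer sb = ByteBuffer.allocateDirect(faceList.size() * 3 * 2)   // 2 bytes per short
                .order(ByteOrder.nativeOrder()).asShortBuffer();
        for (short[] f : faceList) sb.put(f);     // append the three 0-based indices of every "f" line
        sb.position(0);
        return sb;
    }
}
```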

##### Share on other sites
The count that you pass to glDrawElements should be ibIndices.array().length, surely? glDrawElements takes the number of indices, not vertices.

Have you tried with a simpler OBJ file of a quad? Maybe try that and paste the index and vertex lists here for us to check out (assuming the quad also fails to draw correctly).

Also, your index array appears to be an int[], whereas your call to glDrawElements states that the indices are GL_UNSIGNED_SHORT. This will cause it to read 2 bytes per index rather than 4.
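
To make that concrete, here is a small sketch of those two fixes together (not code from the project; it reuses the gl/GL10 names from the render method above, and the index data is just an example):

```java
// Store the indices as shorts in a direct ShortBuffer, so that the
// GL_UNSIGNED_SHORT type passed to glDrawElements actually matches the data.
short[] indices = { 0, 1, 2, 0, 2, 3 };                       // already 0-based
ShortBuffer ibIndices = ByteBuffer.allocateDirect(indices.length * 2)
        .order(ByteOrder.nativeOrder()).asShortBuffer();
ibIndices.put(indices);
ibIndices.position(0);

gl.glDrawElements(GL10.GL_TRIANGLES,
        indices.length,                  // count = number of indices, not vertices
        GL10.GL_UNSIGNED_SHORT,          // matches the short[] storage above
        ibIndices);
```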

##### Share on other sites
Okay, I changed my code a little. I am now passing the size of the index array rather than the vertex array (it seems to be rendering 1/3 of the model now). I also changed all of the int[] arrays to short[] arrays, and am now passing GL_UNSIGNED_BYTE rather than GL_UNSIGNED_SHORT, but the results are the same.

I also created a simple triangle.obj file:
# Simple Wavefront file
v 0.0 0.0 0.0
v 0.0 1.0 0.0
v 1.0 0.0 0.0
f 1 2 3

This seems to render just fine, yet my cube will not:

cube.obj
# Blender3D v249 OBJ File:
# www.blender3d.org
v 1.000000 -1.000000 -1.000000
v 1.000000 -1.000000 1.000000
v -1.000000 -1.000000 1.000000
v -1.000000 -1.000000 -1.000000
v 1.000000 1.000000 -1.000000
v 1.000000 1.000000 1.000001
v -1.000000 1.000000 1.000000
v -1.000000 1.000000 -1.000000
f 5 1 4
f 5 4 8
f 3 7 8
f 3 8 4
f 2 6 3
f 6 7 3
f 1 5 2
f 5 6 2
f 5 8 6
f 8 7 6
f 1 2 3
f 1 3 4

[Edited by - Silv3rLogic on September 28, 2010 5:06:21 PM]

##### Share on other sites
Sweet, thanks rewolfer, you were right. I changed all of my int[] arrays to short[] arrays and continued to pass GL_UNSIGNED_SHORT to glDrawElements. I also had to offset the indices by -1, and now it is rendering beautifully ;D

I even whipped up a pretty complex model with objects, normals, tex coords, and everything, and it is working like a charm. I knew I wasn't crazy!
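
For anyone who finds this thread later, the corrected render call ends up looking roughly like this (same caveats as the sketches above: the indices are shorts and already shifted down by one, and the count comes from the index buffer rather than the vertex buffer):

```java
gl.glEnableClientState(GL10.GL_VERTEX_ARRAY);
gl.glVertexPointer(3, GL10.GL_FLOAT, 0, fbVertices);
gl.glDrawElements(GL10.GL_TRIANGLES,
        ibIndices.capacity(),            // one short per index, so capacity == index count
        GL10.GL_UNSIGNED_SHORT,          // matches the short index storage
        ibIndices);
gl.glDisableClientState(GL10.GL_VERTEX_ARRAY);
```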

[Edited by - Silv3rLogic on September 28, 2010 6:45:05 PM]

##### Share on other sites
haha awesome. good job!

##### Share on other sites
Yes, this is a very old topic, but thank you guys so much, not only for being specific about what you did to fix it but also for being specific about the problem. I had all of my triangles drawing the third vertex at the origin, and I was pulling my hair out all weekend.

Again, I am sorry to bump an old post but you both deserve medals.

Cheers,
Chanz
