# OpenGL ES 2.0: How to obtain indices from a Wavefront OBJ file?

## Recommended Posts

Hello.

I'm developing an Android application with OpenGL ES 2.0.

I want to load a mesh exported from Blender 2.49b in Wavefront OBJ format. I have identified the vertex positions, normals and texture coordinates.

I want to use glDrawElements(GLenum mode, GLsizei count, GLenum type, const GLvoid *indices) to draw my mesh but I don't know how I can obtain the last parameter, indices, from a Wavefront OBJ file.

Are faces the 'indices' that I'm looking for?

I've found this on a forum:
faces = indices starting with 1!

What is the meaning of 'starting with 1!'?

Thanks.

##### Share on other sites
The indices are the f parameters.
They look something like this (depending on what data you exported):
f x/x/x x/x/x x/x/x

x being an arbitrary number.

The indices in OBJ files are 1-based array indices, meaning that x=1 is the first vertex/texcoord/normal.
The indices in most programming languages are 0-based array indices, meaning that x=0 is the first element.

This simply means that when you load the indices into your application, subtract 1 from them and you'll be fine on that front.
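A tiny sketch of that subtraction while parsing (names and the helper are illustrative, assuming the face line has already been split into `v/vt/vn` tokens):

```cpp
#include <sstream>
#include <string>

// Parse one "v/vt/vn" token from an OBJ face line and convert the
// 1-based OBJ indices to the 0-based indices C/C++ arrays use.
struct FaceIndex { int v, vt, vn; };

FaceIndex parseFaceToken(const std::string& token) {
    FaceIndex fi{};
    std::istringstream ss(token);
    char slash;
    ss >> fi.v >> slash >> fi.vt >> slash >> fi.vn;
    fi.v -= 1; fi.vt -= 1; fi.vn -= 1; // OBJ counts from 1, arrays from 0
    return fi;
}
```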

Another issue is that OBJ exports shared data: positions, texcoords and normals are indexed independently.
This means that a triangle consisting of 3 vertices, texcoords and normals might look like this:
f 1/2/4 2/10/1 3/2/4

Where every triple is v/t/n (vertex/texcoord/normal) if I remember correctly.
This doesn't work with OpenGL, since it expects a single index per vertex that is shared by all attributes, as in
1/1/1 3/3/3 8/8/8

This problem is harder to solve - you'll need to search for vertices with different normals/texcoords and duplicate them accordingly, since they are actually different vertices.
This is why it's much easier to draw OBJ files with glBegin/glEnd, and I can only assume OBJ files look like this because glBegin/glEnd and the DirectX equivalent were the only options back then.
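One way to do that duplication (a sketch, not the only approach): keep a map from each distinct v/t/n triple to the index of the output vertex already emitted for it, and only append a new vertex when the triple is unseen. All names here are made up for illustration; in a real loader you would copy the actual position/texcoord/normal data instead of the raw triple.

```cpp
#include <array>
#include <map>
#include <vector>

// Collapse OBJ-style v/t/n triples into a single index stream:
// identical triples share one output vertex, differing triples
// become duplicated vertices.
std::vector<unsigned> buildIndices(
    const std::vector<std::array<int, 3>>& faceTriples,
    std::vector<std::array<int, 3>>& outVertices) {
    std::map<std::array<int, 3>, unsigned> seen;
    std::vector<unsigned> indices;
    for (const auto& t : faceTriples) {
        auto it = seen.find(t);
        if (it == seen.end()) {
            it = seen.emplace(t, (unsigned)outVertices.size()).first;
            outVertices.push_back(t); // real code: copy pos/uv/normal here
        }
        indices.push_back(it->second);
    }
    return indices;
}
```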

As a last thought: OBJ is a needlessly awkward format, and I have no idea why anyone designed it this way.
It's easier both for the computer and for humans to write linear data, and it would load an order of magnitude faster...

If you can make a simple script for Blender, you can export something like this:
```
4        // number of elements
-1 1 0   // vertex 0 ...
-1 -1 0
1 -1 0
1 1 0
0 0      // texcoord 0 ...
0 1
1 1
1 0
0 0 1    // normal 0 ...
0 0 1
0 0 1
0 0 1
```

And if you write that out in binary, loading the file would take three memcpy's instead of all the OBJ parsing pain.
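A rough sketch of that loader, under the assumption that the binary file stores a count followed by tightly packed float arrays (the layout and names are invented here, not any standard format):

```cpp
#include <cstring>
#include <vector>

// Hypothetical binary layout (invented for illustration):
// [uint32 count][count*3 float positions][count*2 float texcoords][count*3 float normals]
struct Mesh {
    std::vector<float> positions, texcoords, normals;
};

Mesh loadPacked(const unsigned char* data) {
    Mesh m;
    unsigned count = 0;
    std::memcpy(&count, data, sizeof count);
    const unsigned char* p = data + sizeof count;

    m.positions.resize(count * 3);  // memcpy #1: positions
    std::memcpy(m.positions.data(), p, m.positions.size() * sizeof(float));
    p += m.positions.size() * sizeof(float);

    m.texcoords.resize(count * 2);  // memcpy #2: texcoords
    std::memcpy(m.texcoords.data(), p, m.texcoords.size() * sizeof(float));
    p += m.texcoords.size() * sizeof(float);

    m.normals.resize(count * 3);    // memcpy #3: normals
    std::memcpy(m.normals.data(), p, m.normals.size() * sizeof(float));
    return m;
}
```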

Another solution is to make an OBJ -> your format converter in case you don't know how to write a script for Blender.

Now seriously... what were they thinking when they designed such a format?

##### Share on other sites

I've found this on a forum:
faces = indices starting with 1!

What is the meaning of 'starting with 1!'? Should I subtract one from all the values?

##### Share on other sites
http://en.wikipedia.org/wiki/Obj

I assume that for every vertex the position, texcoord and normal are stored in your OBJ file. Your data will look something like this:

# faces
f 6/4/1 3/5/3 7/6/5
f 8/3/7 4/7/2 6/9/2
f 4/5/3 6/6/1 4/6/1
...

f v1/vt1/vn1 v2/vt2/vn2 v3/vt3/vn3
So basically you have three indices for every vertex. In order to use this data for rendering, you need to unpack it.

Array<Vector3> vertices;
Array<Vector3> texcoords;
Array<Vector3> normals;

- read all vertices from the OBJ and store them in vertices
- read all texcoords from the OBJ and store them in texcoords
- read all normals from the OBJ and store them in normals

// Now read the # faces section and fetch the correct v, vt and vn according to the indices. For every face you read, add the data to the new lists:
Array<Vector3> verticesUnpacked;
Array<Vector3> texcoordsUnpacked;
Array<Vector3> normalsUnpacked;
Array<int> indicesUnpacked;

f v1/vt1/vn1 v2/vt2/vn2 v3/vt3/vn3

// Vertices for this face (the - 1 converts the 1-based OBJ indices)
Vector3 vertex1 = vertices[v1 - 1];
Vector3 vertex2 = vertices[v2 - 1];
Vector3 vertex3 = vertices[v3 - 1];

// Texcoords for this face
Vector3 tex1 = texcoords[vt1 - 1];
Vector3 tex2 = texcoords[vt2 - 1];
Vector3 tex3 = texcoords[vt3 - 1];

// Normals for this face
Vector3 normal1 = normals[vn1 - 1];
Vector3 normal2 = normals[vn2 - 1];
Vector3 normal3 = normals[vn3 - 1];

// Create the new indices. This will automatically build the correct index buffer, because the faces are now unpacked and in order.

Now your indexbuffer corresponds with the vertexbuffer, texcoordbuffer and normalbuffer.
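Putting the steps above together, the unpacking loop might look like this (a sketch; `Vec3`/`Vec2`/`FaceCorner` and the pre-filled source arrays stand in for whatever types your loader uses, and the `- 1` converts the 1-based OBJ indices):

```cpp
#include <vector>

struct Vec3 { float x, y, z; };
struct Vec2 { float u, v; };
struct FaceCorner { int v, vt, vn; }; // 1-based indices from one f-token

// Unpack OBJ-style shared data into parallel per-vertex arrays plus a
// trivial 0, 1, 2, ... index buffer suitable for glDrawElements.
void unpack(const std::vector<FaceCorner>& corners,
            const std::vector<Vec3>& vertices,
            const std::vector<Vec2>& texcoords,
            const std::vector<Vec3>& normals,
            std::vector<Vec3>& verticesUnpacked,
            std::vector<Vec2>& texcoordsUnpacked,
            std::vector<Vec3>& normalsUnpacked,
            std::vector<unsigned short>& indicesUnpacked) {
    for (const FaceCorner& c : corners) {
        verticesUnpacked.push_back(vertices[c.v - 1]);
        texcoordsUnpacked.push_back(texcoords[c.vt - 1]);
        normalsUnpacked.push_back(normals[c.vn - 1]);
        indicesUnpacked.push_back((unsigned short)indicesUnpacked.size());
    }
}
```

The `indicesUnpacked` array ends up as 0, 1, 2, 3, ... which is exactly what gets passed as the last argument of glDrawElements (with `GL_UNSIGNED_SHORT` as the type).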

##### Share on other sites
Is there a mistake here?

// Create the new indices. This will automatically build the correct index buffer, because the faces are now unpacked and in order.

Now your indexbuffer corresponds with the vertexbuffer, texcoordbuffer and normalbuffer.

What are you trying to do with these?

What indices (vertex, texcoord or normal indices) must I use with glDrawElements?

Thanks.

##### Share on other sites
When you are unpacking, the vertices are placed in order into the new array:
the 1st triangle has vertex indices 0, 1, 2, the 2nd triangle 3, 4, 5, the 3rd triangle 6, 7, 8, etc.

When you do this in a loop, the code below will create the correct index buffer for the new unpacked buffers.
// unpacking #faces section

indicesUnpacked.Add(indicesUnpacked.size()); // size() is 0 here, so index 0 is added
indicesUnpacked.Add(indicesUnpacked.size()); // size() is 1 here, so index 1 is added
indicesUnpacked.Add(indicesUnpacked.size()); // size() is 2 here, so index 2 is added

// next loop/face

indicesUnpacked.Add(indicesUnpacked.size()); // size() is 3 here, so index 3 is added
indicesUnpacked.Add(indicesUnpacked.size()); // size() is 4 here, so index 4 is added
indicesUnpacked.Add(indicesUnpacked.size()); // size() is 5 here, so index 5 is added

etc

or you can use:

int size = indicesUnpacked.size();

indicesUnpacked.Add(size);     // adds index size + 0
indicesUnpacked.Add(size + 1); // adds index size + 1
indicesUnpacked.Add(size + 2); // adds index size + 2

##### Share on other sites
Ok. Now, I've understood.

Thanks.

##### Share on other sites
I've tried this method and it doesn't work for me.

I'm trying to draw a cube and I'm getting a flat square.

These are my vertex and fragment shaders:

```cpp
static const char* vertexShader =
    "attribute vec4 vertexPosition;\n"
    "uniform mat4 modelViewProjectionMatrix;\n"
    "void main() {\n"
    "    gl_Position = modelViewProjectionMatrix * vertexPosition;\n"
    "}\n";

static const char* fragmentShader =
    "precision mediump float;\n"
    "void main() {\n"
    "    gl_FragColor = vec4(0.0, 0.0, 1.0, 1.0);\n"
    "}\n";
```

[Edited by - VansFannel on December 10, 2010 5:03:12 AM]
