Create a parser for a Blender .obj file to use in OpenGL


The data is generally organized well and easily translates to graphics APIs.

Most people are going to disagree with that. The data is fine for OpenGL glBegin/glEnd code, and fine if you don't mind cache misses, but it's absolutely lousy for vertex buffers (witness the questions on this and other sites from .obj users asking whether they can have multiple index buffers).

Direct3D has need of instancing, but we do not. We have plenty of glVertexAttrib calls.


The data is generally organized well and easily translates to graphics APIs. No, you aren't getting shaders here. Yes, it's not binary. In my experience, though, you are likely writing your own format anyway, complete with your own exporters for whatever application you are using. OBJ may be less efficient, but it's a good starting-off point.

Except it doesn't map at all to how GPUs work.

A GPU expects exactly one index per vertex that points to a complete set of vertex attributes (such as position, normal, texcoord). OBJ uses a separate index for each attribute, optimizing the file format so that identical elements share the same index.
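
To make that concrete, here is a small illustrative fragment (not from the original post). In an OBJ face line, each corner is written as position/texcoord/normal, with each index pointing 1-based into its own list:

v 0.0 0.0 0.0
v 1.0 0.0 0.0
v 1.0 1.0 0.0
vt 0.0 0.0
vt 1.0 1.0
vn 0.0 0.0 1.0
f 1/1/1 2/2/1 3/2/1

Here every corner has a distinct position index, but all three share normal #1 and two of them share texcoord #2.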

Which means, for example, that on a simple textured model with smooth normals (a very common thing), the faces sharing a given vertex will reference the same index for the normal, and likewise the same index for the texcoord (though not necessarily the same index as the normal). In order to get OpenGL to grok this, you must replicate the normal and texcoord data (looked up via their respective indices) into a set of unique vertices, each containing its own copy of that data. It is a nuisance, and loading is quite slow.

It is much, much easier to use a readily available library like Assimp to load whatever format you want into GPU-compatible buffers. Optionally, you can write a tool of your own using Assimp and save those "binary chunks" to disk in your own format. When the game loads, it just reads the whole bunch of data into memory in one block, calculates a few pointer offsets, and throws everything at OpenGL. It will work, and it will work fast: load time is dominated by how long the disk takes to shove the data into RAM.
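
A minimal sketch of that approach using Assimp's C API (the file name and post-processing flags are just one plausible choice, not anything prescribed in this thread):

#include <assimp/cimport.h>
#include <assimp/postprocess.h>
#include <assimp/scene.h>
#include <cstdio>

int main() {
    // Let Assimp triangulate and merge identical vertices, so the result
    // maps directly onto one vertex buffer plus one index buffer.
    // "model.obj" is a placeholder path.
    const aiScene* scene = aiImportFile("model.obj",
        aiProcess_Triangulate | aiProcess_JoinIdenticalVertices);
    if (!scene) {
        std::fprintf(stderr, "load failed: %s\n", aiGetErrorString());
        return 1;
    }
    // Assumes the file contains at least one mesh.
    const aiMesh* mesh = scene->mMeshes[0];
    // mVertices / mNormals / mTextureCoords[0] are parallel arrays with one
    // entry per unique vertex -- exactly what glVertexAttribPointer wants.
    std::printf("%u vertices, %u triangles\n",
                mesh->mNumVertices, mesh->mNumFaces);
    aiReleaseImport(scene);
    return 0;
}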

Alright, my apologies. I don't disagree that using a single vertex buffer and index buffer per subset is better than using one each for positions, normals, and texture coordinates. I do disagree that collapsing vertices and indices down is difficult. Identifying unique vertices is a matter of looking at the triplet of indices specifying a vertex.

Like I said, your game probably isn't going to be using any standard format natively. If you just want to load a standard model file to mess around with, writing an OBJ loader is quick to do. I honestly thought it was pretty fun and educational. Could your time be spent better elsewhere? Probably, but it's not like you are wasting a month trying to convince COLLADA or something to load. Assimp is a fine solution. But if you want to spend a day or so, you can write an effective OBJ loader.

I do disagree that collapsing vertices and indices down is difficult. Identifying unique vertices is a matter of looking at the triplet of indices specifying a vertex.

It's not about collapsing them down though; it's that .obj has them already collapsed down in a format that's not suitable for use in vertex buffers.

Direct3D has need of instancing, but we do not. We have plenty of glVertexAttrib calls.

I think I'm misunderstanding you. I thought the complaint with how OBJ files store vertices is that there are separate lists for positions, normals, and texture coordinates. By "collapse them down" I mean make a single vertex list where each vertex has a position, normal, and texture coordinate, and a single index buffer that indexes into that list.

Let's say, for example, that the OBJ data contains something like this:

Index (1,1,1), (2,1,2), (3,3,2), ...
Pos   (x1,y1,z1), (x2,y2,z2), ...
Norm  (nx1,ny1,nz1), (nx2,ny2,nz2), ...
Tex   (u1,v1), (u2,v2), ...

where the entries in Pos, Norm, and Tex are all distinct tuples, i.e. you do not find two positions or two normals that are the same. The three indices in an Index triplet are not necessarily all the same; for example, the second vertex, (2,1,2), uses position #2 and texture coordinate #2, but normal #1.


The GPU (or OpenGL, for that matter) expects that information in a representation like this:

Index  1, 2, 3, ...
Vertex [(x1,y1,z1), (nx1,ny1,nz1), (u1,v1)],
       [(x2,y2,z2), (nx2,ny2,nz2), (u2,v2)],
       [(x3,y3,z3), (nx3,ny3,nz3), (u3,v3)], ...

where there is only one value in Index per vertex, and where (nx2,ny2,nz2) == (nx1,ny1,nz1) and (u3,v3) == (u2,v2). However, you do not tell OpenGL that you know about that relationship; you simply broadcast the duplicate values to every vertex that uses them.

It is of course not too hard to convert one representation to the other, but it is still kind of awkward juggling those variable numbers of indices as you read them from a text file, and it takes considerable work both in writing the code and at runtime.
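
For what it's worth, here is a minimal sketch of that conversion in C++, assuming the face index triplets (position, normal, texcoord) have already been parsed from the file; all names here are illustrative, not from the thread:

#include <array>
#include <cstdint>
#include <map>
#include <vector>

struct Vec3 { float x, y, z; };
struct Vec2 { float u, v; };

// One GPU-ready vertex: position, normal, and texcoord replicated together.
struct Vertex { Vec3 pos; Vec3 norm; Vec2 tex; };

// Collapse OBJ-style (position, normal, texcoord) index triplets into a
// single vertex array plus a single index array: each distinct triplet
// becomes exactly one vertex, and repeated triplets reuse its index.
void collapseTriplets(const std::vector<std::array<int, 3>>& triplets,
                      const std::vector<Vec3>& positions,
                      const std::vector<Vec3>& normals,
                      const std::vector<Vec2>& texcoords,
                      std::vector<Vertex>& outVertices,
                      std::vector<std::uint32_t>& outIndices)
{
    std::map<std::array<int, 3>, std::uint32_t> seen;
    for (const auto& t : triplets) {
        auto it = seen.find(t);
        if (it == seen.end()) {
            // New triplet: emit one vertex with the duplicate data baked in.
            // OBJ indices are 1-based, hence the -1.
            Vertex v{ positions[t[0] - 1], normals[t[1] - 1], texcoords[t[2] - 1] };
            it = seen.emplace(t, static_cast<std::uint32_t>(outVertices.size())).first;
            outVertices.push_back(v);
        }
        outIndices.push_back(it->second);
    }
}

The two output arrays can then go straight into glBufferData as GL_ARRAY_BUFFER and GL_ELEMENT_ARRAY_BUFFER contents.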

Which means you spend extra time writing code, only to end up with a game that loads a lot slower than it could have without that effort.

If the goal is to get a game finished, writing that code is the kind of useless-work-that-gets-you-nowhere that one usually wants to avoid when there is already a readily available, well-tested, functional library that does the job.

From an academic point of view, it might be interesting to write a vertex shader that takes multiple indices as input (via a vertex attribute) and pulls the actual data from a set of buffer textures. That way, it would be possible to consume OBJ data directly. However, this would totally defeat the post-transform cache, so it would only work reasonably well on ATI cards (which have a poor cache implementation but compensate for it with higher ALU throughput).
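
Purely as a sketch of that idea (the names, attribute locations, and buffer formats below are assumptions, not anything from the thread), the shader side might look like this, with the raw index triplet supplied as an integer vertex attribute and the untouched OBJ arrays bound as buffer textures:

#include <string>

// Hypothetical vertex shader: each vertex carries its raw OBJ index
// triplet and fetches the actual attribute data itself via texelFetch.
// Bind the attribute with glVertexAttribIPointer(0, 3, GL_INT, 0, nullptr)
// and attach the OBJ arrays as GL_TEXTURE_BUFFER objects (glTexBuffer
// with GL_RGB32F for positions/normals, GL_RG32F for texcoords).
const std::string multiIndexVS = R"(
#version 330 core
layout(location = 0) in ivec3 objIndices; // (position, normal, texcoord), 0-based

uniform samplerBuffer positions; // GL_RGB32F
uniform samplerBuffer normals;   // GL_RGB32F
uniform samplerBuffer texcoords; // GL_RG32F
uniform mat4 mvp;

out vec3 vNormal;
out vec2 vTexCoord;

void main() {
    vec3 pos  = texelFetch(positions, objIndices.x).xyz;
    vNormal   = texelFetch(normals,   objIndices.y).xyz;
    vTexCoord = texelFetch(texcoords, objIndices.z).xy;
    gl_Position = mvp * vec4(pos, 1.0);
}
)";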
