
## Modeling and coding

Old topic!

Guest, the last post of this topic is over 60 days old and at this point you may not reply in this topic. If you wish to continue this conversation start a new topic.

4 replies to this topic

### #1 PsionicTransvection (Members)


Posted 29 March 2012 - 01:44 AM

So my question is: when you make a 3D model or 3D animation in, let's say, 3D Studio Max, and then export it, how do you actually connect it to the source code of the game?

### #2 spek (Prime Members)


Posted 29 March 2012 - 01:57 AM

In essence, a 3D file is nothing more than a bunch of vertex data, related to each other via polygons (for example, triangle 3 is made of vertices 5, 6 and 800). More advanced file formats may also contain info about the textures and materials being used, or a scene setup: where the camera is, what lights there are, rotations/movements of objects, et cetera. But let's keep it simple and focus on the 3D geometry data only, thus the polygons.
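To make that concrete, here is a minimal C++ sketch of the idea (the struct and field names are just illustrative, not any real format): a shared pool of vertex positions, plus triangles that refer into that pool by index.

```cpp
#include <vector>

// Illustrative only: a mesh as raw vertex positions plus triangles that
// refer to those positions by index ("triangle 3 is made of vertices 5, 6 and 800").
struct Vec3 { float x, y, z; };

struct Mesh {
    std::vector<Vec3> vertices; // shared pool of corner positions
    std::vector<int>  indices;  // 3 consecutive entries = 1 triangle
};

// Fetch corner c (0..2) of triangle t by looking its index up in the vertex pool.
inline Vec3 corner(const Mesh& m, int t, int c) {
    return m.vertices[m.indices[t * 3 + c]];
}
```

Note how several triangles can share one vertex: the index list refers to the same pool entry more than once, which is exactly why this representation is compact.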

"Connecting 3D files with the source code" is a bit of a strange term. What really happens is that a program, engine, editor, or whatever application, reads the 3D file. Then converts it to its own internal format, which is, again, a bunch of triangles. The conversion is often needed to optimize performance and memory usage. And possibly to discard unneeded data or to add extra data that was not present in the 3D file. For example, if I import a OBJ file, it may contain secundary UV coordinates I don't need in my game, or lacks tangents. This conversion step may takes some time, so it's not uncommon that engines save the converted object to its own fileformat first. That makes loading them faster the next time.

Once the vertex data has been read from a file, it will be stored in the computer's RAM and/or video card memory for usage. Typically you'll send an array of vertices or triangles to the video card in a certain way (order/format) so the video card knows how to build a real 3D model out of it. Additional stuff like lighting or texturing the model is done with shaders or render API (DirectX, OpenGL) commands. The vertex data usually helps tell how the texture should be wrapped onto the model, via its UV coordinates.

Really, it's not much different from reading a bitmap image. You draw it, save it. Then a program reads the file, converts it to an internal format optimal for drawing it quickly on the screen, and stores it in memory. Then drawing calls will fetch the data from memory and put it on the screen.

Rick

### #3 PsionicTransvection (Members)


Posted 29 March 2012 - 05:49 AM

Thank you for shedding some light on my incompetence. Can you suggest some more reading about it, or give a very simple example, e.g. some chunk of code?

### #4 spek (Prime Members)


Posted 29 March 2012 - 07:42 AM

That's not incompetence, it's just a matter of doing and all will become clear one day ;)

What kind of model formats do you plan to use? You could start with OBJ, a widely supported, simple format. It does not support animations as far as I know, though. You could also look at Collada files, which are based on XML and also pretty common nowadays. They're quite a lot harder to read, although once you have a proper XML reader... MS3D (Milkshape) files are also pretty simple to read, and do support animations. Anyway, first determine your needs and the programs you use for modelling, then pick a file format. Then just check the internet for a description of how those files are constructed.

OBJ files are text based. I don't know the exact format off the top of my head, but if you open one in Notepad you'll see (many) lines of coordinates. For example, a cube model could have:
v -1 -1 -1   # left bottom rear corner
v -1 -1 +1   # left bottom front corner
...
The "v" lines are vertices, made of an X, Y and Z value. In the cube case, you'll probably see 8 vertices. Aside from vertices, you'll probably also run into normals and texcoords. Normals are also XYZ values, but normalized, which means the vector has length 1 and each component is inside the -1..+1 range. Texcoords (UV) are usually made of 2 values (S and T), the horizontal and vertical coordinate. So, just store all those coordinates into arrays. One array with vertex coordinates, another array with normals, and so on.

The next step is to combine the coordinates. Models often use "indices", which are lookup index numbers. A cube can be made of 6 polygon quads, or 6 x 2 = 12 triangles. I suggest you triangulate the data for ease, as OpenGL or DirectX likes to work with triangles in the end. Now each triangle refers to 3 vertices, 3 normals, 3 texcoords, et cetera. If you didn't triangulate the model, polygons may refer to 4 vertices instead in the case of a cube.
// Read 3 vertex indices from the file
// (watch out: OBJ indices are 1-based, so subtract 1 before using them)
int vindex0, vindex1, vindex2;
vindex0 = ...
// Read 3 texcoord indices from the file
int tindex0, tindex1, tindex2;
...

triangle[ triCount ].vertex[0]   = vertexArray[ vindex0 ];
triangle[ triCount ].vertex[1]   = vertexArray[ vindex1 ];
triangle[ triCount ].vertex[2]   = vertexArray[ vindex2 ];
triangle[ triCount ].texcoord[0] = texcoordArray[ tindex0 ];
... and so on...
++triCount; // advance to the next triangle slot
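The quad-to-triangle split mentioned above can be sketched in a few lines (the function name is mine): corners 0-1-2-3 of a quad become triangles 0-1-2 and 0-2-3.

```cpp
#include <vector>

// Illustrative: split one quad (4 corner indices) into 2 triangles,
// so OpenGL/DirectX only ever sees triangles.
std::vector<int> triangulateQuad(const int q[4]) {
    return { q[0], q[1], q[2],   // first triangle
             q[0], q[2], q[3] }; // second triangle, sharing the 0-2 diagonal
}
```

Running this over all 6 quads of a cube gives the 6 x 2 = 12 triangles mentioned above.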


Other (binary) formats may pack the data a bit differently. For example, the program could decide that the following structs are stored in the file:
struct Vertex
{
    float vertexPos[3];
    float normal[3];
    float texCoord[2];
};

struct Triangle
{
    Vertex vert[3]; // triangle made of 3 vertices
};

struct Model
{
    int triangleCount;
    Triangle* tris; // dynamically sized array
};
------------------------------
setArrayLength( model.tris, model.triangleCount );
// Read the entire triangle buffer in a single call
file.read( model.tris, model.triangleCount * sizeof(Triangle) );

Although the principles are mostly the same, the details can differ for each format. That's why it's important to read about the file format you chose first.

Now that you have all the data loaded, you may want to convert it to your own format. This depends on what/how you want to render it. Let's say we use OpenGL and we want to use triangles, indices, normals, and texcoords. In that case you could make 4 arrays: one with the vertex coords, one with the normals, one with the texcoords, and another one with the indices. Make sure the vertices, normals and texcoords are all ordered in the same way, so that element x in the texcoord/normal array belongs to vertex[x] as well. Then you can draw the model like this:

// Send the normalArray to OpenGL
glEnableClientState( GL_NORMAL_ARRAY );
glNormalPointer( GL_FLOAT, 0, model.normalArray );

// Send the texcoordArray to OpenGL (2 floats per vertex: U and V)
glClientActiveTextureARB( GL_TEXTURE0_ARB );
glEnableClientState( GL_TEXTURE_COORD_ARRAY );
glTexCoordPointer( 2, GL_FLOAT, 0, model.texcoordArray );

// Send vertex coordinates to OpenGL
glEnableClientState( GL_VERTEX_ARRAY );
glVertexPointer( 3, GL_FLOAT, 0, model.vertexArray );

// Use indices to render the model (array with 16 bit uints)
// The count is the number of indices, thus triangleCount * 3
glDrawElements( GL_TRIANGLES, model.triangleCount * 3,
                GL_UNSIGNED_SHORT, model.indicesArray );

// And don't forget to disable when done
glDisableClientState( GL_VERTEX_ARRAY );
glDisableClientState( GL_NORMAL_ARRAY );
glClientActiveTextureARB( GL_TEXTURE0_ARB );
glDisableClientState( GL_TEXTURE_COORD_ARRAY );
Not sure if that works with modern OpenGL anymore though, as it prefers VBOs instead. They're a bit more work to set up, but more flexible and powerful at the same time. There are more roads to Rome; this is just one of them. But try to start simple to get a feeling for vertex data, indices and formatting.
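One practical wrinkle when building those parallel arrays from an OBJ file: OBJ faces index positions, texcoords and normals independently, while glDrawElements uses a single index per vertex. The usual conversion deduplicates each unique (position, texcoord, normal) index triple; a hedged C++ sketch, with names of my own invention:

```cpp
#include <map>
#include <tuple>
#include <vector>

// Illustrative: collapse per-corner (position, texcoord, normal) index
// triples into one shared index list, as glDrawElements expects.
struct IndexedMesh {
    std::vector<std::tuple<int,int,int>> uniqueCorners; // one entry per final vertex
    std::vector<unsigned short> indices;                // 3 consecutive = 1 triangle
};

IndexedMesh buildIndices(const std::vector<std::tuple<int,int,int>>& faceCorners) {
    IndexedMesh mesh;
    std::map<std::tuple<int,int,int>, unsigned short> seen;
    for (const auto& corner : faceCorners) {
        auto it = seen.find(corner);
        if (it == seen.end()) {
            // First time this exact combination appears: emit a new vertex.
            unsigned short newIndex = (unsigned short)mesh.uniqueCorners.size();
            it = seen.emplace(corner, newIndex).first;
            mesh.uniqueCorners.push_back(corner);
        }
        mesh.indices.push_back(it->second); // reuse the existing vertex otherwise
    }
    return mesh;
}
```

From `uniqueCorners` you would then fill the vertex/normal/texcoord arrays in matching order, and hand `indices` to glDrawElements.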

There are probably tutorials on sites like NeHe that show you how to read a file and how to render it.

Good luck!

### #5 fightergear (Members)


Posted 30 March 2012 - 11:07 AM

Wow, these are some great descriptions. Very useful information on what goes on when implementing 3D files, glad I came across this. Thanks for sharing some insight.
