OpenGL: How should I store vertex data for a mesh?


Hi all, I am a bit confused as to how I should store the vertices for a mesh. Should I have an array of vertices that are shared by polygons, plus an array of polygons that says which vertices to connect, like what you get from a model saved in the 3ds file format? Or should I have an array of floats holding the vertices in the order of the polygons?

Both methods seem to have pros and cons. With the former, data is not repeated, which has always seemed like a good thing to me, but how do I throw that data at OpenGL? With the latter I can easily send it to be rendered with a vertex array or whatever, but it seems wasteful, and there is no way to know which vertices are shared for calculating vertex normals etc.

So how do you guys/gals store your data, and pass it to OpenGL? Thanks in advance.

The modern way to go is to use vertex buffer objects (VBOs) when it comes to rendering. There you have vertices and faces, the latter given either implicitly as vertex sequences or (a bit more explicitly) as sequences of indices into the vertex array. Indexed vertices are most efficient when vertices can be shared between faces. Using any other representation means you have to convert.

The above works well for rendering. Animation, or even more so interactive editing of meshes, may make additional topological information worth keeping around.

In other words, the solution depends a bit on the situation.

I'm a little torn between vertex arrays and display lists atm (though I'll end up back with VBOs in the long run).

So I'll just pick up on your vertex normal comment: you can easily calculate one by taking the average of all connecting surface normals (the result will vary with tessellation and at any boundaries).
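That averaging can be sketched roughly like this (a minimal, illustrative implementation; the Vec3 helpers are my assumptions, not code from this thread):

```cpp
#include <cmath>
#include <vector>

struct Vec3 { float x, y, z; };

static Vec3 sub(Vec3 a, Vec3 b) { Vec3 r = {a.x-b.x, a.y-b.y, a.z-b.z}; return r; }
static Vec3 cross(Vec3 a, Vec3 b) {
    Vec3 r = {a.y*b.z - a.z*b.y, a.z*b.x - a.x*b.z, a.x*b.y - a.y*b.x};
    return r;
}
static Vec3 normalize(Vec3 v) {
    float len = std::sqrt(v.x*v.x + v.y*v.y + v.z*v.z);
    Vec3 r = {v.x/len, v.y/len, v.z/len};
    return r;
}

// Accumulate each face normal into its three vertices, then normalize.
std::vector<Vec3> vertexNormals(const std::vector<Vec3>& verts,
                                const std::vector<unsigned short>& tris)
{
    Vec3 zero = {0, 0, 0};
    std::vector<Vec3> normals(verts.size(), zero);
    for (size_t i = 0; i + 2 < tris.size(); i += 3) {
        Vec3 a = verts[tris[i]], b = verts[tris[i+1]], c = verts[tris[i+2]];
        Vec3 n = cross(sub(b, a), sub(c, a));  // un-normalized face normal
        for (int k = 0; k < 3; ++k) {
            Vec3& vn = normals[tris[i + k]];
            vn.x += n.x; vn.y += n.y; vn.z += n.z;
        }
    }
    for (size_t i = 0; i < normals.size(); ++i)
        normals[i] = normalize(normals[i]);
    return normals;
}
```

Leaving the face normals un-normalized before accumulating weights larger faces more heavily; normalizing them first gives a plain average instead. Either is a reasonable choice.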

Thanks, I didn't know that VBOs could use sequences of vertex indices. That sounds like the neat solution I'm searching for, so I guess I should start googling.

Do you know of any good tutorials?

Thanks again.

A vertex always describes all (needed) features of a surface at a given point. So it is a composite of the position itself, the normal, tangent, and bi-normal vectors at that position, the color(s) at that position, and the texture co-ordinates at that position, to name the most common ones. If you are using VBOs and even a single one of these values differs, you need to store a second vertex. Immediate mode allows you to assemble the vertex at runtime at CPU cost, but that way is slow and will no longer be part of core OpenGL as of version 3 (however, it will presumably still be available in a utility library).

The mesh representation as vertices and sequences of polygon corners (either as index lists or not) still has topological information about vertices, edges, faces, and partly shells. So you can of course find shared vertices in a VBO-like representation, although it is obviously very inefficient, since some dependencies are only given indirectly and hence the algorithm has to do many searches. On the other hand, as already stated in my previous post, how often do you encounter situations in which you still need to access the topology that way?

The project I'm currently working on deals with both interactive editing and, of course, display of large numbers of meshes. For the interactive editing, the project has a mesh structure named EditMesh. This kind of mesh has explicit knowledge of vertices, edges, loops, faces (n-gons), holes, shells, regions, and voids as first-class topological (and partly also geometrical) elements, and of edge shares as well as face shares as helper elements.

While a mesh is being edited, the relations between the elements are explicitly given in both top-down and bottom-up order. When it is not being edited, roughly half of the relations are not available; when editing starts, that half is reconstructed, and when editing ends, it is dropped. Reconstruction is fast enough as long as you are not dealing with meshes of a million vertices. The advantage is that, although many meshes are loaded simultaneously, the total memory consumption stays relatively small, since only one (or at most a few) meshes are edited at a time. Actually, the EditMesh is not VBO friendly, since it is a kind of multi-indexed array set.

Hence the above mesh representation is not suitable for fast rendering. For that purpose a totally different kind of mesh exists, namely the RenderMesh class. Whenever needed, the faces of the EditMesh are processed (i.e. triangulated) and a RenderMesh is computed from them. The RenderMesh, as you have already guessed, is very VBO friendly.

The computation of the RenderMesh is backed by the EditMesh, and hence has all geometrical and topological information at hand, including what is needed to decide how normals are to be computed (i.e. which faces contribute to which vertex normal). Although often only "the mesh is smoothed or not" may be used, the EditMesh allows a finer granularity. However, after the RenderMesh is computed, in many cases there is no further need to keep the EditMesh at hand until the next editing session. So, at the "compiled" level, EditMeshes are very rare, but RenderMeshes are found at every corner.

Well, many words just to say: use whatever is suitable in a given situation.


I think I understand what you're saying. What I want to know is what sort of data is stored in your RenderMesh class: is it just the arrays of vertex sequences?



_What_ is stored is principally: the vertex data (whatever is needed of position, normal, ...), zero or more index arrays, and the kind of primitive for each (index) array, e.g. TRIANGLES or LINES. This information is used to fill a GfxRenderingJob, which is actually just a vehicle for sorting rendering by shader and material settings; I'm sure you have already read about this approach.

If you're interested in _how_ it is stored: The nitty-gritty details are numerous...

Since the shaders may need various data, it cannot always be predicted which compositions of vertex data need to be passed to the API. Even if it were possible, the number of combinations may be too great. Hence the vertex data is actually stored in "unstructured" octet arrays, overlaid with (more or less) primitive types as and when needed. How the overlays are to be done is described in some metadata; e.g. the byte offset, primitive data type, and semantics are stored this way. (For performance reasons I don't use one such array per mesh but several, bigger arrays, but you don't necessarily need to care about that at the moment.) All of this is later used when the renderer actually performs a GfxRenderingJob, to parametrize the various glXxxPointer and related routines.
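A minimal sketch of that metadata idea, with made-up names (AttribDesc is an illustration of mine, not a real OpenGL type or the poster's class), might look like:

```cpp
#include <cstring>
#include <vector>

// Where in the octet array a given semantic lives; these are the same
// three values glVertexPointer-style calls need.
struct AttribDesc {
    size_t offset;      // byte offset of the first element
    size_t stride;      // bytes from one vertex to the next
    int components;     // e.g. 3 for a position or normal
};

// Write one float-vector attribute of one vertex into the raw buffer.
void writeAttrib(std::vector<unsigned char>& buf, const AttribDesc& d,
                 size_t vertexIndex, const float* values)
{
    size_t at = d.offset + vertexIndex * d.stride;
    std::memcpy(&buf[at], values, d.components * sizeof(float));
}

// Read it back, overlaying the primitive type on the octets.
void readAttrib(const std::vector<unsigned char>& buf, const AttribDesc& d,
                size_t vertexIndex, float* out)
{
    size_t at = d.offset + vertexIndex * d.stride;
    std::memcpy(out, &buf[at], d.components * sizeof(float));
}
```

With e.g. interleaved position+normal vertices, position would be {0, 24, 3} and normal {12, 24, 3}, and the same numbers would later feed glVertexPointer and glNormalPointer.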

Since situations exist where a mesh is so-called "static", presumably a copy will be made in VRAM or so (see the various GL_STATIC_DRAW, GL_DYNAMIC_DRAW, GL_STREAM_DRAW, ... usage modes). The renderer needs to know whether such a copy exists, or whether it needs to refresh the copy from main memory. It further needs to know which VBOs are associated with a mesh. This information is also available from the RenderMesh class (although not necessarily directly).

I store all my meshes as a vertex array and a triangle array.
My triangle structure also holds references to neighbouring triangles; that makes it easier to calculate vertex normals and to do efficient ray casting (e.g. using Plücker coordinates I only test one edge per triangle).

It also allows you to apply operations like edge collapse to reduce redundant triangles.

class Triangle
{
    uint m_Vertex[3];
    int m_Neighbour[3];
public:
    Triangle(uint a = 0, uint b = 0, uint c = 0)
    {
        m_Vertex[0] = a;
        m_Vertex[1] = b;
        m_Vertex[2] = c;
        m_Neighbour[0] = -1;   // -1 means no neighbour across this edge
        m_Neighbour[1] = -1;
        m_Neighbour[2] = -1;
    }
    const int& Neighbour(uint i) const { return m_Neighbour[i]; }
    int& Neighbour(uint i) { return m_Neighbour[i]; }
    const uint& Vertex(uint i) const { return m_Vertex[i]; }
    uint& Vertex(uint i) { return m_Vertex[i]; }
};

Basiror, thanks for the neat Triangle class, but I'm not quite ready for that yet.

OK, I'm having a few problems getting VBOs with indexing to work.

class Model
{
public:
    bool load(const char *pFileName);
    void render();

private:
    void createVBO();

    unsigned short mVertexQty;
    unsigned short mPolygonQty;

    float *mVertex;
    unsigned short *mPolygon;

    unsigned mVBOCoordinatesID;
    unsigned mVBOIndiciesID;
};

void Model::createVBO()
{
    glGenBuffers(1, &mVBOCoordinatesID);
    glGenBuffers(1, &mVBOIndiciesID);

    glBindBuffer(GL_ARRAY_BUFFER, mVBOCoordinatesID);
    glBufferData(GL_ARRAY_BUFFER, mVertexQty * 3 * sizeof(float), mVertex, GL_STREAM_DRAW);

    // The index buffer must be bound before glBufferData can fill it,
    // otherwise the call targets whatever element buffer is bound.
    glBindBuffer(GL_ELEMENT_ARRAY_BUFFER, mVBOIndiciesID);
    glBufferData(GL_ELEMENT_ARRAY_BUFFER, mPolygonQty * 3 * sizeof(unsigned short), mPolygon, GL_STREAM_DRAW);
}

void Model::render()
{
    glBindBuffer(GL_ARRAY_BUFFER, mVBOCoordinatesID);
    glBindBuffer(GL_ELEMENT_ARRAY_BUFFER, mVBOIndiciesID);

    glEnableClientState(GL_VERTEX_ARRAY);
    glVertexPointer(3, GL_FLOAT, 0, 0);

    // The indices are unsigned short, so the type must be GL_UNSIGNED_SHORT
    // (not GL_UNSIGNED_BYTE), and the count is the number of indices
    // (three per triangle), not the number of triangles.
    glDrawElements(GL_TRIANGLES, mPolygonQty * 3, GL_UNSIGNED_SHORT, 0);

    glDisableClientState(GL_VERTEX_ARRAY);
    glBindBuffer(GL_ARRAY_BUFFER, 0);
    glBindBuffer(GL_ELEMENT_ARRAY_BUFFER, 0);
}

Am I on the right track?
It doesn't crash, but there is a blob of polygons that doesn't look like the model at all.
