
Optimized vertex array usage


Hello, I have been playing around with rendering triangle models using vertex arrays. I first used glDrawElements to render models with vertex positions and texture coords. In that case I was rendering a terrain heightmap, with the vertex position data stored in row order, and used an index array to select the vertices in the correct drawing order (which was different from the order in which they were stored).

The problem I am faced with now is the following: if I want to render a model with vertex positions, normals and texture coordinates, I need to supply those three data chunks for each point. As far as I have understood it, with vertex arrays a single index always references the same position in all the arrays, right? Suppose a specific position is used by two triangles (which would hence share a vertex by indexing into a coordinate array), but the two triangles differ on the other data; they could belong to different smoothing groups, for example, so they would need different normals. If vertex arrays are limited in the way I described, we would not be able to share the position data, since an index can only point out one specific combination of position, texCoord and normal data at a time.

There are two easy solutions to this problem. The easiest is to supply a unique data set for each point and render the whole thing with glDrawArrays. Another way would be to start from that brute-force data, let an algorithm find places where identical data overlaps, and generate index sets for glDrawElements(). The problem here is that the set is highly likely to degenerate back into the original set quite fast. If, for example, we used a single texture for the whole object, with no repeating or mirroring, we would get unique texture coords for each vertex and would have the full data set again... And we must ask ourselves whether the data we save offsets the cost of sending the index array to GL.

Getting back to the issue: is there any way to use sharing for "full" data sets, or have I gotten it all wrong? In any event, if you have read all this text, you deserve a coke...

Regards Martin Persson, Hobby programmer, Sweden
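For reference, here is a minimal sketch of the constraint being described, using fixed-function OpenGL vertex arrays; the array contents and counts are placeholders, not data from the thread. It shows that one index stream drives every enabled array at once:

```cpp
#include <GL/gl.h>

/* Four vertices' worth of placeholder data. */
GLfloat  positions[4 * 3];   /* x, y, z   per vertex */
GLfloat  normals  [4 * 3];   /* nx, ny, nz per vertex */
GLfloat  texCoords[4 * 2];   /* u, v      per vertex */
GLushort indices  [6] = { 0, 1, 2,  2, 1, 3 };  /* two triangles */

void draw()
{
    glEnableClientState(GL_VERTEX_ARRAY);
    glEnableClientState(GL_NORMAL_ARRAY);
    glEnableClientState(GL_TEXTURE_COORD_ARRAY);

    glVertexPointer(3, GL_FLOAT, 0, positions);
    glNormalPointer(GL_FLOAT, 0, normals);
    glTexCoordPointer(2, GL_FLOAT, 0, texCoords);

    /* Index i selects positions[i], normals[i] and texCoords[i]
     * together; there is no way to give vertex 1 normal A in the
     * first triangle and normal B in the second without duplicating
     * the whole vertex. */
    glDrawElements(GL_TRIANGLES, 6, GL_UNSIGNED_SHORT, indices);
}
```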

Guest Anonymous Poster
Not sure I understand your question, but if you have a vertex with two different normals for two different polygons, you have to duplicate the vertex.

Hope it helps,

I think AP is correct. If I understand the question, the OP is asking whether he must duplicate, say, the vertex data for a point if there is more than one normal for that point, i.e. when the point is used in triangle #1 it has normal A, but when used in triangle #2 it has normal B.

I am also wondering about this problem, but to make matters worse I have normals, textures, *and* color values for each vertex. In that case, an indexed approach will almost certainly end up with nearly the same amount of vertex data as the brute-force approach (not counting the index buffer itself) in order to store each point accurately.

In database-speak, the primary key for a vertex table is not just the 3 spatial coordinates, but also the 3 normal coordinates, the 2 (or more) texture coordinates, and the 3/4 color values (see the sketch after this post).

The naive solution would be for a graphics API to allow an index buffer for each array (position, normal, texture, and color). Dunno if this would be overkill, though.
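To illustrate the "full tuple as primary key" idea, here is a hedged sketch of a welding pass over brute-force per-corner data; the struct layout, the function name weld, and the exact-match comparison are illustrative assumptions, not anything from the thread:

```cpp
#include <cstddef>
#include <cstring>
#include <map>
#include <vector>

struct FullVertex {
    float px, py, pz;   /* position */
    float nx, ny, nz;   /* normal   */
    float u, v;         /* texcoord */
    float r, g, b, a;   /* color    */
};

/* Byte-wise ordering: exact-match dedup only (it will not merge
 * values that differ in representation, e.g. 0.0 vs -0.0). */
struct KeyLess {
    bool operator()(const FullVertex& a, const FullVertex& b) const {
        return std::memcmp(&a, &b, sizeof(FullVertex)) < 0;
    }
};

/* Takes one FullVertex per triangle corner (the brute-force data set)
 * and produces a welded vertex list plus indices for glDrawElements. */
void weld(const std::vector<FullVertex>& corners,
          std::vector<FullVertex>& vertices,
          std::vector<unsigned short>& indices)
{
    std::map<FullVertex, unsigned short, KeyLess> seen;
    for (std::size_t i = 0; i < corners.size(); ++i) {
        std::map<FullVertex, unsigned short, KeyLess>::const_iterator
            it = seen.find(corners[i]);
        if (it == seen.end()) {
            unsigned short idx = (unsigned short)vertices.size();
            vertices.push_back(corners[i]);
            seen.insert(std::make_pair(corners[i], idx));
            indices.push_back(idx);
        } else {
            indices.push_back(it->second);
        }
    }
}
```

A vertex only gets shared when the *whole* tuple matches, which is exactly why meshes with unique texture coordinates or many smoothing groups weld poorly.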

If a vertex has different indices for any two attributes, you need to duplicate it. This actually isn't as inefficient as you might think, since the extra bandwidth for the vertices that get duplicated (generally not that many) is far less than the quadrupled index bandwidth of a per-attribute index scheme for most models.
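(To put rough, purely illustrative numbers on that: a vertex with a float position, normal and 2D texcoord is 32 bytes, and a 16-bit index is 2 bytes. For a 10,000-triangle mesh with about 5,500 unique vertices, single-indexed rendering costs roughly 5,500 × 32 ≈ 176 KB of vertex data plus 30,000 × 2 = 60 KB of indices. Even if seams and smoothing groups forced 20% of the vertices to be duplicated, that adds only about 35 KB, while switching to four separate index streams, one per attribute, would add another 180 KB of index data by itself.)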

Reply to GameCat:

Yes, you always save quite a bit of data with indices. The issue is that for the models I use, most of the texture coordinates are unique (one texture for the whole mesh), with little reuse... Doesn't this degenerate the performance back to a full set quite fast? I also have smoothing groups included, further reducing data reuse, since vertices shared between faces in different smoothing groups need different normals as well.

Well, I'll play around some and see what performance I get with the different methods.

Thanks for your time; good to know I at least understood the issue correctly.

Duplicate as soon as any coordinate differs, and use interleaved arrays, i.e. typical vertex structures holding position, texture, normal (and color?) coords in one struct (see the sketch below). This should bring more speed than separate arrays.

[edited by - Charles B on June 10, 2004 6:21:34 PM]
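Here is a minimal sketch of the interleaved layout being suggested; the struct name and field order are illustrative assumptions. All three client-state pointers walk the same buffer with the same stride, so each vertex is fetched as one contiguous chunk of memory:

```cpp
#include <GL/gl.h>
#include <cstddef>   /* offsetof */

struct Vertex {
    float u, v;          /* texture coords */
    float nx, ny, nz;    /* normal         */
    float x, y, z;       /* position       */
};   /* 8 floats = 32 bytes */

void setupArrays(const Vertex* verts)
{
    glEnableClientState(GL_TEXTURE_COORD_ARRAY);
    glEnableClientState(GL_NORMAL_ARRAY);
    glEnableClientState(GL_VERTEX_ARRAY);

    /* stride = sizeof(Vertex) tells GL how far apart consecutive
     * attributes of the same kind are within the single stream. */
    glTexCoordPointer(2, GL_FLOAT, sizeof(Vertex),
                      (const char*)verts + offsetof(Vertex, u));
    glNormalPointer(GL_FLOAT, sizeof(Vertex),
                    (const char*)verts + offsetof(Vertex, nx));
    glVertexPointer(3, GL_FLOAT, sizeof(Vertex),
                    (const char*)verts + offsetof(Vertex, x));
}
```

Incidentally, OpenGL has a canned name for exactly this texcoord/normal/position layout, GL_T2F_N3F_V3F, usable through glInterleavedArrays().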

Yes, GPUs prefer pulling everything from one stream. In fact there is even a certain order you would put the attributes in to make sure the GPU can access them correctly (I think it's u,v coord pairs first and x,y,z position last, but I'll have to check). Also try to keep vertex data sizes at multiples of 32 bytes; that's the size of the block the AGP bus sends in one transfer, so fitting data on that boundary helps with data fetches (see the sketch below).

Hmm, gonna go watch the ATI/NV vid on OGL optimising now to get a refresher on the data format stuff...
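A quick sketch of that 32-byte tip, assuming a position-plus-texcoord vertex (the field names and padding scheme are illustrative): the raw data is only 20 bytes, so it is padded up to 32 to sit on the transfer-block boundary the poster mentions.

```cpp
struct PaddedVertex {
    float u, v;          /*  8 bytes */
    float x, y, z;       /* 12 bytes */
    float pad[3];        /* 12 bytes of padding -> 32 bytes total */
};

/* Compile-time size check (a common C/C++ trick of the era):
 * the build fails if the struct is not exactly 32 bytes. */
typedef char assert_vertex_is_32_bytes[(sizeof(PaddedVertex) == 32) ? 1 : -1];
```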
