Hi! I was wondering what the best way is to store texture coordinates. Several tutorials I've read say that making data structures for vertices and polygons makes programming easier, and they recommend putting the x,y,z,u,v coordinates in the vertex structure.
However, this seems to be causing some problems with texture alignment. For instance, say I have a texture with "HELLO" going from left to right, and I want it textured on all 6 faces of a cube. If I put the U,V coordinates in the vertex data structure for the front face as
(0,1) = top left (1,1) = top right
(0,0) = bottom left (1,0) = bottom right
The word "HELLO" would be shown properly on the front face. But how do I make it show properly on the side faces as well (say, the face on the left), since the left vertices of the front face are now the right vertices of the left face? If I put these for the left face...
(-1,1) = top left (0,1) = top right
(-1,0) = bottom left (0,0) = bottom right
Then the top face will start having problems, since the top left vertex of the top face (which is the same vertex as the top left vertex of the left face) and the bottom left vertex of the top face (which is the same vertex as the top right vertex of the left face) are not aligned at the same U coordinate. So I'm wondering if anyone has any ideas on how to handle this situation... Do I separate the U,V coordinates from the vertex data structure?
(Hope my explanation can be understood...)
Yes, that is how I deal with it. I have a vertex list with just the positions and normals, then a poly list; each poly has indices to the proper vertices and the UV coordinates (among other poly-specific things, like a predetermined color and its own normal). Otherwise, you can make a texture with repeating HELLOs and use (0,0) and (.25,0) for the front face, etc. (I won't go into dealing with the top and bottom faces here.) Or you can create several addressing modes (ways to interpret tex coords) like in D3D (D3D supports 4 different modes, I think) and use (0,0) and (1,0) for the front face, (1,0) and (2,0) for the next face, and finally (3,0) and (0,0) for the last face. You then set the addressing mode and have your engine interpret the coordinates in a special way. In this case, you do a wrap-around: texcoord % 2 (1 is a special case), so 0 is 0, 1 is 1, 2 is 0, 3 is 1. Well, that's a brainwave, hope it starts a brainstorm. Later, Alex
Hmm... if you think about it, with that 0 = 0, 1 = 1, 2 = 0, 3 = 1 addressing method there'll be a face with (1,0) on the left vertices and (0,0) on the right vertices, causing the image to be flipped as well...
I was thinking more on the OpenGL front, because in OpenGL there's a command that takes vertex data from an array so that shared vertices don't need to be sent more than once. I'm wondering how OpenGL engines like Quake handle that. Anyone know?
Well, I don't know anything about OGL, but in D3D you handle that by using triangle strips: v1,v2,v3,v4 will make two triangles, (v1,v2,v3) and (v3,v2,v4). A whole cube can be sent out as a single triangle strip. In D3D I would use a proper texture addressing mode (there are modes that flip the image and others that don't; I leave the algorithm to you), and there must be something like that in OGL. Sorry I can't be of any more help.
You should store the texture coordinates by poly, not by vertex.
It's fine to store the info by vertex if you repeat the vertex (i.e. v1 is the top left corner of the front face and v2 is the top right corner of the left face; in actuality v1 = v2, but the program doesn't know that).
Since you're sharing the vertices, though, it doesn't make sense to do that. It'll cause problems when you're skinning any sophisticated mesh.
Yes, OpenGL does support triangle strips, but I'm trying to make the class able to support all types of objects.
I've figured out a method to handle this problem, but I have yet to implement it to see if it works. What I'm planning to do is have pointers to texture coordinates in the vertex data structure as well as in the polygon data structure. That way, for things like boxes, you can use the texture coordinates from the polygon structure; for complicated meshes where each vertex only has one texture coordinate, you can switch to the vertex's texture coordinates. Any comments on this method? I have yet to try it out to see if it works... Storing only pointers won't cause much of a performance drop, would it?