About 7zSuperiorCompressionServ

  1. [Quake 3 BSP Rendering] Questions on Bezier Patches

    Nevermind, I just figured it out. In case anyone else is stumped in the future, here's the solution:

        // The number of increments to make in each dimension, so that we hit
        // the (potentially) shared points between patches
        int stepWidth = ( face->size[ 0 ] - 1 ) / 2;
        int stepHeight = ( face->size[ 1 ] - 1 ) / 2;

        int c = 0;
        for ( int i = 0; i < face->size[ 0 ]; i += stepWidth )
            for ( int j = 0; j < face->size[ 1 ]; j += stepHeight )
                patchRenderer.controlPoints[ c++ ] = &map->vertexes[ face->vertexOffset + j * face->size[ 0 ] + i ];

        patchRenderer.Tesselate( subdivLevel );
        patchRenderer.Render();

    stepWidth and stepHeight specify how much their respective iterators are incremented by in order to find the correct control point for each patch. If a face contains multiple patches, these iterations land on the control points that are shared between patches.
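    For reference, here's a minimal sketch of what the Tesselate step might compute for one 3x3 biquadratic patch once the control points are gathered. The Vec3 struct and EvalPatch function are hypothetical stand-ins for illustration, not the actual patchRenderer API:

```cpp
#include <cassert>
#include <cmath>

// Hypothetical minimal stand-in for the map's vertex type; the real
// renderer uses the BSP vertex struct and glm types.
struct Vec3 { float x, y, z; };

// Quadratic Bernstein basis: B0 = (1-t)^2, B1 = 2t(1-t), B2 = t^2.
static float bern( int i, float t )
{
    switch ( i )
    {
        case 0: return ( 1.0f - t ) * ( 1.0f - t );
        case 1: return 2.0f * t * ( 1.0f - t );
        default: return t * t;
    }
}

// Evaluate one point on a 3x3 biquadratic patch at parameters (u, v),
// with control points stored row-major (cp[row * 3 + column]).
Vec3 EvalPatch( const Vec3 cp[ 9 ], float u, float v )
{
    Vec3 p = { 0.0f, 0.0f, 0.0f };
    for ( int i = 0; i < 3; ++i )
        for ( int j = 0; j < 3; ++j )
        {
            float w = bern( i, u ) * bern( j, v );
            p.x += w * cp[ j * 3 + i ].x;
            p.y += w * cp[ j * 3 + i ].y;
            p.z += w * cp[ j * 3 + i ].z;
        }
    return p;
}
```

    A tessellator would call this on an (n+1) x (n+1) grid of (u, v) pairs for subdivision level n and triangulate the resulting points.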
  2. Hey, everyone. I've been working on a BSP renderer, using information given from here and here. All in all, they're pretty solid, and seem to be the most commonly used references for the .bsp format out there (that I am aware of). Anyway, I'm having some trouble wrapping my head around the following statement (from the second link, in the Faces section):

     From what I gather, the author states that, since adjacent patches share a line of three vertices (with each patch having 9 control points), we can use the (i, j) indices to find a total of 9 shared control points from the total number of vertices, which are then used for the triangulation. However, I'm not sure what the "2n/m, n/m" bits in the bold portion actually mean. Can someone clarify this? The current rendering implementation does the following for the patches:

        else if ( face->type == FACE_TYPE_PATCH )
        {
            const int subdivLevel = glm::min( 10, glm::abs( 10 - ( int )glm::distance( pass.view.origin, boundsCenter ) ) );
            const int controlStep = face->numVertexes > 9 ? face->numVertexes / 9 : 1;

            int i = 0;
            for ( int j = 0; j < face->numVertexes; j += controlStep )
            {
                patchRenderer.controlPoints[ i ] = &map->vertexes[ face->vertexOffset + j ];
                i++;
            }

            patchRenderer.Tesselate( subdivLevel );
            patchRenderer.Render();
        }

     Note the controlStep: if we have more than 9 verts, we have to compensate and find 9 control points among those vertices to send to the patchRenderer. So we divide the number of vertices by 9 and use that quotient as the loop increment; if we only have 9 verts, then all of them can be used, so we just increment by 1. But I'm pretty certain that's the wrong way to go about it, since the control points are likely to be arbitrary and won't just be shared as multiples of 3 when there is more than one region.
     tl;dr: What is the proper way to obtain the control points from the vertices when the number of vertices is greater than 9, given that the patchRenderer requires exactly 9 control points in order to tessellate the regions properly? I appreciate any help.
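     Since adjacent patches share an edge of control points, a face whose control grid is larger than 3x3 can be split into overlapping 3x3 blocks, each starting two rows/columns after the previous one. Here is a hedged sketch of that indexing; SubPatchIndices is a hypothetical helper, and real code would add face->vertexOffset to each returned index:

```cpp
#include <array>
#include <cassert>
#include <vector>

// Gather the 3x3 control-point indices of every biquadratic sub-patch
// in a (w x h) control grid, where w and h are odd (Quake 3 guarantees
// this for patch faces). Adjacent sub-patches share a row/column, so
// the sub-patch at grid cell (px, py) starts at position (2*px, 2*py).
std::vector< std::array< int, 9 > > SubPatchIndices( int w, int h )
{
    std::vector< std::array< int, 9 > > patches;
    for ( int py = 0; py + 2 < h; py += 2 )
        for ( int px = 0; px + 2 < w; px += 2 )
        {
            std::array< int, 9 > cp;
            int c = 0;
            for ( int j = 0; j < 3; ++j )
                for ( int i = 0; i < 3; ++i )
                    cp[ c++ ] = ( py + j ) * w + ( px + i );
            patches.push_back( cp );
        }
    return patches;
}
```

    For a 5x3 face this yields two sub-patches whose right and left columns coincide, which is exactly the shared-control-point behavior the question is about.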
  3. OpenGL What to know in order to write a mesh parser?

    [quote name='Lauris Kaplinski' timestamp='1350670310' post='4991852'] What sorts of meshes are you trying to parse? Terrain/large static structures should be handled differently from character models and small props. You probably need octree for the first but not for the others. Start simple - write loader that generates single object/buffer. Once this works you may try octrees. [/quote]

    I haven't even gotten to that point yet. I guess I've been thinking about character models as well as terrain and large static structures. I'll probably continue working through these tutorials until I've at least grasped the basics, and then work towards mesh parsing from there.

    Just so I understand this correctly: the file format will contain the vertex/geometry/fragment data, as well as, probably, which primitives (likely triangles, I'd imagine) to use as a basis for rendering the mesh. Once I have the mesh data structure taken care of in my C++ code, I could simply parse the file, store the vertex data in C-style structures or something like glm::vec4, and then pass those to my renderer in a loop - correct?

    If that's the case, I assume this wouldn't really take much math at all, apart from a good understanding of the matrices etc. involved (unless, of course, I was using physics to do this, which will likely be a while from now). I appreciate the advice.
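    The parse-then-loop idea above can be sketched as follows, assuming a simple OBJ-style text format with "v x y z" lines. Vertex and ParseVertices are hypothetical names for illustration; real code might store glm::vec4 directly:

```cpp
#include <cassert>
#include <sstream>
#include <string>
#include <vector>

// Hypothetical stand-in for glm::vec4.
struct Vertex { float x, y, z, w; };

// Parse OBJ-style "v x y z" lines from a text buffer into a vertex
// array. Anything else (comments, face lines) is skipped for brevity.
std::vector< Vertex > ParseVertices( const std::string& text )
{
    std::vector< Vertex > verts;
    std::istringstream in( text );
    std::string line;
    while ( std::getline( in, line ) )
    {
        std::istringstream ls( line );
        std::string tag;
        if ( ls >> tag && tag == "v" )
        {
            Vertex v{ 0.0f, 0.0f, 0.0f, 1.0f }; // w defaults to 1
            ls >> v.x >> v.y >> v.z;
            verts.push_back( v );
        }
    }
    return verts;
}
```

    The resulting array would then be uploaded to a vertex buffer once and drawn each frame, rather than re-parsed per frame.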
  4. I'd like to learn how to write a mesh parser, using features in OpenGL from 3.3 and up (my drivers should support 4.3; I'm currently running 4.2). The question is, I'm not sure which techniques/concepts would be good to understand first. I know how to write a binary search tree, if that helps - but maybe a quadtree or even an octree would be more useful? I've been studying from [url="http://www.arcsynthesis.org/gltut/"]Learning Modern 3D Graphics Programming[/url], and am currently at the [i]Objects At Depth[/i] section.

     It would also be good to know which file formats are most common in the OpenGL world. Should I use Blender to draw meshes, export the files, and render them in OpenGL? Is it possible and pragmatic to write a single mesh parser which would just render whatever kind of mesh I throw at it? I'm aware of prewritten libraries such as GLMesh which help with this (though I'm not sure to what extent), but alas, my goal is to learn, so I'd like to avoid using something like that :3. Thanks.
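     As a small illustration of the "parse the file, then hand the data to the renderer" idea, here is a hedged sketch that turns OBJ-style triangle lines into 0-based indices suitable for an OpenGL element buffer (GL_ELEMENT_ARRAY_BUFFER via glBufferData). ParseTriangles is a hypothetical helper; real OBJ files also allow "v/vt/vn" tuples and polygons with more than three vertices:

```cpp
#include <cassert>
#include <sstream>
#include <string>
#include <vector>

// Parse OBJ-style "f a b c" triangle lines into a flat index list.
std::vector< unsigned > ParseTriangles( const std::string& text )
{
    std::vector< unsigned > indices;
    std::istringstream in( text );
    std::string line;
    while ( std::getline( in, line ) )
    {
        std::istringstream ls( line );
        std::string tag;
        unsigned a, b, c;
        if ( ls >> tag >> a >> b >> c && tag == "f" )
        {
            // OBJ indices are 1-based; OpenGL element buffers are 0-based.
            indices.push_back( a - 1 );
            indices.push_back( b - 1 );
            indices.push_back( c - 1 );
        }
    }
    return indices;
}
```

    Keeping positions and indices in separate arrays like this maps directly onto a vertex buffer plus an element buffer, which is the layout glDrawElements expects.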