Mesh library


Say you wanted to write a generic mesh processing library, for polygon meshes. It would include a way of representing 'meshes' (with varying vertex formats, and varying placement of data, some in polys and some in vertices, etc) as well as a set of operations that work on them (merge meshes, weld vertices, quantization, stripification, subdivision, etc).
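To make the "varying vertex formats" requirement concrete, here's one possible sketch (all names are hypothetical, not from any real library): describe the layout with runtime attribute descriptors instead of a fixed vertex struct, so algorithms can ask where each piece of data lives.

```cpp
#include <cstddef>
#include <cstring>
#include <string>
#include <vector>

// Hypothetical attribute descriptor: where one semantic lives in a vertex.
struct VertexAttribute {
    std::string semantic;   // e.g. "position", "normal", "uv0"
    std::size_t offset;     // byte offset within one vertex
    std::size_t components; // number of floats
};

// A mesh stores raw vertex bytes plus the layout that describes them.
struct MeshData {
    std::vector<VertexAttribute> layout;
    std::vector<unsigned char>   vertices; // tightly packed
    std::size_t                  stride = 0;

    const VertexAttribute* Find(const std::string& semantic) const {
        for (const auto& a : layout)
            if (a.semantic == semantic) return &a;
        return nullptr;
    }

    // Copy one attribute of one vertex out as floats, if present.
    bool Read(std::size_t vertexIndex, const std::string& semantic,
              float* out) const {
        const VertexAttribute* a = Find(semantic);
        if (!a) return false;
        std::memcpy(out,
                    vertices.data() + vertexIndex * stride + a->offset,
                    a->components * sizeof(float));
        return true;
    }
};
```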

How would you design it?


Recommended Comments

How would you design it?

Like D3DX/ID3DXMesh, but with a wrapper added to manage the various transform parameters, textures, and materials.

Maybe, if needed, a manager/factory to eliminate duplicate textures/materials and optimize state changes.
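A minimal sketch of such a factory (class and type names here are invented for illustration): cache resources by key so that repeated requests for the same texture return one shared instance instead of a duplicate.

```cpp
#include <map>
#include <memory>
#include <string>

// Stand-in for a texture; a real one would hold file or GPU data.
struct Texture {
    std::string path;
};

// Factory that hands out shared textures, loading each path only once.
class TextureCache {
public:
    std::shared_ptr<Texture> Acquire(const std::string& path) {
        auto it = cache_.find(path);
        if (it != cache_.end())
            if (auto existing = it->second.lock())
                return existing; // duplicate request: reuse the cached one
        auto tex = std::make_shared<Texture>(Texture{path});
        cache_[path] = tex;
        return tex;
    }
private:
    // weak_ptr lets unused textures be freed without pinning the cache.
    std::map<std::string, std::weak_ptr<Texture>> cache_;
};
```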


I'm not talking meshes for rendering. I'm talking offline processing, stuff you might integrate into your asset pipeline.

How would you allow vertices to have an indeterminate composition, and get the mesh algorithms to operate on them regardless?

Well, you could specify a particular mesh format that all the algorithms would use, and have the mesh object convert between the actual format and the algorithm format and back again. You could also store, alongside the mesh data, something specifying what data is actually available (e.g. a given mesh may have position and texture coordinate data). So if an algorithm operates on, say, texture coordinates, it could query the mesh as to whether texture coordinate data is available, and then get that data in a particular format, with the mesh object converting between the stored format and the format the algorithm uses.
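That query-then-convert idea might look something like this (a sketch only; the `Channel` enum, method names, and the float-only storage are all assumptions made for the example):

```cpp
#include <cstddef>
#include <map>
#include <vector>

enum class Channel { Position, Normal, TexCoord0 };

// Mesh that reports which channels it has and hands them out on request.
class Mesh {
public:
    void SetChannel(Channel c, std::vector<float> data) {
        channels_[c] = std::move(data);
    }
    bool HasChannel(Channel c) const {
        return channels_.count(c) != 0;
    }
    // Returns a copy in the algorithm's preferred (float) format; a fuller
    // version would convert from whatever format is actually stored.
    std::vector<float> GetChannel(Channel c) const {
        auto it = channels_.find(c);
        return it != channels_.end() ? it->second : std::vector<float>{};
    }
private:
    std::map<Channel, std::vector<float>> channels_;
};

// An algorithm that needs texture coordinates queries for them first,
// exactly as described above, and bails out if they are absent.
bool FlipV(Mesh& mesh) {
    if (!mesh.HasChannel(Channel::TexCoord0)) return false;
    std::vector<float> uv = mesh.GetChannel(Channel::TexCoord0);
    for (std::size_t i = 1; i < uv.size(); i += 2)
        uv[i] = 1.0f - uv[i]; // flip the v coordinate
    mesh.SetChannel(Channel::TexCoord0, std::move(uv));
    return true;
}
```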

I'd probably look at defining the mesh in mathematical terms that can be evaluated at any detail level you need. That way you'd 'scale' the mesh and use it for the game model, while also being able to use the same mesh for high-resolution light mapping and such. I wouldn't be able to code it, though...
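As a sketch of that idea (the function names are invented): a surface defined by a function of (u, v) can be tessellated at whatever resolution a given use case needs, so the game model and the light-mapping mesh are just different samplings of the same definition.

```cpp
#include <cmath>
#include <vector>

struct Vec3 { float x, y, z; };

// Evaluate a parametric unit sphere at (u, v) in [0,1] x [0,1].
Vec3 SpherePoint(float u, float v) {
    const float pi = 3.14159265358979f;
    float theta = u * 2.0f * pi;   // longitude
    float phi   = v * pi;          // latitude
    return { std::sin(phi) * std::cos(theta),
             std::cos(phi),
             std::sin(phi) * std::sin(theta) };
}

// Sample the same surface at any grid resolution: a low 'segments' value
// gives the in-game mesh, a high one gives the light-mapping mesh.
std::vector<Vec3> Tessellate(int segments) {
    std::vector<Vec3> verts;
    for (int j = 0; j <= segments; ++j)
        for (int i = 0; i <= segments; ++i)
            verts.push_back(SpherePoint(float(i) / segments,
                                        float(j) / segments));
    return verts;
}
```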

Another idea I've had is to represent the mesh as abstract objects known as 'mesh lumps'. Each mesh lump would represent some specific part of a mesh: e.g. a single mesh lump could represent a texture coordinate pair, or it could represent a parametric surface.

Each mesh lump could be queried for things like what type of thing it represents and what operations can be performed on it. Operations would take one or more mesh lumps as parameters, and I'd class operations into two types: ones that result in a new mesh lump or alter an existing one, and predicates, which return a bool value (to allow comparisons). A mesh lump might be defined something like this in C++:

struct MeshLump
{
    virtual ~MeshLump() = default;
    virtual LumpType GetType() const = 0;
    virtual bool SupportsOperation(OperationType Operation) const = 0;
    virtual bool SupportsPredicate(PredicateType Predicate) const = 0;
    virtual MeshLump* PerformOperation(OperationType Operation, const std::list<MeshLump*>& Operands) = 0;
    virtual bool PerformPredicate(PredicateType Predicate, const std::list<MeshLump*>& Operands) = 0;
};

You could then query a mesh object for the different types of mesh lumps it supports and get a list of the ones you want. The algorithm would then perform whatever operations it likes on the mesh lumps and return them back to the object.
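A concrete lump under that interface might look like this (a sketch only: the enum values and the averaging operation are invented, and the abstract base is restated so the example is self-contained):

```cpp
#include <list>

enum class LumpType { TexCoordPair, ParametricSurface };
enum class OperationType { Average };
enum class PredicateType { Equals };

struct MeshLump {
    virtual ~MeshLump() = default;
    virtual LumpType GetType() const = 0;
    virtual bool SupportsOperation(OperationType op) const = 0;
    virtual bool SupportsPredicate(PredicateType p) const = 0;
    virtual MeshLump* PerformOperation(OperationType op,
                                       const std::list<MeshLump*>& operands) = 0;
    virtual bool PerformPredicate(PredicateType p,
                                  const std::list<MeshLump*>& operands) = 0;
};

// One concrete lump: a (u, v) texture coordinate pair.
struct TexCoordLump : MeshLump {
    float u, v;
    TexCoordLump(float u_, float v_) : u(u_), v(v_) {}

    LumpType GetType() const override { return LumpType::TexCoordPair; }
    bool SupportsOperation(OperationType op) const override {
        return op == OperationType::Average;
    }
    bool SupportsPredicate(PredicateType p) const override {
        return p == PredicateType::Equals;
    }
    // Average: blend this lump with every operand of the same type,
    // producing a new lump (the first of the two operation kinds above).
    MeshLump* PerformOperation(OperationType,
                               const std::list<MeshLump*>& operands) override {
        float su = u, sv = v;
        int n = 1;
        for (MeshLump* lump : operands)
            if (auto* tc = dynamic_cast<TexCoordLump*>(lump)) {
                su += tc->u; sv += tc->v; ++n;
            }
        return new TexCoordLump(su / n, sv / n);
    }
    // Equals: a predicate returning bool, the second operation kind.
    bool PerformPredicate(PredicateType,
                          const std::list<MeshLump*>& operands) override {
        for (MeshLump* lump : operands) {
            auto* tc = dynamic_cast<TexCoordLump*>(lump);
            if (!tc || tc->u != u || tc->v != v) return false;
        }
        return true;
    }
};
```

An algorithm holding only `MeshLump*` pointers would first call `SupportsOperation` before invoking `PerformOperation`, which is what keeps it generic across lump types.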

