Geometry management in 3d engines

Started by
13 comments, last by xen2 17 years, 10 months ago
Hello,

I'm developing a 3D engine (API-independent) and trying to figure out a good way of managing geometry (primitives composed of vertices stored in buffers). I want to make things easy for the client programmers: they shouldn't have to worry about how and where the geometry data is stored. The idea is that the client programmer loads models, terrain, etc. and the engine takes care of the rest. If the programmer wants a simple textured sphere, he simply creates a sphere object, sets the appropriate effect and parameters, sets the camera and transforms, and voila!

So where do I start? How do I implement this geometry management? Any ideas?

Thanks,
Opwiz

www.marklightforunity.com | MarkLight: Markup Extension Framework for Unity

I think this is rather a question of a Material & Shader implementation.
Have a look at this thread to get started:

http://www.gamedev.net/community/forums/topic.asp?topic_id=169710


My own system is pretty similar to the one described in the thread I posted above.

I abstract the whole thing like this:

Textures: just hold the information needed to use a certain texture for a surface.

Material: describes the material type, the number of layers a certain material has, and which textures to use for which material,

+ what effects are required to render a surface with this material
+ what kinds of effects are associated with the material, like bullet hits in an FPS
.... I do this with callback functions that are set up at load time.


Then I have effect classes that describe different effect types,
e.g. effect type: graphics, sound, particle...

And finally I have a shader class that registers a DLL that is capable of rendering a certain effect, playing sounds, spawning a particle effect, or doing other things.

The whole system can be implemented in less than 5,000 lines of code and is extremely extensible, which is what I am aiming for.
http://www.8ung.at/basiror/theironcross.html
Pretty sure the OP doesn't want info on a material system; I think he rather needs an abstract way to represent vertex/index data so that it is API-agnostic. What are your needs, Opwiz? Does it need to be dynamic data that can be changed a lot, or are you just doing static data? If it's static, you can start by having a Geometry class with 3 member functions: Fill(Vertex* verts, int count), Upload() and Activate() (maybe Render() as well).

The fill function puts all the vertices in virtual memory, then Upload creates the hardware representation (a vertex buffer, a VBO, etc.) and Activate binds it however needed. How does that sound?

For this to work, though, you need a Direct3DGeometry and an OpenGLGeometry class, and then you'll probably have a Geometry interface and a factory pattern to create them.
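A minimal sketch of that idea, with hypothetical names (Vertex, Api, CreateGeometry are illustrative, not from any real API), assuming static data and a deferred upload:

```cpp
#include <memory>
#include <vector>

// Illustrative vertex type; a real engine would support more layouts.
struct Vertex { float x, y, z; };

// Abstract, API-agnostic geometry interface.
class Geometry {
public:
    virtual ~Geometry() = default;

    // Fill copies vertices into system memory; the hardware upload
    // is deferred until Upload() so there is one big write.
    void Fill(const Vertex* verts, int count) {
        cache_.assign(verts, verts + count);
    }
    virtual void Upload() = 0;   // create the VB/VBO from cache_
    virtual void Activate() = 0; // bind it for rendering

protected:
    std::vector<Vertex> cache_;
};

// One concrete backend; a Direct3DGeometry class would mirror this.
class OpenGLGeometry : public Geometry {
public:
    void Upload() override   { /* glGenBuffers / glBufferData on cache_ */ }
    void Activate() override { /* glBindBuffer + attribute pointers */ }
};

enum class Api { OpenGL, Direct3D };

// Factory so client code never names a concrete backend.
std::unique_ptr<Geometry> CreateGeometry(Api api) {
    switch (api) {
    case Api::OpenGL: return std::make_unique<OpenGLGeometry>();
    default:          return nullptr; // D3D backend omitted in this sketch
    }
}
```

Client code only ever sees the Geometry interface and the factory, so swapping APIs is a one-line change.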

Hope that helps, if you want some code samples, ask away! :)
Ollie
"It is better to ask some of the questions than to know all the answers." ~ James Thurber
[ mdxinfo | An iridescent tentacle | Game design patterns ]
I tried to create such a system. I ended up with two classes for index and vertex buffers (using the DirectX API, or VBOs on OpenGL). Then I have a Vertex Format class, which determines how data in the buffers is used to draw things (from which buffer to get the positions, normals, etc.). I think this is generic enough.

I think that for static geometry, you could use a simple "geometry" class holding all the data. It could allow vertex buffers to be shared between different models, thus reducing state-change overhead.
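Such a Vertex Format class could be a list of element descriptors, similar in spirit to a D3D vertex declaration: each entry says which buffer (stream) a component lives in and at what byte offset. This is a sketch with made-up names (Semantic, VertexElement, VertexFormat), not anyone's actual code:

```cpp
#include <cstddef>
#include <vector>

// What a vertex component means.
enum class Semantic { Position, Normal, TexCoord, Color };

// Describes one component: source buffer index, byte offset within
// a vertex of that stream, and its semantic.
struct VertexElement {
    int         stream;
    std::size_t offset;
    Semantic    semantic;
};

// A vertex format is just an ordered list of such elements.
class VertexFormat {
public:
    void Add(int stream, std::size_t offset, Semantic sem) {
        elements_.push_back({stream, offset, sem});
    }
    std::size_t ElementCount() const { return elements_.size(); }
    const VertexElement& At(std::size_t i) const { return elements_[i]; }
private:
    std::vector<VertexElement> elements_;
};
```

For example, positions and normals could come interleaved from stream 0 while texture coordinates come from a separate stream 1, which is what makes buffer sharing between models possible.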
Quote:Original post by acid2
What are your needs, Opwiz? Does it need to be dynamic data that can be changed a lot, or are you just doing static data?

It is hard for me to foresee what I will need (which makes it tempting to just wrap all the Direct3D vertex buffer, vertex, etc. classes). But as I see it, I can get a long way with just static geometry.

Quote:
If it's static you can start by having a Geometry class, with 3 member functions, Fill(Vertex* verts, int count), Upload() and Activate() (maybe Render() as well).

OK, and I suppose I should assume the geometry is composed of triangles and that the vertices have a certain format? Or is it a good idea to support every possible vertex format, and other primitives?

Thanks for your reply.


Quote:Original post by Basiror
I think this is rather a question of a Material & Shader implementation.
Have a look at this thread to get started:

http://www.gamedev.net/community/forums/topic.asp?topic_id=169710

That is a nice thread, but the material & shader implementation is another issue that I think I've pretty much got figured out. Thanks anyway.


acid2,
"The fill function puts all the vertices in virtual memory"

Does this mean that you allocate system memory and copy the verts into this buffer, holding it as a cache for things like D3D's lost-device handling?
Or what is the reason that you don't just use the hardware vertex buffer?
Effectively, I am uploading it all and storing it on hardware. However, if I was doing that on each set-vertex-style call, that'd be lots of locks and unnecessarily slow. It's just a way of deferring it into one big write that doesn't depend on the user doing it a certain way.

Regarding vertex format, I'd just go with something that can hold everything you will need for now. Yes, it's extra memory, but if you're storing pos/normal/tangent/binormal etc., you'll probably use them all the majority of the time anyway.
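That "one fat format" approach could look like this (a sketch; the struct name and field set are just one plausible choice):

```cpp
#include <cstddef>

// One "fat" vertex layout that carries everything most passes need.
// Costs extra memory, but avoids juggling many formats.
struct FatVertex {
    float position[3];
    float normal[3];
    float tangent[3];
    float binormal[3];
    float uv[2];
};

// All-float members, so there is no padding:
// 14 floats * 4 bytes = 56 bytes per vertex.
static_assert(sizeof(FatVertex) == 14 * sizeof(float),
              "unexpected padding in FatVertex");
```

At 56 bytes per vertex this is roughly double a bare position+normal+uv layout, which is the memory trade-off being discussed.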
Right now I'm working on this part of my engine, so I'll explain the solution I've currently ended up with (not final, subject to change).
First, let's summarize what is needed:
Geometry sources can be anything: file stream (chunks), network stream, procedural; they can have a cache layer, LOD, etc. As a result, I implemented a GeometrySource abstract virtual class (the only thing I still need to integrate better: LOD).
Secondly, RenderGeometry is the way geometry is stored for rendering. It's also an abstract virtual class, as geometry caching could be implemented here.
It could seem a little over-engineered, but it lets the user add caching, use a network stream, etc. in a very intuitive way.
Then the Effect (equivalent to Material in some other engines) has a fillVRAM-like function (cf. Yann L.'s material thread) that handles copying the required components from the GeometrySource object to the RenderGeometry object.

Usually a simple GeometrySource would provide vertex/index counts, primitive type, vertex/index formats, several OutputStreams of vertices (one for each vertex buffer stream) and an OutputStream of indices (I use streams to avoid copying everything twice in memory: the Effect reads directly from the stream, which reduces buffering). A simple RenderGeometry allows setting vertex/index counts and formats, and gives access to VertexBuffer/IndexBuffer objects (abstract virtual in the case of a cross-platform engine).
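The source side of that split might be sketched like this; the interface is simplified (one float stream, no primitive type or formats) and the QuadSource subclass is a made-up example of a procedural source:

```cpp
#include <cstddef>
#include <cstring>

// Abstract source of geometry data: file chunk, network stream,
// procedural generator, and so on.
class GeometrySource {
public:
    virtual ~GeometrySource() = default;
    virtual std::size_t VertexCount() const = 0;
    virtual std::size_t IndexCount()  const = 0;
    // Stream-style read, so the Effect can copy straight into the
    // RenderGeometry's buffers without an intermediate copy.
    virtual std::size_t ReadVertices(float* dst, std::size_t floats) = 0;
};

// A trivial procedural source: one quad in the XY plane (3 floats/vertex).
class QuadSource : public GeometrySource {
public:
    std::size_t VertexCount() const override { return 4; }
    std::size_t IndexCount()  const override { return 6; }
    std::size_t ReadVertices(float* dst, std::size_t floats) override {
        static const float quad[12] = {0,0,0, 1,0,0, 1,1,0, 0,1,0};
        std::size_t n = floats < 12 ? floats : 12;
        std::memcpy(dst, quad, n * sizeof(float));
        return n;
    }
};
```

A file-backed or network-backed source would implement the same interface, which is what lets the Effect's fillVRAM-style copy stay oblivious to where the data comes from.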

Some issues to resolve:
* Shared buffers. I have to track them, as they will have to be shared between RenderGeometry objects too.
* Some tracking/heuristics to quickly decide whether data should be split into multiple vertex buffers or not (i.e. pass 1 uses TEXCOORD, pass 2 uses COLOR; should it be split into two streams or not?).

Do not hesitate to give your criticism/opinions on this system, keeping in mind that my objective is to give the user as much freedom as possible.
I haven't read through all the replies here, so excuse me if I repeat any info, but a while ago I implemented a generic vertex and index buffer for my engine. Here's how they work. First you determine a format, similar to how D3D does it, where you choose what data is represented in the vertex by bit flags. For instance, that would look something like:

#define MYFORMAT (VF_POSITION | VF_COLOR | VF_NORMAL)

Then, when you create the buffer, you determine its size as well. The vertex buffer is extended with an OpenGL or D3D implementation, so upon creation the buffer will attempt to allocate video memory based on another flag you choose. Based on the components you require, the correct amount of memory is allocated in video and/or system memory (more on that later). The buffer can be allocated in one of three ways: dynamic, static, or local. Local allocation allocates in system memory. Static is in video memory, and dynamic allocates in both video and local memory, so the memory can be accessed quickly by the CPU if you plan to read or change it frequently.
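Computing the allocation size from those bit flags might look like this; the flag names extend the MYFORMAT example above, and the per-component sizes (packed 4-byte RGBA color, float positions/normals) are assumptions, not anyone's actual layout:

```cpp
#include <cstddef>

// Bit flags naming which components a vertex carries.
enum VertexFlags {
    VF_POSITION = 1 << 0, // 3 floats
    VF_COLOR    = 1 << 1, // packed RGBA, 4 bytes
    VF_NORMAL   = 1 << 2, // 3 floats
    VF_TEXCOORD = 1 << 3, // 2 floats
};

// Per-vertex byte size for a given format, so the buffer knows how
// much video and/or system memory to allocate.
std::size_t VertexSize(unsigned flags) {
    std::size_t size = 0;
    if (flags & VF_POSITION) size += 3 * sizeof(float);
    if (flags & VF_COLOR)    size += 4; // packed RGBA
    if (flags & VF_NORMAL)   size += 3 * sizeof(float);
    if (flags & VF_TEXCOORD) size += 2 * sizeof(float);
    return size;
}
```

With these assumed sizes, VF_POSITION | VF_COLOR | VF_NORMAL works out to 12 + 4 + 12 = 28 bytes per vertex.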

So, that takes care of the allocation of the buffer; now for the access. First off, all buffer access happens between a lock and an unlock call; this is where the memory pointer is fetched from, or copied to the video card if needed. My buffer makes use of two access methods. The first is a GetSafePointer method, where the user sends in the vertex format and the number of verts desired, and it only sends back a pointer if what you're asking for is within the bounds of the buffer. This is the better, faster way of doing things. The other way is a fill method, something like fill(VF_COMPONENT, CVector3& data, unsigned int index), which you can imagine is slower, but is neat because the app doesn't have to know what's in the buffer to fill it. If you try to fill a component that's not in the buffer, it ignores it.
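A simplified system-memory version of that lock/GetSafePointer pattern could look like this (the class and method names are illustrative; the format argument is dropped and bounds are checked in vertices):

```cpp
#include <cstddef>
#include <vector>

// Simplified vertex buffer showing the lock / safe-pointer access
// pattern: all access must happen between Lock() and Unlock().
class VertexBuffer {
public:
    VertexBuffer(std::size_t vertexSize, std::size_t vertexCount)
        : vertexSize_(vertexSize),
          data_(vertexSize * vertexCount),
          locked_(false) {}

    void Lock()   { locked_ = true; }
    void Unlock() { locked_ = false; /* real impl: copy to VRAM here */ }

    // Returns a write pointer only if the buffer is locked and the
    // requested range fits; otherwise nullptr.
    void* GetSafePointer(std::size_t firstVertex, std::size_t count) {
        std::size_t end = (firstVertex + count) * vertexSize_;
        if (!locked_ || end > data_.size()) return nullptr;
        return data_.data() + firstVertex * vertexSize_;
    }

private:
    std::size_t vertexSize_;
    std::vector<unsigned char> data_;
    bool locked_;
};
```

The out-of-bounds and not-locked cases both return nullptr, which is what makes the pointer "safe" compared with handing out raw buffer memory.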

The final part of the system is deciding which components go into any given buffer. This is determined by the material being applied to the geometry. If, for instance, lighting isn't applied to the model, then normals will not be present in the buffer.

Hope that helps some.
Write more poetry.
http://www.Me-Zine.org

This topic is closed to new replies.
