How to design an engine using Direct3D

Does anyone have any tips about how to design the structure of a game using Direct3D? I'm thinking something like this:

- A main Application class, CApplication, to handle the window and tie everything together
- A Renderer class, CRenderer, which holds all the Direct3D stuff
- A World class, CWorld, which holds all the game objects

What do you do in your main Render function? Is it:

void CApplication::Render()
{
    m_pWorld->Render( m_pRenderer );
}




or

void CApplication::Render()
{
    m_pRenderer->RenderWorld( m_pWorld );
}



i.e. do you pass the renderer to the world, or the world to the renderer?

Secondly, how do you manage your vertex buffers? All the documentation I've read says that the most efficient way is to use static vertex buffers with large numbers of triangles per buffer, and as few buffers as possible (instead of large numbers of buffers with few triangles each). But I don't think static vertex buffers are very practical for games... are they? You have far more objects in the world than you ever have on screen at one time, so it's impractical to store a vertex buffer for each object. And with animated models, all the vertices change position constantly anyway.

Here's what I'm thinking of doing: on initialisation, create several vertex buffers with a fixed number of vertices (say, 100), one for each type of FVF that I use. Then my CRenderer object will have functions like:

DrawTriangleUsingFVFType1( point A, point B, point C );
DrawTriangleUsingFVFType2( point A, point B, point C );
...etc...

These will add triangles to the buffers. When the number of triangles in a buffer has reached 100, the buffer will be unlocked and rendered. Any state change or call to EndScene will also trigger them to be rendered. Is this how you'd do it, or is there a better way?
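In code, the scheme I have in mind would look roughly like this. This is just a sketch of the idea, not tested code: CTriangleBatcher, MAX_VERTICES and the vertex format are made up for illustration, while the D3D9 calls themselves are the standard ones.

#include <d3d9.h>

const DWORD MAX_VERTICES = 300;   // 100 triangles

struct Vertex { float x, y, z; DWORD color; };
#define VERTEX_FVF (D3DFVF_XYZ | D3DFVF_DIFFUSE)

class CTriangleBatcher
{
public:
    HRESULT Init( IDirect3DDevice9* pDevice )
    {
        m_pDevice = pDevice;
        m_numVertices = 0;
        // Dynamic + write-only is the usage the docs recommend for
        // buffers that get refilled every frame.
        return pDevice->CreateVertexBuffer(
            MAX_VERTICES * sizeof(Vertex),
            D3DUSAGE_DYNAMIC | D3DUSAGE_WRITEONLY,
            VERTEX_FVF, D3DPOOL_DEFAULT, &m_pVB, NULL );
    }

    void DrawTriangle( const Vertex& a, const Vertex& b, const Vertex& c )
    {
        if( m_numVertices + 3 > MAX_VERTICES )
            Flush();
        Vertex* pData = NULL;
        // NOOVERWRITE appends without stalling the GPU; DISCARD after
        // a flush hands us a fresh buffer to start filling again.
        m_pVB->Lock( m_numVertices * sizeof(Vertex), 3 * sizeof(Vertex),
                     (void**)&pData,
                     m_numVertices ? D3DLOCK_NOOVERWRITE : D3DLOCK_DISCARD );
        pData[0] = a; pData[1] = b; pData[2] = c;
        m_pVB->Unlock();
        m_numVertices += 3;
    }

    void Flush()   // call on state changes and before EndScene
    {
        if( m_numVertices == 0 ) return;
        m_pDevice->SetStreamSource( 0, m_pVB, 0, sizeof(Vertex) );
        m_pDevice->SetFVF( VERTEX_FVF );
        m_pDevice->DrawPrimitive( D3DPT_TRIANGLELIST, 0, m_numVertices / 3 );
        m_numVertices = 0;
    }

private:
    IDirect3DDevice9*       m_pDevice;
    IDirect3DVertexBuffer9* m_pVB;
    DWORD                   m_numVertices;
};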
My "engine" is based upon the old DirectX framework (they've completely changed it just recently). I have two main classes... Enumeration and Application. Application handles a lot of stuff.. from window creation to setting up d3d to keeping track of timing & rendering. To actually use the class, you just create a derived class and take advantage of the virtual functions (render, update, reset, etc) in the parent class. Enumeration grabs all of the available display modes based upon the requirements of the current application.

This is basically the "bare bones". I've never really done anything large-scale, but I assume this type of setup would work well for the majority of novice programs and basic games. It has supported my needs well.

It may be wise to create a simple resource manager for regular textures. I try to use the STL every chance I get, and I've found hash_sets work pretty well when dealing with textures (comparing the hashes of strings is much faster than comparing the actual strings themselves, even with the cost of the hash function included, from my testing).
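As a rough sketch of what I mean (CTextureManager is a made-up name, std::unordered_map is the standardized descendant of hash_map, and D3DXCreateTextureFromFileA is the stock D3DX loader):

#include <d3dx9.h>         // D3DXCreateTextureFromFileA
#include <string>
#include <unordered_map>

class CTextureManager
{
public:
    // Returns the cached texture if the file was loaded before;
    // otherwise loads it once and caches it under its filename.
    IDirect3DTexture9* GetTexture( IDirect3DDevice9* pDevice,
                                   const std::string& filename )
    {
        auto it = m_textures.find( filename );
        if( it != m_textures.end() )
            return it->second;

        IDirect3DTexture9* pTexture = NULL;
        if( FAILED( D3DXCreateTextureFromFileA( pDevice, filename.c_str(),
                                                &pTexture ) ) )
            return NULL;

        m_textures[filename] = pTexture;
        return pTexture;
    }

    void ReleaseAll()
    {
        for( auto& entry : m_textures )
            entry.second->Release();
        m_textures.clear();
    }

private:
    // hashed string keys: lookups avoid full string comparisons
    std::unordered_map<std::string, IDirect3DTexture9*> m_textures;
};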

Now, for vertex buffers, the SDK documentation contains a lot of useful information (particularly under the performance section). As always, use fewer buffers with more triangles rather than the other way around. Use dynamic buffers with the appropriate flags set if you need to update a lot of triangles every frame (if you are just going to draw a few triangles, using Draw[Indexed]PrimitiveUP won't condemn you). If you are going to be drawing lots of quads, look into ID3DXSprite instead of trying to implement your own sprite system, and ID3DXFont is likewise a good option for text. It may be good practice to try to make your own versions of the D3DX objects at some point (your own sprite or font system), but the D3DX options will most likely always be the fastest option with equal flexibility.
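For reference, basic usage of both helpers looks roughly like this (the texture, font face, position, and strings are placeholders; error checking is omitted):

// requires d3dx9.h
ID3DXSprite* pSprite = NULL;
ID3DXFont*   pFont   = NULL;
D3DXCreateSprite( pDevice, &pSprite );
D3DXCreateFont( pDevice, 16, 0, FW_NORMAL, 1, FALSE, DEFAULT_CHARSET,
                OUT_DEFAULT_PRECIS, DEFAULT_QUALITY, DEFAULT_PITCH,
                TEXT("Arial"), &pFont );

// inside BeginScene/EndScene:
pSprite->Begin( D3DXSPRITE_ALPHABLEND );

D3DXVECTOR3 pos( 100.0f, 100.0f, 0.0f );
pSprite->Draw( pTexture, NULL, NULL, &pos, 0xFFFFFFFF );

// passing the sprite lets the font batch its quads through it
RECT rc = { 10, 10, 300, 30 };
pFont->DrawText( pSprite, TEXT("Hello"), -1, &rc, DT_LEFT, 0xFFFFFFFF );

pSprite->End();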

When adding vertices to your buffers, try to group them by texture and render states, so that you get the most out of each Draw[Indexed]Primitive call and avoid the expensive calls to SetRenderState and SetTexture.
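As a sketch of that idea for textures (RenderItem and DrawSorted are made up for illustration; SetTexture and DrawPrimitive are the real device calls):

#include <algorithm>
#include <vector>

// Hypothetical render-queue item: whatever you need to issue the draw.
struct RenderItem
{
    IDirect3DTexture9* pTexture;
    UINT               startVertex;
    UINT               triangleCount;
};

// Sort by texture so identical textures end up adjacent, then only
// call SetTexture when the texture actually changes.
void DrawSorted( IDirect3DDevice9* pDevice, std::vector<RenderItem>& items )
{
    std::sort( items.begin(), items.end(),
        []( const RenderItem& a, const RenderItem& b )
        { return a.pTexture < b.pTexture; } );

    IDirect3DTexture9* pCurrent = NULL;
    for( const RenderItem& item : items )
    {
        if( item.pTexture != pCurrent )
        {
            pDevice->SetTexture( 0, item.pTexture );   // the expensive call
            pCurrent = item.pTexture;
        }
        pDevice->DrawPrimitive( D3DPT_TRIANGLELIST,
                                item.startVertex, item.triangleCount );
    }
}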

This probably hasn't answered all of your questions, but I'm sure someone else will come along. Be sure to check out the forum FAQs and the SDK. Also, remember that both ATI and NVIDIA have developer resources for DirectX independent of the respective companies' cards.
One area that you really have to plan for is the material system and material management (Fuzztrek touched on this a bit). Basically, a "material" can be defined as containing the following properties:

- Vertex and Pixel Shader
- Shader Constants
- Texture Sets

Each Material can have multiple texture sets, because you may want to use the same material, just with different textures.

If you really want to get in-depth, you could also add physics properties and sounds to your material. This way, the material automatically contains all physical properties, not just the graphical ones. It would then be easier to have objects exhibit behavior based on their composition (i.e. the object's strength, its sound when walked on, etc.).
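As a rough sketch, a material description along those lines might look like this (all of the names here are illustrative, not taken from an actual system):

#include <d3dx9.h>
#include <vector>

// One interchangeable set of textures for a material.
struct TextureSet
{
    std::vector<IDirect3DTexture9*> textures;   // one per sampler stage
};

struct Material
{
    IDirect3DVertexShader9*  pVertexShader;
    IDirect3DPixelShader9*   pPixelShader;
    std::vector<D3DXVECTOR4> shaderConstants;   // uploaded when the material is entered
    std::vector<TextureSet>  textureSets;       // same material, different textures

    // optional physical properties, as described above
    float       strength;
    const char* footstepSound;                  // sound when walked on
};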

In one of my projects, I use a very modular, pluggable material system. Each material is contained in its own separate DLL, and all of the requested materials have their DLLs dynamically linked at runtime. My IMaterial interface is what all materials inherit from. It is laid out roughly like this:

class IMaterial
{
public:
    virtual ~IMaterial() {}   // materials are deleted through this interface

    virtual HRESULT EnterMaterial() = 0;
    virtual HRESULT ExitMaterial() = 0;
    virtual HRESULT SetupForEntity( IEntity* entityIn ) = 0;
};

EnterMaterial() is called when the material is selected to be used for multiple batches of rendering. It handles configuring the IDirect3DDevice9 to use the selected shader. On the other hand, ExitMaterial() is called after all batches have been completed, and handles any cleanup.

SetupForEntity() configures the shader for the current entity. This is where all of the custom shader constants are set. IMaterial casts the entity passed in to an acceptable class type and gets all of its entity-specific information from there.

For example, if we were using our Water material, SetupForEntity() would cast the entity to a CWater object, then extract all of its properties (e.g. wave height, color, etc.).

Rendering does not occur inside of the Material itself. I've always liked to keep the material properties separate from the actual geometry data. Some people prefer to have the Material system render all of the geometry, but I let the Renderer interface handle that instead.
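A minimal sketch of what the renderer side might look like, assuming the entities have already been grouped by material (RenderBucket and DrawEntityGeometry are placeholder names):

// Each material is entered once, then all entities using it are set
// up and drawn, then the material is exited.
void CRenderer::RenderBucket( IMaterial* pMaterial,
                              const std::vector<IEntity*>& entities )
{
    pMaterial->EnterMaterial();                 // shaders and shared state on
    for( IEntity* pEntity : entities )
    {
        pMaterial->SetupForEntity( pEntity );   // per-entity shader constants
        DrawEntityGeometry( pEntity );          // geometry stays with the renderer
    }
    pMaterial->ExitMaterial();                  // cleanup
}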

I have found that this setup is quite accommodating, in that it provides a general foundation for all materials but still allows for specific actions to be performed. I recommend you take a look at this thread.
Dustin Franklin ( circlesoft :: KBase :: Mystic GD :: ApolloNL )
I haven't started making my Direct3D engine yet, but in my Java engine I pass the engine to the world.

I have a base class called "Actor", and everything which can be rendered in the world extends Actor. I have an abstract method "render(Graphics g)" in Actor, so all my actors have to implement their own render method. This way seems nicer to me.

My knowledge of 3D engine design doesn't go much further than that, though.
I would personally have something like this:

void CApplication::GameLoop()
{
    m_pInput->MouseLoop();
    m_pInput->KeyboardLoop();
    // + some way of sending changes to the graphics class
    // i.e. if the move forward key is pressed, there is some
    // way of telling the graphics class to move forward
    m_pGraphics->Render();
}

// then in graphics
void CGraphics::Render()
{
    m_pWorld.Render();
}



Bear in mind that I haven't been writing 3D engines for very long.
Hope this gives you some ideas, though.
Thanks for the replies,

I hadn't really thought too much about how to set up a materials system but I suppose I will have to now. I'm rather new to shaders - I've done a couple of tutorials on vertex shaders and that's it. Designing everything to minimise state changes seems unintuitive to me.

The project that I'm tentatively starting out on is a sort of Command & Conquer-style game. At the moment I just have something that generates and displays a patch of terrain (as a heightmap). I want to be able to do stuff like water and shadow effects. Using DLLs for each material seems a bit much for my purposes, but the EnterMaterial(), ExitMaterial(), SetupForEntity() structure you described makes a lot of sense. How do you handle global constants that would apply to all materials, like the projection and view matrices, or the position and direction of the sun? Do you set them from your EnterMaterial() function, so they can be held in different registers in each shader, or do you set them at the beginning and say every shader has to hold them in the same registers? (Did that question make sense?)

Quote:
I have a base class called "Actor", and everything which can be rendered in the world extends Actor. I have an abstract method "render(Graphics g)" in Actor, so all my actors have to implement their own render method. This way seems nicer to me.

This is pretty much what I'm doing right now. My problem (thinking ahead) comes when you want to render an actor with three different shaders, e.g. a soldier with different materials for his uniform, his skin and his gun. The solution, I suppose, is to use a tree structure where the soldier actor contains a list of child actors: two arm actors, one gun, one torso, one face, one helmet, etc. Then you maintain a global list of all the actors, which you can sort according to material type and so on. So each actor has two pointers to it: one from its parent actor and one from the global, sortable list.
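In code, that two-pointer arrangement might look something like this (just a sketch; all names are made up):

#include <vector>

class CRenderer;

class CActor
{
public:
    virtual ~CActor() {}
    virtual void Render( CRenderer* pRenderer ) = 0;

    int                  materialId;   // sort key for the global list
    CActor*              pParent;      // link from the tree
    std::vector<CActor*> children;     // e.g. arms, gun, torso, face, helmet
};

// Every actor is also registered here, so the renderer can sort the
// flat list by material without walking the tree.
std::vector<CActor*> g_allActors;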
Quote:Original post by foolish_mortal
How do you handle global constants that would apply to all materials, like the projection and view matrices, or the position and direction of the sun? Do you set them from your EnterMaterial() function, so they can be held in different registers in each shader, or do you set them at the beginning and say every shader has to hold them in the same registers? (Did that question make sense?)


Ah, okay, I didn't include the complete definition of IMaterial. My version also has a SetShaderConstant*() series of functions that can set the internal constants from the outside. This is useful for all of the stuff that the entity itself doesn't contain (like you said: the projection and view matrices, lighting info, etc.).
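As a sketch of the idea (CWaterMaterial and the register convention are illustrative; SetVertexShaderConstantF is the real device call):

// The renderer hands shared values to the material...
virtual HRESULT SetShaderConstantF( UINT startRegister, const float* pData,
                                    UINT vector4Count ) = 0;

// ...and a concrete material forwards them to whichever registers its
// own shader expects:
HRESULT CWaterMaterial::SetShaderConstantF( UINT startRegister,
                                            const float* pData,
                                            UINT vector4Count )
{
    return m_pDevice->SetVertexShaderConstantF( startRegister, pData,
                                                vector4Count );
}

// Per frame, before entering any materials:
D3DXMATRIX viewProj = matView * matProj;
pMaterial->SetShaderConstantF( 0, (const float*)&viewProj, 4 );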

I only used DLLs because I wanted to achieve the maximum amount of expandability and modularity. It's definitely not a necessary aspect of the system if you aren't going for that.

State changes, like you said, are a little bit difficult. You *really* have to have your entities organized in some kind of tree, based on their material. This way, you minimize the number of calls needed to set the shader, constants, and textures. This is why EnterMaterial() and ExitMaterial() are separate from SetupForEntity(): ideally you draw multiple entities in between enter and exit.
Dustin Franklin ( circlesoft :: KBase :: Mystic GD :: ApolloNL )
