beebs1

Design Question


Hiya, I'm designing a renderer system for my small game, and I have a design question I was hoping someone could help with. I'd like to split the whole system into two parts: a Renderer, which maintains a rendering queue and makes calls to Direct3D, and a Scene Graph, which spatially organises renderable objects (meshes, particle systems, etc.).

There are two ways I can see of doing this, and I'm not sure which is best. Firstly, I could keep the renderer low-level: it would only know about vertex/index buffers, textures, shaders and render states. That would push the higher-level mesh and material classes up so they are part of the Scene Graph. Alternatively, I could have the renderer accept the higher-level meshes and materials to draw instead.

Can anyone comment on which would be the better split? Thanks very much as always.

I would not "have the renderer accept the higher-level meshes and materials to draw instead." Generally, if you've got an object whose sole purpose is to be 'operated on' by another object, then your encapsulation is broken. That said, I also wouldn't make the scene graph system responsible for telling the renderer which states or vertex buffers to use.

I'd have two interfaces:

interface IMesh
{
    void SetPosition(Vector3 pos);
    void SetOrientation(Quaternion quat);
    /* ... etc ... */
    void Render(); // Might render immediately, might only queue up - no guarantees either way
}

interface IMeshProvider
{
    IMesh CreateMesh(string meshResourceName);
}


which is all that the SceneGraph ever sees. Mind you, that's all it needs to see - it does not need to know about things like materials and mesh details; those are purely rendering characteristics. All it needs is one mesh factory and a bunch of mesh objects. The object that implements IMeshProvider might also be the actual renderer, but the SceneGraph doesn't care.

This leaves us with the relationship between IMesh objects and the renderer. All that mesh objects need from the renderer is a way to add strips of geometry to the render queue:


interface IRenderQueue
{
    void AddToRenderQueue(RenderCommand cmd);
}

/* just to be clear about that RenderCommand structure... */
struct RenderCommand
{
    Material mat;
    VertexBuffer vb;
    IndexBuffer ib;
    RenderState[] renderStates;
    PrimitiveType primType; // triangle list, triangle strip, etc
    int startIndex;
    int numPrimitives;
}


With those interfaces, you can create, say, a D3DRenderer object (that implements IMeshProvider and IRenderQueue interfaces), and a D3DMesh object (that implements IMesh). Aside from the points at which the objects are actually created, all inter-object access can be done through the tightly defined interfaces, which satisfies the Dependency Inversion Principle, keeps your objects loosely coupled, and ensures that no object knows about things outside of its domain of responsibility.
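To make the wiring concrete, here is a compilable sketch of that arrangement using stub types in place of the D3D ones (StubRenderer, QueueingMesh and the trimmed-down interfaces are my own illustrative names, not from the post; position/orientation setters are omitted for brevity). The point is that the mesh only ever sees IRenderQueue, and the scene graph only ever sees IMeshProvider and IMesh, even though one concrete renderer object implements both renderer-side interfaces:

```cpp
#include <cassert>
#include <memory>
#include <string>
#include <vector>

// Stub stand-ins for the renderer-side types named in RenderCommand above.
struct Material {};
struct VertexBuffer {};
struct IndexBuffer {};
enum class PrimitiveType { TriangleList, TriangleStrip };

struct RenderCommand
{
    Material*     mat  = nullptr;
    VertexBuffer* vb   = nullptr;
    IndexBuffer*  ib   = nullptr;
    PrimitiveType primType = PrimitiveType::TriangleList;
    int startIndex    = 0;
    int numPrimitives = 0;
};

class IRenderQueue
{
public:
    virtual ~IRenderQueue() = default;
    virtual void AddToRenderQueue(const RenderCommand& cmd) = 0;
};

class IMesh
{
public:
    virtual ~IMesh() = default;
    virtual void Render() = 0; // might queue rather than draw immediately
};

class IMeshProvider
{
public:
    virtual ~IMeshProvider() = default;
    virtual std::unique_ptr<IMesh> CreateMesh(const std::string& name) = 0;
};

// A mesh that knows only the IRenderQueue interface, never the concrete renderer.
class QueueingMesh : public IMesh
{
public:
    explicit QueueingMesh(IRenderQueue& queue) : m_queue(queue) {}
    void Render() override
    {
        RenderCommand cmd;
        cmd.numPrimitives = 2; // e.g. a quad as two triangles
        m_queue.AddToRenderQueue(cmd);
    }
private:
    IRenderQueue& m_queue;
};

// One object playing both roles, as the post suggests a D3DRenderer might.
class StubRenderer : public IMeshProvider, public IRenderQueue
{
public:
    std::unique_ptr<IMesh> CreateMesh(const std::string&) override
    {
        return std::make_unique<QueueingMesh>(*this);
    }
    void AddToRenderQueue(const RenderCommand& cmd) override
    {
        m_commands.push_back(cmd);
    }
    std::size_t QueuedCommands() const { return m_commands.size(); }
private:
    std::vector<RenderCommand> m_commands;
};
```

A scene graph built on top of this would hold an IMeshProvider& and a list of IMesh pointers, and swapping D3D for another API means swapping only the concrete renderer.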

Thanks very much [smile]

I'm going to go with your suggestion. I think I'll have to use interfaces for the Vertex and Index buffers as well, and then cast them to derived D3D-specific classes to get at the actual D3D vb/ib objects. Something like this:


interface IVertexBuffer;

class D3DVertexBuffer : public IVertexBuffer
{
    // allow access to the D3D data
    ID3D10Buffer* GetBuffer();
}

// And then to get at the data:
void D3DRenderQueue::AddToRenderQueue( RenderCommand cmd )
{
    ID3D10Buffer* buffer = static_cast<D3DVertexBuffer*>( cmd.vb )->GetBuffer();
}

Can you see any problems with this? I'm not sure if doing this static_cast for each buffer will slow things down too much, or maybe there's a better way?

Thanks again.

Quote:
Original post by beebs1
Can you see any problems with this?
I think when you actually come to build this system you will realise that you don't need to get at the data, least of all to add it to the render queue; in fact you would be adding the entire RenderCommand to the render queue, not its individual attributes.

Quote:
I'm not sure if doing this static_cast for each buffer will slow things down too much
A static_cast is resolved at compile time: unlike a dynamic_cast, it performs no run-time type check, so it costs essentially nothing (at most a fixed pointer adjustment). The run-time penalty you're thinking of belongs to dynamic_cast, which has to inspect run-time type information.
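A tiny example makes the difference visible (the buffer classes and helper names here are mine, just for illustration): static_cast simply asserts the type and is unchecked, while dynamic_cast verifies it at run time and yields nullptr on a mismatch.

```cpp
#include <cassert>

// Two hypothetical implementations behind one interface.
struct IVertexBuffer { virtual ~IVertexBuffer() = default; };
struct D3DVertexBuffer : IVertexBuffer { int bufferId = 42; };
struct GLVertexBuffer  : IVertexBuffer {};

// Unchecked: the caller promises vb really is a D3DVertexBuffer.
inline D3DVertexBuffer* AsD3DUnchecked(IVertexBuffer* vb)
{
    return static_cast<D3DVertexBuffer*>(vb);
}

// Checked at run time: returns nullptr if vb is some other derived type.
inline D3DVertexBuffer* AsD3DChecked(IVertexBuffer* vb)
{
    return dynamic_cast<D3DVertexBuffer*>(vb);
}
```

In a renderer where every buffer reaching the queue is known to be the D3D kind, the unchecked cast is safe and free; the checked one is only worth its cost when mixed implementations can genuinely appear.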

Quote:
or maybe there's a better way?
The AddToRenderQueue function doesn't need to access the underlying buffer directly. When it comes to rendering, however, you could just have a virtual bind function that binds the buffer appropriately to the device/context - so there's still no need to get at the underlying ID3D10Buffer.
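A minimal sketch of that virtual-bind idea, with a fake device standing in for the real D3D device/context (FakeDevice and StubD3DVertexBuffer are my illustrative names): the renderer calls Bind through the interface while draining the queue, and only the derived class touches API-specific state.

```cpp
#include <cassert>
#include <string>
#include <utility>
#include <vector>

// Hypothetical stand-in for the rendering device/context.
struct FakeDevice
{
    std::vector<std::string> bound; // records what was bound, for illustration
};

class IVertexBuffer
{
public:
    virtual ~IVertexBuffer() = default;
    // Bind this buffer to the device; the renderer calls this while drawing.
    virtual void Bind(FakeDevice& device) = 0;
};

// A real D3D implementation would call IASetVertexBuffers here; this stub
// just records the call so the shape of the design is visible.
class StubD3DVertexBuffer : public IVertexBuffer
{
public:
    explicit StubD3DVertexBuffer(std::string name) : m_name(std::move(name)) {}
    void Bind(FakeDevice& device) override
    {
        device.bound.push_back(m_name);
    }
private:
    std::string m_name;
};
```

The cost is one virtual call per bind, and the derived buffer needs access to the device (passed in here; held as a pointer in designs like the one discussed below).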

Thanks dmatter. I think it may be too late to reply to this, but I'll give it a go.

Just to check my understanding - do you mean that the VertexBuffer interface should have a virtual Bind() method? I guess then the derived buffer would have to keep a pointer to the D3D device, so it can actually do this.

This just seems odd to me, as you would never need to call Bind() from outside the renderer - it seems like an implementation detail, so I'm not sure it should be part of the interface. That said, I can't think of any other way... have I misunderstood?

Thanks!

You haven't misunderstood [smile].
When going for this sort of approach it is common to have lots of pointers to the device - this is how Ogre does it. I think it could be argued either way as to whether binding is an implementation detail or a behavioural one; certainly it is often made part of the interface.
This is not the only design of course, down casting to reach a GetBuffer function would certainly work too.

The last time I went for abstracting at this level (vertex buffers, textures etc), I opted to use UIDs to represent things like buffers and have the renderer hold onto all the actual implementation details - using the UIDs to quickly index the correct buffer. This actually does keep binding as an implementation detail (there's no interface for it); you just feed the renderer UIDs and it handles everything with fewer virtual calls and no casting whatsoever. You can read more about it in this old thread.
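That UID approach can be sketched like this (HandleRenderer and its members are my own illustrative names, not from the linked thread): the renderer owns every buffer, clients hold only integer ids, and binding never appears in any public interface - no virtual calls, no casting.

```cpp
#include <cassert>
#include <cstdint>
#include <utility>
#include <vector>

using BufferId = std::uint32_t; // opaque handle handed out to clients

class HandleRenderer
{
public:
    BufferId CreateVertexBuffer(std::vector<float> vertices)
    {
        m_buffers.push_back(std::move(vertices));
        return static_cast<BufferId>(m_buffers.size() - 1);
    }

    void Draw(BufferId id)
    {
        // A real renderer would use the id to index its internal D3D buffer,
        // bind it to the device and issue the draw call; we just record it.
        m_lastBound = id;
    }

    BufferId LastBound() const { return m_lastBound; }

private:
    std::vector<std::vector<float>> m_buffers; // all implementation detail
    BufferId m_lastBound = 0;
};
```

Because the id is just an index into the renderer's own storage, the lookup is a single array access, and nothing API-specific ever leaks out to the scene graph.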
