beebs1

Interface Issue


Hiya, I'm just thinking about some designs for a renderer, and I have a quick question on interfaces I was hoping someone could help with.

I've got two interfaces: IVertexBuffer, which internally contains a Direct3D vertex buffer, and IRenderer, which contains a D3D device. I'm not sure how the renderer can draw the buffers, though, because in a call like this:

void IRenderer::Draw( IVertexBuffer *buffer );

...the renderer can't get at the actual D3D data. I guess this must be a common problem; can anyone recommend a solution?

I've thought about the vertex buffers holding a pointer to the device (but not owning it), but this doesn't seem very clean, and it would get worse if I wanted to sort the buffers before drawing. Another idea would be to give each vertex buffer a unique ID. The renderer would then own the actual D3D buffers, look them up in a table, and draw them when requested. But this seems like a big overhead, as there might be hundreds (or more) of buffers...

Any help would be very much appreciated! [smile]
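Here's roughly the shape of what I mean (a stand-in struct replaces the real D3D type, so this is just an illustration, not actual DirectX):

```cpp
// Stand-in for the real Direct3D buffer interface -- illustration only.
struct ID3DVertexBuffer { int vertexCount; };

class IVertexBuffer {
public:
    virtual ~IVertexBuffer() {}
    // No public accessor for the D3D buffer: exposing one would
    // leak API details to every caller.
private:
    ID3DVertexBuffer *d3dBuffer = nullptr;
};

class IRenderer {
public:
    virtual ~IRenderer() {}
    // Problem: an implementation of Draw() has no way to reach
    // the buffer's private D3D data.
    virtual void Draw( IVertexBuffer *buffer ) = 0;
};
```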

For my relationships between Texture or VertexBuffer classes and my GraphicsDevice class, I find this a very worthwhile and justifiable use of the friend keyword. GraphicsDevice is just a friend of Texture and VertexBuffer.

Purists might stomp on this idea, but to be honest the classes are so tightly coupled logically anyway that it has never caused major problems.

Given that the alternatives that spring to my mind are at least as complicated as the ones you mention above (or as yucky as exposing parts of the VertexBuffer to a public interface that only GraphicsDevice needs), to me friend is the way to go.
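To sketch what I mean (the names are just examples, and a dummy struct stands in for the real D3D buffer type):

```cpp
// Dummy stand-in for the real Direct3D buffer type.
struct ID3DVertexBuffer { int vertexCount; };

class GraphicsDevice; // forward declaration for the friend line

class VertexBuffer {
    friend class GraphicsDevice; // only the device sees the internals
public:
    explicit VertexBuffer( ID3DVertexBuffer *buf ) : d3dBuffer( buf ) {}
private:
    ID3DVertexBuffer *d3dBuffer; // hidden from everyone else
};

class GraphicsDevice {
public:
    void Draw( const VertexBuffer &vb ) {
        // Legal because GraphicsDevice is a friend of VertexBuffer.
        lastDrawnVertexCount = vb.d3dBuffer->vertexCount;
        // ...the actual SetStreamSource/DrawPrimitive calls go here...
    }
    int lastDrawnVertexCount = 0; // for illustration only
};
```

Note that friendship grants GraphicsDevice access to all of VertexBuffer's internals, which is exactly why it only makes sense for classes this tightly coupled.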

Unrelated to your question but a couple of nitpicks:


void IRenderer::Draw( IVertexBuffer *buffer );


Would you not be better passing the IVertexBuffer by reference? Unless you sometimes need to pass NULL in for some reason, in which case ignore me.

Also, since your classes contain data members, they aren't really interfaces, so the 'I' prefix is a bit misleading. DirectX classes that start with 'I' are COM interfaces, if memory serves.

Thanks very much for your reply.

Sorry, I really didn't explain very well! IVertexBuffer and IRenderer are actually pure-virtual interfaces - it is the classes which inherit from these which contain the data members (D3D device, buffers).

So I don't think I can use friend, as the interfaces themselves don't contain anything. Which leads me back to the problem - when IRenderer is passed an IVertexBuffer, it can't get at the actual D3D vertex buffer to draw it.
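The only way I can see to make this work as-is is a downcast inside the concrete renderer, something like the sketch below (stand-in types again, and all names hypothetical), but that couples the renderer to "its" buffer type very tightly:

```cpp
// Pure-virtual interfaces, as described: no data members at all.
class IVertexBuffer {
public:
    virtual ~IVertexBuffer() {}
};

class IRenderer {
public:
    virtual ~IRenderer() {}
    virtual void Draw( IVertexBuffer &buffer ) = 0;
};

// Stand-in for the real D3D buffer object.
struct ID3DVertexBuffer { int vertexCount; };

// The concrete D3D-side classes hold the actual data.
class D3DVertexBuffer : public IVertexBuffer {
public:
    explicit D3DVertexBuffer( ID3DVertexBuffer *buf ) : d3dBuffer( buf ) {}
    ID3DVertexBuffer *d3dBuffer;
};

class D3DRenderer : public IRenderer {
public:
    void Draw( IVertexBuffer &buffer ) override {
        // The escape hatch: assume the buffer was created by this
        // renderer and downcast to the concrete type. It works, but
        // it's brittle -- pass in a non-D3D buffer and all bets are off.
        D3DVertexBuffer &d3dBuf = static_cast<D3DVertexBuffer &>( buffer );
        lastDrawnVertexCount = d3dBuf.d3dBuffer->vertexCount;
    }
    int lastDrawnVertexCount = 0; // for illustration only
};
```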

Quote:
Original post by EasilyConfused

void IRenderer::Draw( IVertexBuffer *buffer );


Would you not be better passing the IVertexBuffer by reference? Unless you sometimes need to pass NULL in for some reason, in which case ignore me.


Yes, that would be better. Thanks!

Perhaps this is just a bad design... I could go the other way, and have the vertex buffer know how to render itself. But I don't particularly like this idea - I think it should just contain data. But I'm open to any criticism or suggestions!

Otherwise, I could have the renderer 'know' about higher-level renderable objects, such as meshes. Has anyone done this? If so, I'd appreciate any comments.

Thank you.

Do you really, really have a good reason for the interface? Runtime determination of your underlying render API is generally the only reason people have for this, and it's usually not a very good one. It offers limited benefits, more than doubles your implementation effort, and engenders rather brittle systems (such as you're discovering now) with suboptimal abstraction. I would strongly recommend against bothering.

That said, the common approach to ease this kind of runtime dispatch mechanism is to abstract at a higher level -- that is, you shouldn't be concerned about vertex buffers and index buffers, but rather more general renderable packages of data. Those packages of data (which typically consist of the geometry, associated textures, shaders, etc) can be implemented more opaquely. For example your 'buffers' of index or vertex data can simply wrap some kind of simple handle (like an int) which is translated by an internal, API-specific system to an appropriate representation that is consumable by the renderer for the API.

You're correct in that the renderer should do the rendering, not the buffers (buffers are too low-level for that) or the 'renderable' class. But in order to achieve this elegantly, without downcast hacks and other brittle ugliness, you're going to have to have a pretty robust internal system. Logically you will need to keep the API-specific data, such as D3D vertex buffers and such, in a bookkeeping cache that is looked up by the aforementioned ID, or something, from within the renderer itself (not necessarily as actual members, but in a cache that itself is an actual member, or something).
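A minimal sketch of that bookkeeping idea (all names are hypothetical, and a plain struct stands in for the API-specific buffer object):

```cpp
#include <cstdint>
#include <unordered_map>

// Stand-in for the API-specific data; the real thing would wrap an
// IDirect3DVertexBuffer9* or a GL buffer name.
struct D3DBufferData { int vertexCount; };

// Opaque handle: the only thing the rest of the engine ever sees.
struct VertexBufferHandle { std::uint32_t id; };

class Renderer {
public:
    VertexBufferHandle CreateVertexBuffer( int vertexCount ) {
        VertexBufferHandle h{ nextId++ };
        cache[h.id] = D3DBufferData{ vertexCount };
        return h;
    }

    void Draw( VertexBufferHandle h ) {
        // Translate the handle to the API-specific data internally;
        // no D3D types ever cross the public interface.
        const D3DBufferData &buf = cache.at( h.id );
        // ...set stream source, DrawPrimitive, etc...
        lastDrawnVertexCount = buf.vertexCount;
    }

    int lastDrawnVertexCount = 0; // for illustration only

private:
    std::uint32_t nextId = 1;
    std::unordered_map<std::uint32_t, D3DBufferData> cache;
};
```

Note that a hash (or array) lookup per draw is cheap compared to the draw call itself, so "hundreds of buffers" is not in itself a reason to avoid this scheme.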

The overhead and complexity are nontrivial, so I'll urge you again to reconsider your overall plan. Why do you feel you need runtime determination of the render API?

Quote:
Original post by jpetrie
Do you really, really have a good reason for the interface?


No, I don't need to determine the API at runtime. Thanks for your post - I guess I've just gone down a bad path here.

My reason for all this is to try to design a simple rendering system which supports both DirectX and OpenGL. So from what you're saying, should I be abstracting all this at a higher level? For example, having the renderer consume meshes, materials, etc. rather than vertex/index buffers and textures?

Thanks - Please let me know if I've misunderstood! [smile]

Well, you will most likely need to give your renderer a concept of vertex buffer and index buffer at some level. However, in general you want to keep the part of the abstraction that must answer the "D3D or OpenGL?" question as high-level as possible, for at least a couple of reasons:

  • It helps keep the code paths that are impacted by the swap to a minimum via a more-centralized use of API-specifics.

  • When every interface in your renderer has an IWhatever and a D3DWhatever and an OpenGLWhatever, then you quickly run into the problem that every IWhatever must have a D3DWhatever and an OpenGLWhatever -- e.g., if a specific API has a concept that does not translate well, you have to kludge it. This usually results in a least-common-denominator system.


API abstraction is usually pointless beyond the academic value, but runtime abstraction is particularly so.

OK, thanks. I think I've got that now.

A last question: surely you still have to abstract the rendering API even if you're only supporting one? At least, that's how I've always done it (to some degree) before.

I wouldn't want my 3D engine/scene graph to make Direct3D calls directly (no pun intended).

Sure, but the abstraction of your API from their API is one kind of abstraction, and the abstraction of your API from both their APIs is another, slightly different abstraction. The former is more pleasant to develop, in general.
