SOLID and game (engine) design

Awwh, that makes me a little bit worried about my current use of this system. Right now I'd e.g. use one of my component systems to render like this:

1. What are the Begin() and End() calls actually doing?

Since there's no batching or sorting by effect going on, it makes no difference whether you call the Begin() and End() functions outside or inside the render. However, if you later upgrade the engine to use rendering queues with batching, these Begin() and End() functions are going to be meaningless.

I'd argue that they are violating DI, and should be hidden away behind the renderer interface.

2. Do you need to set the colour every frame? Some of this state could be managed in the underlying effect implementation, so you only modify it when it actually needs to change.

3. Setting the worldview matrix in the effect might make sense in terms of the implementation details, but does it make sense from a high level view? If not, it's probably a DI violation.

I think it would be cleaner to change your Render interface to something like:


void Renderer::Render(const CMesh&, CEffect&, CMatrix worldView);

It's then up to the renderer implementation to decide how to put all these things together into a render call, or render queue, or whatever it wants to do with them. Note that I'm also assuming a similar resource proxying mechanism for the mesh and effect classes. You don't actually need to resolve any of the lookups here at all; that can all be deferred to the Render() function's internals.
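For illustration, here's a minimal sketch of what a D3D9-side implementation of that Render() call might look like, with Begin()/End() handled entirely inside it. All the names (D3D9Renderer, the Resolve() calls, D3DMeshData) are made up, and I'm assuming CMatrix is layout-compatible with D3DXMATRIX:


// Hypothetical D3D9 implementation: the Begin()/End() pairing becomes an
// internal detail of the renderer, invisible to calling code.
void D3D9Renderer::Render(const CMesh& mesh, CEffect& effect, CMatrix worldView)
{
    // Resolve the lightweight proxies to their API-specific objects.
    ID3DXEffect* d3dEffect = m_effectModule.Resolve(effect);
    const D3DMeshData& meshData = m_meshModule.Resolve(mesh);

    // Assuming CMatrix has the same memory layout as D3DXMATRIX.
    d3dEffect->SetMatrix("WorldView", reinterpret_cast<const D3DXMATRIX*>(&worldView));

    UINT passes = 0;
    d3dEffect->Begin(&passes, 0);
    for (UINT i = 0; i < passes; ++i)
    {
        d3dEffect->BeginPass(i);
        meshData.Draw(m_device); // binds buffers and issues the draw call
        d3dEffect->EndPass();
    }
    d3dEffect->End();
}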

Yeah, that's the thing. Since I used a linear, only-so-named-because-it-sounds-cool "hash", I actually required that map to safely access my textures again after loading them. So you'd suggest creating a hash of the texture name on loading, and then, when I later access that file (loading and accessing are separate tasks now, no more lazy loading; the one time in the TextureModule was a slip), simply creating a new Texture object, like this?

Yeah, thinking about it, it may be better to stick to using a linear index for the locator. Either way, it's an implementation detail, and as long as it's all wrapped nicely behind the relevant interface, you can tweak the actual implementation details to better suit the real-life usage patterns.
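As a rough sketch of that idea (all names invented), the module could keep a vector of API objects plus a name-to-index map, so a locator is nothing more than a stable linear index:


#include <cstdint>
#include <string>
#include <unordered_map>
#include <vector>

// Hypothetical texture module: the locator is just a linear index into
// the module's internal storage.
using TextureLocator = std::uint32_t;

class TextureModule
{
public:
    // Loads the texture if unknown, otherwise returns the existing locator.
    TextureLocator Load(const std::string& fileName)
    {
        auto it = m_nameToLocator.find(fileName);
        if (it != m_nameToLocator.end())
            return it->second;

        const TextureLocator locator =
            static_cast<TextureLocator>(m_textures.size());
        m_textures.push_back(LoadFromDisk(fileName)); // API-specific load
        m_nameToLocator.emplace(fileName, locator);
        return locator;
    }

private:
    struct ApiTexture { /* e.g. IDirect3DTexture9* ... */ };

    ApiTexture LoadFromDisk(const std::string& fileName);

    std::vector<ApiTexture> m_textures;
    std::unordered_map<std::string, TextureLocator> m_nameToLocator;
};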

Well, you can call me on premature optimization there too, since I assumed it was better not to create a new texture handle (can we call it that anyway?) every time I accessed a texture by file name. You could also call me on currently having a messed-up rendering system that, at least for the entities, doesn't store the textures but accesses them per frame per entity (no, not even material sorting currently, yikes :/), so that would be quite a bit of allocation and deallocation per frame. That is, if you are suggesting I handle it the way shown in the code, aren't you?

I'd envisaged the texture class as being something almost completely engine-agnostic, that could be stored in components. You wouldn't need to create them very often at all, and if you did, you could happily create them on the stack (via a public constructor that takes a pointer to the action handler, or something that can provide a pointer to the action handler). They're pretty lightweight, so it's not like they consume huge amounts of memory. In fact, they are probably going to consume less space than most texture names would, especially if you're using wide character strings.
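Something along these lines, perhaps; ITextureActionHandler and TextureLocator here are placeholders for whatever the engine actually uses. The whole object is only two words in size, so creating or copying one is trivially cheap:


#include <cstdint>

using TextureLocator = std::uint32_t;

// Placeholder interface for whatever executes texture actions internally.
struct ITextureActionHandler
{
    virtual ~ITextureActionHandler() = default;
    virtual unsigned GetWidth(TextureLocator locator) const = 0;
};

// Lightweight, engine-agnostic texture proxy: cheap enough to live in
// components or on the stack.
class CTexture
{
public:
    CTexture(TextureLocator locator, ITextureActionHandler* handler)
        : m_locator(locator), m_handler(handler) {}

    unsigned GetWidth() const { return m_handler->GetWidth(m_locator); }

private:
    TextureLocator m_locator;
    ITextureActionHandler* m_handler;
};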

1. What are the Begin() and End() calls actually doing?

Since there's no batching or sorting by effect going on, it makes no difference whether you call the Begin() and End() functions outside or inside the render. However, if you later upgrade the engine to use rendering queues with batching, these Begin() and End() functions are going to be meaningless.

I'd argue that they are violating DI, and should be hidden away behind the renderer interface.

Begin() and End() do ultimately map to the Begin() and End() calls of my D3DXEffect-framework wrapper, via the action handler and thus via the EffectModule. Yeah, I get the point; I've started to experiment with a basic render queue, and I must say that from that point of view those calls really are obsolete.

2. Do you need to set the colour every frame? Some of this state could be managed in the underlying effect implementation, so you only modify it when it actually needs to change.

Well, I'll at least have to try to set it every frame, if you are just talking about the code that is calling SetColor here, since I have multiple renderers that all render with the same effect and use the texture slot with different textures. I don't see many possibilities to handle this the way you said, except if you are talking about something like a render queue; I guess the color would be part of my StateGroup commands, so that might work out.

It's then up to the renderer implementation to decide how to put all these things together into a render call, or render queue, or whatever it wants to do with them. Note that I'm also assuming a similar resource proxying mechanism for the mesh and effect classes. You don't actually need to resolve any of the lookups here at all; that can all be deferred to the Render() function's internals.

Sounds like a good idea, but would you rather advise me to make the "renderer" class of your example a separate class, getting a reference to the texture, effect and mesh modules, or is it supposed to be what my "Gfx3D" class does, simply under a different name? Also, the renderer isn't really supposed to be API-agnostic, is it? The only way to really resolve these lookups is if I guarantee that my renderer holds dedicated API-specific versions of the resource modules (unless they should be part of the API-specific Gfx3D implementation, that is); at least I can't see any other way.

I'd envisaged the texture class as being something almost completely engine-agnostic, that could be stored in components. You wouldn't need to create them very often at all, and if you did, you could happily create them on the stack (via a public constructor that takes a pointer to the action handler, or something that can provide a pointer to the action handler). They're pretty lightweight, so it's not like they consume huge amounts of memory. In fact, they are probably going to consume less space than most texture names would, especially if you're using wide character strings.

Yeah, currently my component stores only the texture file name. I see that this isn't a very good fit now that I'm going for a more advanced low-level rendering implementation. So I'll only need the locator for the current texture and a pointer to the action handler, without the need to create and/or store a dynamic Texture object. Then again, I'll at least need the one map you mentioned before, to map file names to locators, but I think that won't be a big deal.

Note that I'm also assuming a similar resource proxying mechanism for the mesh and effect classes.

Yup, you are assuming right. I didn't want to implement it for meshes before I'd heard your opinion on what I'd done so far; I guess that wasn't a bad choice, since it's going to save me a lot of unnecessary work like the Begin and End functions of the Effect. So I guess I really should just expose the functionality that is strictly needed via the Texture, Effect and Mesh classes, and leave everything else possible to the renderer using the actual API-specific objects, right?

EDIT: Also, I was wondering how I should handle this system for the meshes. Right now I only have low-level wrappers for DX objects like vertex buffers, index buffers, and vertex declarations. Should I have different locators for those in CMesh, or should I create some DX9-level abstraction class for meshes (i.e. a combination of vertex buffer, index buffer and vertex declaration)?

I think I know what you mean, but I wasn't really talking about "accessing the internals" so much as presenting the functionality to the outside. Take my gfx class: if I were to split it up into separate classes, one for handling textures, one for materials, one for meshes, etc., would you consider it a fault in my system if I wanted/needed to put these classes back into the gfx class as member variables/pointers to them, or, as suggested by de_mattT, via multiple interface inheritance?

No, of course not. Objects can't work together unless they have references to each other. The key is that each object should have no more references to other objects than it needs, and shouldn't expose access to more than it needs to.

Encapsulation is the whole of the law here. Pretty much everything in OO is about a different way of looking at encapsulation. Package things into encapsulated objects and use them where you need to. Can your textures and meshes live inside a graphics class? Sure, if you need a top level graphics class. I never have needed one. The question shouldn't become "then where would they go?" because the answer is "wherever you need". Form follows function.

Well, I'll at least have to try to set it every frame, if you are just talking about the code that is calling SetColor here, since I have multiple renderers that all render with the same effect and use the texture slot with different textures. I don't see many possibilities to handle this the way you said, except if you are talking about something like a render queue; I guess the color would be part of my StateGroup commands, so that might work out.

There are a few approaches you could take. For instance, the colour could be stored in the api agnostic Effect object, or in a low level, api specific wrapper that is stored in the EffectModule. When actually setting up the underlying D3DXEffect object, you just apply the stored colour. Outside the api specific renderer code, you only set the colour once, or when it needs to change for a specific Effect instance.
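A minimal sketch of that caching idea, assuming a hypothetical low-level wrapper and an effect parameter actually named "Color"; redundant colour changes never reach the API:


#include <d3dx9.h>

// Hypothetical low-level effect wrapper that caches the last colour set,
// so the D3DX call only happens when the value actually changes.
class D3DEffectWrapper
{
public:
    explicit D3DEffectWrapper(ID3DXEffect* effect)
        : m_effect(effect), m_color(0, 0, 0, 0), m_colorDirty(true) {}

    void SetColor(const D3DXVECTOR4& color)
    {
        if (color == m_color) // D3DXVECTOR4 provides operator==
            return;
        m_color = color;
        m_colorDirty = true;
    }

    // Called by the renderer right before drawing with this effect.
    void Apply()
    {
        if (m_colorDirty)
        {
            m_effect->SetVector("Color", &m_color); // parameter name assumed
            m_colorDirty = false;
        }
    }

private:
    ID3DXEffect* m_effect;
    D3DXVECTOR4 m_color;
    bool m_colorDirty;
};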

Sounds like a good idea, but would you rather advise me to make the "renderer" class of your example a separate class, getting a reference to the texture, effect and mesh modules, or is it supposed to be what my "Gfx3D" class does, simply under a different name? Also, the renderer isn't really supposed to be API-agnostic, is it? The only way to really resolve these lookups is if I guarantee that my renderer holds dedicated API-specific versions of the resource modules (unless they should be part of the API-specific Gfx3D implementation, that is); at least I can't see any other way.

Yep, it's the same as your Gfx3D class, and the implementations will be api specific (but the interface should not be, of course).
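In other words, roughly this split (interface name invented for illustration):


// API-agnostic interface: nothing D3D-specific leaks through here.
class IRenderer
{
public:
    virtual ~IRenderer() = default;
    virtual void Render(const CMesh& mesh, CEffect& effect, CMatrix worldView) = 0;
};

// D3D9-specific implementation: the only place allowed to touch the
// device, the effect framework, buffers, and so on.
class D3D9Renderer : public IRenderer
{
public:
    void Render(const CMesh& mesh, CEffect& effect, CMatrix worldView) override;

private:
    IDirect3DDevice9* m_device;
};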

Yeah, currently my component stores only the texture file name. I see that this isn't a very good fit now that I'm going for a more advanced low-level rendering implementation. So I'll only need the locator for the current texture and a pointer to the action handler, without the need to create and/or store a dynamic Texture object. Then again, I'll at least need the one map you mentioned before, to map file names to locators, but I think that won't be a big deal.

But that's all the Texture really is. A pointer to an action handler, and a locator. Why not store the textures? It's clearer what they are and how to use them.

Yup, you are assuming right. I didn't want to implement it for meshes before I'd heard your opinion on what I'd done so far; I guess that wasn't a bad choice, since it's going to save me a lot of unnecessary work like the Begin and End functions of the Effect. So I guess I really should just expose the functionality that is strictly needed via the Texture, Effect and Mesh classes, and leave everything else possible to the renderer using the actual API-specific objects, right?

Yep, that sounds about right. You may also find that you actually don't need very many 'actions' on the textures after all, because once you've moved all the rendering logic behind the api specific interface, you will probably find that you only very rarely need to muck about with the internals of the texture from outside.

Also, I was wondering how I should handle this system for the meshes. Right now I only have low-level wrappers for DX objects like vertex buffers, index buffers, and vertex declarations. Should I have different locators for those in CMesh, or should I create some DX9-level abstraction class for meshes (i.e. a combination of vertex buffer, index buffer and vertex declaration)?

Vertex buffers and index buffers are implementation details and should not be exposed outside the api specific code, not even indirectly in the form of locator objects. My feeling is that the IB and VB info should probably be wrapped up in an api specific mesh object, and have the locator point to that.
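A rough sketch of that grouping under D3D9 (names invented); the locator handed out by the mesh module would point at one of these records:


#include <d3d9.h>

// Hypothetical API-specific mesh record: everything needed to issue the
// draw call lives here, invisible outside the D3D9 code.
struct D3DMeshData
{
    IDirect3DVertexBuffer9* vertexBuffer;
    IDirect3DIndexBuffer9* indexBuffer;
    IDirect3DVertexDeclaration9* declaration;
    UINT vertexStride;
    UINT vertexCount;
    UINT faceCount;

    void Draw(IDirect3DDevice9* device) const
    {
        device->SetVertexDeclaration(declaration);
        device->SetStreamSource(0, vertexBuffer, 0, vertexStride);
        device->SetIndices(indexBuffer);
        device->DrawIndexedPrimitive(D3DPT_TRIANGLELIST, 0, 0,
                                     vertexCount, 0, faceCount);
    }
};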

No, of course not. Objects can't work together unless they have references to each other. The key is that each object should have no more references to other objects than it needs, and shouldn't expose access to more than it needs to.

Encapsulation is the whole of the law here. Pretty much everything in OO is about a different way of looking at encapsulation. Package things into encapsulated objects and use them where you need to. Can your textures and meshes live inside a graphics class? Sure, if you need a top level graphics class. I never have needed one. The question shouldn't become "then where would they go?" because the answer is "wherever you need". Form follows function.

Ok, I think I'm starting to get it. As for the graphics class, I think my main motivation was not having to pass that many objects around in places where I need the full graphics functionality, and not having to store more objects than with just the Gfx3D class. But for one thing, with the render queue introduced and the whole high-level rendering interface cleaned up, I might not even need to pass that many objects around anyway; and for another, it doesn't really matter whether I pass one or three parameters to a constructor and store one or three pointers in some classes, at the gain of a more cohesive interface. It's probably just a bad habit of mine, still being a bit lazy about that part.

There are a few approaches you could take. For instance, the colour could be stored in the api agnostic Effect object, or in a low level, api specific wrapper that is stored in the EffectModule. When actually setting up the underlying D3DXEffect object, you just apply the stored colour. Outside the api specific renderer code, you only set the colour once, or when it needs to change for a specific Effect instance.

Well, I think the low-level wrapper is not really an option, since obviously every effect is, at a low level, loaded only once and shouldn't be duplicated in memory. Sure, I could set things up so that I can have multiple D3DXEffect instances for the same effect file, but that would just make things more complicated. I think the idea of storing it in the api agnostic wrapper was a good one, but since I'm about to fully implement my render queue, I need to decide whether to store the color in the effect or, better yet, in the material (I really think it belongs there after all, doesn't it?).

Yep, it's the same as your Gfx3D class, and the implementations will be api specific (but the interface should not be, of course).

Ok, so I'm not totally off-track. Haha no, don't worry about the interface, I think I've got that part down by now ;)

Yep, that sounds about right. You may also find that you actually don't need very many 'actions' on the textures after all, because once you've moved all the rendering logic behind the api specific interface, you will probably find that you only very rarely need to muck about with the internals of the texture from outside.

Textures aren't really the problem: since I currently only support file-loaded textures and render targets, with no need to manipulate them CPU-side, I ended up with not a single action aside from getting the width. It's rather the meshes and effects that concern me, but from what I can see, the render queue implementation (based on the Frostbite article) will pretty much eliminate most of the necessary actions for the effect, too. Since everything is going to be a RenderState command anyway, there is not much point in having all those SetXYZ methods, nor Begin and End. In fact, I might need even fewer functions than for the texture; the effect is probably really just going to be a pointer to the D3DXEffect, and once I've implemented proper shader permutations, it will eventually have some things to handle here too (I can see the effect storing some API-agnostic cbuffer mapping information, etc.).
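For reference, this is roughly the kind of command/state-group structure I have in mind (names are just my working guesses); the point being that colour, texture and effect bindings become plain data in the queue, sorted before submission, instead of SetXYZ calls on wrapper objects:


#include <algorithm>
#include <cstdint>
#include <utility>
#include <vector>

// Hypothetical state-group commands, loosely after the stateless render
// queue idea: each draw call references a bundle of state changes.
enum class CommandType { BindEffect, BindTexture, SetColor, DrawMesh };

struct RenderCommand
{
    CommandType type;
    std::uint32_t locator; // texture/effect/mesh locator, or a packed colour
};

struct DrawCall
{
    std::uint64_t sortKey;            // e.g. effect id in the high bits
    std::vector<RenderCommand> state; // the state group for this draw
};

class RenderQueue
{
public:
    void Submit(DrawCall call) { m_calls.push_back(std::move(call)); }

    void Flush()
    {
        // Sorting by key batches draws that share the same effect/texture.
        std::sort(m_calls.begin(), m_calls.end(),
                  [](const DrawCall& a, const DrawCall& b)
                  { return a.sortKey < b.sortKey; });
        // ...execute the commands here, skipping redundant state changes...
        m_calls.clear();
    }

private:
    std::vector<DrawCall> m_calls;
};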

Vertex buffers and index buffers are implementation details and should not be exposed outside the api specific code, not even indirectly in the form of locator objects. My feeling is that the IB and VB info should probably be wrapped up in an api specific mesh object, and have the locator point to that.

Good, that's what I wanted to do here too. I think I just got the wording wrong, since it would basically be the same as storing the D3DTexture in the D3DTextureModule implementation, except that there would be two (or probably three, with the declaration) different maps for the created objects; unless I further encapsulate D3DVertexBuffer and D3DIndexBuffer in a D3DMesh class, that is.

But that's all the Texture really is. A pointer to an action handler, and a locator. Why not store the textures? It's clearer what they are and how to use them.

Nah, there is no real "why". It's more a case of "getting things to work like they did before, without taking possible optimizations into consideration". Plus, until now I had tied the texture name in that component directly to the field for changing the texture's name in my editor GUI, so keeping the name was simply easier at first. Of course it's not hard to change that, I'll just have to ^^

Oh, one last thing. You guys have already helped me out a lot, but I've got one more question concerning meshes. What should I treat as a Mesh? Right now I have different classes for "normal" meshes, then one for line objects, etc. (since I haven't figured out a way to make vertex creation API-agnostic too; I didn't even have a good way to handle that before, without hard-coded classes for different vertex formats, that is). So to fit my current implementation, it would be right to first of all make my Mesh class support vertices with different formats, probably specified at run-time (at least not hard-coded like now). But then, what about terrain, water, etc.? So once again it's a rather broad question, but will there be such a huge difference between these things that I might need specific render implementations:


renderer.RenderMesh(Mesh, Material, Camera);
renderer.RenderTerrain(Terrain, Material, Camera);

Or would I rather want to at least provide an "IRenderable" interface, so that there is some connection between those two (and possibly more)?
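Just to clarify what I mean by that, something like this (only a rough idea):


class Renderer;
class CMaterial;
class CCamera;

// Rough idea: every drawable type implements the same interface, and the
// renderer only ever deals with this abstraction.
class IRenderable
{
public:
    virtual ~IRenderable() = default;
    virtual void Render(Renderer& renderer, const CMaterial& material,
                        const CCamera& camera) const = 0;
};

class CMesh : public IRenderable { /* ... */ };
class CTerrain : public IRenderable { /* ... */ };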
