#ActualJuliean

Posted 07 April 2013 - 02:48 AM

But on the whole, if you separate 2 objects and you find they end up needing access to the internals of the other object, then you've done something wrong. The point is to create smaller objects that communicate via a minimal interface. The information they yield to the outside world should be on a need-to-know basis - that's the foundation of proper encapsulation and object-orientation more generally. So if you're worrying about how to expose the internals of your new object, you're thinking about it the wrong way.

 

I think I know what you mean, but I wasn't really talking about "accessing the internals" but more about presenting the functionality to the outside. Take my gfx class: if I were to split it up into separate classes, one for handling textures, one for materials, one for meshes, etc., would you consider it a fault in my design if I wanted/needed to put these classes back into the gfx class as member variables/pointers to them, or, as suggested by de_mattT, via multiple interface inheritance?
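For what it's worth, composing the split-up classes back into the gfx class as plain members is a common approach. A minimal sketch of that idea (all names here are hypothetical, not the thread's actual code):

```cpp
#include <string>

// Hypothetical sub-systems split out of the original monolithic gfx class.
class TextureManager {
public:
    int LoadTexture(const std::string& /*file*/) { return ++m_nextId; }
private:
    int m_nextId = 0;
};

class MeshManager {
public:
    int LoadMesh(const std::string& /*file*/) { return ++m_nextId; }
private:
    int m_nextId = 0;
};

// The gfx class becomes a thin facade that owns the sub-systems
// and hands them out, rather than implementing everything itself.
class Gfx {
public:
    TextureManager& Textures() { return m_textures; }
    MeshManager&    Meshes()   { return m_meshes; }
private:
    TextureManager m_textures;
    MeshManager    m_meshes;
};
```

Whether the facade forwards calls or just exposes the sub-objects is a judgment call; either way the sub-systems stay independently testable.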

 

 

However in many situations compilation dependencies won't be an issue. In these situations introducing pure virtual classes is often overkill and leads to unnecessary complexity; sticking to the idea that clients determine the interfaces of servers should produce good results.

 

Ah, now I see. It's not that there needs to be an "interface" - it's just that my high-level classes should dictate how the low-level classes work, not vice versa - right? I was probably thinking of it more as there not being any direct dependency whatsoever...

 

 

Of course there are situations where adding a pure virtual class can improve the clarity of the design, even if the motivation is not breaking compile time dependencies.

 

Well, I don't really know. After putting interfaces on my whole high-level graphics system, I feel my design has improved rather than degraded. I also find it much easier now to apply the other SOLID principles, like breaking up responsibilities. There is a lot less tight coupling, and I don't have to worry about the actual implementation of class A mattering to class B - I have the interface, which dictates how class A must work so that B can use it, and that's about it. After all, isn't this part of what SOLID, especially the DIP, stands for? I guess there are better ways to achieve this than using explicit interfaces, like you said, but for now I think I'm going to stick with them wherever possible; they at least give me a good basis for writing less coupled code. Once the SOLID approach has become routine, I might step back and remove them where they are unnecessary.
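As a rough illustration of the DIP point (all names hypothetical): the interface is defined from the high-level code's point of view, and the low-level implementation depends on that interface rather than the other way around.

```cpp
// Interface owned conceptually by the high-level code: it states what
// the high-level code needs from a renderer, nothing more.
class IRenderer {
public:
    virtual ~IRenderer() = default;
    virtual void DrawMesh(int meshId) = 0;
};

// High-level class depends only on the abstraction.
class Scene {
public:
    explicit Scene(IRenderer& renderer) : m_renderer(renderer) {}
    void Render() { m_renderer.DrawMesh(42); }
private:
    IRenderer& m_renderer;
};

// Low-level implementation; could be D3D9, OpenGL, or a test stub.
class StubRenderer : public IRenderer {
public:
    void DrawMesh(int meshId) override { lastMesh = meshId; }
    int lastMesh = -1;
};
```

A nice side effect is that Scene can be exercised with StubRenderer in a unit test, without any graphics API present.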

 

 

Check out the post by dmatter about halfway down the page. I believe your example is also a violation of the Liskov Substitution Principle, because implementations of ITexture (and similarly of IEffect) will not be interchangeable. I've never tried to create a game/engine with switchable graphics APIs - it sounds hard. I think I would probably be in favor of phil_t's suggestion of compile-time switches.

 

Yes, you are right about that. On the one hand, I can somewhat safely use static_cast as long as my gfx implementation takes care that no wrong texture is passed in. On the other hand, that kills the whole point of trying to make the system SOLID, since things should not rely on my gfx implementation to work nicely. Note, however, that this only applies to the texture - IEffect works without a problem. That's because no API-specific information needs to be pulled out of the effect: begin starts, end ends. It doesn't matter whether I implement IEffect using D3DXEFFECT, a custom vertex and pixel shader, or something completely different.
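The IEffect observation can be made concrete: because the interface only promises begin/end semantics, any implementation is substitutable, which is exactly what the Liskov Substitution Principle asks for. A sketch under the same caveat (the real IEffect may look quite different):

```cpp
class IEffect {
public:
    virtual ~IEffect() = default;
    virtual void Begin() = 0;
    virtual void End() = 0;
};

// Two unrelated implementations; clients cannot tell them apart.
class FixedFunctionEffect : public IEffect {
public:
    void Begin() override { active = true; }
    void End() override { active = false; }
    bool active = false;
};

class ShaderEffect : public IEffect {
public:
    void Begin() override { ++beginCount; }
    void End() override { ++endCount; }
    int beginCount = 0, endCount = 0;
};

// Generic client code working through the interface only.
inline void RunPass(IEffect& fx) { fx.Begin(); fx.End(); }
```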

 

Thinking about it, wouldn't this do the trick? Instead of having a specific GetAPITexture method on the actual implementation, have a "SetRenderTarget" function in ITexture? That would kill most of the cases where I need to cast ITexture, since in the constructor of the actual DX9Texture I can pass the specific device (this works without problems, since on creation of the texture I need access to the device anyway, and, though I'm not sure, I may already be passing the device in for some functionality), and in DX9Texture::SetRenderTarget I can call m_lpDevice->SetRenderTarget(m_apiSpecifiyTexture). Would you consider this a better solution than the current approach? I'd still need to come up with a solution for the IEffect->SetTexture approach, but this would partially solve my issues; I just don't know if it makes sense from a design standpoint.
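Sketched out, that proposal might look like the following. `FakeDevice` stands in for the real IDirect3DDevice9, and all names are assumptions for illustration, not the thread's actual code:

```cpp
// Stand-in for the API device object (IDirect3DDevice9 in the real code);
// a texture handle is modelled as a plain int here.
struct FakeDevice {
    int currentTarget = -1;
    void SetRenderTarget(int apiTexture) { currentTarget = apiTexture; }
};

// API-agnostic interface: no GetAPITexture escape hatch is needed,
// because the API-specific call lives inside the implementation.
class ITexture {
public:
    virtual ~ITexture() = default;
    virtual void SetAsRenderTarget() = 0;
};

class DX9Texture : public ITexture {
public:
    DX9Texture(FakeDevice& device, int apiTexture)
        : m_device(device), m_apiTexture(apiTexture) {}
    void SetAsRenderTarget() override {
        // Equivalent of m_lpDevice->SetRenderTarget(m_apiSpecifiyTexture).
        m_device.SetRenderTarget(m_apiTexture);
    }
private:
    FakeDevice& m_device;
    int m_apiTexture;
};
```

Callers only ever see ITexture; the cast disappears because the implementation already holds everything API-specific it needs.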
