
SOLID and game (engine) design


23 replies to this topic

#1 Juliean   GDNet+   -  Reputation: 2735


Posted 04 April 2013 - 11:39 AM

Hello,

 

So, as suggested to me in a former thread where I asked about writing a game engine the SOLID way, I've carried on writing some basic graphics capabilities. Now I've got things running and have come back to refactor them and hopefully finally learn those principles. I've read a lot of articles about S.O.L.I.D. and each principle multiple times, but I learn best by doing, so I've tried. I still have some questions and things I want to make sure of before I overcomplicate anything, especially the following:

 

- Single responsibility: I have a hard time distinguishing what the single responsibility is that each class should perform. From what I've gathered, it's nothing as wide as "handling textures" and nothing as narrow as "accessing a texture", while I've even seen people suggest you should make lots of classes that, due to their completely isolated tasks, often have no more than one or two methods. On the other hand, I'd prefer classes that are at least something more than a namespaced function plus some data, so I'd rather go with the idea that a class has a single responsibility when it "performs on the same set of data". Is this a correct reading of the principle? What would you suggest, and why? And is there any help for people who have a hard time (conceptually) separating concerns?

 

- Dependency inversion principle: For this, I've started implementing "interfaces" (that is, classes with (almost, we'll get to that later) only pure virtual functions) that I then use to implement the actual business logic. I'm still unsure whether I'm doing it right, since most sample implementations were targeted either at Java or, more importantly, not at game programming. So let's look at what I did with my graphics library. Here is how effect and texture worked before:

#pragma once
#include "D3DEffectPool.h"
#include "D3DEffect.h"
#include "Device.h"
#include "Texture.h"
#include <string>

class Effect
{
public:
    Effect(Device& device, D3DEffectPool& d3DEffectPool, const std::wstring& sFileName);
    virtual ~Effect(void);

    void SetTexture(LPCSTR lpName, const Texture* pTexture);
    void SetMatrix(LPCSTR lpName, const D3DXMATRIX& matrix);
    void SetFloat(LPCSTR lpName, float value);
    void SetFloat3(LPCSTR lpName, float first, float second, float third);
    void SetFloat4(LPCSTR lpName, float first, float second, float third, float fourth);
    void SetTechnique(LPCSTR lpName);

    void Begin(void) const;
    void BeginPass(void) const;
    void ApplyChanges(void) const;
    void End(void) const;
    void EndPass(void) const;

private:

    D3DEffect* m_pD3DEffect;
};
 
class Texture
{
public:
	Texture(D3DTexture* pTexture);
	virtual ~Texture(void);

	int GetWidth(void) const;
	int GetHeight(void) const;
	int GetRef(void) const;

	void AddRef(void);
	void Release(void);

	D3DTexture* GetD3DTexture(void) const;

private:

	D3DTexture* m_pTexture;

	int m_Ref;
};

So obviously both those classes provide the "interface" as well as the implementation. They are used directly in every part of the respective graphics pipeline, so e.g. the effect depends on the texture, and so on. To resolve that, I tried this:

#pragma once
#include <d3dx9.h> //todo: remove, implement custom matrices!!!
#include "ITexture.h"
#include "IResource.h"

class IEffect : public IResource
{
public:

	virtual void SetTexture(LPCSTR lpName, const ITexture* pTexture) = 0;
	virtual void SetMatrix(LPCSTR lpName, const D3DXMATRIX& matrix) = 0;
	virtual void SetFloat(LPCSTR lpName, float value) = 0;
	virtual void SetFloat3(LPCSTR lpName, float first, float second, float third) = 0;
	virtual void SetFloat4(LPCSTR lpName, float first, float second, float third, float fourth) = 0;
	virtual void SetTechnique(LPCSTR lpName) = 0;

	virtual void Begin(void) const = 0;
	virtual void ApplyChanges(void) const = 0;
	virtual void End(void) const = 0;
};

 

#pragma once
#include "IResource.h"

class ITexture : public IResource
{
public:
    virtual int GetWidth(void) const = 0;
    virtual int GetHeight(void) const = 0;
};
#pragma once

class IResource
{
public:
	int GetRef(void) const { return m_numRef; }

	void AddRef(void) { m_numRef += 1; }
	virtual void Release(void) = 0;

protected:

	int m_numRef;
};

Now I have three interfaces: one implements the reference counting, and the other two are the ones my actual texture and effect classes inherit from. Obviously, everywhere I used to have a Texture or Effect object, I now have an ITexture or IEffect pointer instead. Now my questions:

 

1. Is this a correct approach to dependency inversion? I just want to be sure, so in case this is OK, a simple "yes, that is how dependency inversion works" would be nice; otherwise, "no, it would rather go this way...". Remember, we are talking about applying one or more programming idioms; it's not about getting things working (for the few people who'd say "do whatever works").

 

2. In case this is a correct approach, when and where should I apply it? Should I really create an "interface" for each and every dependency I might have? If not, how do I distinguish between the cases where it makes sense and where it doesn't?

 

3. What about shared code and member variables? I didn't find any common "policies" for C++. Take my IResource class, for example: should interfaces have shared data members like "m_numRef" and implement some common logic (like the setter and getter here), or should I really depend on the actual implementing class(es) to handle everything?

 

4. Even though I only depend on the interfaces, I might need to determine the actual type of a class at some point. I've thought about it, but since I'd generally pass an interface class everywhere, I can't think of any solution that doesn't involve casting the actual type out of something. Not that casting is inherently bad, but I wonder whether this counts as "good practice" too, or whether there is a better option. Like here:


 

void D3DEffect::SetTexture(LPCSTR lpName, const ITexture* pTexture)
{
    //todo -> -> is evil, maybe overthink low level abstraction layer?
    if(!pTexture)
        m_pD3DEffect->SetTexture(lpName, nullptr);
    else
        m_pD3DEffect->SetTexture(lpName, static_cast<const Texture*>(pTexture)->GetD3DTexture());
}

This is my D3DEffect implementation of the IEffect interface. As the pure virtual function takes an ITexture object, but the LPD3DXEFFECT needs an LPDIRECT3DTEXTURE9, I must cast, since I need to extract the texture that is part of the "class D3DTexture : public ITexture" implementation. Is this OK, or is there some better/cleaner way to achieve it? One potential issue is that one could pass an OpenGL or XNA texture implementation to the D3DEffect, which would cause an error; is this something I just have to live with? Granted, this is probably a very special case, since you won't need this kind of type distinction in most other applications, but still, how should I handle it?

 

5. As you can see, IEffect still depends on ITexture. I've never found a word written about dependencies between interfaces. I think this is somewhat obvious, but I still want to ask: this is normal, the kind of dependency we tolerate or even want to achieve, right?

 

So that's it with my questions for now. An answer to any of them would be greatly appreciated; I'm sure a lot of you have experience with these patterns. As mentioned before, please don't tell me "do whatever works for you", since I already have everything working, and this is my personal "lesson" for learning SOLID. I really want to get it right this time, rather than apply some poorly understood pattern to my engine in a faulty, half-baked, over-complicated way that won't improve my code at all.


Edited by The King2, 04 April 2013 - 11:40 AM.



#2 Juliean   GDNet+   -  Reputation: 2735


Posted 04 April 2013 - 12:16 PM

I'm sure others will have lots to say, but I do have one question: why can't ITexture have a GetTexture method? Do you really need runtime support for multiple 3D API libraries? Can't it just be a compile-time switch?


 

Multi-API support is a requirement for me, yes. As for compile-time switches... as in switches using compiler defines? I'm not so sure; I think it would be kind of cool if I could switch between different APIs at runtime instead of having different binaries, but a compile-time switch would do if there is no better option. So would I just have a GetTexture method in ITexture, or also in the respective texture object?

 

On the other hand, wouldn't that break part of the purpose of dependency inversion? Since I'm now depending on changing the interface to achieve different business logic. Also, someone would need to change the ITexture interface in order to implement a renderer not foreseen by me. You suggest that this is OK in this case, right?
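For reference, here is roughly what I imagine a compile-time switch via a compiler define would look like (a sketch with made-up renderer names, not code from my engine):

```cpp
#include <string>

// Hypothetical sketch of a compile-time API switch.
// Define GFX_USE_OPENGL at build time to select the OpenGL backend;
// otherwise the Direct3D backend is compiled in.
struct D3DRenderer {
    std::string Name() const { return "Direct3D"; }
};

struct OpenGLRenderer {
    std::string Name() const { return "OpenGL"; }
};

#ifdef GFX_USE_OPENGL
using Renderer = OpenGLRenderer;
#else
using Renderer = D3DRenderer;
#endif

std::string ActiveRendererName() {
    return Renderer().Name();
}
```

With this, all code uses the Renderer alias; which implementation it refers to is decided once, at build time.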



#3 phil_t   Crossbones+   -  Reputation: 4096


Posted 04 April 2013 - 01:02 PM

I guess I'm saying that you haven't provided enough info to know if making things interfaces is the right design decision. What is your end goal? If you really do need runtime switching over different APIs, and things like the possibility for someone to develop a new pluggable implementation of ITexture, then sure - what you have is a good start.

 

I wouldn't cast the ITexture to a particular class type, though. Instead, I would give ITexture some method like GetNativeTexture(some-identifier-of-the-type-requested) so it has a chance to fail gracefully if it is not of the right type. Or you could use RTTI and dynamic_cast the ITexture to an ID3DTexture which has a GetD3DTexture method. Either would be better than the static_cast (again, if you really expect different ITexture implementations, which is the whole point of dependency inversion).
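Something like this sketch (the GfxApi tag and the void* return are invented for illustration; the real signature is up to you):

```cpp
// Sketch of the GetNativeTexture idea: the caller asks for a specific
// backend's native handle, and a texture belonging to another backend
// returns nullptr instead of being mis-cast.
enum class GfxApi { Direct3D9, OpenGL };

class ITexture {
public:
    virtual ~ITexture() {}
    virtual void* GetNativeTexture(GfxApi requested) = 0;
};

class D3DTexture : public ITexture {
public:
    explicit D3DTexture(void* d3dHandle) : m_handle(d3dHandle) {}
    void* GetNativeTexture(GfxApi requested) override {
        // Graceful failure: wrong backend asked, no handle returned.
        return requested == GfxApi::Direct3D9 ? m_handle : nullptr;
    }
private:
    void* m_handle; // would be an LPDIRECT3DTEXTURE9 in the real code
};
```

The D3DEffect can then check for nullptr and report a usable error instead of silently reinterpreting a foreign texture.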

 

If you aren't sure if you need all this plug-and-play kind of stuff, and this is just for yourself, then I would just keep things simple and use classes like you were originally doing. It's easy enough to make the necessary abstractions when you actually need them.



#4 Juliean   GDNet+   -  Reputation: 2735


Posted 04 April 2013 - 01:43 PM

I guess I'm saying that you haven't provided enough info to know if making things interfaces is the right design decision. What is your end goal? If you really do need runtime switching over different APIs, and things like the possibility for someone to develop a new pluggable implementation of ITexture, then sure - what you have is a good start.

 

Well, I have to say, maybe I asked too specifically. Although I brought up that example, this wasn't just about the graphics stuff. From what I've read and heard, I see SOLID as a set of very handy principles that I want to learn, even if it means making things a little more complicated than they need to be. In fact, things like making code const-correct, passing by reference instead of by value where appropriate, not writing "using namespace std", etc. have been a good source of motivation for me; simply putting out code that somehow works isn't really satisfying me anymore. Then again, I see where you are coming from: applying the right pattern at the wrong time isn't going to help much. But if I have the choice between keeping things simple and less modular while probably never needing that modularity, and having a well-written codebase that is more expandable and modular than needed at the cost of some extra development time, I'd choose the latter. Just to clarify my motivation for all this.

 

As to whether I am going to need it: I'm fairly sure I'm going to want to "release" my code at some point, once I've refactored it enough to make it usable for someone other than me. I think sticking to the SOLID principles is definitely going to help with that, and prevent eventually reaching a point where my code is so messed up that even I can't work with it anymore. Like I said, there is relatively little written about games and SOLID; in fact, I haven't seen even a line of pseudo-code for any of the sub-areas of game development. That is what I had in mind when asking those questions.

 

I wouldn't cast the ITexture to a particular class type, though. Instead, I would give ITexture some method like GetNativeTexture(some-identifier-of-the-type-requested) so it has a chance to fail gracefully if it is not of the right type. Or you could use RTTI and dynamic_cast the ITexture to an ID3DTexture which has a GetD3DTexture method. Either would be better than the static_cast (again, if you really expect different ITexture implementations, which is the whole point of dependency inversion).

 

Aren't RTTI and dynamic_cast comparatively slow? Since these are performance-sensitive areas, this would be important to know; I'm just assuming, I'm not sure whether they actually cost much. As for your first method, that would definitely be a good option; in fact, it seems to be exactly what I was looking for on this particular issue, thanks.

 

Still, I'd be glad to hear some opinions on the other questions I posted ;)



#5 de_mattT   Members   -  Reputation: 308


Posted 04 April 2013 - 02:56 PM

Dependency Inversion Principle

2. In case this is a correct approach, when and where should I apply this? Should I really create an "interface" for each and every dependency I might have, if not, how do I destinquish between cases where it makes sense and where not?

 

'Interface' means the portion of a class that is publicly visible. The interface does not have to be a separate pure virtual class (an interface in Java/C#).

 

The key idea behind the Dependency Inversion Principle is that the clients impose the interface on the servers, not the other way around. The interface is owned by the clients and is designed to meet their needs.

 

In an attempt to adhere to this principle I started introducing pure virtual classes everywhere, but I realised that they were adding unnecessary complexity. As long as the server's interface is imposed by the client it doesn't matter if the interface is defined separately by a pure virtual class or not.

 

I generally treat pure virtual classes exactly as I would treat any super class. The top level class of an inheritance hierarchy should contain as much common functionality as possible (Don't Repeat Yourself). The application of the DRY principle usually dictates the type of class that will sit at the top of my inheritance hierarchies.

 

That said, sometimes a pure virtual class makes sense, since a pure virtual class describes what an object does rather than what an object is.

 

These principles are tools to help you manage the complexity of your design. They are a guide only and applying them effectively comes with experience.

 

Matt


Edited by de_mattT, 04 April 2013 - 02:57 PM.


#6 de_mattT   Members   -  Reputation: 308


Posted 04 April 2013 - 03:20 PM

Single Responsibility Principle

I'd rather go with that single responsibility is when "a class performs on the same set of data".

 

I find SRP difficult. I do use "a class performs on the same set of data" as an indication of the number of responsibilities a class has.

 

You can't always look at a class in isolation and determine how many responsibilities it has, because the number of responsibilities is often dependent on how the class is used by its clients.

 

It's a good idea to look at the clients that are using your class and to see how much of the class each client is using, and how they are using it. This may give you an idea of the responsibilities that the clients see the class as having.

 

Remember also that a class can have multiple responsibilities (particularly if they are all realised using the same data members), but the responsibilities can be separated by creating a pure virtual class for each responsibility which the class inherits. Clients can then use the pure virtual class in place of the original class so that they are only aware of the parts of the class that they use.

 

Matt



#7 Juliean   GDNet+   -  Reputation: 2735


Posted 05 April 2013 - 05:41 AM



The key idea behind the Dependency Inversion Principle is that the clients impose the interface on the servers, not the other way around. The interface is owned by the clients and is designed to meet their needs.

 

In an attempt to adhere to this principle I started introducing pure virtual classes everywhere, but I realised that they were adding unnecessary complexity. As long as the server's interface is imposed by the client it doesn't matter if the interface is defined separately by a pure virtual class or not.

 

Can you give an example of how this would work without the client having a pure virtual interface? Since one of the main principles of dependency inversion is that high-level components don't depend on low-level components (in my case, the gfx classes not depending on the low-level graphics wrapper like they did before), I currently can't think of any other way to do it, short of making the low-level components depend on the high-level ones, which would be an even worse idea. I'm kind of stuck in that thinking: now that I've more or less grasped the meaning and application of SOLID/DI, I can't stop thinking in interfaces and class separations anymore. Surely I want to avoid unnecessary complexity. But as you said, you made that mistake once and learned from it, so maybe that wouldn't be so bad for me after all, too.

 

I generally treat pure virtual classes exactly as I would treat any super class. The top level class of an inheritance hierarchy should contain as much common functionality as possible (Don't Repeat Yourself). The application of the DRY principle usually dictates the type of class that will sit at the top of my inheritance hierarchies.

 

So given that, if I really needed a pure virtual class as an interface in a place where it makes sense, but also had quite a lot of classes inheriting from it, I would derive one base class from this interface, write the common functionality in that base class, and then derive the other classes from it? For example (this time ignoring whether it actually makes sense to have the interface, let's pretend it does): I'd have an IRenderable interface, from it derive a BaseRenderable which implements common functionality, and from that further derive my actual renderables like Mesh, Terrain, ..., right?

 

These principles are tools to help you manage the complexity of your design. They are a guide only and applying them effectively comes with experience.

 

Rightly so; I think there is no way to learn these without misusing them a few times, as is usually the case. Gladly, there are forums like GameDev where one can ask people who already know how to use such things.

 

I find SRP difficult. I do use "a class performs on the same set of data" as an indication of the number of responsibilities a class has.



You can't always look at a class in isolation and determine how many responsibilities it has, because the number of responsibilities is often dependent on how the class is used by its clients.



It's a good idea to look at the clients that are using your class and to see how much of the class each client is using, and how they are using it. This may give you an idea of the responsibilities that the clients see the class as having.

 

OK, it seems I'm not entirely off track. Then again, I've always heard that most "Manager" classes are bad or misused, like the ones I wrote in the past: while they all performed on the same set of data, they imposed unnecessary restrictions. But I think with more practice I should be able to judge that better.

 

Remember also that a class can have multiple responsibilities (particularly if they are all realised using the same data members), but the responsibilities can be separated by creating a pure virtual class for each responsibility which the class inherits. Clients can then use the pure virtual class in place of the original class so that they are only aware of the parts of the class that they use.

 

Ah, so you are suggesting multiple inheritance here, right? I hadn't thought of that; I've never even used it before, but from what I've read (after a quick Google search) this seems to be a good case for multiple inheritance, from interfaces that is. Just to make sure I've got it right: I would have the clients that need just a portion of the class's functionality receive a pointer to one of its inherited interfaces, right? One example: I would derive my Gfx3D class from ITextureModule, IMeshModule and IEffectModule, and have it implement all their pure virtual functions for loading/storing things etc. Then whenever a class needs to, say, create some textures, I'd pass my Gfx3D object to it as an ITextureModule pointer, right? And what would I do if a class needs multiple sets of functionality, but still not all? Say mesh and effect: I'd pass the same Gfx3D object twice, once as IMeshModule and once as IEffectModule, right?

 

Thanks a lot so far; I'd be glad if you could elaborate further on the questions I asked in response to you...



#8 Kylotan   Moderators   -  Reputation: 3338


Posted 05 April 2013 - 07:06 AM

I personally find that the SRP is not something you plan in advance, but something you watch for as you develop. Say you have a simple class, and you add more methods and properties to the class as you go along. Each time you add something, consider whether it makes sense for some of the functionality to be split out into a new class - and if so, do it. If the functionality can stand alone, it probably should stand alone. And even if it is only used inside another class, if it is a self-contained package of functionality, it should be encapsulated as such.

 

On the other hand, I'd prefer classes that are at least something more than a namespaced function plus some data, so I'd rather go with the idea that a class has a single responsibility when it "performs on the same set of data". Is this a correct reading of the principle?

 

Why do you care how many methods a class has?

 

And if it were possible to say that acting on the same set of data is enough to call it a single responsibility, then you could put your whole program into one class. After all, it all acts on one set of data - namely, all the data that the program uses.



#9 Sandman   Moderators   -  Reputation: 2136


Posted 05 April 2013 - 07:58 AM

#pragma once

class IResource
{
public:
    int GetRef(void) const { return m_numRef; }

    void AddRef(void) { m_numRef += 1; }
    virtual void Release(void) = 0;

protected:

    int m_numRef;
};

 

IResource has absolutely nothing to do with resources. It's a reference counter. Call it IReferenceCounter. Also, never expose protected data, it's as much a no-no as public data. Your reference counting implementation as it stands is going to be extremely tightly coupled to absolutely everything that touches it.

 

Far better to remove it entirely and use RAII and smart pointers as appropriate to automate that stuff. Remember 'has a' vs. 'is a' relationships. TextureResources and EffectResources might HAVE ref counted components, but they don't necessarily need to BE ref counted themselves, so long as what they contain is managed properly. Do this and you can dispose of that coupling entirely.
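As a rough sketch of the 'has a' idea with std::shared_ptr (NativeTexture and ReleaseNative here are stand-ins for the real API handle and its release call, e.g. IDirect3DTexture9 and its Release()):

```cpp
#include <memory>

// Hypothetical native handle; in real code this would be the API's
// texture type, released through the API rather than delete.
struct NativeTexture { int width; int height; };

inline void ReleaseNative(NativeTexture* t) { delete t; }

class Texture {
public:
    // shared_ptr does the reference counting; the custom deleter runs
    // exactly once, when the last owner goes away.
    explicit Texture(NativeTexture* raw)
        : m_handle(raw, &ReleaseNative) {}

    int GetWidth() const  { return m_handle->width; }
    int GetHeight() const { return m_handle->height; }

private:
    std::shared_ptr<NativeTexture> m_handle; // copies share the handle
};
```

Copying a Texture now just bumps the count; no manual AddRef/Release, and no IResource base class coupled to everything.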


Edited by Sandman, 05 April 2013 - 08:22 AM.


#10 Juliean   GDNet+   -  Reputation: 2735


Posted 05 April 2013 - 08:36 AM

I personally find that the SRP is not something you plan in advance, but something you watch for as you develop. Say you have a simple class, and you add more methods and properties to the class as you go along. Each time you add something, consider whether it makes sense for some of the functionality to be split out into a new class - and if so, do it. If the functionality can stand alone, it probably should stand alone. And even if it is only used inside another class, if it is a self-contained package of functionality, it should be encapsulated as such.

 

That sounds reasonable. I still need a starting point, and refactoring my graphics and other classes should do pretty well IMHO. Then again, I'm a little confused. Say I have a class that keeps gaining methods, and I decide to split its responsibilities: how should I tie it back together? This is the thing I'm most unsure about. I can't even give you a practical example, but let's say I have a class with 25 public methods, and I decide that 10 of these methods are their own responsibility and should become their own class. How would I wire that functionality back into the class it came from? I guess this is very case-specific, but what is the preferred choice? Should I preferably:

 

- either have an instance of this class as a member of the class I separated it from, exposing it directly via a getter or public access, or

- have it as a member of the old class, but expose its functionality via separate functions that match those of the new class, or

- have the new class completely separated from the old one?

 

My fear is that if I separate things too much, I might end up passing hundreds of objects to each class that needs e.g. physics functionality. But then again, there might be a good reason to do so...?

 



Why do you care how many methods a class has?

 

I don't care in general, but... well, if there are too many methods, the class might get too complicated, yet if there are lots of classes with one or two methods and little to no data, I'm afraid my whole design might get too complicated instead...

And if it were possible to say that acting on the same set of data is enough to call it a single responsibility, then you could put your whole program into one class. After all, it all acts on one set of data - namely, all the data that the program uses.

 

That's a good one. Then again, you might amend it to "acting on the same set of data [that cannot sensibly be separated any further]", which would split a game into graphics/physics, graphics into 2D/3D, and so on...

IResource has absolutely nothing to do with resources. It's a reference counter. Call it IReferenceCounter.



Or better yet, remove it entirely and use RAII and smart pointers as appropriate to automate that stuff. Remember 'has a' vs. 'is a' relationships. TextureResources and EffectResources might HAVE ref counted components, but they don't necessarily need to BE ref counted themselves, so long as what they contain is managed properly.

 

Yeah, I figured that out pretty quickly too. I had this in my old texture class; now I've made it an interface, but I figure I won't need it anyway with my new resource management. I'll definitely give smart pointers a try, thanks!



#11 Kylotan   Moderators   -  Reputation: 3338


Posted 05 April 2013 - 11:07 AM

Say I have a class that keeps gaining methods, and I decide to split its responsibilities: how should I tie it back together? This is the thing I'm most unsure about. I can't even give you a practical example

 

An abstract problem can only have an equally abstract answer.

 

But on the whole, if you separate 2 objects and you find they end up needing access to the internals of the other object, then you've done something wrong. The point is to create smaller objects that communicate via a minimal interface. The information they yield to the outside world should be on a need-to-know basis - that's the foundation of proper encapsulation and object-orientation more generally. So if you're worrying about how to expose the internals of your new object, you're thinking about it the wrong way.

 

I think one of the most important skills a programmer can develop is to see a large lump of vaguely related functionality and learn how to split it up so that there's a minimal interface between them. Sometimes you can do this by trial and error. Cut and paste the relevant functions from one class and put them into a new class. Hit compile. Compiler will tell you that you're missing several member variables and maybe other functions. So think about whether they can or should be moved to the new class too. You usually find that most stuff fits neatly into one side or the other, if you picked a good dividing line. And the bits that don't fit, you can work around either by passing in the data when the new object is created, or by having the new object query the old object, or occasionally refactoring the shared information out to a 3rd object.


Edited by Kylotan, 05 April 2013 - 11:07 AM.


#12 de_mattT   Members   -  Reputation: 308


Posted 06 April 2013 - 03:26 PM

Can you give an example of how this would work without the client having a pure virtual interface? Since one of the main principles of dependency inversion is that high-level components don't depend on low-level components (in my case, the gfx classes not depending on the low-level graphics wrapper like they did before), I currently can't think of any other way to do it, short of making the low-level components depend on the high-level ones, which would be an even worse idea. I'm kind of stuck in that thinking: now that I've more or less grasped the meaning and application of SOLID/DI, I can't stop thinking in interfaces and class separations anymore. Surely I want to avoid unnecessary complexity. But as you said, you made that mistake once and learned from it, so maybe that wouldn't be so bad for me after all, too.

 

Thinking about it, my original response wasn't really about the DIP :)

 

If your motivation is breaking compile-time dependencies so you can compile your client code without your server code then the DIP could be applied using pure virtual classes.

 

However in many situations compilation dependencies won't be an issue. In these situations introducing pure virtual classes is often overkill and leads to unnecessary complexity; sticking to the idea that clients determine the interfaces of servers should produce good results.

 

Of course there are situations where adding a pure virtual class can improve the clarity of the design, even if the motivation is not breaking compile time dependencies.

 

So given that, if I really needed a pure virtual class as an interface in a place where it makes sense, but also had quite a lot of classes inheriting from it, I would derive one base class from this interface, write the common functionality in that base class, and then derive the other classes from it? For example (this time ignoring whether it actually makes sense to have the interface, let's pretend it does): I'd have an IRenderable interface, from it derive a BaseRenderable which implements common functionality, and from that further derive my actual renderables like Mesh, Terrain, ..., right?

 

If your motivation is breaking compile-time dependencies then I'd suggest that the decision of whether to have a pure virtual class at the top of the hierarchy depends on the volatility of BaseRenderable. If the common functionality in BaseRenderable is not volatile, merge BaseRenderable and IRenderable together.

 

If your motivation is not breaking compile-time dependencies, then I would suggest merging BaseRenderable and IRenderable together. There seems to be no value in having them separate.
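A merged version might look like this rough sketch (the members are invented for illustration): one abstract base holds the common, non-volatile functionality, and only the varying part stays pure virtual.

```cpp
#include <string>

// IRenderable and BaseRenderable merged into one abstract base:
// common logic lives here, only Draw() must be provided by subclasses.
class Renderable {
public:
    virtual ~Renderable() {}
    void SetVisible(bool v) { m_visible = v; } // shared functionality
    bool IsVisible() const { return m_visible; }
    virtual std::string Draw() const = 0;      // the part that varies
protected:
    bool m_visible = true;
};

class Mesh : public Renderable {
public:
    std::string Draw() const override { return "mesh"; }
};

class Terrain : public Renderable {
public:
    std::string Draw() const override { return "terrain"; }
};
```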

 

Ah, so you are suggesting multiple inheritance here, right? I didn't think of that before, I've never even used it before, but from what I read (did a quick google search) this seems to be a good case for multiple inheritance - from interfaces, that is. Just to make sure I got it right: I would have the clients that need just a portion of the functionality of the class take a pointer to one of its multiple inherited interfaces, right? Again one example: I would inherit my Gfx3D-class from ITextureModule, IMeshModule and IEffectModule, and have it implement all their pure virtual functions for loading / storing stuff etc. Then whenever a class needs to e.g. create some textures, I'd pass my Gfx3D-object to it as an ITextureModule-pointer, right? What would I do if a class needs multiple pieces of functionality, but still not all? Say mesh and effect - I'd pass the same Gfx3D-object twice, once as an IMeshModule and once as an IEffectModule, right?

 

Absolutely! This approach means that if you do manage to split Gfx3D-class in the future your client code will be unaffected.
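A minimal sketch of the idea (the interface and class names follow the posts above, but the member functions are hypothetical placeholders): the same Gfx3D object is handed out through whichever narrow interface a client needs, and a client needing two of them simply receives the object twice.

```cpp
#include <string>

// Narrow, role-based interfaces.
class ITextureModule
{
public:
    virtual ~ITextureModule() {}
    virtual void LoadTexture(const std::wstring& name) = 0;
};

class IEffectModule
{
public:
    virtual ~IEffectModule() {}
    virtual void LoadEffect(const std::wstring& name) = 0;
};

// One concrete object implements several interfaces.
class Gfx3D : public ITextureModule, public IEffectModule
{
public:
    void LoadTexture(const std::wstring& name) { m_lastLoaded = name; }
    void LoadEffect(const std::wstring& name)  { m_lastLoaded = name; }
    const std::wstring& LastLoaded() const { return m_lastLoaded; }

private:
    std::wstring m_lastLoaded;  // stand-in for real resource storage
};

// A client needing textures AND effects takes the same object twice,
// once through each interface - it never sees Gfx3D itself.
void PrepareScene(ITextureModule& textures, IEffectModule& effects)
{
    textures.LoadTexture(L"scene.png");
    effects.LoadEffect(L"line.fx");
}
```

Usage would then be `Gfx3D gfx; PrepareScene(gfx, gfx);` - and if Gfx3D is later split into separate modules, PrepareScene compiles unchanged.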

 

Referring back to your original post:

 

4. Even though I only depend on the interfaces, I might need to determine the actual type of a class at some point. I've thought about it, but since everywhere I'd generally pass an interface class, I can't think of any solution that goes without having to cast the actual type out of something. Not to say that casting is bad, but I wonder if this is "good practice" too, or if there is a better option. Like here:

 

When I read this it brought back memories of a really good response to a topic I read on gamedev a while back. The topic was about GUIs, but I think it's still relevant and definitely worth a read. Here's the link:

OO Design and Avoiding Dynamic Casts

 

Check out the post by dmatter about half way down the page. I believe your example is also a violation of the Liskov Substitution Principle because implementations of ITexture (and similarly of IEffect) will not be interchangeable. I've never tried to create a game / engine with switchable graphics APIs, it sounds hard :). I think I would probably be in favor of phil_t's suggestion of compile-time switches.

 

Matt



#13 Juliean   GDNet+   -  Reputation: 2735

Like
0Likes
Like

Posted 07 April 2013 - 02:45 AM

But on the whole, if you separate 2 objects and you find they end up needing access to the internals of the other object, then you've done something wrong. The point is to create smaller objects that communicate via a minimal interface. The information they yield to the outside world should be on a need-to-know basis - that's the foundation of proper encapsulation and object-orientation more generally. So if you're worrying about how to expose the internals of your new object, you're thinking about it the wrong way.

 

I think I know what you mean, but I wasn't really talking about "accessing the internals" but more about presenting the functionality to the outside. Like for my gfx class: if I were to split it up into separate classes, one for handling textures, one for materials, one for meshes, etc., would you consider it a fault in my design if I wanted/needed to put these classes back into the gfx-class as member variables/pointers, or, as suggested by de_mattT, via multiple interface inheritance?

 

 

However in many situations compilation dependencies won't be an issue. In these situations introducing pure virtual classes is often overkill and leads to unnecessary complexity; sticking to the idea that clients determine the interfaces of servers should produce good results.

 

Ah, now I see. It's not that there needs to be an "interface" - it's just that my high level classes should dictate how the low level classes work, not vice versa - right? I was probably thinking about it more as if there should not be any direct dependency whatsoever...

 

 

Of course there are situations where adding a pure virtual class can improve the clarity of the design, even if the motivation is not breaking compile time dependencies.

 

Well, I don't know really. After interfacing my whole high-level graphics system, I feel that my design has indeed improved rather than degraded. I also find it much easier now to apply the other principles of SOLID, like breaking up responsibilities. It just feels like there is a lot less tight coupling, and I don't have to worry that the actual implementation of class A matters for class B - I have the interface that dictates how class A must work so that B can use it, and that's about it. After all, isn't this part of what SOLID, especially the DIP, stands for? I guess there are better ways to achieve this than using explicit interfaces, like you said, but for now I think I'm going to stick with them wherever possible. They at least give me a good basis for writing less coupled code; once I've gotten into the routine of the SOLID thing, I might step back and remove them where they are unnecessary.

 

 

Check out the post by dmatter about half way down the page. I believe your example is also a violation of the Liskov Substitution Principle because implementations of ITexture (and similarly of IEffect) will not be interchangeable. I've never tried to create a game / engine with switchable graphics APIs, it sounds hard :). I think I would probably be in favor of phil_t's suggestion of compile-time switches.

 

Yes, you are right about that. On the one side, I can somewhat safely use static_cast as long as my gfx implementation takes care that no wrong texture is passed in. On the other hand, this does kill the whole purpose of me trying to make the system SOLID, since one should not rely on my gfx implementation for things to work nicely. Note however that this only applies to the texture - IEffect works without a problem. That is because no API-specific information needs to be pulled out of the effect - Begin starts, End ends. It doesn't matter whether I implement IEffect to use D3DXEFFECT, a custom vertex and pixel shader, or something completely different.
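The point about IEffect can be sketched in a few lines (DX9Effect and its internals are hypothetical here; the real implementation would wrap ID3DXEffect): nothing API-specific crosses the interface, so callers never need a cast.

```cpp
// API-agnostic interface: no D3D types leak through it.
class IEffect
{
public:
    virtual ~IEffect() {}
    virtual void Begin() = 0;
    virtual void End() = 0;
};

// A D3D9 implementation keeps everything API-specific private.
class DX9Effect : public IEffect
{
public:
    DX9Effect() : m_active(false) {}

    void Begin() { m_active = true;  /* ID3DXEffect::Begin would go here */ }
    void End()   { m_active = false; /* ID3DXEffect::End would go here */ }

    bool IsActive() const { return m_active; }

private:
    bool m_active;  // stand-in for the wrapped API effect state
};
```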

 

Thinking about it, wouldn't this do the trick? Instead of having a specific GetAPITexture-method on the actual implementation, have a "SetRenderTarget" function in ITexture? That would at least kill most cases where I need to cast ITexture, since in the constructor of the actual DX9Texture I can pass the specific device (this works fine since on creation of the texture I need access to the device anyway and, though I'm not sure, I might already have passed the device for some functionality), and in DX9Texture::SetRenderTarget I can call m_lpDevice->SetRenderTarget(m_apiSpecificTexture). Would you consider this a better solution than the current approach? I'd still need to come up with a solution for the IEffect->SetTexture case, but this would partially solve my issues; I just don't know if it makes sense from a design standpoint.
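A rough sketch of that idea (IDevice, the void* texture handle, and all members are hypothetical stand-ins for the real device wrapper and D3D types): the operation lives on the texture, so callers never downcast ITexture.

```cpp
// Stand-in for the real device wrapper.
class IDevice
{
public:
    virtual ~IDevice() {}
    virtual void SetRenderTarget(int index, void* apiTexture) = 0;
};

class ITexture
{
public:
    virtual ~ITexture() {}
    virtual void SetRenderTarget(int index) = 0;
};

class DX9Texture : public ITexture
{
public:
    DX9Texture(IDevice& device, void* apiTexture)
        : m_pDevice(&device), m_pApiTexture(apiTexture) {}

    // Callers need no cast: the texture already knows its device
    // and its own API-specific handle.
    void SetRenderTarget(int index)
    {
        m_pDevice->SetRenderTarget(index, m_pApiTexture);
    }

private:
    IDevice* m_pDevice;
    void* m_pApiTexture;  // would be IDirect3DTexture9* in reality
};
```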


Edited by Juliean, 07 April 2013 - 02:48 AM.


#14 Sandman   Moderators   -  Reputation: 2136

Like
1Likes
Like

Posted 07 April 2013 - 06:21 AM

On the one side, I can somewhat safely use static_cast as long as my gfx implementation takes care that no wrong texture is passed in. On the other hand, this does kill the whole purpose of me trying to make the system SOLID, since one should not rely on my gfx implementation for things to work nicely.


I don't think you can avoid violating either Liskov, dependency inversion, or both, with your current approach to dealing with resources.
As you say, you need to be able to access the underlying types in order to use them for anything, which means either exposing them in the interface (violates both dependency inversion and Liskov) or doing some downcasting somewhere in your engine to get at the concrete types (violates Liskov).

In any case, you'd kind of expect liskov to be violated anyway. You really can't expect a D3DEngine to work happily if you swap out half the textures with COpenGLTexture. It just plain wouldn't work, regardless of how it gets at the data inside.

The thing is, the details of those texture structures is fundamentally engine specific. And so if you want to satisfy the SOLID criteria, you need to hide these details behind the engine interface.

It seems to me that the way to do that is to have the api specific engine implementation worry about storing api specific texture resources. CTexture stops storing a texture, and instead becomes a resource locator. It can expose functions which act on the texture, and internally it implements that by making a call to the engine, which can look up the texture and perform the actual modification.


// templated for strong typing of resource types
template<class ResourceTypeT>
class CResourceLocator
{
public:
    explicit CResourceLocator(int hash) : m_Hash(hash) {}
    int GetHash() const { return m_Hash; }

private:
    int m_Hash;
};

class CTexture;
typedef CResourceLocator<CTexture> TextureLocator;

class ITextureActionHandler
{
public:
    virtual ~ITextureActionHandler() {}

    virtual int GetWidth(const TextureLocator& textureLocator) const = 0;
    virtual int GetHeight(const TextureLocator& textureLocator) const = 0;

    virtual void SetPixel(const TextureLocator& textureLocator, int x, int y, CColour c) = 0;
};

class CTexture
{
public:
    int GetWidth() const
    {
        return m_pActionHandler->GetWidth(m_Locator);
    }

    int GetHeight() const
    {
        return m_pActionHandler->GetHeight(m_Locator);
    }

    void SetPixel(int x, int y, CColour c)
    {
        m_pActionHandler->SetPixel(m_Locator, x, y, c);
    }

    const TextureLocator& GetLocator()
    {
        return m_Locator;
    }

private:
    TextureLocator m_Locator;
    ITextureActionHandler* m_pActionHandler;
};



The action handler implementation could be the engine itself - it would then perform a lookup in some internal table of D3DTexture objects to find the specific texture implementation, and then perform the operation on it. The danger here though is that changing your implementation of the engine forces you to somehow notify all your texture instances. However if you created another layer of abstraction (a separate api agnostic handler, which then calls into the api specific implementation) you'd just need to update the intermediate handler to point to the correct engine instance. Then you could switch between D3D and OpenGL on the fly, and not worry about all the texture instances - they would just carry on working with the new engine, as they are now completely engine agnostic.
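That intermediate layer could look something like this (TextureDispatcher and the fake engines are hypothetical names; the real action handler would have the full width/height/pixel API): every texture points at one stable dispatcher, and swapping the API only swaps the dispatcher's engine pointer.

```cpp
// The interface every texture talks to.
class ITextureActionHandler
{
public:
    virtual ~ITextureActionHandler() {}
    virtual int GetWidth(int textureHash) const = 0;
};

// API-agnostic middleman: textures keep a pointer to this object,
// which never changes - only its engine pointer does.
class TextureDispatcher : public ITextureActionHandler
{
public:
    explicit TextureDispatcher(ITextureActionHandler& engine)
        : m_pEngine(&engine) {}

    // Swap D3D for OpenGL on the fly without touching any texture.
    void SwitchEngine(ITextureActionHandler& engine) { m_pEngine = &engine; }

    int GetWidth(int textureHash) const
    {
        return m_pEngine->GetWidth(textureHash);
    }

private:
    ITextureActionHandler* m_pEngine;
};
```

Because the dispatcher itself implements ITextureActionHandler, the textures can't tell it apart from a real engine.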

Edited by Sandman, 07 April 2013 - 01:53 PM.


#15 Juliean   GDNet+   -  Reputation: 2735

Like
0Likes
Like

Posted 07 April 2013 - 06:49 AM

In any case, you'd kind of expect liskov to be violated anyway. You really can't expect a D3DEngine to work happily if you swap out half the textures with COpenGLTexture. It just plain wouldn't work, regardless of how it gets at the data inside.

 

That's what I thought for a moment, too. Then I decided not to post it because I thought it sounded a little too obvious and stupid to me. Seems I was wrong^^

 

 

It seems to me that the way to do that is to have the api specific engine implementation worry about storing api specific texture resources. CTexture stops storing a texture, and instead becomes a resource locator. It can expose functions which act on the texture, and internally it implements that by making a call to the engine, which can look up the texture and perform the actual modification.

 

Ah, that sounds like a brilliant idea! It goes a bit in the direction of what I was actually doing, as I was indeed storing my textures in a texture module class of my graphics engine. I would reference them by file name or a specific name, and that module had functionality to set a texture as render target, clear it, etc. Now having those functions on the texture itself, so I don't need to use the module for everything, just sounds natural to me, too. Having the actual API-specific texture stored in the texture module and CTexture just use it won't make a huge difference to implement. Then when passing the texture to e.g. the Effect, it would pull out the texture's hash and tell its own action handler to set the texture with that hash to the cBuffer... right? Sounds just like what I've been looking for, thanks!

 

Just a quick question about your code, too:

 

class CTexture : public ITextureAccessor, ITextureMutator

I assume those are the interfaces whose getter / respective setter the CTexture class in your example implements, right?

 

 

Oh, another big question concerning the whole interface thing. I assume you have willingly not made CTexture another interface, right? I don't know how to put this in a question that won't make me sound like I have no idea what I'm talking about, but... well, since that train has left the station: Until now I've always tested my code by clicking, using, etc. every feature I added. Since I've read about test driven development and believe it to be very useful and that it would make bug tracking way easier for me, I want to give it a shot in the near future. Now, I don't want to post a question that belongs in a different topic here, but I've heard that TDD is somewhat hard to conduct in its full fashion in C++, and that using "interfaces" wherever possible is one of the main requirements (or at least makes things easier). Is this true? Unfortunately I didn't find that much info on the topic on google, but since I'm at the whole interface thing, I might as well ask here.


Edited by Juliean, 07 April 2013 - 07:00 AM.


#16 Sandman   Moderators   -  Reputation: 2136

Like
1Likes
Like

Posted 07 April 2013 - 01:52 PM

I assume those are the interfaces whose getter / respective setter the CTexture class in your example implements, right?

 

They were, until I realised they a) didn't actually contribute to the example and b) were pointless, and deleted them - I just forgot to delete them from the CTexture declaration. 

 

Now, I don't want to post a question that belongs in a different topic here, but I've heard that TDD is somewhat hard to conduct in its full fashion in C++, and that using "interfaces" wherever possible is one of the main requirements (or at least makes things easier). Is this true? Unfortunately I didn't find that much info on the topic on google, but since I'm at the whole interface thing, I might as well ask here.

 

Well, it can be difficult when you are trying to unit test objects that have lots of dependencies on other objects, preventing you from testing them in isolation. The interface approach allows you to substitute those dependencies with simple fake versions which implement the interface in a way which allows you to check that the unit you're testing is interfacing with those dependencies correctly.
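A tiny sketch of that substitution (FakeTextureModule and PreloadUiTextures are hypothetical; the interface mirrors the thread's ITextureModule in spirit): the unit under test depends only on an interface, so a recording fake can stand in for the real module.

```cpp
#include <string>
#include <vector>

// The dependency, seen only through an interface.
class ITextureModule
{
public:
    virtual ~ITextureModule() {}
    virtual void LoadTexture(const std::wstring& name) = 0;
};

// Test double: records calls instead of touching D3D.
class FakeTextureModule : public ITextureModule
{
public:
    void LoadTexture(const std::wstring& name) { loaded.push_back(name); }
    std::vector<std::wstring> loaded;  // what the unit under test requested
};

// Unit under test: depends only on the interface.
void PreloadUiTextures(ITextureModule& textures)
{
    textures.LoadTexture(L"cursor.png");
    textures.LoadTexture(L"panel.png");
}
```

A test then passes in the fake and asserts on `loaded`, verifying the unit's interaction with its dependency in complete isolation.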

 

In the case of the CTexture, sure you could chuck an interface in there - although it's pretty easy to mock by mocking the action handler that sits behind it instead. Alternatively, you could use something like hippomocks to mock pretty much any class or interface you like.



#17 Juliean   GDNet+   -  Reputation: 2735

Like
1Likes
Like

Posted 07 April 2013 - 03:43 PM

They were, until I realised they a) didn't actually contribute to the example and b) were pointless, and deleted them - I just forgot to delete them from the CTexture declaration.

 

As in, completely pointless - or just pointless regarding this topic?

 

Well, it can be difficult when you are trying to unit test objects that have lots of dependencies on other objects, preventing you from testing them in isolation. The interface approach allows you to substitute those dependencies with simple fake versions which implement the interface in a way which allows you to check that the unit you're testing is interfacing with those dependencies correctly.



In the case of the CTexture, sure you could chuck an interface in there - although it's pretty easy to mock by mocking the action handler that sits behind it instead. Alternatively, you could use something like hippomocks to mock pretty much any class or interface you like.

 

True that - with all functionality handled by the action handler, there really is not much point in having Texture as an interface per se.

 

Well, I've spent the last few hours implementing this. I've got it to work so far; right now I've only applied it to texture and effect. Before I get things to actually render, I'd like to hear your opinion on whether my actual implementation can be considered "good". I've implemented the action handlers to be API agnostic too. All objects of the same type share one texture handler, which is stored inside the texture module and itself has a pointer back to the texture module to delegate functionality. Here is some code for ya:

 

[spoiler="ITextureModule.h"]

 

#pragma once
#include "Texture.h"
#include "Vector.h"
#include <string>

class Texture;
class ResourceLocator;

class ITextureModule
{
public:

	virtual ~ITextureModule(void) {};

	virtual void LoadTexture(const std::wstring& stFileName) = 0;
	virtual Texture* GetTexture(const std::wstring& stFileName) = 0;
	virtual void CreateTexture(const std::wstring& stFileName, const Vector2& vSize, unsigned int format) = 0;

	/*
	API independent texture functionality
	*/

	virtual void SetActiveTexture(const ResourceLocator& locator) = 0;
	virtual const Vector2& GetTextureSize(void) = 0;
	virtual void ClearTexture(void) = 0;
	virtual void SetRenderTarget(unsigned int index) = 0;

	virtual void ClearAllBound(void) const = 0;
	virtual void UnbindRenderTarget(unsigned int index) const = 0;
};
 

 

[/spoiler] Note that I, for performance reasons, added a "SetActiveTexture" function instead of passing the locator every time. Note too that I didn't strongly type ResourceLocator yet - I had some very annoying circular dependency issues, and I haven't got the nerve to sort that out right now *sigh*

 

[spoiler='TextureModule.h']

 

#pragma once
#include "Device.h"
#include "D3DTexture.h"
#include "ITextureModule.h"
#include "TextureActionHandler.h"
#include "ResourceLocator.h"
#include <string>
#include <vector>
#include <map>

enum DX9TextureFormats
{
	TF_X32 = 0, TF_A32, TF_A16F
};

class TextureModule : public ITextureModule
{
	typedef std::vector<D3DTexture*> TextureVector;
	typedef std::map<std::wstring, Texture*> LocatorMap;

public:

	TextureModule(Device& device);
	~TextureModule(void);

	void LoadTexture(const std::wstring& stName);
	Texture* GetTexture(const std::wstring& stName);
	void CreateTexture(const std::wstring& id, const Vector2& vSize, unsigned int format);

	void ClearAllBound(void) const;
	void UnbindRenderTarget(unsigned int index) const;

	void SetActiveTexture(const ResourceLocator& locator);
	const Vector2& GetTextureSize(void);
	void ClearTexture(void);
	void SetRenderTarget(unsigned int index);

	D3DTexture* GetD3DTexture(const ResourceLocator& locator);

private:

	Device* m_pDevice;

	TextureActionHandler m_actionHandler;

	D3DTexture* m_pTexture;
	TextureVector m_vTextures;
	LocatorMap m_mLocator;
};
 

 

[/spoiler] Note here that the GetD3DTexture-call is only being used in the EffectModule; since both are created in the Gfx3D9 class, it can safely be passed to the actual effect module implementation without violating Liskov or dependency inversion, can't it? I'll say some words on that later...

[spoiler='TextureModule.cpp']

 

#include "TextureModule.h"
#include "Texture.h"

TextureModule::TextureModule(Device& device): m_pDevice(&device), m_actionHandler(*this)
{
}

TextureModule::~TextureModule(void)
{
	for(TextureVector::iterator ii = m_vTextures.begin(); ii != m_vTextures.end(); ++ii)
	{
		delete *ii;
	}
	for(LocatorMap::iterator ii = m_mLocator.begin(); ii != m_mLocator.end(); ++ii)
	{
		delete ii->second;
	}
}

void TextureModule::LoadTexture(const std::wstring& stFileName)
{
	if(m_mLocator.count(stFileName) == 0)
	{
		std::wstring stDir = L"../../Repo/";
		stDir += stFileName;
		D3DTexture* pTexture = new D3DTexture(*m_pDevice, stDir.c_str());
		m_mLocator[stFileName] = new Texture(m_vTextures.size(), &m_actionHandler);
		m_vTextures.push_back(pTexture);
	}
}

Texture* TextureModule::GetTexture(const std::wstring& stName)
{
	LoadTexture(stName); //todo: implement different loading
	return m_mLocator[stName];
}

const Vector2& TextureModule::GetTextureSize(void)
{
	return m_pTexture->GetSize();
}

void TextureModule::CreateTexture(const std::wstring& stName, const Vector2& vSize, unsigned int format)
{
	if(m_mLocator.count(stName) == 0)
	{
		D3DFORMAT fmt = D3DFMT_X8R8G8B8;
		switch(format)
		{
		case TF_A32:
			fmt = D3DFMT_A8R8G8B8;
			break;
		case TF_A16F:
			fmt = D3DFMT_A16B16G16R16F;
			break;
		}
		D3DTexture* pTexture = new D3DTexture(*m_pDevice, vSize.x, vSize.y, fmt);
		m_mLocator[stName] = new Texture(m_vTextures.size(), &m_actionHandler);
		m_vTextures.push_back(pTexture);
	}
}

void TextureModule::ClearTexture(void)
{
	m_pDevice->SetRenderTarget(0, m_pTexture);
	m_pDevice->Clear(D3DCLEAR_TARGET);
}

void TextureModule::ClearAllBound(void) const
{
	m_pDevice->Clear(D3DCLEAR_TARGET);
}

void TextureModule::SetActiveTexture(const ResourceLocator& locator)
{
	m_pTexture = m_vTextures[locator.GetHash()];
}

void TextureModule::SetRenderTarget(unsigned int index)
{
	m_pDevice->SetRenderTarget(index, m_pTexture);
}

void TextureModule::UnbindRenderTarget(unsigned int index) const
{
	m_pDevice->SetRenderTarget(index, NULL);
}

D3DTexture* TextureModule::GetD3DTexture(const ResourceLocator& locator)
{
	return m_vTextures[locator.GetHash()];
}
 
[/spoiler]

 

[spoiler='Texture.h']
#pragma once
#include "TextureActionHandler.h"
#include "ResourceLocator.h"
#include "Vector.h"

class TextureActionHandler;
class ITextureModule;

class Texture
{
public:

	Texture(unsigned int hash, TextureActionHandler* pActionHandler);
	virtual ~Texture(void);

	const Vector2& GetSize(void) const;

	void SetRenderTarget(unsigned int index);

	const ResourceLocator& GetLocator(void) const;

private:

	ResourceLocator m_locator;
	TextureActionHandler* m_pActionHandler;
};

 
[/spoiler]

 

[spoiler='Texture.cpp']
#include "Texture.h"
#include "AclException.h"

Texture::Texture(unsigned int hash, TextureActionHandler* pActionHandler): m_pActionHandler(pActionHandler), m_locator(hash) //todo: important, change!
{
}

Texture::~Texture(void)
{
}

const Vector2& Texture::GetSize(void) const
{
	return m_pActionHandler->GetSize(m_locator);
}

void Texture::SetRenderTarget(unsigned int index)
{
	m_pActionHandler->SetRenderTarget(index, m_locator);
}

const ResourceLocator& Texture::GetLocator(void) const
{
	return m_locator;
}
 
[/spoiler]

 

[spoiler='TextureActionHandler.h']
#pragma once
#include "ITextureModule.h"
#include "ResourceLocator.h"
#include "Vector.h"

class ITextureModule;
class TextureActionHandler //todo: make interface
{
public:

TextureActionHandler(ITextureModule& module);
~TextureActionHandler(void);

const Vector2& GetSize(const ResourceLocator& locator);

void SetRenderTarget(unsigned int index, const ResourceLocator& locator);

private:

void ApplyActiveTexture(const ResourceLocator& effectLocator);

const ResourceLocator* m_pLocator;

ITextureModule* m_pModule;
};
 
[/spoiler]

 

[spoiler='TextureActionHandler.cpp']
#include "TextureActionHandler.h"

TextureActionHandler::TextureActionHandler(ITextureModule& module): m_pLocator(NULL), m_pModule(&module)
{
}

TextureActionHandler::~TextureActionHandler(void)
{
}

const Vector2& TextureActionHandler::GetSize(const ResourceLocator& locator)
{
	ApplyActiveTexture(locator);
	return m_pModule->GetTextureSize();
}

void TextureActionHandler::SetRenderTarget(unsigned int index, const ResourceLocator& locator)
{
	ApplyActiveTexture(locator);
	m_pModule->SetRenderTarget(index);
}

void TextureActionHandler::ApplyActiveTexture(const ResourceLocator& locator)
{
	if(m_pLocator != &locator)
	{
		m_pLocator = &locator;
		m_pModule->SetActiveTexture(locator);
	}
}
 

[/spoiler] At first I was a little concerned that my "caching" in TextureActionHandler would violate the dependency inversion principle, but then again, ITextureModule doesn't really depend on the texture handler, it just implements a way of performing better - I think.

 

The rest of the code I'm going to spare you, it's basically the same, just with effect-specific parameters. Oh, just one more thing, I'd like to show you the Effect::SetTexture method chain:

 

void Effect::SetTexture(LPCSTR lpName, const ResourceLocator* locator)
{
	m_pActionHandler->SetTexture(lpName, m_locator, locator);
}


void EffectActionHandler::SetTexture(LPCSTR lpName, const ResourceLocator& effectLocator, const ResourceLocator* pTextureLocator)
{
	ApplyActiveLocator(effectLocator);
	m_pEffectModule->SetTexture(lpName, pTextureLocator);
}

void EffectModule::SetTexture(LPCSTR lpName, const ResourceLocator* pTextureLocator) const
{
	D3DTexture* pTexture = NULL;
	if(pTextureLocator != NULL)
		pTexture = m_pTextureModule->GetD3DTexture(*pTextureLocator);

	m_pActiveEffect->SetTexture(lpName, pTexture);
}

//this is EffectModules constructor
EffectModule::EffectModule(Device& device, TextureModule& module): m_pDevice(&device), m_pEffectPool(new D3DEffectPool()), m_pTextureModule(&module), m_actionHandler(*this)
{
}

So, what do you say to the whole design? Any comment would be nice; if you find anything rather poorly executed and/or have a better solution, I'd be glad to hear it. I'm a little worried about all the circular dependencies, but I guess this is normal, isn't it? Plus I'm not sure if passing the texture module (m_pTextureModule) to the EffectModule is really the best way. I might compose those into the gfx class, but I just haven't been motivated right now...



#18 Sandman   Moderators   -  Reputation: 2136

Like
1Likes
Like

Posted 07 April 2013 - 05:15 PM

Juliean, on 07 Apr 2013 - 22:42, said:

 


As in, completely pointless - or just pointless regarding this topic?

 
Definitely the latter, and most probably the former as well.

 

 

Juliean, on 07 Apr 2013 - 22:42, said:

 


Note that I, due to performance reasons, did a "SetActiveTexture"-function instead of passing the locator every time.

 

Premature optimization is the root of all evil. In this case you've sacrificed thread safety. While the original design wasn't implicitly thread safe, it wouldn't be too hard to apply locks before accessing a specific texture. With this design though, you'd have to lock the whole texture module every time you read or write anything, which is ugly as hell and massively increases the potential for lock contention.
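The lock-granularity tradeoff described above could be sketched like this (TextureStore and TextureData are hypothetical; the map lookup itself is left unguarded for brevity): with per-texture state, each resource can carry its own mutex instead of one module-wide lock.

```cpp
#include <mutex>
#include <map>

// Per-texture state with a fine-grained lock: contention is limited
// to the one texture being touched, not the whole module.
struct TextureData
{
    int width = 0;
    std::mutex lock;
};

class TextureStore
{
public:
    int GetWidth(int hash)
    {
        TextureData& t = m_textures[hash];
        std::lock_guard<std::mutex> guard(t.lock);
        return t.width;
    }

    void SetWidth(int hash, int w)
    {
        TextureData& t = m_textures[hash];
        std::lock_guard<std::mutex> guard(t.lock);
        t.width = w;
    }

private:
    // NOTE: concurrent insertion into the map would still need its own
    // guard; only per-texture reads/writes are protected here.
    std::map<int, TextureData> m_textures;
};
```

With a "SetActiveTexture" style module, the equivalent code would have to hold one module-wide mutex across both the activate call and the subsequent operation.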

 

Map lookups are pretty quick, and how often do you really need to play around with textures outside the api specific code anyway? We're probably not talking about a massive number of calls per frame here. I wouldn't worry too much about optimizing this until you find the performance to be problematic with profiling. Furthermore, you could cache certain non-mutable, non-api specific data, such as texture size, in the api agnostic texture proxy object.
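The caching suggestion in the paragraph above could look roughly like this (hypothetical minimal Texture/Vector2; the real proxy would also keep its locator and action handler): since the size never changes after creation, the proxy stores it and skips the round trip to the engine entirely.

```cpp
// Minimal stand-in for the engine's vector type.
struct Vector2 { float x, y; };

class Texture
{
public:
    // Size is immutable, so it is captured once at construction.
    Texture(int hash, const Vector2& size) : m_hash(hash), m_size(size) {}

    // Cached: no action-handler / engine lookup needed.
    const Vector2& GetSize() const { return m_size; }

    int GetHash() const { return m_hash; }

private:
    int m_hash;
    Vector2 m_size;
};
```

Mutable or API-specific data (pixel contents, render-target bindings) would still go through the action handler as before.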

 

 

Juliean, on 07 Apr 2013 - 22:42, said:

 


 

Texture::Texture(unsigned int hash, TextureActionHandler* pActionHandler): m_pActionHandler(pActionHandler), m_locator(hash) //todo: important, change!
{
}

I'd imagined something more like this for the Texture constructor

 

 

Texture::Texture(std::string textureName, TextureActionHandler& rHandler) :
    m_Locator(rHandler.CreateLocator(textureName)),
    m_pActionHandler(&rHandler)
{
}

 
The hash code for the locator will be generated by the actionhandler, probably by hashing off the texture name. This doesn't even require a lookup. It also makes sense that the actionhandler take this responsibility, so it can maintain a class invariant - that a texture's hash code actually points to the texture it's meant to point to. 
 
Also, it's a style thing, but passing the action handler as a reference rather than a pointer prevents you from passing NULL, saving you from a whole class of bugs and all the checking and error handling that goes with it. Make that compiler do the work for you. 
 

Juliean, on 07 Apr 2013 - 22:42, said:

 


 

typedef std::vector TextureVector;
typedef std::map LocatorMap;

 

 

I don't really see the need of the locator map. You just need a map of hash/api specific texture, and as I mentioned above, generate the hash off the texture name.
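A minimal sketch of hashing off the name (shown as a free function for brevity, where the post above puts CreateLocator on the action handler; a non-cryptographic hash such as std::hash is plenty here, no MD5/SHA needed):

```cpp
#include <string>
#include <functional>

// Strongly-typed wrapper around the hash value.
class TextureLocator
{
public:
    explicit TextureLocator(std::size_t hash) : m_hash(hash) {}
    std::size_t GetHash() const { return m_hash; }

private:
    std::size_t m_hash;
};

// Deterministic: the same name always yields the same locator,
// so no name-to-locator map is needed.
TextureLocator CreateLocator(const std::wstring& textureName)
{
    return TextureLocator(std::hash<std::wstring>()(textureName));
}
```

The module then only needs one map from hash to API-specific texture.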


Edited by Sandman, 07 April 2013 - 05:21 PM.
Ugh! formatting fubar'd. I give up


#19 Juliean   GDNet+   -  Reputation: 2735

Like
1Likes
Like

Posted 07 April 2013 - 05:40 PM

Premature optimization is the root of all evil. In this case you've sacrificed thread safety. While the original design wasn't implicitly thread safe, it wouldn't be too hard to apply locks before accessing a specific texture. With this design though, you'd have to lock the whole texture module every time you read or write anything, which is ugly as hell and massively increases the potential for lock contention.



Map lookups are pretty quick, and how often do you really need to play around with textures outside the api specific code anyway? We're probably not talking about a massive number of calls per frame here. I wouldn't worry too much about optimizing this until you find the performance to be problematic with profiling. Furthermore, you could cache certain non-mutable, non-api specific data, such as texture size, in the api agnostic texture proxy object.

 

Thread safety isn't something I'm currently aiming for, but still, thanks for the hint; thankfully it's easy to activate/deactivate this caching system on the fly...

 

 

Map lookups are pretty quick, and how often do you really need to play around with textures outside the api specific code anyway? We're probably not talking about a massive number of calls per frame here. I wouldn't worry too much about optimizing this until you find the performance to be problematic with profiling. Furthermore, you could cache certain non-mutable, non-api specific data, such as texture size, in the api agnostic texture proxy object.

 

Awwh, that makes me a little bit worried about my current use of this system. Right now I'd e.g. use one of my component systems to render like this:

 

class GridRenderer : public EntitySystem<GridRenderer>
{
public:
	GridRenderer(IGfx3D& gfx3D): m_pGfx3D(&gfx3D), 
		m_pEffect( m_pGfx3D->effects().GetEffect(L"Game/Effects/Line.fx") ), m_pGrid( m_pGfx3D->meshes().GetMesh(L"debug grid") ), m_pScene(m_pGfx3D->textures().GetTexture(L"final scene") ) {};

	void Update(EntityManager& entityManager, MessageManager& messageManager)
	{
		m_pScene->SetRenderTarget(0);

		D3DXMATRIX mWorldViewProj;
		D3DXMatrixIdentity(&mWorldViewProj);
		mWorldViewProj *= m_pGfx3D->GetCamera()->GetViewProjectionMatrix();

		m_pEffect->SetMatrix("cWorldViewProj", mWorldViewProj);
		m_pEffect->SetFloat3("cColor", 0.8f, 0.8f, 0.8f);

		m_pEffect->Begin();

		m_pGrid->Render();

		m_pEffect->End();

		m_pGfx3D->textures().UnbindRenderTarget(0);
	}

private:

	IGfx3D* m_pGfx3D;
	Effect* m_pEffect;
	Texture* m_pScene;
	IRenderable* m_pGrid;
};

Interpreting what you just said, I assume I should be handling rendering differently? I mean, my previous implementation looked like this:

class GridRenderer : public EntitySystem<GridRenderer>
{
public:
	GridRenderer(Gfx3D& gfx3D): m_pGfx3D(&gfx3D) {};

	void Update(EntityManager& entityManager, MessageManager& messageManager)
	{
		m_pGfx3D->textures().SetRenderTarget(0, L"final scene");

		Effect* pEffect = m_pGfx3D->GetEffect(L"Line.fx");

		D3DXMATRIX mWorldViewProj;
		D3DXMatrixIdentity(&mWorldViewProj);
		mWorldViewProj *= m_pGfx3D->GetCamera()->GetViewProjectionMatrix();

		pEffect->SetMatrix("cWorldViewProj", mWorldViewProj);
		pEffect->SetFloat3("cColor", 0.8f, 0.8f, 0.8f);

		pEffect->Begin();

		m_pGfx3D->meshes().RenderByName(L"debug grid");

		pEffect->End();

		m_pGfx3D->textures().UnbindRenderTarget(0);
	}

private:

	Gfx3D* m_pGfx3D;
};

I know I have to, and soon will, implement a render queue, but aside from that I assumed this is the way graphics displaying should be handled, basically. What do you suggest instead (if you do so, that is)? Because if, as you said, I wouldn't be accessing stuff outside the API-specific code anyway... that gives me quite something to think about, you know.

 

 

The hash code for the locator will be generated by the action handler, probably by hashing the texture name. This doesn't even require a lookup. It also makes sense for the action handler to take this responsibility, so it can maintain a class invariant: that a texture's hash code actually points to the texture it's meant to point to.

 

Good point. I currently use the size of the TextureVector from my texture module as a "hash"; yeah, I know. I currently don't even support deletion of loaded textures, so it isn't that bad (well, except for the non-dynamic resource management :S). As for hashing the name, what would you recommend? MD5? SHA-1/2? Some sort of home-made solution specially for this "problem"?
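For resource-name lookups, a fast non-cryptographic hash is usually enough; MD5 and SHA are built for security, not lookup speed. A minimal sketch of one common choice, 64-bit FNV-1a (the function name here is illustrative, not from the engine above):

```cpp
#include <cassert>
#include <cstdint>
#include <string>

// Sketch: 64-bit FNV-1a over the name's code units. Not cryptographic,
// but fast and well-distributed, which is all a resource locator needs.
// Collisions are astronomically unlikely at 64 bits, but an assert on
// insertion into the texture map is a cheap safety net.
std::uint64_t HashName(const std::wstring& name)
{
	std::uint64_t hash = 14695981039346656037ull; // FNV offset basis
	for (wchar_t c : name)
	{
		hash ^= static_cast<std::uint64_t>(c);    // fold in one code unit
		hash *= 1099511628211ull;                 // FNV prime
	}
	return hash;
}
```

Since the same name always yields the same hash, the locator can be recomputed from the name at any time instead of being stored in a separate map.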

 

 

Also, it's a style thing, but passing the action handler as a reference rather than a pointer prevents you from passing NULL, saving you from a whole class of bugs and all the checking and error handling that goes with it. Make that compiler do the work for you.

 

Hah, I clearly missed that. As you can see, I actually do prefer to pass by reference where possible, too. Guess that one slipped past me.

 

I don't really see the need for the locator map. You just need a map from hash to API-specific texture, and as I mentioned above, you can generate the hash from the texture name.

 

Yeah, that's the thing. Since I used a linear, only-so-named-because-it-sounds-cool "hash", I actually required that map to access my textures again safely after loading them. So you'd suggest creating a hash of the texture name on loading, and then, when I later access that file (loading and accessing are separate tasks now, no more lazy loading; the one time in the texture module was a slip), simply creating a new Texture object, like this?

 

Texture* TextureModule::GetTexture(const std::wstring& stName) const
{
	return new Texture(stName, actionHandler);
}

Well, you can call me out on premature optimization there too, since I assumed it was better not to create a new texture handle (can we call it that, anyway?) every time I access a texture by file name. You could also call me out on currently having a messed-up rendering system that, at least for the entities, doesn't store the textures but accesses them per frame, per entity (no, not even material sorting currently, yikes :/), so that would be quite a bit of memory allocation and deallocation per frame. That is, if you are suggesting I handle it the way shown in the code, aren't you?
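One way to avoid that per-frame allocation is to let the module own the handles and hand out the same pointer on repeated lookups. A minimal sketch, with Texture reduced to a stub (the real class would hold the locator and action handler discussed in this thread):

```cpp
#include <cassert>
#include <memory>
#include <string>
#include <unordered_map>

// Stub for the lightweight handle class discussed in the thread.
struct Texture { std::wstring name; };

// Sketch: the module caches handles by name, so GetTexture() allocates
// only on the first request and is a plain map lookup afterwards.
class TextureModule
{
public:
	Texture* GetTexture(const std::wstring& name)
	{
		auto it = m_cache.find(name);
		if (it == m_cache.end())
			it = m_cache.emplace(name, std::make_unique<Texture>(Texture{name})).first;
		return it->second.get();
	}

private:
	std::unordered_map<std::wstring, std::unique_ptr<Texture>> m_cache;
};
```

Whether the cache pays off depends on usage: if handles really are a couple of pointers, creating them on demand may be just as cheap.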


Edited by Juliean, 07 April 2013 - 05:42 PM.


#20 Sandman   Moderators   -  Reputation: 2136

Like
2Likes
Like

Posted 08 April 2013 - 03:44 AM

Awwh, that makes me a little bit worried about my current use of this system. Right now I'd e.g. use one of my component systems to render like this:

 

1. What are the Begin() and End() calls actually doing? 

 

Since there's no batching or sorting by effect going on, it makes no difference whether you call Begin() and End() outside or inside the render. However, if you later upgrade the engine to use render queues with batching, these Begin() and End() calls will become meaningless.

 

I'd argue that they are violating DI, and should be hidden away behind the renderer interface.

 

2. Do you need to set the colour every frame? Some of this state could be managed in the underlying effect implementation, so you only modify it when it actually needs to change.
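The state-caching idea above can be sketched like this; `CachedEffect` and its `uploads` counter are illustrative names, with the counter standing in for the real, comparatively expensive effect API call:

```cpp
#include <array>
#include <cassert>

// Sketch: the effect wrapper remembers the last colour it uploaded and
// skips the API call when the value has not changed. The `uploads`
// counter stands in for the real (comparatively expensive) effect call.
class CachedEffect
{
public:
	void SetFloat3(float r, float g, float b)
	{
		const std::array<float, 3> value{r, g, b};
		if (m_hasColor && value == m_color)
			return;            // state unchanged: skip the upload
		m_color = value;
		m_hasColor = true;
		++uploads;             // here the real API call would happen
	}

	int uploads = 0;

private:
	std::array<float, 3> m_color{};
	bool m_hasColor = false;
};
```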

 

3. Setting the world-view matrix in the effect might make sense in terms of the implementation details, but does it make sense from a high-level view? If not, it's probably a DI violation.

I think it would be cleaner to change your Render interface to something like:

 

 

void Renderer::Render(const CMesh&, CEffect&, CMatrix worldView);

 

It's then up to the renderer implementation to decide how to put all these things together into a render call, or render queue, or whatever it wants to do with them. Note that I'm also assuming a similar resource-proxying mechanism for the mesh and effect classes. You don't actually need to resolve any of the lookups here at all; that can all be deferred to the Render() function's internals.
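A sketch of how such an interface could defer everything to a queue. CMesh, CEffect and CMatrix are stand-ins for the engine's own types, and the actual API draw calls are left out:

```cpp
#include <algorithm>
#include <cassert>
#include <cstddef>
#include <vector>

// Stand-ins for the engine's own types.
struct CMesh   { int id; };
struct CEffect { int id; };
struct CMatrix { float m[16]; };

// Sketch: Render() only records a request; Flush() can sort by effect so
// one Begin()/End() pair covers a whole batch instead of one per draw.
class Renderer
{
public:
	void Render(const CMesh& mesh, CEffect& effect, CMatrix worldView)
	{
		m_queue.push_back(Entry{&mesh, &effect, worldView});
	}

	void Flush()
	{
		std::sort(m_queue.begin(), m_queue.end(),
		          [](const Entry& a, const Entry& b)
		          { return a.effect->id < b.effect->id; });
		// ... walk the sorted queue and issue the API draw calls here ...
		m_queue.clear();
	}

	std::size_t Pending() const { return m_queue.size(); }

private:
	struct Entry { const CMesh* mesh; CEffect* effect; CMatrix worldView; };
	std::vector<Entry> m_queue;
};
```

With this split, Begin()/End() become an internal detail of Flush(), which is exactly what hiding them behind the renderer interface buys you.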

 

 

Yeah, that's the thing. Since I used a linear, only-so-named-because-it-sounds-cool "hash", I actually required that map to access my textures again safely after loading them. So you'd suggest creating a hash of the texture name on loading, and then, when I later access that file (loading and accessing are separate tasks now, no more lazy loading; the one time in the texture module was a slip), simply creating a new Texture object, like this?

 

Yeah, thinking about it, it may be better to stick to using a linear index for the locator. Either way, it's an implementation detail, and as long as it's all wrapped nicely behind the relevant interface, you can tweak the actual implementation details to better suit real-life usage patterns.

 

 

Well, you can call me out on premature optimization there too, since I assumed it was better not to create a new texture handle (can we call it that, anyway?) every time I access a texture by file name. You could also call me out on currently having a messed-up rendering system that, at least for the entities, doesn't store the textures but accesses them per frame, per entity (no, not even material sorting currently, yikes :/), so that would be quite a bit of memory allocation and deallocation per frame. That is, if you are suggesting I handle it the way shown in the code, aren't you?

 

 

I'd envisaged the texture class as something almost completely engine-agnostic that could be stored in components. You wouldn't need to create them very often at all, and if you did, you could happily create them on the stack (via a public constructor that takes a pointer to the action handler, or something that can provide one). They're pretty lightweight, so it's not like they consume huge amounts of memory. In fact, they'll probably consume less space than most texture names would, especially if you're using wide-character strings.
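A sketch of just how lightweight such a proxy can be: two machine words, a locator plus an action-handler pointer. IActionHandler and its Bind() method are illustrative names, not the thread's actual interface:

```cpp
#include <cassert>
#include <cstdint>

// Illustrative back-end interface; the real action handler would expose
// whatever API-specific operations the engine needs.
struct IActionHandler
{
	virtual void Bind(std::uint64_t locator) = 0;
	virtual ~IActionHandler() = default;
};

// Sketch: the proxy stores only a locator and a handler pointer, so it is
// cheap to copy, store in components, or create on the stack -- typically
// smaller than the wide-string texture name it replaces.
class Texture
{
public:
	Texture(std::uint64_t locator, IActionHandler& handler)
		: m_locator(locator), m_pHandler(&handler) {}

	void Bind() { m_pHandler->Bind(m_locator); }

private:
	std::uint64_t   m_locator;
	IActionHandler* m_pHandler;
};
```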


Edited by Sandman, 08 April 2013 - 03:46 AM.
Selective Quoting == Formatting Fail




