
Member Since 20 Aug 2012

Posts I've Made

In Topic: Game class holding a pointer to Renderer class

29 December 2014 - 02:55 PM

"Same with that. Not everything is an interface. If you inherit the scene, I suppose you intend to put some logic data inside, but a scene just has drawable actors (sprites in your case), lights, cameras, etc. Logic is a good candidate to live in the game state or some mechanism outside the scene's scope. The scene can have functions like: GetAllObjectsOfType(...), GetAllObjectsInsideSphere(...) (although this fits better in the physics library).


Soon you'll see that passing the "renderer" pointer around is good for design and tedious for maintenance."


I'm very confused by your reasoning here. It seems very normal for there to be a scene interface: I could have 2D scenes or 3D scenes, and 3D scenes could be octrees for some games, BSP trees for others, or just lists for smaller ones. Ensuring all of them implement the scene interface and have a Render method seems like a proper solution.
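
To make the scene-interface argument concrete, here's a minimal sketch of what I mean. All the names (ListScene, CountingSprite, the bare-bones IRenderer/IRenderable) are illustrative only, not the actual project code:

```cpp
#include <memory>
#include <vector>

// Bare-bones stand-ins for the real interfaces.
struct IRenderer { virtual ~IRenderer() {} };

struct IRenderable {
    virtual ~IRenderable() {}
    virtual void Render(IRenderer* pRenderer) = 0;
};

// The scene interface: every scene type (flat list, octree, BSP tree)
// knows how to walk its own spatial structure and render its contents.
struct IScene {
    virtual ~IScene() {}
    virtual void Render(IRenderer* pRenderer) = 0;
};

// Simplest implementation: a flat list, fine for small 2D games. An
// octree-backed scene could implement the same interface unchanged.
class ListScene : public IScene {
    std::vector<std::unique_ptr<IRenderable>> m_Objects;
public:
    void Add(std::unique_ptr<IRenderable> obj) { m_Objects.push_back(std::move(obj)); }
    void Render(IRenderer* pRenderer) override {
        for (auto& obj : m_Objects)
            obj->Render(pRenderer);
    }
    std::size_t Count() const { return m_Objects.size(); }
};

// A trivial renderable that just counts how often it was rendered.
struct CountingSprite : IRenderable {
    int renders = 0;
    void Render(IRenderer*) override { ++renders; }
};
```

The game code only ever talks to IScene, so swapping a list for an octree is a one-line change where the scene is created.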


"The game is a loop that will update N times per frame, where N = FRAME_TIME / FIXED_TIME_STEP.

The low-level libraries belong to the game engine."


Then how does the game gain access to them unless I give it pointers to own?


"Your sprite needs to be rendered through an interface, not a specialized object – that's why it is an interface. I won't give any suggestion here because I don't use SDL, but you're doing it wrong if you're expecting to have an interface that needs to be static_cast at runtime in order to be used; that doesn't make sense for an interface."


I'm going to have to disagree here. The Sprite is an implemented renderable for the SDLRenderer and SDLScene; I guess it would make sense to call it SDLSprite or something. It has to have access to internals of SDLRenderer because it will only ever be rendered by SDLRenderer. It's like knowing an OpenGL renderable won't be rendered by a DirectX renderer; the OpenGL renderable will still need to know some specific functions of an OpenGLRenderer to set the proper shaders, uniform variables, etc. Same thing in this case: the SDLSprite needs to know some things about the SDLRenderer in order to render itself properly. The real question is whether the renderer should render the renderable or the renderable should know how to render itself. With SDL it gets pretty messy if the renderable has to render itself, but the opposite is true once you start using OpenGL or DirectX.
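
To show the alternative I'm weighing, here's a sketch of the "renderer renders the renderable" direction, where the backend-specific cast disappears because the backend renderer accepts its own types directly. All names here are illustrative, not the real project code:

```cpp
#include <string>
#include <vector>

// The sprite stays a plain data holder: no frame buffers, no cameras,
// no knowledge of any renderer.
struct Sprite {
    int x = 0, y = 0;
};

// The backend-specific renderer draws its own backend-specific types,
// so no IRenderer* -> SDLRenderer* static_cast is ever needed.
class SDLishRenderer {
    std::vector<std::string> m_Log; // stands in for actual SDL blit calls
public:
    void Draw(const Sprite& s) {
        m_Log.push_back("blit at " + std::to_string(s.x) + "," + std::to_string(s.y));
    }
    std::size_t DrawCount() const { return m_Log.size(); }
};
```

The trade-off is exactly the one described above: the cast (or the coupling) has to live somewhere, and this version puts it in one place inside the renderer instead of inside every renderable.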


"A private destructor? An obligation here? Make them public destructors, propagate that to all of them, and delete the objects when the scene is destroyed. You can use shared pointers; search the forums."


Yeah, I could, but that's double indirection. In this case it's not that important, but if I wanted to use a std::vector instead, the difference between a vector of objects and a vector of pointers to objects is a lot bigger in terms of performance, and I want the ability to optimize that in the future. BTW, what do you mean by propagating on all of them? The worry is that somewhere outside the scene code, in the game code, a destructor will get called when it shouldn't.



"It looks like he is deleting those operators (by making them private), which is a smart thing to do, because otherwise they can be generated by default.


With C++11 this can be done explicitly, by saying:

IRenderer(const IRenderer& other) = delete;

But prior to C++11, the only way you could 'remove' an operator is by making it private, like D.V.D is doing."


Yeah exactly this.


"Yeah I know, but if he had to do this to every class that should not be copied/instantiated explicitly, he'd be writing the same thing for a lot of classes for the same reasons. I'm just warning him that such a thing is out of scope when trying to solve a problem that's bigger than this one.

In principle he could copy the engine, game, etc., when it's much easier to just not do it. Better than explicitly defining what something can't do is to simply not do it when it isn't necessary. If we can't manage how things are working in the engine, we obviously can't make any further progress.
If someone isn't using a C++11 compiler (like me), things would get messy by not knowing what to do (instead of having a global solution that works even on old compilers).
Another suspicious spot is that he is creating a private constructor for an interface class with no members, which in that case can't be instantiated at all.
This way he is explicitly conceding that he doesn't have enough control over the life cycle of the renderer (not criticising)."

I'm confused: if you're not using a C++11 compiler, that's exactly how you remove access to copy constructors. This is something lots of people on this site and in the industry talk about (i.e. the Rule of 3 and Rule of 5). Isn't it always better to explicitly define what a class can or can't do instead of assuming it won't be copied? That assumption could be forgotten over time and cause lots of bugs later if people do start copying. Also, which class is the interface with the private constructor? I looked at the code I sent you, and only the Sprite class has a private constructor, but it's not an interface; it implements an interface.

In Topic: Game class holding a pointer to Renderer class

28 December 2014 - 12:27 AM

Whoa, okay, that's a lot to tackle. So I read all the comments and did a bit more restructuring. I think the most obvious thing to tackle is how I store my renderables and how I render them. So, as suggested, I made a scene class which goes through all entities inside it and renders them. Every entity, or renderable as I call them, can potentially hold its own set of parameters which get applied when rendering (for example, tint the sprite a certain color or skip rendering it). Here's how the interfaces look:

// IRenderer
#include <SDL.h>
#include "Scene Interface.h"

class IRenderer
{
public:
	IRenderer() {}
	virtual ~IRenderer() {}

	virtual void UpdateScreen(SDL_Window* pWindow) = 0;

private:
	IRenderer(const IRenderer& other);
	IRenderer& operator=(const IRenderer& other);
};

// IScene
#include "Renderable Interface.h"

class IRenderer;

class IScene
{
public:
	IScene() {}
	virtual ~IScene() {}

	virtual void Render(IRenderer* pRenderer) = 0;
};

// IRenderable
class IRenderer;
class IScene;

class IRenderable
{
	friend class IScene;

public:
	IRenderable() {}
	virtual ~IRenderable() {}

	virtual void Render(IRenderer* pRenderer) = 0;
};

So for the renderer, I have the UpdateScreen method which updates the screen of a window. I'm keeping it like this for now because I can easily change it for split screen by adding a virtual GetFrameBuffer method; then I can have 2 or 4 renderers (one for each screen) and another class which takes all of them and puts them into different viewports. I don't want to do that yet, though, since the things I have planned won't require it, so I'll get to it once I need to.


In my actual implementations of these classes (SDLRenderer, SDLScene), SDLRenderer does almost nothing, since the details of rendering are hidden from you by the SDL API, but I'm assuming that for an OpenGL or OpenGL ES app the renderer would be setting the current shader programs based on the render queue or something. I guess it would also contain the render queue itself and do the sorting.


Now to address some stuff:


"Sounds like a mega class, and mega classes are bad.

A "game" class is pretty standard but the only real definition usually given for such a class is that it generally:

  • Contains the game loop
  • Possibly owns all the major "subsystems" like the renderer, input manager, resource cache, and so on.

It certainly differs by design but it may actually be one of the smaller classes because it contains little actual logic. Usually things like specific logic are delegated out to game states or objects within game states, i.e. your "playing" state might contain a GameScene or something that represents the 3d world and contains objects. A similar thing can be done with 2d games. But there isn't really a hard and fast rule."


My idea of the game class was to be similar to what you said, but it would also be really app dependent. So for simple apps like my current game, it's the entire game, since the game is super simple. In more complex projects it could just be the collection of game states, but it's always the class that runs the whole game. It has the main loop, access to all the subsystems, etc. I wanted the engine, or I guess game framework, to be sort of like a library: all I have to do is make a game class, assign it to the app, and the rest is done for me. All game-specific stuff would be in this class. As of now, it looks like this:

#include "GraphicsCore.h"
#include "InputCore.h"

class Application;

class IGame
{
	friend class Application;

protected:
	InputManager* m_pInputManager;
	IRenderer* m_pRenderer;
	IScene* m_pScene;

public:
	virtual void RunGame() = 0;

private:
	IGame(const IGame& other);
	IGame& operator=(const IGame& other);

	void SetInputManager(InputManager* pInputManager);
	void SetRenderer(IRenderer* pRenderer);
	void SetScene(IScene* pScene);
};

The setters are called by the application class so that it can pick and choose a renderer based on the hardware or on what's requested by the game itself. Same for all other subsystems, although I think the input would be pretty much the same class regardless of platform.
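
The "application picks a renderer" idea could be sketched as a tiny factory like the one below. This is purely illustrative; the class names and the string-based selection are assumptions, not the project's actual code:

```cpp
#include <memory>
#include <string>

// Minimal stand-in interface so the sketch is self-contained.
struct IRenderer {
    virtual ~IRenderer() {}
    virtual std::string Name() const = 0;
};

struct SDLRenderer : IRenderer { std::string Name() const override { return "SDL"; } };
struct GLRenderer  : IRenderer { std::string Name() const override { return "OpenGL"; } };

// The application calls this once at startup, then hands the result to the
// game via SetRenderer(); the game never needs to know the concrete type.
std::unique_ptr<IRenderer> CreateRenderer(const std::string& backend)
{
    if (backend == "OpenGL")
        return std::unique_ptr<IRenderer>(new GLRenderer());
    return std::unique_ptr<IRenderer>(new SDLRenderer()); // fallback default
}
```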


"#2 There are a lot of resources available with their own reasons for not using a singleton, but they all come down to the same point: it is a software engineering design issue that can lead to pitfalls in the planning of the software, memory, and OOP, simply because singletons are global and are a unique instance of the class......"


I guess this makes sense. I'm not going to go with statics anymore, however, since for editors in the future you might be required to have multiple renderers, or even in my case for split screen. With the static or singleton approach, that wouldn't be possible unless I had the renderer class render to multiple viewports and whatnot. I guess that's a valid option, but I feel like it's more complicated? I could be wrong. However, I see what you mean about using static methods vs the singleton approach.


With regard to the other discussion, I don't include any files from the graphics core in the input core, so if I did go through with the static class approach, the objects wouldn't exist and calling the graphics API inside the input loop would be impossible.


I didn't respond to everyone individually, but I did read through and am still rereading your posts to better understand what you mean. As of now, the structure is making more and more sense. There is still something that bothers me, though. In 3D programming your renderer handles handing everything off to the GPU, but with SDL the renderer does practically nothing. Instead, the sprite class becomes bloated with stuff I don't think it should be handling or given access to. Since it's a renderable, I assumed it should know how to render itself, but now the render code for the Sprite looks like this:

void Sprite::Render(IRenderer* pRenderer)
{
	SDLRenderer* pSDLRenderer = static_cast<SDLRenderer*>(pRenderer);

	SDL_Surface* pFrameBuffer = pSDLRenderer->GetFrameBuffer();

	SDL_Rect transform = pSDLRenderer->GetCamera()->GetBounds();
	SDL_Rect newBound;
	newBound.x = m_Bounds.x - transform.x;
	newBound.y = m_Bounds.y - transform.y;
	newBound.h = m_Bounds.h;
	newBound.w = m_Bounds.w;

	SDL_BlitSurface(m_pSurface, nullptr, pFrameBuffer, &newBound);
}

For reference, this is how the Sprite class looks:

class Sprite : public IRenderable
{
	friend class SDLScene;

	bool m_Hide;
	SDL_Rect m_Bounds;
	SDL_Surface* m_pSurface;

	virtual ~Sprite();

public:
	void LoadTexture(std::string path);
	void LoadText(std::string& rText, TTF_Font* pFont, SDL_Color textColor);
	virtual void Render(IRenderer* pRenderer);

	int GetPosX();
	int GetPosY();
	int GetWidth();
	int GetHeight();

	void SetPos(int x, int y);
	void SetSize(int h, int w);

	void HideSprite(bool hide);
	bool IsHidden();
};

The other thing that bugs me is the fact that I cannot make the destructors private, since the objects are stored inside the list container. Is it possible to somehow store them inside the std::list while keeping the destructor inaccessible, or is an accessible destructor just a requirement? I want to enforce that you cannot destroy them except through the destroy method of the Scene they were assigned to.
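
One possible answer, sketched below with illustrative names: a container of objects *by value* does need an accessible destructor, but a container of smart pointers only needs the *deleter* to have access, and the deleter can be made a friend. This is an assumption about how the design could work, not the thread's actual code:

```cpp
#include <list>
#include <memory>

class Renderable;

// The only thing in the program allowed to delete a Renderable.
struct RenderableDeleter {
    void operator()(Renderable* p) const;
};

class Renderable {
    friend struct RenderableDeleter;
    ~Renderable() {} // private: game code cannot delete or stack-allocate it
public:
    int id = 0;
};

inline void RenderableDeleter::operator()(Renderable* p) const { delete p; }

// The scene owns the objects; Destroy() is the only way they die.
class Scene {
    std::list<std::unique_ptr<Renderable, RenderableDeleter>> m_Objects;
public:
    Renderable* Create() {
        m_Objects.emplace_back(new Renderable());
        return m_Objects.back().get();
    }
    void Destroy(Renderable* p) {
        m_Objects.remove_if(
            [p](const std::unique_ptr<Renderable, RenderableDeleter>& u) {
                return u.get() == p;
            });
    }
    std::size_t Count() const { return m_Objects.size(); }
};
```

Game code that tries `delete someRenderable;` fails to compile, which enforces the "only the Scene destroys them" rule at compile time.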


Lastly, I made the input manager instantiable, but again, I feel like it's a bit useless. I can't imagine an application where I would need multiple input managers; in fact, that would just make things way more complicated. Is it really recommended to keep it instantiable?

In Topic: Game class holding a pointer to Renderer class

26 December 2014 - 11:19 PM

To ServantLord:


"Why are they singleton?

Do you *need* them to be global? Global increases coupling.

Singletons are globals in disguise.

*Must* there only ever be a single instance of them?
Singleton doesn't mean "there's only one", singleton means, "I'm going to enforce the fact that there's only one" - is that what you intend? Why?"



Well for InputManager, there doesn't need to be more than one. Here's what it looks like in code:

class InputManager
{
	static InputManager* pMAIN_INPUT_MANAGER;

	std::list<std::shared_ptr<InputController>> m_ControllerList;

public:
	static InputManager* GetInstance();

	void AddController(std::shared_ptr<InputController> pController);
	void RemoveController(std::shared_ptr<InputController> pController);
	void Update();
};

class InputController
{
	static InputManager* pINPUT_MANAGER;

	void AddToInputManager();

public:
	virtual bool VOnCheck(SDL_Event e) = 0;
	virtual void VOnUpdate(SDL_Event e) = 0;
};

void InputManager::Update()
{
	SDL_Event e;
	while (SDL_PollEvent(&e) != 0)
	{
		for (auto Iter = m_ControllerList.begin(); Iter != m_ControllerList.end(); Iter++)
		{
			std::shared_ptr<InputController> pCurrent = *Iter;
			if (pCurrent->VOnCheck(e))
				pCurrent->VOnUpdate(e);
		}
	}
}

The benefit of doing it this way is that wherever I declare InputControllers, I don't also need access to the InputManager itself, and I don't have to manually add the controller to the input manager. That makes it really nice for defining and using controllers. Without a singleton, I would have to add each individual input controller to some input manager myself. I guess this is really primitive input, but I can still do things like make holding a key register the event only when it's first pressed rather than continuously, via the VOnUpdate method in the InputController.


"Why do event names change between platforms? That ought to be generalized (when possible!) so the game logic doesn't always have to constantly check if it's a mouse-click OR a finger-click."


Yeah, I misspoke; it's generalized by SDL so you don't have to worry about it. What I meant is that you don't need a separate input manager type for PS3 vs PC. You could just add platform-specific events like analog stick events, or the shoulder buttons + 4 face buttons + d-pad, etc.


"Game' is rather vague. Is it handling the logic, or more than the logic? In my own code, my "game" class *owns* everything else. Does your class only do logic?"


It's everything related to the game: the start menu, the game itself, allocating sprites to render, creating input controllers for the game. I assume that's called game logic, but it might just be easier to say it's the game itself. Everything that's not a part of the rendering, physics, math, input, etc. APIs, since the game class uses the functionality provided by them.




"Why is the camera class part of the renderer? Why are sprites part of the renderer?


Certainly the renderer needs to know about them, but both the camera and the sprites should be passed in by function call during drawing. It sounds like Renderer does a lot more than just rendering. Alternatively, you can SetCurrentCamera() and let the renderer hold a copy or pointer to the camera, but not own it. Who knows how many cameras you might want? Maybe the game is split-screened. Maybe the game's GUI is drawn with a different camera. Maybe the minimap is drawn with a different camera. Renderer shouldn't have a fixed number of cameras, and most likely shouldn't own the cameras.

In fact, you don't even want the screen to be owned by the renderer, because occasionally you want to pass in an off-screen texture for rendering using the same code paths – for example:

draw the world to (screen)
draw the world to (minimapTexture, zoomed out)
draw minimapTexture to (screen's upper-left corner)"



Here I'll show the code for what the renderer looks like; I think it will give a better understanding of what's going on.

class SDLRenderer : public IRenderer
{
	std::shared_ptr<Camera> m_pCurrentCamera;
	std::list<Sprite> m_SpriteList;

public:
	Sprite* CreateSprite();
	void DestroySprite(Sprite* pSprite);

	void SetCamera(std::shared_ptr<Camera> pNewCamera);
	std::shared_ptr<Camera> GetCamera();

	virtual void Update();
};

The way I have it now is sort of like how Box2D does it with their world class and their body class: you only get pointers to the body class because the bodies are allocated into some kind of list for optimization reasons. It ensures I can create a container that will enhance performance, and it lets me loop through all sprites when I render them internally in the SDLRenderer. This way I know for sure that all visible sprites in the world will be rendered, and I won't miss any because I forgot to add them to the SDLRenderer. I modified this class a lot going from singleton to non-singleton, and I don't know a way to get it right. This version actually doesn't compile right now, because I store the Sprites in a list but the Sprite constructor and destructor are private, to ensure you only create Sprites via the SDLRenderer methods. Unfortunately, std::list requires an accessible constructor/destructor for the type it's containing, so it's still under construction, I guess. In this case I should probably modify the renderer to not own the window but rather a surface or texture which gets passed to the app. But if I wanted split screen, I guess I'd need two renderers, and I see how that conflicts, since each holds a list of sprites. So should I have some container class which holds all renderables and gets passed to the renderers for drawing every frame?
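
The container-class idea at the end of that paragraph could look something like this: the scene owns the sprite list, and any number of renderers read it through their own cameras, so split screen is just two renderers sharing one scene. Names here are illustrative assumptions, not the project's real classes:

```cpp
#include <vector>

struct Camera { int offsetX; };
struct Sprite { int x; };

// The scene owns the renderables; renderers only read them.
class Scene {
    std::vector<Sprite> m_Sprites;
public:
    void Add(Sprite s) { m_Sprites.push_back(s); }
    const std::vector<Sprite>& Sprites() const { return m_Sprites; }
};

// Each renderer draws the same scene through its own camera.
class Renderer {
public:
    std::vector<int> screenXs; // stands in for actual draw calls
    void Render(const Scene& scene, const Camera& cam) {
        screenXs.clear();
        for (const Sprite& s : scene.Sprites())
            screenXs.push_back(s.x - cam.offsetX); // world -> screen space
    }
};
```

Because neither renderer owns the sprites, nothing conflicts: both walk the same list, each with its own camera transform.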


"Why not? Some classes do have actual dependencies."


I guess, but as with the input manager, it's far more convenient, if possible, to have the sprites just add themselves to their manager, so that I don't need to worry about forgetting to do it on my own.


"Why does the Camera know about the renderer? Cameras don't do renderering. The renderer should know about the Camera, though, because it requires the camera to draw things in the correct places."


Well, in my case it only knows about the renderer to call the makeCurrent() method, so that the camera is made the current camera being used for rendering.


"Ah, so it's the Application class that is the owner of everything. Except the singletons (but why not? they don't need to be singletons)."


I guess they don't but like said above, it makes it easier for the InputControllers to know about the InputManager so that they automatically add themselves to it upon creation.


"Now hold on, you're never (99.99% of the time) going to want to have different types of renderers running at the same time. You aren't going to have OpenGL and DirectX drawing at the same time. As far as abstracting platforms go, that happens either through a factory (if you must), or more likely, at compile-time with #ifdefs.

The renderer should present the same interface regardless of platform, but the guts of the renderer should change depending on the platform (though, SDL handles all this for you, even on mobile platforms, so you don't really need to reinvent the wheel).

Ultimately, your highest priority is finishing the project and making it fun and polished. A perfectly-coded application that never gets finished is worse than a poorly-coded application that does get finished. But clean code *is* important, so it's good to think about it and work towards it, just not at the expense of dragging on the development time for all eternity."


Yeah, I understand that. I didn't mean there are multiple types of renderers active at the same time, just that there are different types of renderers for different hardware. As for the polishing-the-game part, that's basically why I made a game first and then decided to clean up the code a bit. My main goal was to have one project with the shared code, and then individual projects which add it as a dependency and make their own games out of it. I didn't want to copy and paste a ton of code over and over, but before I did that, I wanted to clean out some stuff, since it seemed misplaced.



To Irlan:


"The class that manages shared sprites should use the "renderer" to buffer the loaded data to the VRAM.

A camera is part of a scene or the game state and holds information such as matrices, and since the game and the scene are both part of the engine, so is the camera."


Hmm okay so I should create a scene class then which the renderer takes as a parameter for its update or draw method.


"Instead of using singletons, use static member functions and variables, because they are accessed only through the scope of the class and have their behaviour well defined."


Doesn't that still require a global to exist, except instead of having it public you keep it private and expose functionality via static methods? That's still a singleton, though, right?
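
To pin down what I mean, here are the two variants side by side in a minimal sketch (illustrative names only). Both end up with exactly one global instance of the state; the difference is whether there is an object you can get a pointer to:

```cpp
// Variant 1: classic singleton -- one lazily created instance, reached
// through GetInstance().
class SingletonInput {
    SingletonInput() {}
public:
    static SingletonInput& GetInstance() {
        static SingletonInput instance; // created on first use
        return instance;
    }
    int controllers = 0;
};

// Variant 2: "static class" -- no instance at all, just private static
// state behind static member functions. Functionally it's still a single
// global instance, which is the point of my question above.
class StaticInput {
    static int s_Controllers;
public:
    static void AddController() { ++s_Controllers; }
    static int Count() { return s_Controllers; }
};
int StaticInput::s_Controllers = 0;
```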

"The camera never changes the state of a renderer. In your case, the game state should manage the camera switch. Enumerate the cameras in the game state and perform the camera switch. Even better, let the map hold the camera and do something internally. But the map would still be holding generic information about the game, so change your strategy."


I was always confused about what people mean by the state of the renderer. Would that be like culling and rendering? For 2D graphics there only seem to be those two states, unless I'm misunderstanding it. Also, can you elaborate on the meaning of game state? I thought that was just title screen, pause menu, quitting, and running. In my case the game does handle the switching of the camera, but I just didn't want it to hold a pointer to the SDLRenderer, since that seemed unnecessary and better done via methods like makeCurrent in the camera class.

"If you're trying to make an engine non-complex, you're doing it wrong. A game engine is one of those pieces of software that has infinite ways to get more complex as you get better at it."

"I forgot to say about input.
Do you plan to use polling or interrupt methods? Since you're using SDL, I suppose it must be using polling internally.

If you want to switch to interrupt methods you'll need to drop SDL and code it yourself. Search for polling vs interrupts, and you'll see that interrupt methods apply better to asynchronous, almost-instant events. But it is up to you to make things more complex."


I've actually never heard of that so I'll give it a search. Thanks!!

In Topic: Uniform Buffer Confusion

28 October 2014 - 11:54 PM

Hey, sorry for such a late reply. Midterms came along and ate way more time than I thought they would. 


So using the shader script, you assume that the given parameters, block names, and whatnot match up with the required uniform buffers for a shader program. It also seems like your shader program stores a list of buffers that it requests from the Renderable and then binds them to the proper indices (which I'm probably going to do too; it seems to work really well). Okay, so I did some wrapping and implementation and got my GLShaderProgram class to support my GLUniformBuffer class. As shown before, my uniform buffer class stores a class called UniformBufferAttributes, which holds the name of the uniform buffer, the size of the uniform buffer, and the array of offsets. Originally, I also wanted GLShaderProgram to store a list of UniformBufferAttributes, which it would then use to compare against each uniform buffer that's being bound. That way, I could verify before binding whether or not I'm passing a properly encoded uniform buffer to the shader, instead of passing it and then waiting for an error.


I still think this process has some benefits, since the drivers might let the shader run even if it's missing some of the parameters. As an example, if I declare 3 vertex attribute locations in my shader but only have 2 in the actual data, and only 2 enabled using glEnableVertexAttribArray, the program still renders the cube correctly on my hardware. I'm assuming other driver implementations might throw an error, so I'm debating keeping UniformBufferAttributes just for proper error checking.


On the opposite side of the argument, storing the offsets to do comparisons each frame for each UBO is probably a lot of extra overhead just for error checking. So instead of having uniform buffer attributes, I might be better off keeping only the UBO name and letting OpenGL throw the error itself. I'm not sure whether it will consistently show up via glGetError if not enough parameters are given to the UBO, or whether it will just read zero values and run.


As for the shader scripting part of giving parameters to UBOs, I think I might go with having some kind of special uniform buffers. My idea now is that, via the shader script (which I'm assuming is just some XML file stored with a model we are loading), I can give init values for all my required parameters; however, certain UBOs will change every frame. An example is a UBO for orientation: it needs the orientation updated every frame, so we need a way of distinguishing it from other UBOs.


Lastly, about the example of switching lighting from Phong to Physically Based to Toon, I think in my implementation, it would make sense if the shader script had a uniform buffer specified for each case. So then when the shader requests a uniform buffer, it just pulls the Toon shader one vs the others since the names could be different. Then as a backup, there could be a default uniform buffer created by the GLShaderProgram if no toon shader buffer is provided.


I think I'm starting to get how the whole thing should work :D

In Topic: Uniform Buffer Confusion

20 October 2014 - 01:38 PM


Okay, so basically you avoid the whole problem of having to get a uniform buffer's index by making sure all shaders follow a naming convention for their uniform buffers. But what happens when I have a model that was being rendered by a certain shader program, and then something happens in the game so that it's now rendered by a different shader program? This new shader program takes in 4 uniform buffers while the previous one took in 3. Do you just assume that whatever switched the renderable from one shader program to the other knew it had to provide a new uniform buffer?

A shader script is part of a rendering pipeline. As such, a specific script has to do what the pipeline needs at the stage where the script is used, and the script can use whatever the stage (directly and indirectly) provides, but nothing more. Moreover, there is a correspondence between the script's functionality and the parameters provided by the uniform buffers. For example, if a game entity should be rendered using the Gooch NPR technique, the entity must provide the suitable parameters in its "material" block. Switching over to the Toon shader requires the material block to be switched, too.


In fact I don't try to avoid requesting indexes of uniform blocks (which can simply be requested once and stored for later use). Instead I define how scripts have to work in specific situations. So I know which names to use for the uniform blocks of scripts of a specific type. If a particular script can be replaced by another one where the latter uses an additional parameter block, then it means that the former script simply does without it because it doesn't need it although it could use it. In other words, I don't allow for arbitrary scripts because they would not fit into the bigger architecture.


Okay, I'm slowly starting to understand this. I haven't seen a shader script before, but from what I understand, you get the parameters for each uniform buffer from the shader script, which also describes how the uniform buffers must be set when you switch from one shader script to another. But then you need to specify how a shader script must populate its uniform buffers for all possible switches between shader scripts. So switching between toon/phong/physically-based shading requires each shader script to have info on how to set its uniform buffers when switching between any of the other types of shaders. Doesn't that kind of reliance on other shader scripts create a lot of potential for errors, or am I completely misunderstanding this?




I see; it still doesn't make sense to me how you would know whether the uniform buffers provided fulfill the shader program's requirements.



Personally, if the shader does not contain the specific uniform buffer name (using glGetUniformBlockIndex), I ignore it/send a warning to the console.


Moreover, my renderer is a deferred renderer so I more or less know exactly what kind of shaders I have at any given time.


Sorry, I should have rephrased it: what happens if you don't give a required uniform buffer to a shader because the object being rendered doesn't have it? In that case, you can't figure out whether one of the uniform buffers needed for the shader hasn't been given, since you can't query the names of all uniform buffers in a shader. It's weird that I can't query that, since I can do it for uniforms. Is there maybe a variant of glGetProgramiv that lets me get all the uniform buffer names in a shader program?
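
[Editor's note: for what it's worth, core OpenGL 3.1+ does expose this query. The sketch below assumes a valid GL context and an already-linked `program`; it cannot run standalone, but the calls themselves are standard:]

```cpp
// Enumerate every active uniform block in a linked program (OpenGL 3.1+).
GLint blockCount = 0;
glGetProgramiv(program, GL_ACTIVE_UNIFORM_BLOCKS, &blockCount);

for (GLint i = 0; i < blockCount; ++i) {
    char name[256];
    GLsizei length = 0;
    glGetActiveUniformBlockName(program, i, sizeof(name), &length, name);

    GLint dataSize = 0;
    glGetActiveUniformBlockiv(program, i, GL_UNIFORM_BLOCK_DATA_SIZE, &dataSize);

    // name/dataSize can now be checked against the UBOs the renderable
    // actually supplies, before any draw call is issued.
}
```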