

Member Since 20 Aug 2012
Offline Last Active Aug 25 2014 09:05 PM

Topics I've Started

Game Actor and System Architecture

01 July 2014 - 10:10 PM

Hey guys, I've been looking through some books and online posts on the topic of game engine architecture and how actors factor into it. A big influence was this thread: http://www.gamedev.net/topic/617256-component-entity-model-without-rtti/page-2. The way I understand it is like this:


Actor Component: Defines some relatively independent data that represents one isolated attribute of a larger entity. For example, an actor for a gun might have a component for the gun's model, a component for the amount of ammo, and a component for the gun's damage properties.


Actor: A list of actor components.


System: Runs the game logic; each system has a list of actors on which it operates. For example, the physics system has a list of actors with physics objects, which it uses to check for collisions, and it notifies the actors and their components when a collision happens.


This is where things get kind of shady. A system is supposed to carry out game logic, but it doesn't make sense for all the game logic to be done in a system. Using the physics system example, it makes sense for the system to find collisions, but a collision doesn't always mean calculating the reflection of both objects. Sometimes I might be colliding with ammo, in which case I should pick it up instead. Logic like that doesn't belong in the system but rather in the actor and its components.
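One common way to split that responsibility (a sketch, not the only design; `VOnCollision`, `NotifyCollision`, and `AmmoPickupComponent` are names I'm assuming, not from the thread) is to have the physics system only detect contacts and forward them to the actor, which fans the event out to its components; each component decides what the contact means for it:

```cpp
#include <memory>
#include <vector>

class Actor; // forward declaration so components can reference other actors

class IActorComponent
{
public:
    virtual ~IActorComponent () {}
    // Called when the owning actor touches another actor. The default does
    // nothing, so only components that care about collisions override it.
    virtual void VOnCollision (Actor& other) {}
};

class Actor
{
public:
    std::vector <std::shared_ptr <IActorComponent>> m_ActorComponents;

    // The physics system calls this after detecting a contact; the actor
    // forwards the event to every component it owns.
    void NotifyCollision (Actor& other)
    {
        for (auto& component : m_ActorComponents)
            component->VOnCollision (other);
    }
};

// Example response: an ammo pickup flags itself as collected on contact,
// without the physics system ever knowing what "ammo" is.
class AmmoPickupComponent : public IActorComponent
{
public:
    bool pickedUp = false;
    void VOnCollision (Actor&) override { pickedUp = true; }
};
```

This keeps the physics system generic: it only knows "these two actors touched", and the semantics live in the components.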


This works nicely, but it makes defining the components a bit more iffy. If the ammo actor is supposed to have some way of reacting to a collision, how does the physics system know which component to look for? There might only be one component type for a physics collision model, which could describe the collision model for the ammo, but that same component could be used for a rigid body on another actor, which should react to a collision according to physics laws.


So the way I understand it, here is how it roughly looks right now:

class Actor
{
    // held by pointer: IActorComponent is abstract, so it cannot be
    // stored in the vector by value
    std::vector <std::shared_ptr <IActorComponent>> m_ActorComponents;
};

class IActorComponent
{
public:
    // will be overridden and will have some new properties
    virtual ~IActorComponent () {}
    virtual bool VInit () = 0;
    virtual bool VDestroy () = 0;
};

class ISystem
{
public:
    virtual ~ISystem () {}
    virtual void VInit () = 0;
    virtual void VUpdate (unsigned int deltaMs) = 0;
    virtual void VDestroy () = 0;
};

And here is an implementation:

class CollisionModelComponent : public IActorComponent
{
    std::vector <Vertices> m_VertexArray;
};

class PhysicsSystem : public ISystem
{
    std::list <std::shared_ptr <Actor>> m_Actors;

    virtual void VUpdate (unsigned int deltaMs)
    {
        // for every actor:
        //     if the actor collided:
        //         What do we look for here? How do we know whether to run
        //         the ammo collision response or the rigid body response?
    }
};

You could make a collision response actor component which tells the physics system how to respond to a collision, but then you have an issue where the ammo collision response has to have access to the ammo component.


In my code, the actors are created from XML files, and each actor is created the same way through a factory class. In it, I loop through all the nodes of an XML file and apply the properties to the component at hand. All components override the virtual VInit function, which takes no parameters. If I wanted to create a dependency between the ammo component and the collision response component, I would need to somehow pass the ammo instance to the collision response through the init, but not all components need a dependency, so it doesn't make sense for VInit to pass a pointer to some actor component by default. There could also be cases with multiple dependencies, which complicates the process.
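One way to wire such dependencies without changing VInit's signature (a sketch under assumptions: `VPostInit`, `GetComponent`, `m_pOwner`, and the component names are mine, not from your engine) is a second pass that runs after all of an actor's components exist. Each component keeps a pointer to its owning actor, and dependent components look up their siblings by id during that pass:

```cpp
#include <map>
#include <memory>
#include <string>

class Actor; // forward declaration for the back-pointer

class IActorComponent
{
public:
    virtual ~IActorComponent () {}
    virtual bool VInit () { return true; } // XML properties applied here
    virtual void VPostInit () {}           // dependencies resolved here
    Actor* m_pOwner = nullptr;             // set when added to an actor
};

class Actor
{
    std::map <std::string, std::shared_ptr <IActorComponent>> m_Components;
public:
    void AddComponent (const std::string& id, std::shared_ptr <IActorComponent> c)
    {
        c->m_pOwner = this;
        m_Components[id] = c;
    }
    std::shared_ptr <IActorComponent> GetComponent (const std::string& id)
    {
        auto it = m_Components.find (id);
        return it == m_Components.end () ? nullptr : it->second;
    }
    // Called by the factory once every component has been created and VInit'd,
    // so lookups cannot fail just because of creation order.
    void PostInit ()
    {
        for (auto& kv : m_Components)
            kv.second->VPostInit ();
    }
};

class AmmoComponent : public IActorComponent
{
public:
    int rounds = 30;
};

class CollisionResponseComponent : public IActorComponent
{
public:
    std::shared_ptr <AmmoComponent> m_pAmmo; // filled in during VPostInit
    void VPostInit () override
    {
        m_pAmmo = std::static_pointer_cast <AmmoComponent> (
            m_pOwner->GetComponent ("Ammo"));
    }
};
```

Components with no dependencies simply leave VPostInit empty, and multiple dependencies are just multiple lookups, so the factory never needs per-component special cases.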


Is there another way to do this, or some way to restructure or apply constraints in order to make this architecture work? It would be a really clean architecture if one were able to make everything separable. Any help?

Abstract Class Deriving Upwards

19 June 2014 - 10:00 PM

Hey guys,


I have a question regarding casting abstract classes to their derived counterparts. I'm trying to downcast from the base class to the derived class, but both objects are held in smart pointers. Here's an example:

#include <memory>
#include <string>

class Shape
{
    //std::string * type;
public:
    virtual int Area () = 0;
};

class Box : public Shape
{
    //char * m_pBuffer;
public:
    virtual int Area ();
};

int Box::Area ()
{
    return 2;
}

void test ()
{
    std::shared_ptr <Shape> pShape (new Box ());
    std::shared_ptr <Box> pBox = static_cast <std::shared_ptr <Box>> (pShape); // error: no conversion
    std::shared_ptr <Box> pSecondBox = pShape;                                 // error: no conversion

    Shape * pStrongShape = new Box ();
    Box * pStrongBox = static_cast <Box*> (pStrongShape); // works
}
In the test function, at the beginning, I'm trying to downcast from Shape to Box using static_cast, but I get an error telling me there is no suitable conversion between the two. I can't use dynamic_cast either, since it tells me that for a dynamic cast the type must be a pointer. What would be the proper way of achieving this with smart pointers? Shouldn't static casts do the conversion for me? Or am I forced to convert between smart pointers and raw pointers in order for this to work?
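For reference, `<memory>` has cast helpers made for exactly this: `std::static_pointer_cast` and `std::dynamic_pointer_cast`. They return a new `shared_ptr<Derived>` that shares ownership with the original, so no round trip through raw pointers is needed. A minimal sketch with the same Shape/Box classes (the `AsBox` helper is mine, for illustration):

```cpp
#include <memory>

class Shape
{
public:
    virtual ~Shape () {}
    virtual int Area () = 0;
};

class Box : public Shape
{
public:
    virtual int Area () { return 2; }
};

std::shared_ptr <Box> AsBox (const std::shared_ptr <Shape>& pShape)
{
    // dynamic_pointer_cast yields an empty shared_ptr if the pointee is not
    // actually a Box; static_pointer_cast would skip the runtime check.
    return std::dynamic_pointer_cast <Box> (pShape);
}
```

Note that the result shares the reference count with the source pointer, which is why casting the raw pointer out of one `shared_ptr` and wrapping it in a second one would be wrong: the object would get deleted twice.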

File Loading Adds Weird Characters to the end

19 May 2014 - 04:13 PM

Hello everyone, I have some code that loads a text file into a char* buffer, and for the most part it works! The issue is that it adds some weird extra characters to the end. Here is an example:




In the example above, the text file contains the word "WORKS!", and the char buffer allocated is sized from the file, which in this case is new char[6].


Here is the code:

bool DefaultResourceLoader::VLoadResource (std::shared_ptr <ResourceHandle> pResourceHandle)
{
    std::ifstream loadedFile;
    loadedFile.open (pResourceHandle->GetResourceName ().c_str (), std::ios::in | std::ios::binary);
    loadedFile.read (pResourceHandle->GetWritableResourceBuffer (), pResourceHandle->GetFileSize ());
    if ( loadedFile.fail () )
        return false;
    return true;
}

GetResourceName returns the file name as a string. GetWritableResourceBuffer returns the char* buffer, which is the size of the file, in this case new char[6]. GetFileSize returns the file size, so 6 for this example.


Is there any reason why those characters get printed? I'm using Unicode in my VS2010 settings, and I tried casting the buffer to char* afterwards, and also creating a string from it and printing that. I tried both printf and cout, but both result in the same thing.
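A likely culprit (my assumption, based on the sizes described above): the buffer holds exactly the file's 6 bytes and no '\0' terminator, so printing it as a C string keeps reading past the end until it happens to hit a zero byte, and the bytes found there show up as garbage. Either allocate fileSize + 1 bytes and terminate, or print with an explicit length; a sketch of the latter (`BufferToString` is an illustrative helper, not from the code above):

```cpp
#include <cstddef>
#include <string>

// Build a string from exactly fileSize bytes; std::string carries its own
// length, so no terminator in the source buffer is required.
std::string BufferToString (const char* buffer, std::size_t fileSize)
{
    return std::string (buffer, fileSize); // copies exactly fileSize chars
}

// With printf, a length-limited format achieves the same:
//     printf ("%.*s\n", (int)fileSize, buffer);
```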

SDL 2.0 and String Constructor Errors

19 April 2014 - 07:39 PM

Hey guys, so I came across some weird errors. I had a class that kept calling its destructor right after the constructor, and I'm not sure why. I ended up reducing it to this:

class test
{
    glm::vec2 screen;
    GLuint VBO;
    SDL_Window * win;
    SDL_GLContext g;

public:
    test (std::string Name);
    ~test ();
};

test::test (std::string Name)
{
}

This constructor does absolutely nothing, yet whenever I call it, my destructor is called right after. What's weird is that if I don't have the string as a parameter (other data types are fine), the destructor isn't called. Does anyone know why this happens specifically with strings?


The last question I have is that when I make my constructor look like this, it crashes:

test::test (std::string _Name, unsigned int screenx, unsigned int screeny) :
    Name (_Name),
    screen (glm::vec2 (screenx, screeny)),
    win (SDL_CreateWindow (Name.c_str (), 100, 100, screen.x, screen.y, SDL_WINDOW_SHOWN | SDL_WINDOW_OPENGL)),
    g (SDL_GL_CreateContext (win))
{
}

Assuming I added a new member of type std::string called Name (which I did), why isn't it initialized properly in the initializer list? The only reason it crashes is that I use Name.c_str() in SDL_CreateWindow, which leads me to believe Name is in an undefined state, but why should it be if I set it from the parameter? (BTW, the only difference between this and the version above is Name(_Name) and the Name.c_str() in SDL_CreateWindow.) Also, I'm using VS2010, if that makes a difference.
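One rule worth checking here: members are constructed in the order they are declared in the class, regardless of the order in the initializer list. If Name is declared after win in the class body, win's initializer runs first and Name.c_str() points into a not-yet-constructed string, which matches the crash described. A minimal sketch with plain C++ stand-ins (no SDL; the `Window` type and its members are illustrative only):

```cpp
#include <string>

struct Window
{
    std::string Name;     // declared FIRST, so it is initialized first
    const char* titlePtr; // stands in for SDL_CreateWindow(Name.c_str(), ...)

    Window (const std::string& n)
        : Name (n),
          titlePtr (Name.c_str ()) // safe: Name is already constructed here
    {
    }
};
```

If the declarations were swapped (titlePtr before Name), the titlePtr initializer would read an unconstructed string even though Name(_Name) appears first in the list. GCC and Clang can flag this mismatch with -Wreorder.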

Implementing Meagher's Octree Renderer

15 April 2014 - 10:45 PM

I have a question about Donald J. Meagher's paper called Efficient Synthetic Image Generation of Arbitrary 3-D Objects. For those who don't know, the paper describes a method of rendering octrees of voxels, written back in the days when floats were too costly to use on CPUs. The algorithm pretty much goes like this:


1. Recurse through your octree in front-to-back order

2. If the octree node is visible, project it onto a quadtree of the screen

3. Find which quadtree nodes are intersected by the projected node

4. If that spot of the screen is free, draw the node


Parts 3 and 4 are where it gets confusing for me. He mentions some sort of overlay algorithm where you make 4 bigger bounding boxes around the bound of the projected node and use those to test against the quadtree. He also tests, I guess, the remaining nodes against the bound of the projected node, as well as against lines created from the silhouette edges of the projected node, for intersection.


To me, a lot of this seems redundant. For example, what's the purpose of the overlay algorithm if he already checks the bound of the projected node against the quadtree nodes? Also, how does he get the line formula without doing division? Isn't a division needed for the slope?
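On the division question: one standard division-free approach (my reading of how such rasterizers avoid slope math, not a verbatim quote of the paper) is to replace y = m*x + b with the implicit edge function E(x, y) = (y1 - y0)*(x - x0) - (x1 - x0)*(y - y0), which needs only integer multiplies and adds. Its sign says which side of the directed line from (x0, y0) to (x1, y1) a point is on, and stepping one pixel in x or y changes E by a constant, so it can also be updated incrementally:

```cpp
// Integer side-of-line test: no slope, no division. Positive and negative
// results are the two half-planes; zero means the point is on the line.
// long long avoids overflow for large screen coordinates.
long long EdgeSide (int x0, int y0, int x1, int y1, int px, int py)
{
    return (long long)(y1 - y0) * (px - x0)
         - (long long)(x1 - x0) * (py - y0);
}
```

Testing a quadtree cell then reduces to evaluating E at the cell's corners, all in integer arithmetic.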


The way I thought about implementing it was also to use the bound of the projected node, which would give you something like this:




At this point, you would recurse through the quadtree until you got a list of all the nodes intersecting the bound of your projected node:




Here's where I'm sort of lost as to what I should do. Although I have a list of all the nodes intersecting the bound of the projected node, they don't always encapsulate the actual projected node, as shown by this picture:




So do I do the same thing he does, making line formulas for each edge of the faces and then checking whether the node is on the other side of the line? Won't this be computationally heavy just to figure out where a shape sits in the screen quadtree? If I intersect 12 nodes, like in my drawn case, I would have to do 144 line checks (4 lines per face * 3 faces * 12 nodes), plus more for however many times I subdivide to get down to the pixel level. I could test all of the nodes against the 6 silhouette edges of the projected node, but then I wouldn't know which pixels correspond to which face of the octant, which matters if each octant face has a different color.


I was also wondering if there is a way of figuring out which faces are not on screen without projecting them all. I know it should be possible, since there are only a few cases (either 3 or 2 faces actually end up projected). Or can this be done with some kind of distance test?


And if someone understands how the paper describes doing this, that would be a great help, especially the overlay algorithm he uses.