

Member Since 30 Jun 2002
Offline Last Active Nov 13 2013 10:38 AM

Topics I've Started

Game Entity Organization

26 November 2012 - 08:24 AM

I am returning to a bit of game development after about five years away. Back then, I was lucky enough to get a job without having a portfolio of samples; however, I always started something and never really saw it through. Now that I am an experienced developer (albeit with the bulk of my career spent in C#), I decided I would try to complete something fairly simple. The first thing I am creating is Pong, probably the simplest thing I could think of. I am using C++ and DirectX and trying to write much of it myself: while I am relying heavily on STL and Boost, I'm not using middleware for physics, etc.

Enough setup. The question I have is around organizing two types of entities: drawable entities and collidable entities. I have a renderer interface, IRenderer, and an implementation called DXRenderer, which is for DirectX 10.1. This isn't for runtime late-binding purposes (the projects in which they reside are static libs, not DLLs) but for separation of concerns. My Game class (from which I have a Pong subclass) accepts an IRenderer* and registers IDrawable*s with it. Per frame, IRenderer iterates through all of its IDrawables and, if they are not hidden, calls IDrawable::Draw(IRenderer* pRenderer), passing itself in.

Now, this achieves a fairly pleasant separation of concerns, but I'm a bit worried about the amount of work I've just created for myself. So far, I have only two IDrawable implementations: StringDrawable and RectangleDrawable. IRenderer thus has two methods corresponding to these: IRenderer::DrawFilledRectangle(...) and IRenderer::DrawString(...). The two issues here are: having to add a new IRenderer method for almost every IDrawable implementation, and having to open up each method sufficiently to support a plethora of rendering options (so far, you can't even choose what color to render the rectangle or string; it's just a black brush from D2D).
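
One hedged way out of the method-per-drawable problem (a sketch only; `DrawOptions` and its fields are hypothetical names, not from the code below) is to fold rendering options into a single struct with sensible defaults, so each primitive keeps one signature as options grow:

```cpp
#include <cstdint>
#include <string>

// Hypothetical sketch: bundle rendering options into one struct so each
// IRenderer primitive takes a single extensible parameter instead of
// growing a new overload per option.
struct DrawOptions
{
    std::uint32_t color;      // packed ARGB; defaults to opaque black
    float         thickness;  // outline width; <= 0 means filled

    DrawOptions() : color(0xFF000000), thickness(0.0f) {}
};

class IRenderer
{
public:
    virtual ~IRenderer() {}
    virtual void FilledRectangle(float left, float top,
                                 float width, float height,
                                 const DrawOptions& options = DrawOptions()) = 0;
    virtual void String(float x, float y, const std::string& text,
                        const DrawOptions& options = DrawOptions()) = 0;
};
```

New options (brush style, font, and so on) then become new defaulted fields rather than new overloads on every IRenderer implementation.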

The second issue is pretty similar, in that it's about me painting myself into a bit of a designerly corner. The Game class has a CollisionDetector, into which games can register ICollidable objects. Per frame, the CollisionDetector iterates through all of the moving objects and checks whether they have collided with something (i.e. it iterates through everything else per object; this feels a wee bit heavy-handed, but it is in no way slowing things down so far, still taking <1ms per frame). ICollidables only support bounding Rectangles at the moment, so one issue is how I would support multiple types of bounding shape (for Pong I only really need rectangles and circles, but still...). The next issue is how I go about responding to a collision. CollisionDetector retrieves the bounding rectangles of two ICollidables and finds their intersecting Rectangle, if one exists. I then pass the primary object the intersecting rectangle and a pointer to the ICollidable that it hit.
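
For the multiple-bounding-shape question, the one extra test Pong actually needs is circle versus axis-aligned rectangle. A minimal sketch (the `Circle` and `Rect` structs here are hypothetical stand-ins for Math::Rectangle<float> and a new circle type):

```cpp
#include <algorithm>

// Hypothetical sketch: circle vs axis-aligned rectangle overlap. Clamp
// the circle's centre to the rectangle to find the nearest point, then
// compare the squared distance against the squared radius.
struct Circle { float x, y, radius; };
struct Rect   { float left, top, right, bottom; };

bool Intersects(const Circle& c, const Rect& r)
{
    // Closest point on the rectangle to the circle's centre.
    const float nearestX = std::max(r.left, std::min(c.x, r.right));
    const float nearestY = std::max(r.top,  std::min(c.y, r.bottom));

    const float dx = c.x - nearestX;
    const float dy = c.y - nearestY;
    return (dx * dx + dy * dy) <= (c.radius * c.radius);
}
```

A double-dispatch (or simple shape-type switch) inside CollisionDetector can then route each ICollidable pair to the right test without the interface needing to know every shape.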

If we take the example of the player's paddle moving vertically in the playing area, I have created two Wall objects: _floor and _ceiling. These are ICollidables. Currently, if Paddle::Collided(...) is called and the top of the intersecting rectangle is the same as the top of the Paddle, then I "know" we're in the _ceiling and need to move down the same distance as the intersecting rectangle's height. Conversely, if Paddle::Collided(...) is called and the bottom of the intersection is equal to the bottom of the Paddle, then I "know" we're in the _floor and need to move the Paddle up the same distance as the intersecting rectangle's height. Clearly, this is not going to generalize to many more scenarios. Eventually, I'm going to have to pass more context into the Collided(...) method, or request it from ICollidables.

Also, I'm not too happy about doing Euler integration on the velocity to get the position of the paddle, but something like RK4 is going to be massive overkill until I can justify it. I'm doing this on evenings and at weekends in between my myriad responsibilities, so each change I make can't really take more than an hour without my seeing some tangible results. However, is this kind of 'corrective' collision detection still the norm? I.e., after the Game's Update method is called, objects have already intersected, so enact some kind of restorative hack to make things look right. I far prefer the idea of predictively detecting collisions, but I guess that's also overkill for now. I do (eventually) wish to make more than just Pong with this framework, though, which is why I'm going to such pains to make things extensible and 'neat'.
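
On the integration point, semi-implicit (symplectic) Euler is a common middle ground: the same cost as explicit Euler, just updating velocity before position. A sketch with hypothetical `Body`/`Integrate` names:

```cpp
// Semi-implicit Euler sketch: update velocity first, then position with
// the *new* velocity. For constant-speed Pong objects it matches
// explicit Euler exactly, but it stays well behaved if acceleration is
// later added (e.g. ball speed-up), without reaching for RK4.
struct Body
{
    float position;
    float velocity;
};

void Integrate(Body& body, float acceleration, float dt)
{
    body.velocity += acceleration * dt;   // v(t + dt)
    body.position += body.velocity * dt;  // uses the updated velocity
}
```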

Thanks, and apologies for the rambling wall of text.

Edit: here's some code to illuminate what I'm talking about.

// Update loop in my Game base class
void Game::Loop(double time);

// Renderer iterates through IDrawable*s and asks them to draw themselves...
void IRenderer::RenderFrame(double time)
{
    std::for_each(this->_vpDrawables.cbegin(), this->_vpDrawables.cend(),
                  boost::bind(&DrawDrawable, _1, this, time));
}

void DrawDrawable(const IDrawable* pDrawable, IRenderer* pRenderer, double time)
{
    pDrawable->Draw(pRenderer, time);
}

// I need an implementation of IDrawable for everything that can be drawn. At the moment, all I can do is render black filled rectangles...
void RectangleDrawable::Draw(IRenderer* pRenderer, double time) const
{
    pRenderer->FilledRectangle(_position.Left, _position.Top,
                               _position.Right - _position.Left,
                               _position.Bottom - _position.Top);
}

// ...and black strings:
void StringDrawable::Draw(IRenderer* pRenderer, double time) const
{
    pRenderer->String(_position.X, _position.Y, _text);
}

// To use this in a game, I need to register something as drawable:
Pong::Pong(float width, float height, IRenderer* pRenderer) :
    _screenExtents(width, height),
    _playArea(width*.05f, height*.05f, width*.95f, height*.95f),
    _floor(Math::Rectangle<float>(.0f, height*.95f, width, height)),
    _ceiling(Math::Rectangle<float>(.0f, .0f, width, height*.05f)),
    _playerLeft(width*.05f, height*.05f),
    _playerRight(width*.95f, height*.05f)
{
}

// This does mean that Player is IDrawable, but it just delegates to its components, which will (in turn) delegate to either RectangleDrawable or StringDrawable, of which they are composed:
void Player::Draw(IRenderer* pRenderer, double time) const
{
    _paddle.Draw(pRenderer, time);
    _score.Draw(pRenderer, time);
}

Collision detection is much the same story:

// I'm sure this is going to be hugely inefficient at some point:
void CollisionDetector::TestForCollisions()
{
    for(CollidableCollection::iterator i(_vpCollidables.begin()); i != _vpCollidables.end(); ++i)
    {
        for(CollidableCollection::iterator j(_vpCollidables.begin()); j != _vpCollidables.end(); ++j)
        {
            if((*i) != (*j)) // don't test for collisions between ourselves
            {
                Rectangle<float> intersection;
                if((*i)->GetBounds().GetIntersect((*j)->GetBounds(), intersection))
                    (*i)->Collided((*j), intersection);
            }
        }
    }
}

// As we saw before, Player implements ICollidable, but again it delegates to components:
void Player::Collided(ICollidable* pCollidable, const Math::Rectangle<float>& intersection)
{
    _paddle.Collided(pCollidable, intersection);
}

// Paddle handles the collision, but this also feels like I'm doing only what will work for this specific case - I've tried to design a general solution but there is still too much knowledge that is apropos of nothing:
void Paddle::Collided(ICollidable* pCollidable, const Math::Rectangle<float>& intersection)
{
    const Rectangle<float>& position(_position.GetBounds());
    if(intersection.Top == position.Top)
        // we have intersected with the ceiling, move down...
        _position.Move(.0f, intersection.Bottom - intersection.Top);
    else if(intersection.Bottom == position.Bottom)
        // we have intersected with the floor, move up...
        _position.Move(.0f, intersection.Top - intersection.Bottom);
}
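
One way to generalise Paddle::Collided without baking in knowledge of specific walls is to derive a minimum translation vector (MTV) from the intersection rectangle the CollisionDetector already computes, pushing out along the axis of least penetration. A sketch (the `Rect` and `Vec2` types are hypothetical stand-ins; y increases downwards, as in the code above):

```cpp
// Hypothetical sketch: compute a minimum translation vector from the
// overlap rectangle, pushing the object out along the axis of least
// penetration instead of comparing edges against known walls.
struct Rect { float left, top, right, bottom; };
struct Vec2 { float x, y; };

// 'self' is the moving object's bounds; 'intersection' is the overlap
// rectangle the collision detector already found.
Vec2 MinimumTranslation(const Rect& self, const Rect& intersection)
{
    const float overlapX = intersection.right - intersection.left;
    const float overlapY = intersection.bottom - intersection.top;

    Vec2 mtv = {0.0f, 0.0f};
    if (overlapX < overlapY)
        // Overlap touches our left edge: push right; otherwise push left.
        mtv.x = (intersection.left == self.left) ? overlapX : -overlapX;
    else
        // Overlap touches our top edge: push down; otherwise push up.
        mtv.y = (intersection.top == self.top) ? overlapY : -overlapY;
    return mtv;
}
```

Collided(...) can then just call _position.Move(mtv.x, mtv.y), and the same response works for walls, paddles, and the ball alike.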

Rubik's Cube Expert System in C++

22 May 2007 - 09:57 AM

I have developed a graphical Rubik's cube renderer that I now hope to use as the front end to a Rubik's cube solver. The point would be to simulate X random twists of a Rubik's cube and present it to a solver expert system. This would then create a list of twists which would solve the cube, and the renderer would leisurely traverse this list, solving the puzzle. Of course, I realise I could probably just reverse the twists, but where's the fun in that? The problem I am having is converting Prolog-style solvers (such as this one - http://www.amzi.com/articles/rubik.htm) into C++. I have some, albeit limited, knowledge of Prolog from a module in the final year of my degree. However, I haven't quite figured out how to go about writing the C++ equivalent. My main assumption is that I will have to have an alternative data structure to represent the cube for the solver, in comparison to that for the renderer. Has anyone seen anything similar that my searches may not have uncovered?
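
On the separate-data-structure assumption, one common sketch (hypothetical names throughout, not from any particular solver) is to let the solver see the cube as 54 sticker colours, with each twist a fixed permutation of sticker indices; applying a move is then completely independent of the renderer's geometry:

```cpp
#include <array>

// Hypothetical sketch: solver-side cube state as 54 sticker colours
// (6 faces x 9 stickers), with a twist encoded as a permutation of
// sticker indices.
typedef std::array<int, 54> CubeState;

struct Move
{
    // to[i] = index that the sticker at position i moves to.
    std::array<int, 54> to;
};

CubeState Apply(const CubeState& state, const Move& move)
{
    CubeState result;
    for (int i = 0; i < 54; ++i)
        result[move.to[i]] = state[i];
    return result;
}
```

The Prolog rules would then translate to C++ functions (or a table of rule structs) that pattern-match on a CubeState and append Moves to the list the renderer replays.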

Rendering Q3 BSP

06 October 2006 - 01:35 AM

I'm writing a renderer for Q3 BSPs. Sadly, I'm off to a bad start. Loading in the BSP is fine; all my sanity checks pass. When I come to render it using D3D9, I just get a black screen. The clear colour is magenta, so I suppose it's positive that *something* is being drawn. I've tried to narrow down all the candidates that I can, and here's what I've got:

1) The FVF could be wrong. I've specified D3DFVF_POSITION | D3DFVF_NORMAL | D3DFVF_DIFFUSE | D3DFVF_TEX2. That's also the order in which I've declared the vertex, which is where it matters:

    struct cBspQuake3Vertex
    {
        float position[3];          // x, y, z
        float normal[3];            // x, y, z
        unsigned byte diffuse[4];   // r, g, b, a
        float texture[2][2];        // texU, texV, lightMapU, lightMapV
    };

2) Lighting is off (because the colour is stored in the vertex).
3) The Z buffer is enabled and is cleared each frame.
4) I've set the world, view and projection matrices. The view is set at the player spawn for Q3DM1, the map I'm using.
5) I've converted Quake's coords to DX's LH system (I just swap y and z in position, right?).
6) Cull mode is set to none.
7) I've dumped the whole vertex lump into a vertex buffer and I'm using the meshVerts lump as an index buffer. This is suspect to me - why does Q3 use ints for this when D3DFMT_INDEX32 isn't recommended? Is this where I'm going wrong?

Ok, I realise there are a good 1,000,000 things that could be causing this - can anyone think of the most obvious that I might have missed?

Cheers, ZdlR
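
Two details in that struct stand out: `unsigned byte` is not a C++ type, and D3D9 reads the diffuse colour as one packed 32-bit D3DCOLOR rather than four separate channel bytes. Note also that the D3D9 position flag is D3DFVF_XYZ; there is no D3DFVF_POSITION flag. A hedged sketch of the corrected layout:

```cpp
#include <cstdint>

// Sketch of a vertex layout matching
// D3DFVF_XYZ | D3DFVF_NORMAL | D3DFVF_DIFFUSE | D3DFVF_TEX2.
// 'unsigned byte' is not a C++ type; D3D9's diffuse is one packed
// 32-bit ARGB value (D3DCOLOR), declared here as std::uint32_t.
struct cBspQuake3Vertex
{
    float         position[3];   // x, y, z
    float         normal[3];     // x, y, z
    std::uint32_t diffuse;       // packed ARGB (D3DCOLOR)
    float         texture[2][2]; // texU/texV, lightMapU/lightMapV
};
```

The stride passed to SetStreamSource must match sizeof(cBspQuake3Vertex), which with this layout is 44 bytes.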

Noob STL queue question

21 August 2005 - 12:11 PM

Ok, I'm fairly proficient with C++ but I have limited experience with the STL. I'm writing something to a spec and I've been told I *must* use the STL queue to store some objects. So far, so good. However, am I right in thinking that I can't traverse the queue without destroying it!? How am I supposed to print it like the spec dictates? I'm confused, especially as it seems there's no std::queue<t>::iterator for me to play with. So, where I'm at now:

    std::cout << (*(incomingPackets.front()));
    std::cout << std::endl;

The second time that's called, it fails an assert after the first object is printed! [bawling] [Edited by - zdlr on August 21, 2005 6:34:47 PM]
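
The usual workaround here: std::queue deliberately exposes no iterators, so print a *copy* and consume that, leaving the real queue intact. A sketch (PrintQueue is a hypothetical helper, not from the spec):

```cpp
#include <iostream>
#include <queue>

// std::queue has no iterators by design, so take the queue BY VALUE:
// the function receives a throwaway copy, pops it empty while printing,
// and the caller's queue is untouched.
template <typename T>
void PrintQueue(std::queue<T> copy)
{
    while (!copy.empty())
    {
        std::cout << copy.front() << '\n';
        copy.pop();
    }
}
```

Alternatively, if the spec permits, std::queue's default underlying container is std::deque, which does provide iterators for traversal.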

Visual C++ 2005 Beta - can't set include paths

12 August 2005 - 09:58 AM

Erm, I could just be being incredibly dumb, but I can't seem to set my include paths in the Visual C++ .NET 2005 Beta. Anyone know what could be up? I kinda think this is mission critical to the application's use!