
Game Engine Layout



#1 Danicco   Members   -  Reputation: 449

Posted 14 July 2013 - 05:17 PM

I'm finishing up my first engine layout for OpenGL, and this is what I have so far:

 

//This has info that's needed to draw images in the shader

[Material]

.char[3] diffuse, specular, ambient; //rgb that will be used in the shader

.char* textureData; //the rgba data from an image

.char* normalData; //the normal map from an image

.int textureID; //the textureID that will be filled when it's loaded in the GPU

 

//A mesh is the data to draw an object on screen

[Mesh]

.float* vertexes;

.float* normals;

.float* uvs;

.int* vertexIndexes;

.Material material; //A mesh can only have a single material

.MeshType meshType; //this will be an enum that will specify which shader to use, if it's glass, water, or something that needs some specific shader effect

.int vboID; //the ID that will be filled when it's loaded in the GPU

 

//This resource object will contain all data about a single object, such as a "House"

[ResourceObject]

.string name; //a name to identify it, not sure if it'll be needed

.Mesh* meshes; //the resource object can have multiple meshes, such as a "door", "window", "walls", etc

 

//This is the class that will be responsible for loading and unloading objects from disk into RAM

[ResourceEngine]

.list<ResourceObject> listObjects; //a list of all objects that are currently loaded

.list<Material> listMaterials; //a list of all materials that are currently loaded

.LoadObject(int count, ...); //load X resources in the list, will use "string" to identify them

.LoadMaterial(int count, ...); //same as above, but for materials

.UnloadObject(int objectID); //unload an object from the list

.UnloadMaterial(int materialID); //unload a material from the list

 

//This is the class responsible for loading objects into the GPU and managing shaders

[GraphicEngine]

.list<Shader> listShaders; //a list of shaders, not sure if this will be needed

.LoadObject(ResourceObject* object); //load an object in the GPU, will fill the "vboID"

.LoadMaterial(Material* material); //load a material in the GPU, will fill the "textureID"

.Unload(int vboID); //unload a VBO

.Unload(int textureID); //unload a texture

.CompileShaders(string shaderCodes); //this will compile the shader, and fill the list and specify an ID to it, which an object will be able to call it if needed

 

//This class has functions specific to each OS that I'll be using, such as getting the current time or reading/writing files, and all other classes will access it

[OSFunctions]

.GetCurrentTime(); //returns the current time as a float

//Other functions that I'll be filling as the game requires

 

//This class will keep track of the game's ACTIONs and playerInput

[InputEngine]

.keyboard[256]; //For each key in the keyboard, has a bool to check if it's pressed or not

.mouse[8]; //Same for mouse, and mousewheel

//I'm thinking of moving these two to the OS class, since mobile devices have different controls and it'll depend on the game's target platform

.list<Actions> listActions; //A list of all actions in the game, loaded from a resourceFile as strings such as "PLAYER_MOVERIGHT"

.list<Actions> playerInput; //A list of all actions input by the player, with timestamps

 

//This is an actual on-screen object; it uses a ResourceObject's data and there can be multiple of them on screen

[GameObject]

.ResourceObject* resourceObject; //the data of this object

.float positionX, positionY, positionZ;

.float directionX, directionY, directionZ;

.float speedX, speedY, speedZ;

//other variables needed depending on the game

 

//This is the main class where most of the code will be

[GameEngine]

.SceneGraph sceneGraph; //basically a list of all GameObjects currently on the level

.Initialize(); //empty function that's called on startup, will be filled differently for each game

.Update(); //empty function that's called every frame

.Draw(); //function that gets a list of objects that "can be seen" from the sceneGraph, and draws them

 

So, my game would be something like this:

int WINAPI WinMain(HINSTANCE hInstance, HINSTANCE hPrevInstance, LPSTR lpCmdLine, int nCmdShow)
{
    OSFunctions.Initialize(); //this will create the window based on which OS I'm on
    GameEngine gameEngine;
    gameEngine.Initialize(); //this function will load all the stuff needed and set the default values for everything

    MSG msg;
    while(gameEngine.isRunning)
    {
        if(PeekMessage(&msg, NULL, 0, 0, PM_REMOVE))
        {
            TranslateMessage(&msg);
            DispatchMessage(&msg);
        }
        gameEngine.Update(); //the main update function
        gameEngine.Draw(); //and draw
    }
    return 0;
}
//The window's message proc goes here, with all input messages calling the InputEngine functions to update each key's state

And my GameEngine's Update() would have something like this:

void GameEngine::Update()
{
    switch(gameState)
    {
        case GameState::Startup_Loading:
        {
            //Loading from disk into the resource list
            ResourceEngine.LoadObject(2, "logo", "title");

            //Now filling the object's data before adding it to the sceneGraph
            GameObject* gameLogo = new GameObject();
            gameLogo->resourceObject = &listObjects[0];
            gameLogo->positionX = 10; //etc

            sceneGraph.add(gameLogo);

            gameState = GameState::Startup;
            break;
        }
        case GameState::Startup:
        {
            //displays the logo, counts a time, then changes the gameState to something else
            gameState = GameState::Title_Screen;
            break;
        }
        case GameState::MainGame_Loading:
        {
            listObjects.clear();
            ResourceEngine.LoadObject(1, "playerCharacter");

            GameObject* playerCharacter = new GameObject();
            playerCharacter->resourceObject = &listObjects[0];
            //fill positions and such

            gameState = GameState::MainGame;
            break;
        }
        case GameState::MainGame:
        {
            for(int i = 0; i < sceneGraph.GetVisibleObjectsCount(); i++)
            {
                //each object has its own update, which will check for input, change its own state and manage its own animations
                sceneGraph.GetVisibleObject(i)->Update();
            }
            break;
        }
    }
}

This is how I think it'll work: I'll code lots of GameObjects for each game, use enums for flags, and change them according to in-game events, among other things.

This is my first engine/game, and I'd like to ask for opinions, advice, and comments on how this can be done better, whether there's room for optimization, whether there's anything wrong, etc...

 

I have little to no experience with engines, so any input is highly appreciated!




#2 CRYP7IK   Members   -  Reputation: 796

Posted 14 July 2013 - 06:05 PM

I have a question: Is this supposed to be used in one game? If so, this engine seems fine, minus having no alpha, which might be on purpose.

 

However, if you then tried to adapt it to some other game, the GameEngine class leaves much to be desired, especially if the update function gets too large with too many different states. You can fix that with a slightly more advanced state machine to look after the states (which would then be a separate class), as sketched below.
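
For illustration, the core of such a state machine can be tiny (names here are hypothetical, just to show the shape of it):

    class IGameState {
    public:
        virtual ~IGameState() {}
        virtual void Update() = 0;  // Each state's per-frame logic lives here.
    };

    class CStateMachine {
    public:
        void ChangeState(IGameState* state) { m_current = state; }
        void Update() { if (m_current) m_current->Update(); }  // GameEngine::Update() just delegates.
    private:
        IGameState* m_current = nullptr;
    };

With that, GameEngine::Update() stays one line no matter how many states the game grows.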

 

I was writing up a StateMachine example class when I remembered the crazy number of GDNet articles; check these out:

 

http://www.gamedev.net/blog/812/entry-2256166-state-machines-in-games-%E2%80%93-part-1/

http://www.gamedev.net/blog/812/entry-2256167-state-machines-in-games-%E2%80%93-part-2/

http://www.gamedev.net/blog/812/entry-2256173-state-machines-in-games-%E2%80%93-part-3/

http://www.gamedev.net/blog/812/entry-2256179-state-machines-in-games-%E2%80%93-part-4/

http://www.gamedev.net/blog/812/entry-2256190-state-machines-in-games-%E2%80%93-part-5/

 

http://www.gamedev.net/page/resources/_/technical/game-programming/state-machines-in-games-r2982

 

Although, again, your engine is very nice, and if it works for your purpose, don't over-engineer it.

 

 

 


To accomplish great things we must first dream, then visualize, then plan...believe... act! - Alfred A. Montapert
Gold Coast Studio Manager, Lead Programmer and IT Admin at Valhalla Studios Bifrost.

#3 L. Spiro   Crossbones+   -  Reputation: 11939

Posted 14 July 2013 - 07:54 PM

//This has info that's needed to draw images in the shader
[Material]
.char[3] diffuse, specular, ambient; //rgb that will be used in the shader
.char* textureData; //the rgba data from an image
.char* normalData; //the normal map from an image
.int textureID; //the textureID that will be filled when it's loaded in the GPU

You would be better off just using a texture pointer directly, or better still a shared pointer to a texture. Textures may have IDs (and they very well should, to prevent redundant texture loads), but using that ID rather than a direct pointer wastes precious cycles, as you then need to find that texture via binary searches, hash tables, etc.
Additionally, colors should be floats, not chars, and especially not signed chars. They should also have 4 components, as that is what you will get inside the shader anyway. As far as the GPU is concerned, float[3] and float[4] consume the same space.
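
As a rough sketch of what that could look like (CTexture2D here is a hypothetical stand-in for whatever texture class your graphics module provides):

    #include <memory>

    class CTexture2D;  // Lives in the graphics module.

    struct Material {
        float diffuse[4];   // RGBA floats, matching what the shader receives.
        float specular[4];
        float ambient[4];
        std::shared_ptr<CTexture2D> texture;    // Direct pointer: no per-draw ID lookup.
        std::shared_ptr<CTexture2D> normalMap;  // May be null for objects without one.
    };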

 

//A mesh is the data to draw an object on screen
[Mesh]
.float* vertexes;
.float* normals;
.float* uvs;
.int* vertexIndexes;
.Material material; //A mesh can only have a single material
.MeshType meshType; //this will be an enum that will specify which shader to use, if it's glass, water, or something that needs some specific shader effect
.int vboID; //the ID that will be filled when it's loaded in the GPU

Separate the mesh from the low-level components that are used to render it.
A low-level class such as “CVertexBuffer” should have the VBO ID and the vertices, normals, UVs, etc.
Another low-level class such as “CIndexBuffer” obviously contains the index data.

A mesh is a higher-level class that brings these low-level classes together. The mesh itself should never touch the vertex buffer’s internal data except on loading, and if you do want to mess with the vertex data it should be done through an interface provided by CVertexBuffer.
CVertexBuffer should also not be hard-coded to have normals and UVs. The only thing a vertex buffer is guaranteed to have is vertex data. I would give you a pass since this is your first time, but think ahead to when you later want to add tangents and bitangents. This is fairly relevant since you already mentioned having normal-map textures. But not every object will have normal maps, so you clearly need to think carefully about a more dynamic vertex-buffer system.
And again, all of that dynamic mess will be entirely handled by the CVertexBuffer class. Its whole purpose is to make it easy for the rest of your engine to use vertex buffers. That is not the job of the mesh, so don’t mix responsibilities.
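
One possible shape for that dynamic system, sketched with made-up names: describe each attribute with a small descriptor and let CVertexBuffer build its GL state from the list.

    #include <cstdint>
    #include <utility>
    #include <vector>

    enum class VertexAttrib { Position, Normal, UV, Tangent, Bitangent };

    struct VertexAttribDesc {
        VertexAttrib semantic;  // What the data means.
        uint32_t components;    // 2 for UVs, 3 for positions/normals, etc.
        uint32_t offset;        // Byte offset within one interleaved vertex.
    };

    class CVertexBuffer {
    public:
        // Created from raw interleaved data plus a format description,
        // so meshes never hard-code normals, UVs, or tangents.
        CVertexBuffer(const void* data, uint32_t vertexCount, uint32_t stride,
                      std::vector<VertexAttribDesc> format)
            : m_format(std::move(format)), m_stride(stride), m_count(vertexCount) {
            (void)data;  // glGenBuffers()/glBufferData() would go here.
        }

    private:
        std::vector<VertexAttribDesc> m_format;
        uint32_t m_stride;
        uint32_t m_count;
    };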
 

//This resource object will contain all data about a single object, such as a "House"
[ResourceObject]
.string name; //a name to identify it, not sure if it'll be needed
.Mesh* meshes; //the resource object can have multiple meshes, such as a "door", "window", "walls", etc

You are somewhat on the right track now, but I wouldn’t call it a “ResourceObject”. That is a very sweeping term that could encompass any form of resource, including models, terrain, textures, BGM, etc.
What you have just described is only a model. A model is a collection of meshes, which is exactly what you have described here.

This hints at the idea that you believe every 3D thing you see on the screen is a mesh or a collection of meshes. They aren’t. Terrain, for example, will use a heightmap of some sort with an LOD method of your choice (I recommend GeoClipmaps), which makes its rendering pipeline almost entirely different from how simple things such as doors and walls are drawn. For that matter, interiors of buildings are also drawn entirely differently so that they can take advantage of PVS’s etc.

I am not saying you need to get so complex with your own engine, especially on your first go, but I want to make sure you understand that in reality, very little of what you see on the screen of any modern 3D game is really just a basic mesh. Those are usually just characters, ammunition/weapons, and various other props. Other things, such as terrain, vegetation, buildings (especially interiors), skyboxes, etc., are all made efficient by utilizing rendering pipelines unique to themselves.

 

//This is the class that will be responsible for loading and unloading objects from disk into RAM
[ResourceEngine]
.list<ResourceObject> listObjects; //a list of all objects that are currently loaded
.list<Material> listMaterials; //a list of all materials that are currently loaded
.LoadObject(int count, ...); //load X resources in the list, will use "string" to identify them
.LoadMaterial(int count, ...); //same as above, but for materials
.UnloadObject(int objectID); //unload an object from the list
.UnloadMaterial(int materialID); //unload a material from the list

I wouldn’t bring all of these things together into one mega-class. Again: Single Responsibility Principle.
Expanding on the concept I introduced on how diverse the types of objects in your game world ultimately could be, imagine trying to load and construct all of those objects in one place.
It will quickly become a major mess.

Firstly you need to familiarize yourself with Single Responsibility Principle.
In this case, let’s look at the potential diversity of the types of things that could go into your game world and pick separation points.
A model is so different from terrain that models and terrain could each be their own entirely separate modules (or libraries).
Of course they would both build on top of a common foundation (the graphics module/library, where CVertexBuffer and CShader live).
So let’s say that models are entirely their own module. In my engine it builds as its own .lib file, but in keeping things simple for your first time, let’s just say it’s a folder in your source tree. Anything under that folder has one job and one job only: Models.

When you break it down like this it starts to make sense that models should actually just load themselves. Models know what models are. They know what their binary format is. There is no reason to push the responsibility of the loading process onto any other module or class.

If you are worried that the model module is stepping out of its bounds by having direct access to disk, don’t give it direct access to disk. Give it a “stream”.
Now your model module/library is still just doing its job. It doesn’t know anything about hard drives, networks, or otherwise, yet via the abstract interface provided by streams it can actually load its own model data through any of them.
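
A minimal sketch of that stream idea (hypothetical names; a network stream would implement the same interface):

    #include <cstddef>
    #include <cstdio>

    // The model module sees only this interface, never the disk directly.
    class IStream {
    public:
        virtual ~IStream() {}
        virtual size_t Read(void* dst, size_t bytes) = 0;
    };

    class CFileStream : public IStream {
    public:
        explicit CFileStream(const char* path) : m_file(std::fopen(path, "rb")) {}
        ~CFileStream() { if (m_file) std::fclose(m_file); }
        size_t Read(void* dst, size_t bytes) override {
            return m_file ? std::fread(dst, 1, bytes, m_file) : 0;
        }
    private:
        std::FILE* m_file;
    };

    class CModel {
    public:
        bool LoadFromStream(IStream& stream) {
            (void)stream;  // Parse the model's own binary format here.
            return true;
        }
    };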

Because of the raw number of resource types, a “ResourceEngine” doesn’t really make sense, but it could find a nest in your design if you do it right. Just make sure you understand that a “resource engine” also has only one responsibility, and it is not the actual loading process for each resource. If anything, it only provides a high-level interface for loading, but passes off the actual loading process to each object it creates.
For example, if you ask it for a model, it will create a new model, prepare a file stream for the model, and let the model use that stream to load itself. It then destroys the model and returns NULL if loading failed, or it returns a shared pointer to that model.
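
Continuing the sketch above, the high-level loader then reduces to something like:

    #include <memory>

    // The "resource engine" only prepares the stream and delegates the
    // actual parsing to the model itself.
    std::shared_ptr<CModel> LoadModel(const char* path) {
        CFileStream stream(path);              // Could just as well be a network stream.
        auto model = std::make_shared<CModel>();
        if (!model->LoadFromStream(stream)) {
            return nullptr;                    // Loading failed; the model is destroyed.
        }
        return model;                          // Shared pointer, not an ID.
    }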

Don’t use IDs for everything; as I already mentioned, it wastes precious cycles.

 

//This is the class responsible for loading objects into the GPU and managing shaders
[GraphicEngine]
.list<Shader> listShaders; //a list of shaders, not sure if this will be needed
.LoadObject(ResourceObject* object); //load an object in the GPU, will fill the "vboID"
.LoadMaterial(Material* material); //load a material in the GPU, will fill the "textureID"
.Unload(int vboID); //unload a VBO
.Unload(int textureID); //unload a texture
.CompileShaders(string shaderCodes); //this will compile the shader, and fill the list and specify an ID to it, which an object will be able to call it if needed

The graphics engine should provide low-level classes such as CVertexBuffer, CTexture2D, CTextureCube, CIndexBuffer, CShader, etc.
And when I say “provide” I don’t mean that you request these things from the graphics engine, I mean that the graphics module is where these things all live and can be taken by any class at any time. That’s how the model and terrain modules can be totally separate from each other while still sharing a common graphics module.

How high-level a graphics engine can and should be is partly a matter of opinion, but everyone will agree that it needs to be flexible and be equipped to handle the very wide variety of things that it will be used to draw.
That is why it is best to keep the features it provides as low-level as possible.

So a graphics module should basically just be a collection of the low-level parts needed to perform any abstract rendering process. You don’t even need (but you can if you want) to have an interface for activating textures and vertex buffers. CVertexBuffer and CTexture2D can handle these tasks themselves.

The main problem you have here is that the graphics module knows what a “ResourceObject” (a model) is when it should be entirely the other way around.
Because of how different the pipelines are for rendering a standard model vs. GeoClipmap terrain, trying to make a monolithic graphics module that knows about both of these and tries to render them properly all by itself is not only a mess but a violation of Single Responsibility Principle.

Things will go much more smoothly if the graphics module simply provides a set of low-level classes and a small interface for setting render states etc., and then let the models and terrain use these classes directly and set states directly before rendering themselves, by themselves.

Of note, that does not mean you are completely off base by thinking about sending the graphics module a class/structure to let the graphics module do some work of its own, setting states, activating textures, setting vertex buffers, etc.
But you went too high-level with it.
The graphics module can receive a structure with state settings (blending mode, culling mode, etc.) and pointers to textures to activate, but that is a mid-level feature and only acts as a helper function.
In other words, it is not doing anything special that models and terrain could not do themselves manually. Setting blending modes and textures could still be done manually, and even if you let the graphics module do it for you, it’s still just using its own already-exposed low-level API for it.
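
Sketched with hypothetical names, that mid-level helper might be no more than:

    // A plain state block; the graphics module applies it using its own
    // already-exposed low-level API. Models and terrain could set the
    // same states manually; this only saves typing.
    struct RenderStateBlock {
        enum class Blend { Off, Alpha, Additive } blend = Blend::Off;
        enum class Cull  { None, Back, Front }    cull  = Cull::Back;
        bool depthTest = true;
    };

    class CGraphicsModule {
    public:
        void ApplyStates(const RenderStateBlock& states) {
            (void)states;  // Internally just glEnable(GL_BLEND), glCullFace(), etc.
        }
    };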

 

.GetCurrentTime(); //returns the current time as a float

It most certainly does not.
I already wrote a sub-chapter about this for my upcoming book, but I don’t have it with me right now.
I will post it here when I get home.

 

//This class will keep track of the game's ACTIONs and playerInput
[InputEngine]
.keyboard[256]; //For each key in the keyboard, has a bool to check if it's pressed or not
.mouse[8]; //Same for mouse, and mousewheel
//I'm thinking of moving these two to the OS class, since mobile devices have different controls and it'll depend on the game's target platform
.list<Actions> listActions; //A list of all actions in the game, loaded from a resourceFile as strings such as "PLAYER_MOVERIGHT"
.list<Actions> playerInput; //A list of all actions input by the player, with timestamps

Handling input is much more complex than this, but I don’t think you would be able to handle its full intricacies on your first attempt.
This is one of those cases where I can only say, “You’re doing it wrong,” but will let you learn from personal experience. You will need that experience to understand why this is wrong even if I told you, and, besides, it would require a major restructure of your engine.
But I mention this is wrong not to just point and laugh, but to give you a heads-up on one potential place where you could do a lot better next time. If I said nothing at all you might be completely clueless on this part for a few iterations of your engine.

 

//This is an actual on-screen object; it uses a ResourceObject's data and there can be multiple of them on screen
[GameObject]
.ResourceObject* resourceObject; //the data of this object
.float positionX, positionY, positionZ;
.float directionX, directionY, directionZ;
.float speedX, speedY, speedZ;
//other variables needed depending on the game

For your first time, I am not going to nit-pick here, but my recommendations would be:
#1: Use vectors, not arrays of floats. Adding velocities and positions becomes a lot easier when you use overloaded CVector operators, and it just makes your code easier to maintain (especially in avoiding copy-paste errors and accidentally increasing X, Y, and Y, instead of X, Y, and Z). A quick sketch follows this list.
#2: You have a chance to practice Single Responsibility Principle here. Every type of entity in your game, from the characters to the guns to the camera itself, will have a position, scale, and rotation. Sounds as though it is a good responsibility for a COrientation class. Another thing I have written for the book, which I can post when I get home, is just such a class.
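
To illustrate recommendation #1, here is roughly how a minimal CVector3 collapses the per-frame integration (a sketch, not the book’s class):

    struct CVector3 {
        float x, y, z;
        CVector3 operator+(const CVector3 &o) const { return CVector3{ x + o.x, y + o.y, z + o.z }; }
        CVector3 operator*(float s) const { return CVector3{ x * s, y * s, z * s }; }
        CVector3 & operator+=(const CVector3 &o) { x += o.x; y += o.y; z += o.z; return *this; }
    };

    // One line instead of three, and no chance of the X/Y/Y slip:
    void Integrate(CVector3 &position, const CVector3 &velocity, float dt) {
        position += velocity * dt;
    }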

 

//This is the main class where most of the code will be

Then you have not distributed enough of your code to other classes/modules.
To be frank, in my engine, this is probably the smallest class among the major non-helper type classes.
 

[GameEngine]
.SceneGraph sceneGraph; //basically a list of all GameObjects currently on the level

Just to point out: A scene graph is not a list, it is a hierarchy. It won’t be a scene graph unless you have a parent/child relationship in place (and I suggest you do; it really helps when you want that sword to stay in the guy’s hand).
But don’t misunderstand. In addition to that hierarchical relationship you do still need a flat list of objects for easy traversal.
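
In sketch form, with hypothetical names, that combination is something like:

    #include <vector>

    // Hierarchy for transform inheritance (the sword follows the hand)...
    class CSceneNode {
    public:
        void AttachChild(CSceneNode* child) {
            child->m_parent = this;
            m_children.push_back(child);
        }
    private:
        CSceneNode* m_parent = nullptr;
        std::vector<CSceneNode*> m_children;
    };

    // ...plus a flat list of the same nodes for easy traversal.
    class CSceneGraph {
        CSceneNode m_root;
        std::vector<CSceneNode*> m_flatList;
    };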

 

.Initialize(); //empty function that's called on startup, will be filled differently for each game
.Update(); //empty function that's called every frame
.Draw(); //function that gets a list of objects that "can be seen" from the sceneGraph, and draws them

General Game/Engine Structure
Passing Data Between Game States

 

And my GameEngine's Update() would have something like this:

void GameEngine::Update()
{
    switch(gameState)
    {
        case GameState::Startup_Loading:
        {
        }
        case GameState::Startup:
        {
        }
        case GameState::MainGame_Loading:
        {
        }
        case GameState::MainGame:
        {
        }
    }
}

Once again: General Game/Engine Structure


L. Spiro
It is amazing how often people try to be unique, and yet they are always trying to make others be like them. - L. Spiro 2011
I spent most of my life learning the courage it takes to go out and get what I want. Now that I have it, I am not sure exactly what it is that I want. - L. Spiro 2013
I went to my local Subway once to find some guy yelling at the staff. When someone finally came to take my order and asked, “May I help you?”, I replied, “Yeah, I’ll have one asshole to go.”
L. Spiro Engine: http://lspiroengine.com
L. Spiro Engine Forums: http://lspiroengine.com/forums

#4 L. Spiro   Crossbones+   -  Reputation: 11939

Posted 14 July 2013 - 11:35 PM

I am home and can post the promised topics from my book.

Note #1: I will not take a lot of time fixing the formatting, and images (and figures) will of course be missing.  I don’t think you need them for your first attempt at an engine anyway.

Note #2: I want a clear separation between the topics I am posting from my book, including from this text as well.  Each following post is a section of the book, and the separation between the posts makes the separation between the topics clear.  It is not double-posting or spamming a topic, even though I will end up having 4 posts in a row in the same topic.  I could have 1 gargantuan topic that no one would fully read, or, like with any long text, I can break it into multiple digestible parts.

Note #3: This was written for iOS, so the OS functions I use to get time will not work on Windows, but it is fairly straightforward to convert the code to have the same functionality.

 

 

L. Spiro



#5 L. Spiro   Crossbones+   -  Reputation: 11939

Posted 14 July 2013 - 11:37 PM

Time                                           

Keeping track of time is important for any game, and it can often be difficult to do just right, as there are many small nuances to handle that, when done improperly, can lead to significant bugs that can be very difficult to trace and fix.  Contributing to this is the fact that, at face value, keeping track of time seems to be a very simple task, luring many into a false sense of security when implementing their timers.  This section will outline the important features a game timer should have, discuss some common pitfalls when implementing game timers, and provide a solid game timer implementation.

Important Features           

Before anything else, it is important to outline what features a game timer should have and what it should do.

▪     Drift must be minimized.  “Drift” refers to the accumulation of small amounts of error in time that cause the game’s timer to drift slowly away from what the actual game time should be.

▪     It must have some mechanism for pausing.  This will be achieved by keeping a second time accumulator that only increments when the game is not paused.

▪     It should be in a resolution higher than milliseconds.  Even though iOS devices are capped at rendering at 60 times per second, that still only leaves an accuracy of 16 (truncated) milliseconds per frame.  The framework we will create will use a resolution of microseconds.

▪     It should have features to accommodate stepping through the game logic and the rendering logic at different rates, effectively decoupling them.

o    Note that this type of decoupling is not related to multi-threaded rendering—moving the renderer onto its own thread is an entirely different subject.

▪     It must provide an interface for making it easy to get the delta time since the last frame in both microseconds and as a floating-point value.  Floating-point values are better for physics simulations while microseconds are better for things that need to accumulate how much time has passed accurately and with as little drift as possible.

o    Again, a focus on avoiding drift will be an important factor in how this is actually implemented.

▪     Overflow of the timers should be impractical.  “Overflow” refers to the integral counter used to tell time accumulating beyond the limits of its integral form and wrapping back around to or past 0, which would trick sub-systems into thinking that a massive amount of time has passed since the last update (due to the unsigned nature of the integrals we will be using), effectively freezing most systems and causing unpredictable havoc in others.  Given enough time, any timer will eventually overflow, so our objective is to make sure it takes an unreasonably long time to do so, for example over 100 years.

Common Pitfalls

If at first implementing a timer seems straightforward, reading over the previous section’s list of important features may make it more obvious that there is a lot of room for error, and these errors can be very hard to detect and trace, especially if they only occur over a long period of time.

▪     Using 32-bit floating-point values to store time is one of the biggest follies an inexperienced programmer will make, as, due to their decreasing precision at higher magnitudes, your game will become noticeably choppier in a short amount of time, from somewhat choppy after a few hours to heavily choppy after a few days.
More advanced readers may instead use 64-bit floating-point values to accumulate time, but this has more subtle flaws, particularly related to drift.  Small floating-point errors will accumulate over time, and while they may not be noticeable for a while, it is just as easy to accumulate time in the system’s native resolution with an unsigned 64-bit integer and eliminate drift to the best of the hardware’s capability.  64-bit doubles can still be used, but they should be re-derived each frame instead of accumulated directly.

▪     Updating the time more than once per game cycle, or, in the most extreme case, always getting the current time whenever you need it for any calculation, is another common pitfall.  Related time systems should be updated at specific intervals and only at those intervals.  Examples of related time systems are those used for the sound thread, input timestamps, and the game’s primary timer.
In the event of input events, the timer for that system will be updated on each event it catches.  For the sound thread, the timer will be updated once each time the sound manager updates.  All sounds in it will then be advanced by that same amount of time to prevent any of them from updating faster or slower than the other sounds in the manager.
As for the game’s main timer, this should be updated once and only once on each game cycle, or game loop.  Imagine the worst-case scenario in which the time is read from the system every time “time” is needed in an equation.  If 2 balls fall from the same height at the same time, they should fall side-by-side.  However, if the delta time—that is the time since the last update of each ball—is obtained by always checking the current system time, any hiccups in the physics system could trick one ball into believing it has fallen for a longer time on that frame and sneak ahead of the other ball.

▪     Using milliseconds instead of microseconds is a minor pitfall and may not show a significant change on iOS devices, but using microseconds is just as easy and simply more accurate.  As we will be using a macro for the resolution of our timers, however, any timer resolution will be easy to test in the end.

▪     Using a fixed framerate (such as always updating by 16.6 milliseconds, which will show slowdown when too many objects are on the screen, etc.) is itself not a pitfall—it is entirely appropriate for some single-player games.  The pitfall is when you make assumptions based on the framerate, especially when those assumptions impact other parts of the engine.  There is an appropriate way to get a fixed framerate that involves only the timer system, and it should be preferred over any assumptions that may impact other parts of the game such as the physics system.  This method will be explained later.

Implementation: Basics                  

With the features we want to have and the pitfalls we want to avoid in mind, we can begin our implementation of a timer class.  The file will be part of the TKFoundation library and will be named TKFTime.  The class declaration begins with the basic members we need to keep track of time without drift and a wrapper function for getting the current system time in its own native resolution.  This is the time we will be accumulating.

 

Listing 2.3  TKFTime.h

namespace tkf {

    class CTime {
    public :
        CTime();


        // == Functions.
        /** Gets the real system time in its own resolution.
         *    Not to be used for any reason except random-number
         *    seeding and internal use.  Never use in game code! */
        u64                        GetRealTime() const;

    protected :
        // == Members.
        /** The time resolution (microseconds per system tick). */
        f64                        m_f64Resolution;

        /** The time accumulator which always starts at 0. */
        u64                        m_u64CurTime;

        /** The time of the last update. */
        u64                        m_u64LastTime;

        /** The last system time recorded. */
        u64                        m_u64LastRealTime;
    };

}      // namespace tkf

TKFTime.cpp contains only the important includes, the constructor, and the implementation of GetRealTime().

Listing 2.4  TKFTime.cpp

#include "TKFTime.h"

#include <mach/mach.h>
#include <mach/mach_time.h>
   
namespace tkf {
   
    CTime::CTime() :
        m_u64CurTime( 0ULL ),
        m_u64LastTime( 0ULL ) {
        // Get the resolution.
        mach_timebase_info_data_t mtidTimeData;
        ::mach_timebase_info( &mtidTimeData );
        m_f64Resolution = static_cast<f64>(mtidTimeData.numer) /
            static_cast<f64>(mtidTimeData.denom) / 1000.0;
             
        m_u64LastRealTime = GetRealTime();
    }
   
    // == Functions.
    /** Gets the real system time in its own resolution.
     *    Not to be used for any reason except random-number
     *    seeding and internal use.  Never use in game code! */
    u64 CTime::GetRealTime() const {
        return ::mach_absolute_time();
    }
   

}    // namespace tkf

mach_absolute_time() returns the number of ticks since the device has been turned on and will be in a resolution native to the device.  The values returned by mach_timebase_info() provide the numerator and denominator necessary to convert the system ticks into nanoseconds.  Note that while we are using a 64-bit floating-point value for the conversion from system ticks to microseconds, we are not using it for any accumulation purposes, and as such it does not contribute to drift.  Because it is used in multiplication later it has been divided by 1,000.0 so that it converts to microseconds rather than nanoseconds.
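
(As mentioned in Note #3, this is iOS-specific.  On Windows, the equivalent, as an untested sketch, would swap in QueryPerformanceCounter()/QueryPerformanceFrequency(), where the frequency is given in ticks per second:)

    #include <windows.h>

    u64 CTime::GetRealTime() const {
        LARGE_INTEGER liTicks;
        ::QueryPerformanceCounter( &liTicks );
        return static_cast<u64>(liTicks.QuadPart);
    }

    // And in the constructor, microseconds per tick:
    //     LARGE_INTEGER liFreq;
    //     ::QueryPerformanceFrequency( &liFreq );
    //     m_f64Resolution = 1000000.0 / static_cast<f64>(liFreq.QuadPart);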

The next step is to add accumulation methods.  This will be split into 2 functions: one that accepts the number of system ticks by which to update, and one that calls the former after actually determining how many ticks have passed since the last update.  By splitting the update into these 2 functions we can slave other timers to a master so that as the main timer updates, the amount by which it updates can be cascaded down to other timers and they all update by the same amount.  This will be important later.

Listing 2.5  TKFTime.cpp

    /** Calculates how many system ticks have passed
     *    since the last update and calls UpdateBy()
     *    with that value. */
    void CTime::Update( bool _bUpdateVirtuals ) {
        u64 u64TimeNow = GetRealTime();
        u64 u64Delta = u64TimeNow - m_u64LastRealTime;
        m_u64LastRealTime = u64TimeNow;

        UpdateBy( u64Delta, _bUpdateVirtuals );
    }

    /** Updates by the given number of system ticks. */
    void CTime::UpdateBy( u64 _u64Ticks, bool _bUpdateVirtuals ) {
        m_u64LastTime = m_u64CurTime;
        m_u64CurTime += _u64Ticks;
    }

Accumulating time in system resolution is simple.  Note that inside of Update() the delta was calculated with a simple subtraction without any special-case handling for the wrap-around of the system tick counter.  Due to how machines handle integer math, this is correct, as the wrap-around case will still result in the correct delta time.
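
To see why, note that unsigned 64-bit subtraction is modulo 2^64:

    #include <cstdint>

    uint64_t last  = UINT64_MAX - 9;  // 10 ticks before the counter wraps.
    uint64_t now   = 5;               // 15 ticks later, after wrapping past 0.
    uint64_t delta = now - last;      // == 15: modular arithmetic recovers the true delta.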

Our timer will need to return the current microseconds at any given time, so it is best to calculate that value up-front.  Doing so is simple, as it is just a matter of a single multiplication by the timer resolution.  The CTime class will receive a new member called m_u64CurMicros and UpdateBy() will be modified.

Listing 2.6  TKFTime.cpp

    /** Updates by the given number of system ticks. */
    void CTime::UpdateBy( u64 _u64Ticks, bool _bUpdateVirtuals ) {
        m_u64LastTime = m_u64CurTime;
        m_u64CurTime += _u64Ticks;

        m_u64CurMicros = static_cast<u64>(m_u64CurTime * m_f64Resolution);
    }

Next, we need microsecond values for the current time and for how much time has passed.  Handling this in a way that avoids drift is trickier.  Let’s consider only the case of determining the number of microseconds since the last frame, as a u64 type.  Inside of UpdateBy() we could simply convert _u64Ticks to microseconds, since that represents the amount of time that has passed since the last update.  However this always truncates, and we never get back any of those truncated microseconds, which is exactly what causes drift.  Instead, microseconds since the last update must be calculated as the previous microsecond value (derived from the raw ticks before the update) subtracted from the new microsecond value (derived from the raw ticks after the update).  This adds a member to CTime called m_u64DeltaMicros, and with it UpdateBy() must be modified.

Listing 2.7  TKFTime.cpp

    /** Updates by the given number of system ticks. */
    void CTime::UpdateBy( u64 _u64Ticks, bool _bUpdateVirtuals ) {
        m_u64LastTime = m_u64CurTime;
        m_u64CurTime += _u64Ticks;

        u64 u64LastMicros = m_u64CurMicros;
        m_u64CurMicros = static_cast<u64>(m_u64CurTime * m_f64Resolution);
        m_u64DeltaMicros = m_u64CurMicros - u64LastMicros;
    }

m_u64CurMicros and m_u64DeltaMicros are returned by inlined methods GetCurMicros() and GetDeltaMicros(), sketched after Listing 2.8 below.  Objects that want to accumulate time should accumulate with GetDeltaMicros().  Another time-related attribute that is commonly needed is the amount of time that has passed in seconds, often in 32-bit floating-point format since it is generally a small value.  This value will also be updated once inside UpdateBy() and obtained by an inlined accessor method called GetDeltaTime().

Listing 2.8  TKFTime.cpp

        …
        m_u64DeltaMicros = m_u64CurMicros - u64LastMicros;
        m_f32DeltaTime = m_u64DeltaMicros *
            static_cast<f32>(1.0 / 1000000.0);
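
For reference, the accessors mentioned above are presumably nothing more than inlined getters along these lines (inside the CTime declaration):

        inline u64 GetCurMicros() const   { return m_u64CurMicros; }
        inline u64 GetDeltaMicros() const { return m_u64DeltaMicros; }
        inline f32 GetDeltaTime() const   { return m_f32DeltaTime; }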

Implementation: Pause

The time class updates in such a way as to avoid drift and overflow, and buffers common time values for fast access via inlined accessors.  The next feature to add is pausing.  This is very simple and largely a repeat of the previous updating code.  Pausing is implemented via a second set of accumulators called “virtual” accumulators.  Virtual ticks will be optionally accumulated on each update, and the rest (virtual current microseconds, virtual delta microseconds, and virtual delta time) will be updated based on that in the same way the normal deltas and time values are updated.  A new set of “Virt” members are added to CTime and updated inside of UpdateBy().

Listing 2.9  TKFTime.cpp

        …
        if ( _bUpdateVirtuals ) {
            m_u64VirtCurTime += _u64Ticks;
            u64 u64VirtLastMicros = m_u64VirtCurMicros;
            m_u64VirtCurMicros = static_cast<u64>(m_u64VirtCurTime
                * m_f64Resolution);
            m_u64VirtDeltaMicros = m_u64VirtCurMicros - u64VirtLastMicros;
            m_f32VirtDeltaTime = m_u64VirtDeltaMicros *
                static_cast<f32>(1.0 / 1000000.0);
        }

Additionally, inlined accessors are added to access these virtual timers (not shown here for brevity).  When objects can be paused, they should always exclusively use the virtual set of time values.  The standard time values are incremented every frame and can be used to animate things that never pause, such as menus.  Meanwhile, the game world will update based on the virtual set of time values, and all game objects will stop moving when the game is paused.
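
In use, that might look like the following each game cycle (GetVirtDeltaTime() being one of the virtual accessors just mentioned; the names outside CTime are assumed):

    m_tGameTime.Update( !m_bPaused );  // The one update per cycle; virtuals freeze while paused.

    menu.Animate( m_tGameTime.GetDeltaTime() );       // Menus never pause.
    world.Advance( m_tGameTime.GetVirtDeltaTime() );  // The game world uses virtual time.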

Implementation: Fixed Time-Stepping

Fixed time-stepping is itself sometimes a difficult concept to grasp, and proper implementations are often difficult to get right.  Fixed time-stepping refers to decoupling the logical updates from the rendering (this form of decoupling is unrelated to multi-threaded rendering), executing logical updates only after a certain fixed amount of time has passed while rendering as often as possible, regardless of how much time has passed.  There are several important reasons why this is done.  Firstly, logical updates are where physics, AI, scripts, etc., are run.  By spacing these updates out over a few frames rather than every frame CPU usage is reduced and the overall framerate can increase.  Secondly, and most importantly, physics simulations require a constant update time each time they are run in order to avoid exploding scenes and other buggy behavior.  Finally, fixed time steps are essential for keeping synchronization over network play as close as possible.  While it is still possible for clients to lose sync and there does need to be a mechanism to re-sync clients anyway, things are greatly simplified when non-interactive parts of the scene are run by fixed time steps, which all but ensures they will remain in sync for all clients.

Conceptually it can be viewed as shown in Figure 2-7.  Logical updates, in a perfect world, would happen at perfectly even intervals.  Renders happen as fast as possible, on every game loop or game cycle, which means there may be longer delays between some renders and short delays between others.

 

Figure 2-7

Unfortunately the logical updates and renders are executed on the same thread, and since renders happen every loop it is impossible to actually place a logical update anywhere except on the same intervals as the renders.  That is, you cannot have a render every loop and somehow insert a logical update half-way into the loop.  It turns out, however, that this is nothing but a conceptual problem.  Inside of the game, actual universal time doesn’t matter.  Imagine scenario 1 in which a render is delayed for 3 seconds and logical updates occur every second.  In a perfect world, we would wait 1 second, perform 1 logical update, wait another second and update logic, wait 1 more second and update logic, and then render.  But to the player who is looking at the screen, this process yields exactly the same result as if we waited 3 seconds, then updated the logic 3 times quickly, each update by 1 second, and then rendered.  The simulation has, in either case, moved forward by 3 seconds before the render, and the render itself will be identical in both scenarios.  In other words, the logical updates don’t actually need to occur at the correct times in real time—the only thing that matters is if the same number of logical updates have been executed before the render.  In practice, it looks like Figure 2-8.

 

Figure 2-8

In Figure 2-8, the logical updates occur at the same time as the renders because they are on the same thread and can only execute inside of a single game loop.  The original spacing of the logical updates is shown in grey.  Instead of executing the logical updates at arbitrary times within the game loop, we simply change the number of times the logical updates occur on each game loop.  Notice that the second logical update was pushed back a bit due to some lag, but in total 2 logical updates took place before the second render, which means the game simulation had advanced by, let’s say, 2 seconds (borrowing from our previous example), which means the render still depicts the game state the same way it would have had we been able to execute the logical updates at literal 1-second intervals.

Notice also that if the game loop and render execute too quickly, sometimes too little time passes to justify another logical update.  In this case 0 logical updates were issued on that game loop and the loop goes straight to rendering.  This saves on CPU usage.  Although not shown here, it can be extrapolated that 0 and 1 are not the only number of times that might be necessary to run the logical update in a single game loop.  If there is a strong lag and the delay between frames increases enough, it may be necessary to run 2, 3, or more logical updates in a single loop.  Our CTime class is going to determine for us how many times to update the logic on each game loop.  Actually performing the updates will be covered in the next chapter.  Additionally, an astute reader may have realized that in these simple examples it doesn’t make sense to render unless a logical update has taken place because you would be rendering the exact same scene with all the objects in their same locations etc.  While this is true for the simple example provided here, there is also a solution to this involving interpolation.  Again, this is handled at a higher level in the system and will be explained in detail in the next chapter, but it is worth mentioning now because in order to perform that interpolation we will need the CTime class to give us an extra value back when it calculates how many logical updates to perform.

The main question we want the CTime class to answer is how many times we need to run the logical update on each game loop.  For this, a new method is added to CTime called GetFixedStepUpdateCountFromTicks().

Listing 2.10  TKFTime.cpp

    /** Gets the number of logical updates necessary for a fixed time
     *    step of the given number of system ticks, as well as the fraction
     *    between logical updates where the simulation will be after
     *    updating the returned number of times. */
    u32 CTime::GetFixedStepUpdateCountFromTicks( u64 _u64Ticks,
        f32 &_f32Ratio, u64 &_u64RemainderTicks ) {
        u64 u64TimeNow = GetRealTime();
        u64 u64Delta = u64TimeNow - m_u64LastRealTime;

        u32 u32Count = static_cast<u32>(u64Delta / _u64Ticks);
        u64 u64AddMe = u32Count * _u64Ticks;
        m_u64LastRealTime += u64AddMe;
        _u64RemainderTicks = u64Delta - u64AddMe;

        _f32Ratio = static_cast<f32>(static_cast<f64>(_u64RemainderTicks) /
            static_cast<f64>(_u64Ticks));
        return u32Count;
    }

Notice that since GetRealTime() is called directly, this should not be used in conjunction with Update().  This utility function is meant to be used with UpdateBy().  Notice that after getting the real time it doesn’t directly copy that into m_u64LastRealTime.  If it did, then the next time this is called on the next frame the delta would not include the previous frame’s time.  If the deltas don’t accumulate across frames, the proper number of logical updates per frame can’t be calculated—remember that it may take several frames before a logical update is needed.  Next, notice that m_u64LastRealTime is advanced by the count multiplied by the ticks.  The CTime class will expect you to call UpdateBy() with the same number of ticks, u32Count times.  Again, this is to remove drift.  While we will lose a few ticks to rounded-off fractions now, they will be regained on later frames.  Finally, when the logical update has executed u32Count times, each time adding _u64Ticks to its current simulation time, the logical update’s time will be a tick that is a multiple of _u64Ticks, but the real game time will be somewhere between that time and the next logical update (the logical update’s time + _u64Ticks).  _f32Ratio is calculated to represent the fraction of the real time between logical updates.  This will be useful in the next chapter.  The actual number of ticks by which the real time is ahead of the last logical update is stored in the return-by-reference parameter _u64RemainderTicks, which will later be needed to update the render time for each cycle.
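
Putting it together, a game loop built on these pieces might look like this sketch (LOGIC_TICKS being the fixed logical step expressed in system ticks; the names outside CTime are assumed):

    void CGame::Tick() {
        f32 f32Ratio;
        u64 u64Remainder;
        u32 u32Updates = m_tLogicTime.GetFixedStepUpdateCountFromTicks(
            LOGIC_TICKS, f32Ratio, u64Remainder );

        for ( u32 I = 0; I < u32Updates; ++I ) {
            m_tLogicTime.UpdateBy( LOGIC_TICKS, !m_bPaused );  // Advance by exactly one fixed step.
            UpdateLogic();                                     // Physics, AI, scripts, etc.
        }

        // u64Remainder feeds the render timer; f32Ratio can drive interpolation.
        Render( f32Ratio );
    }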

 

 

L. Spiro



#6 L. Spiro   Crossbones+   -  Reputation: 11939

Posted 14 July 2013 - 11:39 PM

Orientation

Objects in the 3D world of your game will need some representation of their orientation.  In geometry, orientation refers only to the rotational aspect of a body, but in video games orientation is often defined as their translations, scales, and rotations combined.  While the representations for translations and scales are straightforward (3-dimensional vectors), representing rotation is often a fuzzy area that is easy to get wrong even though there is no single best way to get it right.  One way that inexperienced programmers will often try is to use Euler angles representing how far left/right and up/down the object is facing.  This is especially often used on players and cameras, exemplifying another common mistake in which players and cameras have a different orientation system from the other game objects.  This makes it easy to modify rotations but makes the run-time slow, as updating objects over time requires an expensive sine and cosine each frame for each object.

It is generally best to use vectors for the rotation part of an orientation, but then the question becomes which vectors to store.  Some implementations save space by storing only the Forward and Up vectors and determine the Right vector when needed, with a slight penalty to performance.  Because this book focuses on run-time performance, and because most devices these days (including iOS devices, the primary focus of this book) have substantial memory, our implementation will also store the Right vector.  The overhead in updating objects’ rotations becomes greater, but most objects in a game scene will often be static, with only a few objects updating at all, and fewer still updating frequently.  On the other hand, time is saved when generating matrices for rendering since the Right vector does not need to be re-derived each time.

The shell for the COrientation class is fairly straightforward.  It contains each of the properties an orientation needs—position, scale, and rotation—along with a dirty flag to let us know what, if anything, has been updated.  The dirty flag allows us to rebuild matrices only if they actually change, saving a lot of computational expense on objects that rarely or never move.

Listing 2.13  TKFOrientation.h

#include "../TKFFnd.h"

#include "../Matrix/TKFMatrix44.h"

#include "../Vector/TKFVector3.h"

 

namespace tkf {

 

    class COrientation {

    public :

        COrientation();

       

       

        // == Functions.

        /** Gets/sets the position. */

        inline CVector3 &       Pos() {

            m_u32Dirty |= TKF_DF_POS;

            return m_vPos;

        }       

        /** Gets the position. */

        inline const CVector3 & Pos() const {

            return m_vPos;

        }

       

        /** Gets/sets the scale. */

        inline CVector3 &       Scale() {

            m_u32Dirty |= TKF_DF_SCALE;

            return m_vScale;

        }

        /** Gets the scale. */

        inline const CVector3 & Scale() const {

            return m_vScale;

        }

        /** Checks the dirty flag. */

        inline bool             IsDirty() const {

            return m_u32Dirty != 0;

        }

       

    protected :

        // == Enumerations.

        /** Dirty flags. */

        enum TKF_DIRTY_FLAGS {

            TKF_DF_POS          = (1 << 0),

            TKF_DF_SCALE        = (1 << 1),

            TKF_DF_ROT          = (1 << 2),

            TKF_DF_NORMALIZE    = (1 << 3),

        };

       

       

        // == Members.

        /** Position. */

        CVector3                m_vPos;

       

        /** Scale. */

        CVector3                m_vScale;

       

        /** Right vector. */

        CVector3                m_vRight;

       

        /** Up vector. */

        CVector3                m_vUp;

       

        /** Forward vector. */

        CVector3                m_vForward;

       

        /** Dirty flag. */

        u32                     m_u32Dirty;

    };

 

}    // namespace tkf

Our use of const-correctness allows us to dirty the position and scale only if they are accessed for writing and not for reading.  The dirty flag also has a bit for normalization of the rotations, another expensive operation we wish to avoid if possible.

So far the COrientation class is very straightforward.  Each frame, objects’ orientations will be updated and then a matrix will be built for that object to pass to the renderer.  Building the matrix is also very straightforward, and is as follows.

Listing 2.14  TKFOrientation.cpp

    /** Creates the matrix representing this orientation. */
    void COrientation::CreateMatrix( CMatrix44 &_mMatrix ) const {
        _mMatrix._11 = m_vRight.x * m_vScale.x;
        _mMatrix._12 = m_vRight.y * m_vScale.x;
        _mMatrix._13 = m_vRight.z * m_vScale.x;
        _mMatrix._14 = 0.0f;

        _mMatrix._21 = m_vUp.x * m_vScale.y;
        _mMatrix._22 = m_vUp.y * m_vScale.y;
        _mMatrix._23 = m_vUp.z * m_vScale.y;
        _mMatrix._24 = 0.0f;

        _mMatrix._31 = m_vForward.x * m_vScale.z;
        _mMatrix._32 = m_vForward.y * m_vScale.z;
        _mMatrix._33 = m_vForward.z * m_vScale.z;
        _mMatrix._34 = 0.0f;

        _mMatrix._41 = m_vPos.x;
        _mMatrix._42 = m_vPos.y;
        _mMatrix._43 = m_vPos.z;
        _mMatrix._44 = 1.0f;
    }

Additionally, a function for updating a matrix only if the dirty flag is set is added.  It clears the dirty flag after updating the matrix.

Listing 2.15  TKFOrientation.cpp

    /** Updates the given matrix only if the dirty flag is set. */
    inline void COrientation::UpdateMatrix( CMatrix44 &_mMatrix ) const {
        if ( IsDirty() ) {
            Normalize();       // Normalizes rotation vectors if flag set.
            CreateMatrix( _mMatrix );
            m_u32Dirty = 0UL;
        }
    }
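
Normalize() itself is not shown in this excerpt; presumably it re-orthonormalizes the basis vectors when the TKF_DF_NORMALIZE bit is set, along these lines (a sketch assuming a CVector3::Normalize() method and the % cross-product operator used in the listings below):

    /** Sketch: re-orthogonalizes and normalizes the basis vectors
     *    when TKF_DF_NORMALIZE is set, then clears that bit. */
    void COrientation::Normalize() const {
        if ( m_u32Dirty & TKF_DF_NORMALIZE ) {
            m_vForward.Normalize();
            m_vRight = m_vUp % m_vForward;    // Rebuild Right from Up x Forward.
            m_vRight.Normalize();
            m_vUp = m_vForward % m_vRight;    // Rebuild Up from Forward x Right.
            m_u32Dirty &= ~TKF_DF_NORMALIZE;
        }
    }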

Working with and updating rotations is a bit more involved but mainly revolves around keeping the 3 vectors orthogonal, which in itself is actually fairly trivial.  The most important features for the interface to have are those that allow an object to point at another object, to set the rotation directly, and to rotate around an axis by a given amount.  Just as it is easy to create a matrix with the rotation vectors as shown in the CreateMatrix() method it is easy to extract them, so we will start with the simplest methods first.

Listing 2.16  TKFOrientation.cpp

    /** Sets the rotation given a matrix. */
    void COrientation::SetRot( const CMatrix44 &_mMatrix ) {
        m_vRight = _mMatrix.GetRow( 0 );
        m_vUp = _mMatrix.GetRow( 1 );
        m_vForward = _mMatrix.GetRow( 2 );
        m_u32Dirty |= TKF_DF_ROT | TKF_DF_NORMALIZE;
    }

Setting the rotation via a forward and up vector, like in common “look at” functions, is also straightforward.

Listing 2.17  TKFOrientation.cpp

    /** Sets the rotation given a forward and up vector. */
    void COrientation::SetRot( const CVector3 &_vForward,
        const CVector3 &_vUp ) {
        m_vForward = _vForward;
        m_vRight = _vUp % _vForward;
        m_vUp = _vForward % m_vRight;
        m_u32Dirty |= TKF_DF_ROT | TKF_DF_NORMALIZE;
    }

Adding rotations is only slightly more difficult but it is an important foundation for other routines, such as the one we will later add to rotate around an arbitrary axis.  The most useful routine for our purposes will be to add a rotation given by a matrix.  The code looks daunting, but it’s not too complicated.

Listing 2.18  TKFOrientation.cpp

    /** Adds a rotation from a matrix. */
    void COrientation::AddRot( const CMatrix44 &_mMatrix ) {
        m_vRight = CVector3(
            m_vRight * CVector3( _mMatrix._11, _mMatrix._21, _mMatrix._31 ),
            m_vRight * CVector3( _mMatrix._12, _mMatrix._22, _mMatrix._32 ),
            m_vRight * CVector3( _mMatrix._13, _mMatrix._23, _mMatrix._33 ) );

        m_vForward = CVector3(
            m_vForward * CVector3( _mMatrix._11, _mMatrix._21, _mMatrix._31 ),
            m_vForward * CVector3( _mMatrix._12, _mMatrix._22, _mMatrix._32 ),
            m_vForward * CVector3( _mMatrix._13, _mMatrix._23, _mMatrix._33 ) );

        m_vUp = CVector3(
            m_vUp * CVector3( _mMatrix._11, _mMatrix._21, _mMatrix._31 ),
            m_vUp * CVector3( _mMatrix._12, _mMatrix._22, _mMatrix._32 ),
            m_vUp * CVector3( _mMatrix._13, _mMatrix._23, _mMatrix._33 ) );

        m_u32Dirty |= TKF_DF_ROT | TKF_DF_NORMALIZE;
    }

With this method in place, rotating around an arbitrary axis is fairly simple.  We simply construct an axis-rotation matrix and pass it to this method.

Listing 2.19  TKFOrientation.cpp

    /** Rotates by the given amount around the given axis. */

    void COrientation::RotateAxis( const CVector3 &_vAxis,
        f32 _f32Radians ) {

        CMatrix44 mRot;

        AddRot( mRot.RotationAxis( _vAxis, _f32Radians ) );

    }

The code for CMatrix44::RotationAxis() is shown here for completeness; it builds the standard axis-angle rotation matrix.

Listing 2.20  TKFMatrix44.cpp

    /** Builds a matrix representing a rotation around an arbitrary axis. */

    CMatrix44 & CMatrix44::RotationAxis( const CVector3 &_vAxis,
        f32 _f32Radians ) {

        f32 fS = ::sinf( _f32Radians );

        f32 fC = ::cosf( _f32Radians );

 

        f32 fT = 1.0f - fC;

        f32 fTX = fT * _vAxis.x;

        f32 fTY = fT * _vAxis.y;

        f32 fTZ = fT * _vAxis.z;

        f32 fSX = fS * _vAxis.x;

        f32 fSY = fS * _vAxis.y;

        f32 fSZ = fS * _vAxis.z;

 

        _11 = fTX * _vAxis.x + fC;

        _12 = fTX * _vAxis.y + fSZ;

        _13 = fTX * _vAxis.z - fSY;

        _14 = 0.0f;

 

        _21 = fTY * _vAxis.x - fSZ;

        _22 = fTY * _vAxis.y + fC;

        _23 = fTY * _vAxis.z + fSX;

        _24 = 0.0f;

 

        _31 = fTZ * _vAxis.x + fSY;

        _32 = fTZ * _vAxis.y - fSX;

        _33 = fTZ * _vAxis.z + fC;

        _34 = 0.0f;

 

        _41 = 0.0f;

        _42 = 0.0f;

        _43 = 0.0f;

        _44 = 1.0f;

 

        return (*this);

    }

These methods give the orientation class a solid foundation for use within the framework.  All entities in our game world, including the camera, will make use of this utility class.

 

 

L. Spiro




#7 Danicco   Members   -  Reputation: 449


Posted 15 July 2013 - 02:15 AM

I have a question: Is this supposed to be used in one game? If so this engine seems fine, minus having no alpha which might be on purpose.

 

It's my first engine, and I hope to start developing all my games with it.

By alpha you mean transparency? I forgot to add it to the texture/object.

 

Thank you for the links; they're going to be useful for flag events, since I think there will be a lot of them before I finish a game, and I was looking for a better way to manage them besides the states.

 

 

You would be better off just using a texture pointer directly, or better still a shared pointer to a texture. Textures may have ID’s (and they very well should, to prevent redundant texture loads), but using that ID rather than a direct pointer wastes precious cycles, as you then need to find that texture via binary searches, hash tables, etc.

 

 

Sorry, but I didn't understand: what do you mean by using the pointer instead of the ID?

ResourceObject will have a pointer to a Material, which has the textureID. That's the unsigned int that gets its value from the glGenTextures(count, unsigned int*) function, and I have it so I can keep track of whether the texture has been loaded (value > 0) or not (value == 0); when I have to draw I call glBindTexture(GL_TEXTURE_2D, object->material.textureID).

Is there a better way to do this?

 

 

 

Those are usually just characters, ammunition/weapons, and various other props. Other things, such as terrain, vegetation, buildings (especially interiors), skyboxes, etc., are all made efficient by utilizing rendering pipelines unique to themselves.

 

 

 

Oh, I totally forgot about those... would creating a specific object for each of these types be the best solution? Something like a TerrainObject, which doesn't need a mesh, only materials and the height data?

 

 

 

When you break it down like this it starts to make sense that models should actually just load themselves. Models know what models are. They know what their binary format is. There is no reason to push the responsibility of the loading process onto any other module or class.

 

 

Wouldn't this make it harder for different games? I was going for the ResourceEngine with the idea that I'll be building my resource binary files from another program, my map editor, which will probably be different for each game; I'd only have code for writing/reading from disk in that class, so it's easier to change for whichever game I'm writing.

 

 

 

The main problem you have here is that the graphic module knows what a “ResourceObject” (a model) is when it should be entirely the other way around.

 

 

I remember that at first my idea was to have the models load and draw themselves (they'd have access to the draw functions), but I chose to have a single class responsible for all the drawing, with the models only being objects holding their data, so it's easier to change the draw functions if I ever need to.

I'm using OpenGL for now, so all the OpenGL functions are in this GraphicEngine class and I just need to know how to display an object's data on screen; if I were to change to Direct3D or something platform-specific (for mobiles that don't use OpenGL), I would only have to rewrite this part.

 

I don't know if this ever happens, but is there a better way to deal with this?

 

 

 

This is one of those cases where I can only say, “You’re doing it wrong,” but will let you learn from personal experience. You will need that experience to understand why this is wrong even if I told you, and, besides, it would require a major restructure of your engine.

 

 

Oh, but that's exactly what I'm trying to avoid... I'm fine with having to rewrite a lot of my engine code as I get more experienced, but I'm trying to get at least the basics right (and properly understand them) so I don't go badly wrong from the start. I had another post about handling player input and that's all I could get from it; I've seen some other posts and tutorials, but they're still very "high level": a lot of concept and little to no code or explanation that I can understand, so I'm quite lost in this part.

 

I'm sorry about all the questions; I appreciate these replies very much, but I'd just like to ask the "why"s so I can understand why one approach is better than another.

 

Thank you very much for the information, I'm still reading the last 2 topics on Time and Orientation, and everything has been a great help so far!



#8 Norman Barrows   Crossbones+   -  Reputation: 1835


Posted 15 July 2013 - 07:51 AM


or it returns a shared pointer to that model.

Don’t use ID’s for everything, as, as I already mentioned, it wastes precious cycles.

 

what if the ID is the offset into an array of shared texture pointers?



 


#9 BGB   Crossbones+   -  Reputation: 1545


Posted 15 July 2013 - 01:11 PM

 


or it returns a shared pointer to that model.

Don’t use ID’s for everything, as, as I already mentioned, it wastes precious cycles.
 

 

what if the ID is the offset into an array of shared texture pointers?

 

 

this is pretty much what my engine is doing.

typically, integer values are used.

 

a person could probably even get along ok with a short, working under the assumption that it is unlikely there will ever be more than 32k textures.

 

 

in a few cases, I have used bytes for color and normal arrays, mostly because in some places RAM use is an issue (*1), and using bytes I only need about 1/4 as much space. this mostly then just leaves floats for XYZ and ST coordinate arrays.

 

*1: my engine currently is operating fairly close to the limits of a 32-bit address space, so involves some amount of "trying to avoid wasting memory where possible".

 

this is mostly done with things like the voxel terrain geometry, though ironically the geometry for the terrain tends to be a fair bit smaller than the actual raw voxel data (most of the voxels don't actually end up needing geometry). better probably would have been using "voxel indices" instead, but sadly this would require some non-trivial changes and introduce new possible issues (need to re-index voxels on block-updates, ...). (it is a hard choice between the issues of indices and currently needing to spend 8-bytes for every voxel, when many/most chunks only have a small number of unique voxel types...).

 

 

also considered a few times was possibly separating the high-level object types from their rendering (using generic "Mesh" objects for nearly everything), but sadly this hasn't been done.

 

as-is, there are a few semi-generic cases though:

Mesh: generally represents a piece of static geometry;

Brush: pretty much anything that looks like a world brush (may include patches and meshes as well as plane-bound brushes);

Model: pretty much anything that is instantiated and placed in the scene, uses vtables for sub-types (may hold Mesh or other more specialized model types, such as skeletal models or sprites);

Region/Chunk: main systems used by the terrain system (a Chunk is basically an elaborate case of a mesh, with more involved rendering logic to deal with the various cases, and may potentially hold raw voxel data, and potentially also Mesh or Light objects, *2);

...

 

 

*2: visible chunks currently optionally hold voxel data. it may be absent in cases where the engine has decided to store it in an RLE-packed format.

 

the Region object may also hold Brush or Entity/SEntity objects (as an alternative to storing them in World), but this still isn't really fully developed (spawned entities are not persistent, ...). note that these are stored using Region-local coordinates. I chose region for holding this stuff (vs Chunk) mostly due to it being a slightly more convenient size and involving probably less overhead.

 

 

granted, a lot of this may be N/A depending on the type of game.



#10 L. Spiro   Crossbones+   -  Reputation: 11939


Posted 15 July 2013 - 04:21 PM

You would be better off just using a texture pointer directly, or better still a shared pointer to a texture. Textures may have ID’s (and they very well should, to prevent redundant texture loads), but using that ID rather than a direct pointer wastes precious cycles, as you then need to find that texture via binary searches, hash tables, etc.

 
 
Sorry but I didn't understand it, what do you mean by not using the ID, but the pointer instead?
ResourceObject will have a pointer to a Material, which has the textureID. That's the unsigned int that gets its value from the glGenTextures(count, unsigned int*) function, and I have it so I can keep track of whether the texture has been loaded (value > 0) or not (value == 0); when I have to draw I call glBindTexture(GL_TEXTURE_2D, object->material.textureID).
Is there a better way to do this?

OpenGL may be a state machine, but you need to think of each object you get from OpenGL (textures, vertex buffers, etc.) as…objects. In other words, a class called CTexture2D (use whatever naming convention you like) completely wraps around and manages the lifetime of the GLuint that OpenGL gives you.
That GLuint does not exist in the eyes of any other class in your whole engine. Nothing needs to know what that thing is because that is the sole responsibility of the CTexture2D class.
Here are actual examples from my own engine:
	// == Various constructors.
	LSE_CALLCTOR COpenGlStandardTexture::COpenGlStandardTexture() :
		m_uiTexture( 0 ) {
	}
	LSE_CALLCTOR COpenGlStandardTexture::~COpenGlStandardTexture() {
		ResetApi();
	}
	/**
	 * Activate this texture in a given slot.
	 *
	 * \param _ui32Slot Slot in which to place this texture.
	 * \return Returns true if the texture is activated successfully.
	 */
	LSBOOL LSE_CALL COpenGlStandardTexture::Activate( LSUINT32 _ui32Slot ) {
		if ( !m_uiTexture ) { return false; }
		if ( m_ui32LastTextures[_ui32Slot] != m_ui32Id ) {
			m_ui32LastTextures[_ui32Slot] = m_ui32Id;
			COpenGl::ActivateTexture( _ui32Slot );
			::glBindTexture( GL_TEXTURE_2D, m_uiTexture );
		}

		return true;
	}
	/**
	 * Reset everything related to OpenGL textures.
	 */
	LSVOID LSE_CALL COpenGlStandardTexture::ResetApi() {
		if ( m_uiTexture ) {
			::glDeleteTextures( 1, &m_uiTexture );
			m_uiTexture = 0;
		}
	}
	/**
	 * Create an OpenGL texture and fill it with our texel data.  Mipmaps are generated if necessary.
	 *
	 * \return Returns true if the creation and filling of the texture succeeds.  False indicates a resource error.
	 */
	LSBOOL LSE_CALL COpenGlStandardTexture::CreateApiTexture() {
		ResetApi();

		// Generate a new texture.
		::glGenTextures( 1, &m_uiTexture );
		glWarnError( "Uncaught" );	// Clear error flag.
		::glBindTexture( GL_TEXTURE_2D, m_uiTexture );

		[…] LOTS OF PREPARATION CODE OMITTED.


		// Set filters etc. first (improves performance in many drivers).
		::glTexParameteri( GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, iMagFilter[ui32FilterIndex] );
		::glTexParameteri( GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, iMinFilter[ui32FilterIndex] );
		if ( m_ui32Usage & LSG_TFT_ANISOTROPIC ) {
			::glTexParameteri( GL_TEXTURE_2D, GL_TEXTURE_MAX_ANISOTROPY_EXT, CStd::Max<LSUINT32>( 1UL, CFnd::GetMetrics().ui32MaxAniso >> 1UL ) );
		}

		static const GLint iClamp[] = {
			GL_CLAMP_TO_EDGE,
			GL_REPEAT,
			GL_MIRRORED_REPEAT,
		};
		::glTexParameteri( GL_TEXTURE_2D, GL_TEXTURE_WRAP_S, iClamp[m_wmWrap] );
		::glTexParameteri( GL_TEXTURE_2D, GL_TEXTURE_WRAP_T, iClamp[m_wmWrap] );

		if ( glWarnError( "Error setting filters on standard texture" ) ) {
			ResetApi();
			COpenGlStandardTexture::ClearLastSetTexAndUnbindAll();
			return false;
		}


		// Now fill in the texture data.
		switch ( m_iTexels.GetFormat() ) {
			case LSI_PF_DXT1 : {}
			case LSI_PF_DXT3 : {}
			case LSI_PF_DXT5 : {
				COpenGl::glCompressedTexImage2D( GL_TEXTURE_2D, 0, GetOpenGlInternalFormatStandard( m_iTexels.GetFormat() ),
					piUseMe->GetWidth(), piUseMe->GetHeight(), 0,
					CDds::GetCompressedSize( piUseMe->GetWidth(), piUseMe->GetHeight(), CDds::DdsBlockSize( m_iTexels.GetFormat() ) ),
					piUseMe->GetBufferData() );
				break;
			}
			case LSI_PF_DXT2 : {}
			case LSI_PF_DXT4 : {
				return false;
			}
			default : {
				::glTexImage2D( GL_TEXTURE_2D, 0, GetOpenGlInternalFormatStandard( m_iTexels.GetFormat() ),
					piUseMe->GetWidth(), piUseMe->GetHeight(), 0, GetOpenGlFormatStandard( m_iTexels.GetFormat() ),
					GetOpenGlTypeStandard( m_iTexels.GetFormat() ), piUseMe->GetBufferData() );
			}
		}
		

		if ( glWarnError( "Error creating standard texture" ) ) {
			ResetApi();
			return false;
		}
		return true;
	}
Notice that the class manages the lifetime of m_uiTexture and correctly destroys it when the class itself is destroyed. This ensures no resource leaks are possible, at least at this level.
Every resource you get from OpenGL should be managed similarly.

That means that your material will never ever have the GLuint ID of an OpenGL texture. It will have a (hopefully shared) pointer to CTexture2D or such.
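As a minimal sketch of what that might look like (the class and member names here are illustrative, not from the engine above):

    // Hypothetical sketch: the material holds a shared pointer to the texture
    // wrapper; the raw GLuint never leaves CTexture2D.
    #include <memory>

    class CTexture2D;    // Owns and hides the GLuint internally.

    class CMaterial {
    public :
        void SetDiffuseTexture( const std::shared_ptr<CTexture2D> &_ptTexture ) {
            m_ptDiffuse = _ptTexture;
        }
        // At render time the material calls m_ptDiffuse->Activate( uiSlot );
        // no OpenGL ID is visible at this level.
    protected :
        std::shared_ptr<CTexture2D> m_ptDiffuse;
    };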
You asked about DirectX below. This is also heavily related to that, and I will explain below.

 

Those are usually just characters, ammunition/weapons, and various other props. Other things, such as terrain, vegetation, buildings (especially interiors), skyboxes, etc., are all made efficient by utilizing rendering pipelines unique to themselves.

 
 
Oh, I totally forgot about those... would creating a specific object for each of these types be the best solution? Something like a TerrainObject, which doesn't need a mesh, only materials and the height data?

Probably.


 

When you break it down like this it starts to make sense that models should actually just load themselves. Models know what models are. They know what their binary format is. There is no reason to push the responsibility of the loading process onto any other module or class.

 
 
Wouldn't this make it harder for different games? I was going for the ResourceEngine with the idea that I'll be building my resource binary files from another program, my map editor, which will probably be different for each game; I'd only have code for writing/reading from disk in that class, so it's easier to change for whichever game I'm writing.

If you change your model’s format you will have to change loading code somewhere either way. It is no more difficult to open and edit “Model.cpp” than “ResourceManager.cpp”.
But it will be much cleaner and easier to browse to the location where you want to edit if you keep the loading code separated per class, because then no single file grows into a monolithic 10,000-line resource manager.

 

The main problem you have here is that the graphic module knows what a “ResourceObject” (a model) is when it should be entirely the other way around.

 
 
I remember that at first my idea was to have the models load and draw themselves (they'd have access to the draw functions), but I chose to have a single class responsible for all the drawing, with the models only being objects holding their data, so it's easier to change the draw functions if I ever need to.
I'm using OpenGL for now, so all the OpenGL functions are in this GraphicEngine class and I just need to know how to display an object's data on screen; if I were to change to Direct3D or something platform-specific (for mobiles that don't use OpenGL), I would only have to rewrite this part.
 
I don't know if this ever happens, but is there a better way to deal with this?

For a model to be able to draw itself, and for all of the rendering commands to be neatly located inside the graphics module and only the graphics module, are not mutually exclusive goals.

When I said a model would draw itself, I didn’t mean that it would call OpenGL functions directly. The graphics module should hide away all the calls to OpenGL/DirectX and provide a single unified interface that the models can use to set blending modes and render.

In my engine, the LSGraphicsLib library has a CFnd class which is, of course, the foundation for all graphics. Here are examples.
OpenGL:
	/**
	 * Sets culling to on or off.
	 *
	 * \param _bVal Whether culling is enabled or disabled.
	 */
	LSE_INLINE LSVOID LSE_FCALL COpenGl::SetCulling( LSBOOL _bVal ) {
		_bVal = _bVal != false;
		if ( m_bCulling != _bVal ) {
			m_bCulling = _bVal;
			if ( _bVal ) { ::glEnable( GL_CULL_FACE ); }
			else { ::glDisable( GL_CULL_FACE ); }
		}
	}

	/**
	 * Sets the cull winding order.
	 *
	 * \param _cmMode The cull winding.
	 */
	LSE_INLINE LSVOID LSE_FCALL COpenGl::SetCullMode( LSG_CULL_MODES _cmMode ) {
		assert( _cmMode == LSG_CM_CW || _cmMode == LSG_CM_CCW );
		if ( m_cmCullMode != _cmMode ) {
			m_cmCullMode = _cmMode;
			static const GLenum eModes[] = {
				GL_CW,
				GL_CCW
			};
			::glFrontFace( eModes[_cmMode] );
		}
	}
DirectX 9:
	/**
	 * Sets culling to on or off.
	 *
	 * \param _bVal Whether culling is enabled or disabled.
	 */
	LSE_INLINE LSVOID LSE_FCALL CDirectX9::SetCulling( LSBOOL _bVal ) {
		_bVal = _bVal != false;
		if ( m_bCulling != _bVal ) {
			m_bCulling = _bVal;
			if ( !_bVal ) {
				m_pd3dDevice->SetRenderState( D3DRS_CULLMODE, D3DCULL_NONE );
			}
			else {
				static D3DCULL cModes[] = {
					D3DCULL_CCW,
					D3DCULL_CW
				};
				m_pd3dDevice->SetRenderState( D3DRS_CULLMODE, cModes[m_cmCullMode] );
			}
		}
	}

	/**
	 * Sets the cull winding order.
	 *
	 * \param _cmMode The cull winding.
	 */
	LSE_INLINE LSVOID LSE_FCALL CDirectX9::SetCullMode( LSG_CULL_MODES _cmMode ) {
		assert( _cmMode == LSG_CM_CW || _cmMode == LSG_CM_CCW );
		if ( m_cmCullMode != _cmMode ) {
			m_cmCullMode = _cmMode;
			if ( m_bCulling ) {
				static const D3DCULL cCull[] = {
					D3DCULL_CCW,
					D3DCULL_CW
				};
				m_pd3dDevice->SetRenderState( D3DRS_CULLMODE, cCull[_cmMode] );
			}
		}
	}
Then in LSFFnd.h (the .h for the CFnd class), I do this:
#ifdef LSG_OGL
	typedef COpenGl						CBaseApi;
#elif defined( LSG_OGLES )
	typedef COpenGlEs					CBaseApi;
#elif defined( LSG_DX9 )
	typedef CDirectX9					CBaseApi;
#elif defined( LSG_DX10 )
	typedef CDirectX10					CBaseApi;
#elif defined( LSG_DX11 )
	typedef CDirectX11					CBaseApi;
#endif	// #ifdef LSG_OGL

[…]

	/**
	 * Sets culling to on or off.
	 *
	 * \param _bVal Whether culling is enabled or disabled.
	 */
	LSE_INLINE LSVOID LSE_FCALL CFnd::SetCulling( LSBOOL _bVal ) {
		CBaseApi::SetCulling( _bVal );
	}

	/**
	 * Sets the cull winding order.
	 *
	 * \param _cmMode The cull winding.
	 */
	LSE_INLINE LSVOID LSE_FCALL CFnd::SetCullMode( LSG_CULL_MODES _cmMode ) {
		CBaseApi::SetCullMode( _cmMode );
	}
Now the graphics module is doing its job. My models and terrain can call CFnd::SetCulling() and CFnd::SetCullMode() without having to worry about the underlying API.
To finish the example of my OpenGL texture class, here are the same methods on the DirectX 9 side of things:
	// == Various constructors.
	LSE_CALLCTOR CDirectX9StandardTexture::CDirectX9StandardTexture() :
		m_pd3dtTexture( NULL ) {
	}
	CDirectX9StandardTexture::~CDirectX9StandardTexture() {
		ResetApi();
	}
	/**
	 * Activate this texture in a given slot.
	 *
	 * \param _ui32Slot Slot in which to place this texture.
	 * \return Returns true if the texture is activated successfully.
	 */
	LSBOOL LSE_CALL CDirectX9StandardTexture::Activate( LSUINT32 _ui32Slot ) {
		if ( !m_pd3dtTexture ) { return false; }
		if ( m_ui32LastTextures[_ui32Slot] != m_ui32Id ) {
			m_ui32LastTextures[_ui32Slot] = m_ui32Id;
			if ( FAILED( CDirectX9::GetDirectX9Device()->SetTexture( _ui32Slot, m_pd3dtTexture ) ) ) { return false; }
			CDirectX9::SetMinMagMipFilters( _ui32Slot, m_tftDirectX9FilterType, m_tftDirectX9FilterType, m_bMipMaps ? D3DTEXF_LINEAR : D3DTEXF_NONE );
			CDirectX9::SetAnisotropy( _ui32Slot, m_bMipMaps ? (CFnd::GetMetrics().ui32MaxAniso >> 1UL) : 0UL );
			
			static const D3DTEXTUREADDRESS taAddress[] = {
				D3DTADDRESS_CLAMP,
				D3DTADDRESS_WRAP,
				D3DTADDRESS_MIRROR,
			};
			CDirectX9::SetTextureWrapModes( _ui32Slot, taAddress[m_wmWrap],
				taAddress[m_wmWrap] );
		}
		return true;
	}
	/**
	 * Reset everything related to Direct3D 9 textures.
	 */
	LSVOID LSE_CALL CDirectX9StandardTexture::ResetApi() {
		CDirectX9::SafeRelease( m_pd3dtTexture ); // Sets m_pd3dtTexture to NULL by itself.
	}
	/**
	 * Create a DirectX 9 texture and fill it with our texel data.  Mipmaps are generated if necessary.
	 *
	 * \return Returns true if the creation and filling of the texture succeeds.  False indicates a resource error.
	 */
	LSBOOL LSE_CALL CDirectX9StandardTexture::CreateApiTexture() {
		CDirectX9::SafeRelease( m_pd3dtTexture );

		[…] LOTS OF SETUP CODE OMITTED.

		// Reroute DXT textures.
		switch ( piUseMe->GetFormat() ) {
			case LSI_PF_DXT1 : {}
			case LSI_PF_DXT2 : {}
			case LSI_PF_DXT3 : {}
			case LSI_PF_DXT4 : {}
			case LSI_PF_DXT5 : {
				return CreateApiTextureDxt( (*piUseMe), ui32Usage, pPool );
			}
		}


		// Create a blank texture.
		HRESULT hRes = CDirectX9::GetDirectX9Device()->CreateTexture( piUseMe->GetWidth(), piUseMe->GetHeight(),
			m_bMipMaps ? 0UL : 1UL,
			ui32Usage,
			d3dfTypes[piUseMe->GetFormat()], pPool,
			&m_pd3dtTexture, NULL );
		if ( FAILED( hRes ) ) {
			return false;
		}
		

		[…] LOTS OF MIPMAP ETC. CODE OMITTED.

		return true;
	}
 

I'm sorry about all the questions; I appreciate these replies very much, but I'd just like to ask the "why"s so I can understand why one approach is better than another.

If you really want to tackle full and smooth input systems, I have described much of the process here.
 
 

what if the ID is the offset into an array of shared texture pointers?

It isn’t bad on performance, but what is the purpose?
This could be just a matter of personal taste so I won’t really say it is wrong etc., but I personally would not go this way, mostly for reasons a bit higher-level than this.

My preference is not to have a global texture manager in the first place. Textures used on models are almost never the same as those used on buildings and terrain, so using a manager to detect and avoid redundant texture loads between these systems separately makes sense, and improves performance (searches will take place on smaller lists) while offering a bit of flexibility.
Each manager is also tuned specifically for that purpose. The model library’s texture manager has just the basic set of features for preventing the same texture from being loaded into memory twice. The building library uses image packages similar to .WAD files (a collection of all the images for that building in one file) and its texture manager is designed to work with those specifically.
Finally, the terrain library’s texture manager is well equipped for streaming textures etc.


Ultimately that is why I prefer to just have a pointer to the texture itself. No limits that way.
But it isn’t a big deal if you want to use indices in an array.
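For comparison, the index-into-an-array-of-shared-pointers approach Norman describes might be sketched like this (all names hypothetical):

    // Hypothetical sketch of a texture table addressed by index.
    #include <cstddef>
    #include <memory>
    #include <vector>

    class CTexture2D;

    class CTextureTable {
    public :
        // Registers a texture and returns its index (the "ID").
        size_t Add( const std::shared_ptr<CTexture2D> &_ptTexture ) {
            m_vTextures.push_back( _ptTexture );
            return m_vTextures.size() - 1;
        }
        // O(1) lookup; the ID is just an offset, so no searching is needed.
        const std::shared_ptr<CTexture2D> & Get( size_t _sIndex ) const {
            return m_vTextures[_sIndex];
        }
    protected :
        std::vector<std::shared_ptr<CTexture2D> > m_vTextures;
    };

The indirection is cheap, but as noted above it buys little over simply storing the shared pointer directly.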


L. Spiro



#11 KaiserJohan   Members   -  Reputation: 1004


Posted 16 July 2013 - 04:56 AM

Out of curiosity: I have my renderer implementation creating meshes/textures, which are then attached to the scene objects' 'models' and 'materials'. If the renderer should only create the buffers, what kind of module should be responsible for knitting them together into textures (and then materials) and meshes?

I don't see a direct reason for the additional abstraction, hence my question.



#12 L. Spiro   Crossbones+   -  Reputation: 11939


Posted 16 July 2013 - 06:38 PM

They aren’t responsible for just the buffer parts.

 

The CVertexBuffer class also has an interface for filling in the vertices, normals, etc.  It has to be very dynamic and the implementation is non-trivial, since it must be as flexible as possible for all possible use-cases, so in this topic, for his first engine, I simplified my explanation.

 

For textures though a small distinction should be made.

A .PNG image could be loaded from disk, scaled up to the nearest power-of-2, and saved as a .BMP or your own custom format, all while never having been a texture seen by the GPU.

So in that case don’t confuse a CImage class with a CTexture2D (these are just example class names).

 

There are in fact so many things you can do to images specifically without them ever being used as textures or seen by the GPU that in my engine there is an entire library just for image loading and manipulating (LSImageLib).

My DXT compression tool links to LSImageLib.lib and not to LSGraphicsLib.lib, even though it is clearly meant only for making textures specifically for the GPU.

 

Just because images often get used by the GPU, do not conceptually stumble over the different responsibilities between images and textures.  A CImage has the task of loading and manipulating image data, possibly converting it to DXT etc., but has no idea what a GPU or texture is.  A CTexture2D’s job is specifically to take that CImage’s texel information and send that to the GPU.  It is a bridge between image data and GPU texture data.
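A sketch of that division of labor, with purely illustrative signatures:

    // Hypothetical sketch of the CImage/CTexture2D split described above.
    class CImage {
    public :
        bool         LoadPng( const char * _pcPath );    // CPU-side only.
        bool         ConvertToDxt1();                    // Still no GPU involved.
        const void * GetTexels() const;
        unsigned int GetWidth() const;
        unsigned int GetHeight() const;
    };

    class CTexture2D {
    public :
        // The bridge: reads the CImage's texel data and uploads it to the GPU.
        bool CreateFromImage( const CImage &_iImage );
    };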

 

 

That is how textures get knit together and vertex buffers do a little more than just handle the OpenGL ID given to them.

As far as meshes go, a “mesh” doesn’t even necessarily relate to everything that can be rendered and as such has no business being a concept of which the renderer is aware.

GeoClipmap terrain for example is not a mesh, nor is volumetric fog.

In fact, at most meshes are related to only a handful of different types of renderables.

 

Each type of renderable knows how it itself is supposed to be rendered, thus they are above the graphics library/module.  If in a hierarchy of engine modules/libraries the graphics module/library is higher than models, terrain, etc., it is exactly backwards.

 

I am not sure of to what abstraction you are referring.

The graphics library exposes primitive types such as vertex buffers and textures that can be created at will by anything.

Above that models, terrain, and volumetric fog create the resources they need from those exposed by the graphics library by themselves.

Above that a system for orchestrating an entire scene’s rendering, including frustum culling, render-target swapping, sorting of objects, post-processing, etc., needs to exist.

 

This higher-level system requires cooperation between other sub-systems.

The only library in your engine that actually knows about every other library and class is the main engine library itself.  Due to its all-encompassing scope (and indeed its very job is to bring all of the smaller sub-systems together and make them work together) it is the only place where a scene manager can exist, since that is necessarily a high-level and broad-scoped set of knowledge and responsibilities.

 

All objects in the scene are held within the scene manager.  It’s the only thing that understands that you have a camera and from it you want to render all the objects, perform post-processing, and end up with a final result that could either be sent to the display or further modified by another sub-system (what happens after it does its own job of rendering to the request back buffer is none of its concern—it only knows that it needs to orchestrate the rendering process).

 

People are usually not confused on the way logical updates happen, only on how renders happen, but they are exactly the same.  In logical updates, the scene manager uses its all-empowering knowledge to gather generic physics data on each object in the scene and hands that data off to the physics engine which will do its math, update positions, and perform a few callbacks on objects to handle events, updated positions, etc.

The physics engine doesn’t know what a model or terrain is.  It only knows about primitive types such as AABB’s, vectors, etc.

 

Nothing changes when we move over to rendering.

The scene manager does exactly the same thing.  It culls out-of-view objects with the help of another class (again, a process that only requires 6 planes for a camera frustum, an octree, and AABB’s—nothing specific to any type of scene object) and gathers relevant information on the remaining objects for rendering.

 

The graphics library can assist with rendering by sorting the objects, but that does not mean it knows what a model or terrain is.

It’s exactly like a physics library.  You gather from the models and terrain the small part of information necessary for the graphics library to sort objects in a render queue.  That means an array of texture ID’s, a shader ID, and depth.  It doesn’t pull any strings.  It is very low-level and it works with only abstract data.

 

Why?

How could Bullet Physics be implemented into so many projects and games if it knew what terrain and models were?

The physics library and graphics library are at exactly the same level in the overall engine hierarchy: Below models, terrain, and almost everything else.

 

 

L. Spiro




#13 Norman Barrows   Crossbones+   -  Reputation: 1835


Posted 17 July 2013 - 06:01 AM


 
Norman Barrows, on 15 Jul 2013 - 09:51 AM, said:
what if the ID is the offset into an array of shared texture pointers?

 

 

It isn’t bad on performance, but what is the purpose?
This could be just a matter of personal taste so I won’t really say it is wrong etc., but I personally would not go this way, mostly for reasons a bit higher-level than this.

My preference is not to have a global texture manager in the first place. Textures used on models are almost never the same as those used on buildings and terrain, so using a manager to detect and avoid redundant texture loads between these systems separately makes sense, and improves performance (searches will take place on smaller lists) while offering a bit of flexibility.
Each manager is also tuned specifically for that purpose. The model library’s texture manager has just the basic set of features for preventing the same texture from being loaded into memory twice. The building library uses image packages similar to .WAD files (a collection of all the images for that building in one file) and its texture manager is designed to work with those specifically.
Finally, the terrain library’s texture manager is well equipped for streaming textures etc.


Ultimately that is why I prefer to just have a pointer to the texture itself. No limits that way.
But it isn’t a big deal if you want to use indices in an array.


L. Spiro

 

i tend to take a CSG approach to drawing things (i used to use POVRAY to create game graphics before DirectX was invented, hence the CSG approach). i have textures, meshes, models, and animations. models are simply a collection of meshes and textures. models can be static or animated. everything is interchangeable. you can use any mesh with any texture. any model can use any mesh and any texture. any model can use any animation designed for any model with a similar limb layout.

 

in the previous version of my current project, i still associated all textures with specific models. there i used a texture file name to texture ID (index of an array of d3dxtextures) lookup at load time to avoid duplicate loads, similar to what you do for character textures. array indices were assigned in the order textures were loaded, and the model's texture IDs were set at load time. so in the model the texture was specified by filename, but in RAM it was specified as D3DXtexture[n], as in settex(tex[n]).

 

 

with the current version, i went to fully shared global resources. there's a pool of 300+ meshes and 300+ textures to work with to draw anything you want, or to make a model with. if you need a new mesh or texture, you just add it to the pool. then it will also be available for use for drawing other things or improving existing graphics. needless to say i can get a lot of reuse and mileage out of a given mesh or texture. as an example, the current project CAVEMAN 3.0 is a fps/rpg with a paleolithic setting, yet i do the entire game with just 2 rock meshes! <g>. reuse keeps the total meshes and textures low, so no paging or load screens, just one load of everything at game start, like it should be. now if we could just figure out how to eliminate graphics load times altogether... <g>

 

for a traditional non-CSG approach, splitting it based on data type makes sense. skinned meshes, memory images of static models, and streaming terrain chunks. 3 different types of data, 3 different texture managers (loaders). 



 


#14 corliss   Members   -  Reputation: 120


Posted 17 July 2013 - 09:16 AM

Out of curiosity: I have my renderer implementation creating meshes/textures, which are then attached to the scene objects' 'models' and 'materials'. If the renderer should only create the buffers, what kind of module should be responsible for knitting them together into textures (and then materials) and meshes?

I don't see a direct reason for the additional abstraction, hence my question.

 

Basically people do this to separate different concerns in the engine, making things more modular by reducing the dependencies between them. In the long run this helps with the maintainability of the code base and the ability to swap things in and out based on the different platforms the game runs on. In general, the concerns about loading a resource are:

  1. How does an asset get loaded from disk 
  2. How is the asset data from disk transformed into something the engine can use (let's call the end result a resource)
  3. How are resources aggregated together

 

Most people will put #1 and #2 together, and some put all three together. At my job #1 and #2 are separate but #2 and #3 are sometimes put together.

 

For #1 we have an asset loading system that handles the platform specifics of loading things off of disk/disc/memory cards/etc. It also tracks if we already loaded the resource, how many references there are to the asset, etc. It does not know how to take the raw asset data and convert it to something use-able by the game.

 

For #2 there is the idea of a translator that gets called by the asset loading system that will get the raw asset data and translates/transforms the raw data into the actual item that was being requested (a texture, a mesh, a scene, etc). The translator can ask the graphics system to create the resource or could pass the data to a factory object that can then build the requested resource. The translators are separate from the asset loading system and are registered with it.
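A toy version of that registration flow might look like the following; every name here is hypothetical, since the system described above is proprietary.

    // Hypothetical sketch: translators registered with an asset loading system.
    #include <map>
    #include <memory>
    #include <string>
    #include <vector>

    struct IResource { virtual ~IResource() {} };

    // A translator turns raw bytes from disk into an engine resource.
    struct ITranslator {
        virtual ~ITranslator() {}
        virtual std::shared_ptr<IResource> Translate(
            const std::vector<unsigned char> &_vRawData ) = 0;
    };

    class CAssetSystem {
    public :
        // Translators are registered per asset type ("texture", "mesh", ...).
        void RegisterTranslator( const std::string &_sType,
            const std::shared_ptr<ITranslator> &_ptTranslator ) {
            m_mTranslators[_sType] = _ptTranslator;
        }
        std::shared_ptr<IResource> Load( const std::string &_sType,
            const std::vector<unsigned char> &_vRawData ) {
            std::map<std::string, std::shared_ptr<ITranslator> >::iterator itTrans =
                m_mTranslators.find( _sType );
            if ( itTrans == m_mTranslators.end() ) { return std::shared_ptr<IResource>(); }
            return itTrans->second->Translate( _vRawData );
        }
    protected :
        std::map<std::string, std::shared_ptr<ITranslator> > m_mTranslators;
    };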

 

The factory system we use for scenes, materials, models, and meshes does both #2 and #3. Once the factories have the raw data given to them by the translator, they go about parsing it and asking the graphics system to create the necessary resources (mainly buffers) or requesting the asset system to load assets for them (such as textures and shaders), which handles question/concern #2. Then the factory calls setters on the appropriate class (a Mesh, Model, Material, or Scene class) to store the resources, handling #3.

 

The translator does not reference a factory directly but instead will create a class of the given type (a Scene for example) and will call a method on that class to load itself using the raw data passed in. The class internally uses the factory to do the work of parsing the data and everything else.

 

Here is an example flow for loading a material.

 

A scene will have in its binary data the material definitions it uses. So the scene factory will get the material chunk and ask the material factory to handle it. The material factory will parse the material data and figure out the names of the textures that the material is using, along with the various parameters that will be set on the shader. To load a texture, the material factory will request the asset system to load the texture by name and gets a handle to the asset in return. The material can wait for the texture to be loaded by checking the asset handle (a bad idea) or check it periodically until it is loaded (which we do).

 

In the meantime the asset loading system is getting the data and calls the texture translator. The texture translator will request the graphics system to create the texture (which returns an interface to a texture so that we can hide differences between different graphics APIs behind it) from the raw binary data and then puts a reference to the texture interface into the asset handle. This is the same handle that the material has a reference to.

 

When the material sees that the asset has been loaded, it gets the texture interface from the handle and adds it to its internal structure that holds references to the textures it needs.

 

Not all of our assets use factories. Textures and shaders are two examples where the translator will ask the graphics system to create the requested type instead of going through a factory. So the flow for loading these items is slightly different than the example. For example a render object that wants to use a particular shader will ask the asset loading system to load the shader. The render object will then periodically check if the shader is loaded by asking the asset handle. Once it is loaded and translated the render object  will get the shader from the asset handle and use it. No factories involved in this case.

 

So that is one way of putting things together without having the graphics system put it all together and associate it with the scene objects. I hope this gives you a different perspective on how things can be loaded and associated together in a game engine.

 

Not saying this is the best or right way, just a different way. :)



#15 BGB   Crossbones+   -  Reputation: 1545


Posted 17 July 2013 - 01:51 PM

in my case, things are divided up more like this:

 

there is currently a library which manages low-level image loading/saving stuff, and also things like DXTn encoding, ...

this was originally part of the renderer, but ended up being migrated into its own library, mostly because the ability to deal with a lot of this also turned out to be relevant to batch tools, and because the logic for one of my formats (my customized and extended JPEG variant) started getting a bit bulky.

 

another library, namely the "low-level renderer" mostly manages loading textures from disk into OpenGL textures (among other things, like dealing with things like fonts and similar, ...).

 

then there is the "high level renderer", which deals with materials ("shaders") as well as real shaders, 3D models, and world rendering.

internally, they are often called "shaders" partly as the system was originally partly influenced by the one in Quake3 (though more half-assed).

calling them "materials" makes more sense though, and creates less confusion with the use of vertex/fragment shaders, which the system also manages.

 

materials are kind of ugly, as partly they seem to belong more in the low-level renderer than in the high-level renderer.

also, the need to deal with them partly over on the server-end has led to a partially-redundant version of the system existing over in "common", which has also ended up with redundant copies of some of the 3D model loaders (as sometimes this is also relevant to the server-end).

 

this is partly annoying as currently the 2 versions of the material system use different material ID numbers, and it is necessary to call a function to map from one to another prior to binding the active material (lame...).

 

worse still, the material system has some hacking into dealing with parts of the custom-JPEG variants' layer system (in ways which expose format internals), and with making the whole video-texture thing work, which is basically an abstraction failure.

 

 

this would imply a better architecture being:

dedicated (renderer-independent) image library for low-level graphics stuff;

* and probably separating layer management from the specific codecs, and providing a clean API for managing multi-layer images, and also providing for I/P Frame images (it will also need to deal with video, and possibly animation *1). as-is, a lot of this is tied to the specific codecs. note: these are not standard codecs (*2).

 

common, which manages low-level materials and basic 3D model loading, ...

* as well as all the other stuff it does (console, cvars, voxel terrain, ...).

 

low-level renderer, which manages both textures, and probably also rendering materials, while still keeping textures and materials separate.

* sometimes one needs raw textures and not a full material.

 

high-level renderer, which deals with scene rendering, and provides the drawing logic for various kinds of models, ...

probably with some separation between the specific model file-formats and how they are drawn (as-is, there is some fair bit of overlap, better would probably be if a lot of format-specific stuff could be moved out of the renderer, say allowing the renderer to more concern itself with efficiently culling and drawing raw triangle meshes).

 

the uncertainty then is how to best separate game-related rendering concerns (world-structure, static vs skeletal models, ...) from the more general rendering process (major rendering passes, culling, drawing meshes, ...). as-is, all this is a bit of an ugly and tangled mess.

 

 

*1: basically, the whole issue of using real-time CPU-side layer composition, vs using multiple drawing passes on the GPU, vs rendering down the images offline and leaving them mostly as single-layer video frames (note: may still include separate layers for normal/bump/specular/luminance/... maps, which would be rendered down separately). some of this mostly came up during the initial (still early) development of a tool to handle 2D animation (vs the current strategy of doing most of this stuff manually frame-at-a-time in Paint.NET), where generally doing all this offline is currently the only really viable option.

 

*2: currently, I am still mostly still using my original expanded format (BTJ / "BGBTech JPEG"), mostly as my considered replacement "BTIC1" wasn't very good (speed was good, but size/quality was pretty bad, and the design was problematic, it was based on additional compression of DXTn). another codec was designed and partly implemented which I have called "BTIC2B", which is designed mostly to hopefully outperform BTJ regarding decode speed, but is otherwise (vaguely) "similar" to BTJ. some format tweaks were made, but I am not really sure it will actually be significantly faster.

 

more so, there are "bigger fish" as far as the profiler goes. ironically, my sound-mixing currently uses more time in the profiler than currently is used by video decoding (mostly dynamic reverb). as well as most of the time going into the renderer and voxel code, ... (stuff for culling, skeletal animation, checking for block-updates, ...).



#16 Danicco   Members   -  Reputation: 449


Posted 18 July 2013 - 11:14 AM

Thanks for all the advice, I've come up with a second and hopefully better layout, though I still have a few questions:

 

For general models:

class VertexBuffer
{
    public:
        unsigned int VBO_ID;
        float *vertexes, *normals, *uvs;
        int* indexes;
        //I was thinking of moving this to a separate class, but a class just for this single variable feels like overdoing it. I don't think I'll be playing with the index order for models after the initial loading
};
class Material
{
    public:
        float diffuse[4], ambient[4], specular[4];
        unsigned int TEXTURE_ID;
};
//This enum will define which shader to use for a specific Mesh
enum MeshType
{
    Normal, Glass, Etc
};
class Mesh
{
    public:
        VertexBuffer* VBO;
        Material* material;
        MeshType meshType;
};
class Model
{
    public:
        Mesh* mesh;
        int meshCount;
        void Load(); //comment below
        void Draw(); //comment below
};

 

I'd like to make Models load and draw themselves, but without giving them direct access to the disk or the reader, and I can't figure out how. I'm trying to decide which approach would be better to work with while developing, something like this:

 

void GameState::Initialize()
{
    //For this to work, I'd have to pass a pointer to my resource module to the model somehow
    Model* playerModel = new Model(&resourceModule);
    playerModel->Load("assetName");
 
    //Another way is to use a Model Manager, who has a pointer to the resource and can receive the reading stream
    Model* playerModel = modelManager.NewModel("assetName");    
}

 

The same for Draw(), as suggested, but I'm not sure how it would work:

 

void GameState::Draw()
{
    vector<GameObjects*> onScreenObjects = gameScene.GetOnScreenObjectsSorted();
    for(int i = 0; i < onScreenObjects.size(); i++)
    {
        //For this to work, the objects must have a pointer to my graphic module and call the functions
        onScreenObjects[i]->Draw(&gameEngine.graphicModule); 
 
        //Or I call it from the module itself
        gameEngine.graphicModule.Draw(onScreenObjects[i]);
    }
}

 

For the models to be able to draw themselves they have to "see" the graphic module somehow, but I'm not sure it's okay to send the pointer with each Draw() call. The models would be tightly knotted to the module, which I think would be nasty to upgrade/evolve, as a single change in the graphic module interface could mean lots of rewriting everywhere.

I know since it's my first engine I expect this to happen quite a lot, so I'm trying to make it easier for when it does.

 

For the Engine & Game:

class GameEngine
{
    public:
        virtual void Initialize() = 0; //This will call all initialize functions on the modules that are required for running
        virtual void Destroy();
        //Functions to load VBOs, activate Textures, and other draw calls
        GraphicModule graphics;
        //Functions to read/write to disk and find the proper stream for an asset by name or ID in my resource format files
        ResourceModule resources;
        //Still working on this
        SoundModule sounds;
        //Will just receive events from the OS and store them for use later, responsible for managing inputs be it keyboard & mouse, touch, etc
        InputModule inputs;
        //other modules to be added as needed
 
        //Responsible for creating window and managing input events depending on the OS, and other OS related stuff such as getting time
        OSModule oSystem;
};
class MyEngine : public GameEngine
{
    //implementation here
}
class Game
{
    public:
        //Using this engine interface I hope to be able to develop my engine further as I add new features and such, and still keep the games I'm currently developing working correctly with it.
        GameEngine* gameEngine;
        //Using LSpiro's post about Time
        GameTime* gameTime;
        //Using LSpiro's blog post about managing game state, this will be the current gameState
        GameState* gameState;
        //This will read from the inputModule only the inputs that are linked to a Game Action, time stamped, and will keep them in a buffer to be read from the Update() method
        GameInput* gameInput;
};

 

I know it isn't very specific, but I just want to get the general layout done in a good way so I don't bang my head later thinking "Why did I make it this way?"

 

Again, many thanks for the advice; I feel the idea is much more robust and organized now.



#17 L. Spiro   Crossbones+   -  Reputation: 11939


Posted 18 July 2013 - 04:57 PM

For general models:

Much better than before, but a few major problems remain.
#1: Don’t use “unsigned int” for your VBO ID. OpenGL/OpenGL ES give you a GLuint, and that is what you use. Never assume the typedef for GLuint will be “unsigned int” no matter how much you “know” it is.

#2: Don’t put indices on a vertex buffer. An index buffer is not the responsibility of the VertexBuffer class. That goes to the IndexBuffer class. Your comment mentions it will just be a single variable. Why won’t it be another IBO? Why use VBO’s but not IBO’s? There are plenty of things to add to an IndexBuffer class, not just a single pointer. Besides, the same index buffer can be used with many vertex buffers.

#3: If you are considering later adding support for DirectX (as you mentioned you might) then the GLuint VBO_ID member does not belong on the VertexBuffer class.
Organize your classes this way:
VertexBufferBase // Holds all data and methods that are the same for all implementations of VertexBuffer classes across all API’s you ever want to support.  This would be the in-memory form of the vertex buffer data before it has been sent to the GPU.
VertexBufferOpenGl : public VertexBufferBase // Holds the GLuint ID from OpenGL and provides implementations for methods related directly to OpenGL.  I showed you an example of this with my COpenGlStandardTexture::CreateApiTexture() method.
VertexBuffer : public VertexBufferOpenGl // All high-level functionality of a vertex buffer common to all vertex buffers on any API.  Its CreateVertexBuffer() method prepares a few things and then calls CreateApiVertexBuffer(), which will be implemented by VertexBufferOpenGl.

In short, if the class name does not have “OpenGl” in it, that class has no relationship what-so-ever to OpenGL.
Later when you add DirectX support, you can do this:
VertexBufferDirectX9 : public VertexBufferBase

#ifdef DX9
VertexBuffer : public VertexBufferDirectX9
#elif defined( OGL )
VertexBuffer : public VertexBufferOpenGl
#else
#error "Unrecognized API."
#endif
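Fleshed out minimally, that layering might look like this (a sketch; the method bodies and the GL include are illustrative):

    // Hypothetical sketch of the API-layered vertex-buffer hierarchy.
    #include <GL/gl.h>    // For GLuint; the exact include varies by platform.

    class VertexBufferBase {                       // No API knowledge at all.
    public :
        virtual ~VertexBufferBase() {}
    protected :
        virtual bool CreateApiVertexBuffer() = 0;  // Implemented per API.
        // In-memory vertex data, element descriptions, etc. would live here.
    };

    class VertexBufferOpenGl : public VertexBufferBase {
    protected :
        virtual bool CreateApiVertexBuffer();      // The only code that touches GL.
        GLuint m_uiVboId;                          // The only place the GLuint lives.
    };

    class VertexBuffer : public VertexBufferOpenGl {
    public :
        bool CreateVertexBuffer() {
            // Shared, API-agnostic preparation, then the API-specific step.
            return CreateApiVertexBuffer();
        }
    };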


I'd like to make Models load and draw themselves, but without giving them direct access to the disk or the reader, and I can't figure out how.

As I mentioned in my initial post, use streams.  They are also fairly easy to roll on your own.
Once you are using streams, the model can load its data from memory, files, networks, whatever, all while not knowing the source of the object.
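A minimal sketch of such a stream abstraction (hypothetical names):

    // Hypothetical sketch of a stream interface for model loading.
    #include <cstddef>
    #include <cstring>

    struct IStream {
        virtual ~IStream() {}
        virtual bool Read( void * _pvDst, size_t _sBytes ) = 0;
    };

    // A memory-backed stream; file- or network-backed streams would expose
    // the same interface, so Model::Load( IStream & ) never knows the source.
    class CMemStream : public IStream {
    public :
        CMemStream( const unsigned char * _pucData, size_t _sSize ) :
            m_pucData( _pucData ), m_sSize( _sSize ), m_sPos( 0 ) {}
        virtual bool Read( void * _pvDst, size_t _sBytes ) {
            if ( m_sPos + _sBytes > m_sSize ) { return false; }
            std::memcpy( _pvDst, m_pucData + m_sPos, _sBytes );
            m_sPos += _sBytes;
            return true;
        }
    protected :
        const unsigned char * m_pucData;
        size_t                m_sSize, m_sPos;
    };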

 

I'm trying to decide which approach would be better to work with while developing, something like this:
 
 

void GameState::Initialize()
{
    //For this to work, I'd have to pass a pointer to my resource module to the model somehow
    Model* playerModel = new Model(&resourceModule);
    playerModel->Load("assetName");
 
    //Another way is to use a Model Manager, who has a pointer to the resource and can receive the reading stream
    Model* playerModel = modelManager.NewModel("assetName");    
}

I have an answer for this question, but before I can even get to it you need to fix another major problem.
How do you plan to handle a game where there are, for example, a bunch of enemy spaceships, but only 3 different kinds?
There are 50 of type A, 35 of type B, and 3 of type C.
Are you really going to load the model data for type A 50 times?

A Model class is supposed to be shared data, by a higher-level class called ModelInstance.
Model needs ModelManager to prevent the same model file from being loaded more than once, so your “Another way” is the only way.
Then you create all of your ModelInstance objects. ModelManager hands out shared pointers to the Model class that was either created for that instance or already created for another instance previously.
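A minimal sketch of that sharing arrangement (hypothetical names):

    // Hypothetical sketch: many ModelInstance's share one loaded Model.
    #include <map>
    #include <memory>
    #include <string>

    class Model { /* Shared mesh/material data, loaded once per file. */ };

    class ModelManager {
    public :
        std::shared_ptr<Model> GetModel( const std::string &_sPath ) {
            std::shared_ptr<Model> pmModel = m_mModels[_sPath].lock();
            if ( !pmModel ) {                  // Not loaded yet (or unloaded).
                pmModel = std::make_shared<Model>();
                // ... load pmModel's data from _sPath via a stream ...
                m_mModels[_sPath] = pmModel;
            }
            return pmModel;
        }
    protected :
        // weak_ptr's let a model unload once its last instance is gone.
        std::map<std::string, std::weak_ptr<Model> > m_mModels;
    };

    class ModelInstance {
    public :
        explicit ModelInstance( const std::shared_ptr<Model> &_pmModel ) :
            m_pmModel( _pmModel ) {}
        // Per-instance state (orientation, animation time, ...) lives here.
    protected :
        std::shared_ptr<Model> m_pmModel;      // Shared with all other instances.
    };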




The same for Draw(), as suggested, but I'm not sure how it would work:
 
 

void GameState::Draw()
{
    vector<GameObjects*> onScreenObjects = gameScene.GetOnScreenObjectsSorted();
    for(int i = 0; i < onScreenObjects.size(); i++)
    {
        //For this to work, the objects must have a pointer to my graphic module and call the functions
        onScreenObjects[i]->Draw(&gameEngine.graphicModule); 
 
        //Or I call it from the module itself
        gameEngine.graphicModule.Draw(onScreenObjects[i]);
    }
}

Why would your GameState class make the final calls on the objects?
void GameState::Draw()
{
    gameScene.Draw(&gameEngine.graphicModule);
}
Done and done.
The scene manager can handle everything related to the objects in the scene. That is exactly its job.
Inside GameScene::Draw() it handles culling and orchestrates the entire rendering process.
And be 100% sure that you are not doing this:

vector<GameObjects*> onScreenObjects = gameScene.GetOnScreenObjectsSorted();
No reason to slowly copy a whole vector every frame.
Do this:
vector<GameObjects*> & onScreenObjects = gameScene.GetOnScreenObjectsSorted();
Or this:
vector<GameObjects*> onScreenObjects;
gameScene.GetOnScreenObjectsSorted(onScreenObjects);


For the models to be able to draw themselves they have to "see" the graphic module somehow, but I'm not sure it's okay to send the pointer with each Draw() call. The models would be tightly knotted to the module, which I think would be nasty to upgrade/evolve, as a single change in the graphic module interface could mean lots of rewriting everywhere.

If the graphics module instead knew what a model was, that would be ridiculously tight knotting.
If you have models and you have a graphics module and you want the models to be able to be drawn to your screen then you have to have some level of knotting no matter what.

Changing and evolving interfaces is a necessary part of the pain you are required to endure to join the ranks of a programmer.


L. Spiro

#18 socky   Members   -  Reputation: 135


Posted 21 July 2013 - 05:37 AM


Changing and evolving interfaces is a necessary part of the pain you are required to endure to join the ranks of a programmer.

 

Damn, that's a horribly accurate statement.





