
#5248942 My scene management failed

Posted by haegarr on 26 August 2015 - 04:01 AM

((EDIT: Damned editor is eating large portions of my post when I embed citations. So here we go without…))
You can ask ;) but any and all answers you get will not free you from making your own decisions and learning from what goes well and what goes wrong. There is no single right way, because a game is inherently complex. FWIW, I usually follow these guidelines:
1. Solve problems top-down; consider how the problem is embedded in the whole; divide a problem recursively until you get digestible pieces.
2. A unit of software should have one concern, or be responsible for one thing (however blurry that concern may be ;) ); if it has more than one, then point 1 hasn't been carried far enough.
3. Points 1 and 2 lead naturally to the use of interacting sub-systems; build them as a hierarchy: higher-level systems use lower-level systems and cooperate with same-level systems; lower-level systems should not use higher-level systems; and a system should not directly use another that is more than one level below it.
4. Use the data-driven approach where appropriate.
5. When OOP-ing, do use inheritance where appropriate but prefer composition over inheritance.
The game loop rules the coarse order in which things happen. You've seen in the "Game Engine Architecture" book (that of Jason Gregory, right?) that a specific order of animation steps on the level of the game loop helps to solve dependency problems. This is also true for other sub-systems. Blindly removing a game object without knowing whether another sub-system is still working with it would be disastrous. It would be just as bad to instantiate a new game object at a point in time where AI, animation, or collision detection have already run. Hence having a defined point in the game loop, e.g. just after input gathering, where all game object addition and removal w.r.t. the scene happens, means that game objects neither pop up nor disappear at the wrong moment. (So we have considered the environment in which spawning and removal happen, and have seen that it is beneficial to handle them synchronously.) Such deferred addition and removal requires that (a) we use a kind of job object and (b) we have a sub-system the jobs can be sent to. Here the scene management comes into play. Because the concern of the scene management is to manage all game objects that live in the scene, it is the sub-system that can process said jobs. Furthermore, this shows that the scene management has its own update point within the game loop.
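The deferred addition/removal described above can be sketched roughly like this. This is a minimal illustration, not a definitive design; `Scene`, `SceneManager`, and the job representation are hypothetical names I'm using for the example:

```cpp
#include <functional>
#include <vector>

struct GameObject { int id; };

class Scene {
public:
    void add(GameObject obj) { objects.push_back(obj); }
    void remove(int id) {
        for (std::size_t i = 0; i < objects.size(); ++i)
            if (objects[i].id == id) { objects.erase(objects.begin() + i); break; }
    }
    std::size_t count() const { return objects.size(); }
private:
    std::vector<GameObject> objects;
};

// Jobs may be recorded anywhere in the frame, but they are executed at one
// defined point in the game loop (e.g. right after input gathering).
class SceneManager {
public:
    void requestSpawn(GameObject obj) { jobs.push_back([obj](Scene& s) { s.add(obj); }); }
    void requestRemoval(int id)       { jobs.push_back([id](Scene& s) { s.remove(id); }); }

    // Called once per loop iteration; objects neither pop up nor disappear elsewhere.
    void processJobs(Scene& scene) {
        for (auto& job : jobs) job(scene);
        jobs.clear();
    }
private:
    std::vector<std::function<void(Scene&)>> jobs;
};
```

Between two calls of `processJobs` the set of living game objects is stable, which is exactly what the other sub-systems rely on.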
Now that we have introduced the scene management, should we put the scene graph into it? The scene graph has the purpose of propagating properties. This is a different concern than the existence of game objects, so no, it should not be part of the scene management as defined above. A scene graph is another structure used for another purpose. Similarly, say, an octree used for collision detection is its own structure, as is a render job queue, and so on. What structure does a scene manager then need? It depends on the API. Until now we have said that game objects should be added and removed. We can use a handle concept and IDs for naming game objects; then two arrays would be sufficient to hold the indirections and the game objects themselves. This would also be sufficient to serve game object retrieval requests.
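The two-array handle idea can be sketched as follows; the names are illustrative, and removal (which would typically swap-and-pop the dense array and patch the indirection entry) is left out for brevity:

```cpp
#include <cstdint>
#include <vector>

struct GameObject { int value; };

class GameObjectTable {
public:
    using Handle = std::uint32_t;

    Handle add(GameObject obj) {
        indirection.push_back(objects.size()); // handle -> dense index
        objects.push_back(obj);
        return static_cast<Handle>(indirection.size() - 1);
    }

    GameObject* get(Handle h) {
        if (h >= indirection.size()) return nullptr;
        return &objects[indirection[h]];
    }

private:
    std::vector<std::size_t> indirection; // sparse array: handle -> index
    std::vector<GameObject>  objects;     // dense array: the game objects themselves
};
```

The indirection lets the dense array be reorganized freely while handles held by clients stay valid.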
Resource management is another good example, because it occurs so often. What is the concern of resource management? The supply of resources for the game. The fact is that resources live persistently on mass storage, which means we work with two copies of each resource (the other one in RAM). Well, two copies are enough; we don't want more. The resource management should hide all these details. So, starting from a somewhat fuzzy description of the concern of resource management, we have identified two tasks belonging to it: resource caching and resource loading. Since these are two lower-level concerns of resource management, we should implement resource management as a front-end that defines the API for clients, and a back-end of two delegate objects, one for caching and one for loading. The front-end manager then uses the back-end objects to fulfill the API. This can be continued further down, e.g. the loader may use a file format wrapper to do the actual loading.
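As a rough sketch of that front-end/back-end split (all type names here are placeholders, and a real loader would of course read mass storage, possibly via a file format wrapper):

```cpp
#include <map>
#include <memory>
#include <string>

struct Resource { std::string data; };

class ResourceCache {
public:
    std::shared_ptr<Resource> find(const std::string& name) {
        auto it = entries.find(name);
        return it != entries.end() ? it->second : nullptr;
    }
    void store(const std::string& name, std::shared_ptr<Resource> r) { entries[name] = r; }
private:
    std::map<std::string, std::shared_ptr<Resource>> entries;
};

class ResourceLoader {
public:
    virtual std::shared_ptr<Resource> load(const std::string& name) {
        // Stand-in for actual file reading.
        return std::make_shared<Resource>(Resource{"<data of " + name + ">"});
    }
    virtual ~ResourceLoader() = default;
};

// The front-end defines the client API and delegates to the two back-ends.
class ResourceManager {
public:
    ResourceManager(ResourceCache& c, ResourceLoader& l) : cache(c), loader(l) {}
    std::shared_ptr<Resource> request(const std::string& name) {
        if (auto r = cache.find(name)) return r;   // the RAM copy already exists
        auto r = loader.load(name);                // otherwise load from mass storage
        cache.store(name, r);
        return r;
    }
private:
    ResourceCache&  cache;
    ResourceLoader& loader;
};
```

Clients only ever talk to `ResourceManager::request`; whether the resource came from the cache or the loader is hidden.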
Well, all the above is somewhat general; it gives no explicit answers to your questions. Feel free to ask more questions, but remember that specific answers need specific questions. :)

#5248719 My scene management failed

Posted by haegarr on 25 August 2015 - 03:19 AM

#1. I do have a cache of Mesh objects in SceneManager (see "MeshCache _meshCache;"). The scene nodes don't store MeshData objects, but point to them.

Meshes (i.e. the shared part) are resources. Caching them is a task of resource management. Scene management, on the other hand, is responsible for all the entities that are currently in the scene. Those are two distinct things.


#2. I basically have a SceneGraph object stored in SceneManager so that the user is able to get the pointer to that SceneGraph objects via sceneManager->getSceneGraphPtr(). Is that still wrong?

If I remember L. Spiro's usage of terms correctly, then scene management deals with the existence of entities in the scene, while a scene graph propagates properties. Those again are distinct concerns, and in this sense having a scene manager handle a scene graph would be wrong.


How do you handle animated models in your engine? [...]

That's the way I'm handling this (L. Spiro does it probably in another way)...


When a game object becomes part of the scene as the last step of instantiation, it is represented by a couple of objects. The objects store their own necessary parameters (i.e. those that are unique to the instance) and usually also refer to commonly used resources. Clients are allowed to overwrite references. Other clients are not interested in how the object is built as long as it provides the parameters they are interested in.


An animation clip is a resource; it can be used by more than a single game object. To actually be used, a game object needs an animation runtime object (similar to the MeshInstance mentioned above). The runtime object stores the current state of animation of that particular game object, while it refers to one or more animation clips to get access to the common animation definition data. The key point now is that when the animation sub-system runs during the progress of the game loop, it will alter parameters of some runtime objects (besides animation runtime objects). This may be a 3D skeleton pose, a sprite attachment, or whatever. Notice that a skeleton also has a runtime object besides a defining resource.


After the animation sub-system has run, all animated game objects remain still for the rest of the iteration, until the game loop wraps around. A subsequent (CPU-based) skinning process computes a new mesh. For the rendering sub-system there is no difference between animated and non-animated game objects, because the rendering just looks at the relevant parameters and finds a mesh, a sprite, or whatever.
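The resource/runtime-object split for animation might look roughly like this; `AnimationClip` and `AnimationInstance` are hypothetical names, and the clip data is reduced to key times for brevity:

```cpp
#include <memory>
#include <vector>

struct AnimationClip {                 // shared resource: common definition data
    std::vector<float> keyTimes;
    float duration() const { return keyTimes.empty() ? 0.f : keyTimes.back(); }
};

struct AnimationInstance {             // per-game-object runtime state
    std::shared_ptr<const AnimationClip> clip; // refers to the common definition
    float localTime = 0.f;                     // unique to this instance

    void advance(float dt) {
        localTime += dt;
        if (clip && clip->duration() > 0.f)
            while (localTime >= clip->duration())
                localTime -= clip->duration(); // loop the clip
    }
};
```

Many instances can share one clip while each keeps its own playback state.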


[…] What kind of files do you have? I guess I can create my own file formats for animated meshes (barbarian.mesh, barbarian.skel, barbarian.animdata) but is it really needed?

The file representation is detached from the in-memory representation, because the requirements are different. Okay, the file has to store data which later occur as resources. But whether they are stored in individual files or archive files, whether they are compressed or not, whether they are grouped into load bundles, … is a question for the resource loading sub-system, which itself is a lower-level part of the resource management system.


It is not necessary to create your own file format as long as you are well pleased with an existing one. As soon as you want to gain some loading performance by using in-place loading, support load bundles, support streaming, want a unified resource loader, want to obfuscate your resources, … you probably need to define your own format, or look for usable file formats specifically made for game content.

#5248531 Programming scientific GUI's, data and gui layout?

Posted by haegarr on 24 August 2015 - 08:17 AM

Well, software patterns are somewhat generic by definition; otherwise they would be available as a library. Besides that, architectural patterns like MVC, MVP, MVVM, and the more advanced ones are actually what to look at for desktop applications, including scientific ones. Those patterns are about the separation of business data, their representation, and their manipulation. I suggest looking for comparisons, because such comparisons should hint especially at typical use cases. Nevertheless, don't forget that patterns are just guidelines; don't hesitate to diverge where appropriate.


Totally unrelated to the GUI architecture is the question of business data management. You should avoid storing original and derived data in the same object. Treat it like variables in a programming language: you have a variable with the original data, you apply an operator, and the result is stored in another variable. This is fine because you don't know which operator will be applied, how often an operation will be applied, or to which data it will be applied. So you need to give that storage system maximum flexibility. Maybe an operator is allowed to overwrite its source (see below); but the general case of writing to a new variable should always be available, and it must be available if the format of the output is different anyway.


Regarding the operators themselves … it depends. Do you need a history of applied operations? Need an undo be supported? Do you need macros / operation recording? Should the operations be re-applied if input data changes? Do you need a type system to distinguish data types?

#5248503 2D OpenGL lightning using shaders

Posted by haegarr on 24 August 2015 - 06:22 AM

Should it be like that ? I've tried so many modifications to my shaders and played with them but could not get what i want..

The texture and alpha should be as when the object is fully lit. It is the light that makes a scene bright or dark, not the scenery.


Then, in the shader, compute / sample a light value (some gray, usually, where black means unlit and white means fully lit) and multiply (component by component) that light value with the texture color value. Regardless of the texture color value, multiplying by the extreme (0,0,0,1) (no light) will yield black, and multiplying by the extreme (1,1,1,1) (full light) will yield the texture's color; anything in between will yield shades of the texture color.
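The component-wise multiply, written out in plain C++ for clarity (in practice this happens per fragment in the shader; `Color` and `applyLight` are just names for this sketch):

```cpp
#include <array>

using Color = std::array<float, 4>; // RGBA, components in [0,1]

// Modulate the texel by the light value, component by component.
Color applyLight(const Color& texel, const Color& light) {
    return { texel[0] * light[0],
             texel[1] * light[1],
             texel[2] * light[2],
             texel[3] * light[3] };
}
```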

#5248034 RPG, Engines and Frustration

Posted by haegarr on 21 August 2015 - 06:57 AM

So, where should I start ? Can anybody give me a path to follow that the final destination is an RPG like The Legend of Zelda: A Link to the Past ?
You said something about Tetris. How can I start with tetris? Should I use Unity2D for that? I have no idea.

As L. Spiro stated, don't start with the desired project! Perhaps don't even start with a real project at all. If you start unprepared right into the desired project you'll get frustrated very quickly due to all the unknown nitty-gritty that needs to be handled, and that will definitely jeopardize the project.


Since your goal is to finish a game, using not just an existing engine but a full tool like Unity (or Unreal, …) is IMHO the way to go. There are plenty of (video) tutorials for Unity and Unreal. Do not just watch them but gain your own experience by re-enacting them (do not restrict yourself to tutorials related to RPG stuff here). That way you get a feeling for the tool and how things are expected to work within it. After doing so for some time, start to bring in your own ideas / variations. Then start your own small game project. And only after that has been finished (it need not be polished but, well, playable), plan out your desired project with your then-existing experience and finally go for it.


Just my 2 cents, you know :)

#5247392 Quaternions for FPS Camera?

Posted by haegarr on 18 August 2015 - 08:37 AM

Right now I'm just playing with the camera. I've added it to the scene at 0, 0, 0. I've gotten it to rotate using camera.rotation.x/y. From what I've seen, other people seem to implement a vector that the camera "looks at" (using the .lookAt function). [...]

The look-at function is useful to align the camera once or to track an object. It is just one possibility for controlling the camera.


IMO you should understand it like this: the camera is an object in the world similar to a game object. It has a placement (position and orientation) and additionally a field of view and other view-related stuff. The camera by itself does not change its placement. You can then apply the functionality of objects like LookingAt or Tracking or Parenting or … to control the placement in part or totally. That way gives you maximum flexibility.



[…] Really I just want to create a camera that allows me to look around in the scene. I put a cube at 0, 0, 5 and just want to be able to move the camera around and look at the cube from different angles. For the development I'm using Threejs (threejs.org).

Well, that does not sound like an FPS camera but like a free camera, perhaps with a tracking constraint control.
As said, I'd implement this as a camera object with a placement. The placement should be able to provide a matrix that stores the "local to global" spatial transform. The placement should provide an API for setting and altering position and orientation separately. Then I'd implement a camera control that processes input, generates movement from it, and applies it to the attached Placement (which, of course, belongs to the camera in this case).
I'd further implement a Tracking control that is parametrized with (a) a Placement that is to be tracked (its position, to be precise) and (b) a Placement that is the target to be altered (its orientation, to be precise). The math is such that the difference vector from the target's Placement.position to the tracked Placement.position is used (after normalization) as the forward vector of the typical look-at functionality. What then needs to be done is to invoke the control every time the Placement.position of the target object has settled after being altered.
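The core of that tracking math can be sketched like this. `Vec3` and `Placement` are minimal stand-ins, and the full look-at (building up/right vectors and an orientation from the forward vector) is omitted:

```cpp
#include <cmath>

struct Vec3 { float x, y, z; };

Vec3 operator-(const Vec3& a, const Vec3& b) { return {a.x - b.x, a.y - b.y, a.z - b.z}; }

float length(const Vec3& v) { return std::sqrt(v.x * v.x + v.y * v.y + v.z * v.z); }

Vec3 normalized(const Vec3& v) {
    float len = length(v);
    return len > 0.f ? Vec3{v.x / len, v.y / len, v.z / len} : Vec3{0.f, 0.f, 0.f};
}

struct Placement { Vec3 position; Vec3 forward; };

// Invoked after the target's position has settled for this frame: the
// normalized difference vector towards the tracked placement becomes
// the target's forward vector.
void track(Placement& target, const Placement& tracked) {
    target.forward = normalized(tracked.position - target.position);
}
```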

#5247371 How to revert the scale of a matrix?

Posted by haegarr on 18 August 2015 - 06:28 AM

It depends. In general you cannot reconstruct, from the matrix alone, the history of how the one available matrix was generated. You can just decompose the matrix into an equivalent translational and scaling transform (leaving rotation aside, as mentioned in the OP), replace the transform of interest with its desired substitute, and re-compose, so that the translational part is not affected. But if the composition was done so that the position was affected by a scaling (as e.g. in S1 * T * S2), then you cannot eliminate scaling totally (AFAIK).


So in your case decomposition is relatively easy, because in a homogeneous 3x3 matrix without rotation there is an embedded 2x2 matrix that is affected by scaling only, not by translation. You get this sub-matrix if you strip the row and the column of the 3x3 matrix where the homogeneous "1" is located. The resulting sub-matrix must be a diagonal matrix, i.e. only the values at [0][0] and [1][1] differ from zero. These two values are in fact the scaling factors along the x and y axis directions, respectively. Hence setting both of these values to 1 will do the trick.
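In code, the trick comes down to two assignments. This sketch assumes a row-major homogeneous 2D matrix with the translation in the last column and, as stated above, no rotation:

```cpp
struct Mat3 {
    float m[3][3];
};

void stripScale(Mat3& mat) {
    // The embedded 2x2 sub-matrix (rows/columns 0..1) is affected by scaling
    // only; with no rotation it is diagonal, so the scale factors sit at
    // [0][0] and [1][1]. Setting them to 1 removes the scaling while the
    // translation column stays untouched.
    mat.m[0][0] = 1.f;
    mat.m[1][1] = 1.f;
}
```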

#5247335 Quaternions for FPS Camera?

Posted by haegarr on 18 August 2015 - 02:43 AM

Well, on the lowest level you just need setters for the camera's placement, as usual for any game object. However, it is the camera control where all the nasty math really happens. For a 1st (or 3rd) person camera, the placement of the camera is bound to the player's game object by parenting. This is also standard stuff, well solved by concatenating matrices. There is usually some freedom in rotation so that the player can look up / down and left / right relative to the orientation of the game object. On the other hand, the roll is often constrained to zero. Together this gives the local orientation. The local position may be fixed for a 1st person camera (as opposed to 3rd person cameras). There may be additions working on the local position, like camera shaking or bobbing (a controversial topic though), depending on the circumstances. Without knowing how the camera should actually behave, giving tips for its implementation (beyond the standard placement) is almost meaningless. So could you (the OP) describe what exactly you want, how your general game object placement works, and what you have so far?

#5246668 My graphic objects (Gfx)

Posted by haegarr on 15 August 2015 - 05:35 AM

What do you think, please?  :-o

Well, I honestly have no clue what you really want to do. "Graphic objects are vectors of vectors of graphic objects" does not sound like a sane concept to me. It appears to me that you mix several aspects (at least game object composition, its graphical representation, its world placement, its rendering parametrization, visibility culling results, and even its temporal coherency) into a single object. That highly violates the single responsibility principle.


There are named constructors for each special type.

I've never heard of "named constructors" and "untitled constructors" … where does that come from? Usually the term "constructor" is used for what you call an "untitled constructor", and the others are usually called "factory methods" or perhaps "generators" or "creators".


2) Collections of direct objects work pleasantly faster than pointers.

Not necessarily. Swapping pointers is definitely more efficient than swapping any non-POD objects.


4) Unions is a way to not just alternate data types, but also name members in appropriate, readable way. (This is not intended union usage though)

Well, your usage of unions in the OP's code snippet is mostly meaningless because (a) you use anonymous unions, so they do not add to variable names, and (b) they include a single variable only. The only union where the latter is not the case is the one that includes "map" and "gfx_container". However, the factory method for sprites hints at the usage of both "map" and "base" for sprites, and that would corrupt your union data!


BTW: all of your factory methods are incomplete because they actually do nothing. Furthermore, they are declared to return a pointer but return nothing at all.


Let's say, I planned 10 possible rendering orders, so my vector consists of 10 vectors.

What exactly is a "planned rendering order"?

#5245931 The best way to manage sprite sheet animations?

Posted by haegarr on 12 August 2015 - 04:03 AM

Any complexity inherent to a problem cannot be removed. What we usually do is partition a complex problem into a couple of less complex problems that are interconnected. When I look at the code in the above post I see that decision making, state control, animation playback, as well as transitions are all handled in the same place. Moreover, the approach is less data-driven than it could be. Don't get me wrong: there is no silver bullet that makes this kind of thing a breeze; one will always have to run a bunch of condition checks in one form or another as long as complex situation handling is wanted. And of course we want to please the players, don't we?


a) In this code snippet

 if (left_down)
 {
    if (facing == FACE_LEFT) { animationPlayer.PlayAnimation(rollForwardAnimation); velocity.X = max_speed.X * -2; }
    if (facing == FACE_RIGHT) { animationPlayer.PlayAnimationBackward(rollForwardAnimation); velocity.X = max_speed.X * -2; }
    rolling = true;
    jump_type = 1;
 }

because "rolling" is set regardless of the value of "facing", I assume that "FACE_LEFT" and "FACE_RIGHT" are the only two possible values. Even if not, the hint I want to give is valid anyway. Well, the inner distinction is made purely to be able to invoke either a forward playback or a reverse playback (the other stuff in the statement blocks is identical). Now, having a 3rd variant of API invocation like

void AnimationPlayer::play( AnimationID selection, bool reverse )

would unify both cases by introducing an additional variable (data instead of code), so that an invocation like

 if (left_down)
 {
    animationPlayer.play(rollForwardAnimation, facing == FACE_RIGHT);
    velocity.X = max_speed.X * -2;
    rolling = true;
    jump_type = 1;
 }

would be sufficient.


b) Another issue with the above example is that the code mixes a behavior (oh, we want to roll forward / backward) with knowledge about the animation details (oh, we do not have an animation for backward rolling, so we need to play forward rolling in reverse order). Why isn't there a definition for a rollBackwardAnimation clip, where the details of which frames and which playback direction are hidden in the definition of the animation clip?

 if (left_down)
 {
    animationPlayer.play(facing == FACE_LEFT ? rollForwardAnimation : rollBackwardAnimation);
    velocity.X = max_speed.X * -2;
    rolling = true;
    jump_type = 1;
 }


c) Now, how often is AnimationPlayer::play(…) invoked in a single run through the if-then hell? If it is more than once, then obviously some waste happens. IMHO, however, even one invocation is too much, with a similar reasoning as above: decoupling. If possible, the controller should not know about the animation system. From a logical point of view the animation happens later in the game loop, when all controlling has already been done.

 if (left_down)
 {
    movement = facing == FACE_LEFT ? rollForwardAnimation : rollBackwardAnimation;
    velocity.X = max_speed.X * -2;
    rolling = true;
    jump_type = 1;
 }


d) There is some redundancy in the data here. We have "facing" with 2 possible states, "movement" with 3 possible states (including neither forward nor backward rolling), and velocity.X with 3 possible states (if reduced to being either less than 0, i.e. to the left; greater than 0, i.e. to the right; or equal to 0). Oh, and also "rolling". I do not know all the nitty-gritty of the implementation, but from what can be seen it seems to me that

 if (left_down)
 {
    movement = rolling;
    velocity.X = max_speed.X * -2;
    jump_type = 1;
 }

would be sufficient here.


e) Often abstracting the "physical" input is also beneficial. For example, the "left_down" and "right_down" variables smell like physical input to me. If this were converted in advance into something along the lines of

bool actionRequested = left_down || right_down;
int direction = left_down ? -1 : right_down ? +1 : 0;

then the example would come down to

 if (actionRequested)
 {
    movement = rolling;
    velocity.X = max_speed.X * 2 * direction;
    jump_type = 1;
 }

and would already include the original if(right_down) branch. 


Of course, now the animation control has to look at the kind of movement and the velocity to be able to pick the correct animation clip. But notice that this stuff has moved out of the character control code, and hence we've followed the problem partitioning principle: we've divided up the complexity; one part is handled in the player control code and the other in the animation control code. Each one has noticeably less complexity than the overall problem.


The above example may not work well as substitute in your exact real project, but it should just show a way of how to approach such a problem in principle. Hopefully it helps :)

#5244605 Resource management questions

Posted by haegarr on 05 August 2015 - 02:05 AM

1. Obviously each resource is loaded separately and works differently, so I'm not sure if I have to create a generic cache class for all of them. I'm talking about something like this:

  Cache _soundCache;
  Cache _meshCache;
  Cache _textureCache;

Also some people say I should have a "ResourceLoader" class that manages loading from disk. The thing is that some resources like meshes and images are loaded from disk with a different library. For PNG images for example I use libpng, which manages loading .png images by itself. How am I supposed to encapsulate this inside a ResourceLoader class?

In a more sophisticated solution (for a game runtime, not for an editor), resources would not be loaded using generic file formats but using a format that is more suitable for game runtimes (which basically means it does not enforce recoding of the data, and even reduces interpretation at this level to a minimum).


If you don't have such a file format but several generic ones, then you should also think about several loaders. Lordadmiral Drake has shown this in the pseudocode above.


2. My second and more important question should be pretty simple but I find it very hard to answer.
Say I have this final GameEngine class. Where do I put all the resource caches? Do they need to be private data members in GameEngine?
If the answer is yes, then obviously a Mesh resource will need a pointer to the Texture cache, because it loads textures. That means I have to pass pointers through classes all day long.
What options do I have here? What's the most viable approach?

IMHO: the approach you describe here does not separate the concerns enough. That is probably the reason for your problem.


A mesh resource should not load textures, because its purpose is to define a shape. A mesh should not even load itself. A mesh should not even name textures so that a loader can load them. Instead, a (say) model resource should be available that serves as a declaration of which components the model needs to be usable (like a bill of materials). When the model is about to be instantiated in the world, the scene management requests it from the content management. The content management requests all listed resources from the resource management. Any addressed resource manager then queries its associated cache and eventually instructs its associated resource loader(s) to load the resource data. The content management returns the instance of the model as soon as all resources (that are marked as mandatory) are available (or, if supported, at least surrogates are available).
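The "bill of materials" idea might look roughly like this; `ResourceRef` and `ModelResource` are hypothetical names, and the actual cache/loader machinery behind the check is omitted:

```cpp
#include <string>
#include <vector>

struct ResourceRef {
    std::string name;      // e.g. a mesh, texture, or skeleton resource
    bool mandatory;        // instance may only be returned once these exist
};

struct ModelResource {
    std::string name;
    std::vector<ResourceRef> components; // declaration only; no loading here
};

// Stand-in for the content management's check before returning an instance.
bool allMandatoryAvailable(const ModelResource& model,
                           const std::vector<std::string>& loaded) {
    for (const auto& ref : model.components) {
        if (!ref.mandatory) continue;
        bool found = false;
        for (const auto& n : loaded)
            if (n == ref.name) { found = true; break; }
        if (!found) return false;
    }
    return true;
}
```

The model resource itself never touches the disk; it only declares what the resource managers have to supply.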


So, when the game loop runs, it periodically enters a phase where the scene is managed w.r.t. the living instances. Here (somewhere before input and AI control happens) spawning and removal are done. Part of spawning is to make a new instance of the model, and hence to request all the resources belonging to it. This is the place where the content manager needs to be accessed. The content manager hides the fact that each kind of resource has its own resource manager. Further, the resource managers hide whether / how caching and loading are done.


BTW: I use the term "manager" with the meaning of a front-end that decides and delegates the actual tasks to attached helper objects. Well, that's the way managers work, isn't it? ;)

#5244600 Curved and sloped 3D tiles

Posted by haegarr on 05 August 2015 - 01:11 AM

Tiles need not be flat, not even at their perimeters. Make sure that neighboring tiles have the same kind of edge at the common seam, i.e. they use the same sequence of intermediate vertex positions. If you do this in the manner of module assembly you still have a limited set of shapes, which probably makes level design easier.


Some details to be considered are:


* There needs to be height information for every point where a dynamic game entity can be placed.

* Depending on the camera height and pitch as well as the modeled slope, depth testing and/or backface culling may be needed.

* The camera height needs to be adjusted if the player character is about to leave the upper display edge.

* Simple top-down texture mapping looks ugly as the slope gets steeper, due to texel stretching. You may want to support extra texture tile sizes.

#5244504 Collectable Items, Persistence and ECS Architectures

Posted by haegarr on 04 August 2015 - 09:17 AM

Is this is a sensible approach? Is this something that should sit outside of the ECS? [...]

IMHO your approach is fine because it allows the level designer to deal with the problem in a consistent way (assuming that s/he is familiar with ECS, of course ;) ) and it seems to fit smoothly into the technical point of view, too.


The only issue I personally would have with that particular solution is that the term "persistence" is broader. I.e. if I encountered a Persistence component I would understand it as (perhaps even a collection of) attributes that persist across a room switch. It is not clear why "persistence" …


[...]How have other people tackled this problem?

Well, I do not have this exact problem, because in my engine the overall state is persistent anyway. So it includes the persistence but without the need for an explicit Persistence component. A spawn point that is never disabled and is triggered on an "entering room" signal will then spawn each time, and one that has been disabled will not.

#5244457 Normal map, height map, bump map... are all the same thing?

Posted by haegarr on 04 August 2015 - 03:04 AM

Most things have already been mentioned in the posts above. Well, I think there perhaps needs to be a little more emphasis on the difference between a map as a parametrization and the mapping as the applied technique.


A height map is an array of pixels with a single channel (hence it appears as a grayscale image), where each pixel denotes an elevation height w.r.t. a reference surface. It can be used with the bump mapping technique (and hence the map itself is also called a bump map) to simulate bumps and wrinkles on a surface without changing the geometry of the surface, not even temporarily. It can also be used for the displacement mapping technique, where geometry is actually changed. Because a height map has only one channel, the displacement is restricted (usually along the normal of the surface onto which the mapping is applied). When applied to terrain (originally a flat horizontal surface), the height map is sometimes also called an elevation map.


A full displacement map can be used, too, so that 3 channels are available, e.g. one for each of the directions normal, tangent, and bi-tangent to the surface.


A normal map is similar to a full displacement map, but instead of a displacement offset there is just a direction stored in the 3 channels. It cannot be used for displacement, because it lacks a distance. It can, however, be used to simulate surface bumps and wrinkles with the normal mapping technique. In fact, when doing bump mapping you need to compute the normal distortion from the gradient of the bump map pixels, and hence more or less convert the bump map into a normal map on the fly.
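That gradient-to-normal conversion can be sketched like this; scale factors and edge handling are simplified assumptions, and the names are illustrative:

```cpp
#include <cmath>
#include <vector>

struct Normal { float x, y, z; };

// Derive a surface normal at (x, y) from the gradient of neighboring
// height samples, using central differences and clamping at the borders.
Normal normalFromHeights(const std::vector<std::vector<float>>& h, int x, int y) {
    int w = (int)h[0].size();
    int rows = (int)h.size();
    int x0 = x > 0 ? x - 1 : x, x1 = x < w - 1 ? x + 1 : x;
    int y0 = y > 0 ? y - 1 : y, y1 = y < rows - 1 ? y + 1 : y;
    float dx = h[y][x1] - h[y][x0];
    float dy = h[y1][x] - h[y0][x];
    // The un-normalized normal is (-dx, -dy, 1); normalize it.
    float len = std::sqrt(dx * dx + dy * dy + 1.f);
    return { -dx / len, -dy / len, 1.f / len };
}
```

A flat height map yields the undistorted surface normal (0, 0, 1), and any slope in the height data tilts the normal accordingly.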

#5244247 OpenGL- Render to texture- a specific area of screen

Posted by haegarr on 03 August 2015 - 01:24 AM

does i still have to work with projection matrix?

Yes, you still have to do a projection. In this respect an FBO is no different from the default framebuffer. Moreover, you also need to find a suitable view matrix.


can you give me some trick how to do so and which way should i follow exactly?

As already mentioned, you need the full MVP matrix. You can treat the situation as its own scene, i.e. the sheet of paper is the only object in the world. That allows you to set the camera to a standard position and orientation and to place the object somewhere in front of it on the z axis. As a result both M and V are easy to build, where V is just the identity matrix or perhaps just some scaling.


Regarding P, you need to know the left, right, top, bottom, near, and far planes of the view volume, all in view space (which, if you follow the above, is the same as world space, perhaps with the exception of scaling). Now you can build P by either setting it element by element (see e.g. here for details), or by using one of the usual glm or OpenGL / GLU routines.
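For an orthographic projection, the element-by-element construction could look like this sketch. I'm assuming a column-major 4x4 layout (as OpenGL expects) and the same element values that glOrtho produces:

```cpp
struct Mat4 {
    float m[16]; // column-major, as OpenGL expects
};

// Build an orthographic projection that maps the view volume
// [l,r] x [b,t] x [n,f] (view space) to clip space.
Mat4 ortho(float l, float r, float b, float t, float n, float f) {
    Mat4 p = {};
    p.m[0]  =  2.f / (r - l);        // scale x
    p.m[5]  =  2.f / (t - b);        // scale y
    p.m[10] = -2.f / (f - n);        // scale z (flips into clip space)
    p.m[12] = -(r + l) / (r - l);    // translate x
    p.m[13] = -(t + b) / (t - b);    // translate y
    p.m[14] = -(f + n) / (f - n);    // translate z
    p.m[15] =  1.f;
    return p;
}
```

With a symmetric volume like (-1, 1, -1, 1, -1, 1) this degenerates to (almost) the identity, which is a handy sanity check.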


It's not clear to me how far you have progressed, and which kind of projection you use. In which co-ordinate system have you calculated the red borders?


im sorry for my Novice questions.

There is nothing that needs to be excused here :)