
haegarr

Member Since 10 Oct 2005

#5168581 Rotating camera with mouse

Posted by haegarr on Today, 01:07 AM

The problem is caused by mixing absolute and relative mouse co-ordinates. In the mainLoop you read the relative co-ordinates xrel and yrel, i.e. the "mouse has moved by" co-ordinates. In the Camera::Rotate routine you then subtract these from the half width and half height, respectively, which hints that you actually wanted absolute co-ordinates. Because your mouse motion is always smaller than half the width or height, you always get positive values in the angle calculations, and hence your camera always moves in the same direction.

 

So, try to use e.motion.x and e.motion.y in the mainLoop. This should give you absolute co-ordinates. The effect is, of course, that the position of the mouse pointer, not its motion, defines your camera rotation angles. If that is not what you want, then follow Waterlimon's advice: "remove the width/2 and height/2 parts and it might work like you intended it to".
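
A minimal sketch of the motion-based variant (the SDL2 calls are as documented; the Camera::Rotate signature and the sensitivity factor are assumptions about your code, not known from the thread):

const float sensitivity = 0.005f;
SDL_Event e;
while( SDL_PollEvent( &e ) ) {
    if( e.type == SDL_MOUSEMOTION ) {
        // xrel/yrel are already "moved by" deltas, so feed them in directly,
        // without any width/2 or height/2 terms
        camera.Rotate( e.motion.xrel * sensitivity, e.motion.yrel * sensitivity );
    }
}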




#5168381 Advanced game structure

Posted by haegarr on Yesterday, 09:07 AM

... I thought of a Command pattern for player input and controlling enemies with an AI controller. That would let me easily attach the player actions (attack, ...) to Keys or MouseButtons. ...

The Command pattern, if you mean exactly the one from the Gang of Four, is overkill in this case. It is fine to abstract input (in the case of the player character) and to "objectize" any character action, and the Command pattern goes in that direction. However, other features of the pattern are not so good here. You usually don't need a history of past commands (no undo, and usually no playback). You will not queue up commands and wait for the previous ones to complete (because players will complain that your game is not responsive). You will not perform the actual action in some virtual Command::perform().

 

On the other hand, if you understand Command as a lightweight instruction for interpretation by e.g. the animation sub-system or whatever, then it's okay.
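
A sketch of this "lightweight instruction" reading (the names here are illustrative assumptions, not a prescribed design): plain data describing what was requested, with no virtual perform(), no undo history, and no waiting queue.

#include <cstdint>

using EntityId = std::uint32_t;

struct AttackInstruction {
    EntityId attacker;   // who performs the action
    EntityId target;     // who is being attacked
};

An input mapper emits such an instruction when the bound key or mouse button is pressed; the combat or animation sub-system consumes it during its regular update step and starts the actual action immediately.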

 

... But one problem I actually have is that I want an easy way to add interaction between the player and world objects like treasures, enemies, traps, NPCs and so on. I thought of an event pipeline (for example the player fires an event before he moves, and objects like the collision layer or doors would react to that and would prevent the player from moving). So what would be a good approach to solve that "communication" problem? I thought of using the Observer pattern here as well, but I am not sure about that.

This is a real problem per se. Notice that the Observer pattern introduces a kind of local view onto the world state. What I mean is that a single event happening in the world is seen by the observer at the moment it is generated, but the observer cannot foresee all the other events that happen at the same simulated time yet at a later runtime due to sequential processing. So the observer reacts on a world state that is incomplete!

 

Look at the structure of a game loop and notice that it is built with a well-defined order of sub-system processing. Introducing observers will often break this well-defined order (for example the animation sub-system moves the character, and then the collision sub-system moves it back).

 

This is one of the examples where the problem analysis needs to take the big picture into account. Observers are fine for desktop applications where the entire UI just reacts on events, but a game is (usually) a self-running simulation.

 

In short: I would especially not introduce observers wherever sub-system boundaries are crossed, and I would think twice about observers in general.

 

 

Just my 2 cents, of course ;)




#5168359 Advanced game structure

Posted by haegarr on Yesterday, 06:49 AM

... So I need a good way to structure my code, not to end in a chaos like in some of my other projects.

Software patterns by themselves do not structure your code. They are just typical approaches to recurring problems. You can implement patterns and still produce chaos.

 

The correct way is to plan what you will implement and how. Analyze what you want in the end, then make architectural decisions based on that, then look at the demands that arise, and then, perhaps, patterns may come into play. Otherwise you try to solve problems without knowing whether you have them, so to say.

 

... and can tell me which [pattern] to use ...

That would be the wrong way. As said: always analyze your problem first, then think about whether one of the known patterns is a solution, and then adapt it (patterns are not strict) and/or combine them to yield an implementation.
 
... why to use ...

Software patterns are typical solutions to typical problems. They help because one need not "invent" a solution but can match the problem against known solutions.

 
... and maybe an example when to use.
I would hope that patterns, wherever they are described, come along with some usage examples.
 
 
I know this post isn't exactly what you are looking for. However, knowing of patterns (which IMHO really is a Good Thing) doesn't unburden you from planning. First comes "what do I want", and then "how do I reach that goal".



#5167634 How to pack your assests into one binary file (custom file format etc)

Posted by haegarr on 18 July 2014 - 10:36 AM


... So how is this done, how to you package a bunch of file assests into one binary file like this ...

As Rob has mentioned, the solutions are numerous. The custom package file format I use has a header, followed by a table of contents, followed by so-called load units, followed by streamable data.

 

The table of contents has an entry for each single resource stored within the package. Each entry also stores the relative address and length of the load unit which contains the resource. A load unit may contain a single resource or a couple of resources. The latter is called a "bundle" and itself has a kind of table of contents with back references to the package's table of contents. Bundles are fine for resources that should be loaded together (e.g. all the meshes and textures of a model). A resource is at least the description of what it is, but may also contain the resource data. If not, or if only partly, then the (additional) resource data is placed as streamable data towards the end of the package file. For example, textures may be stored with the low levels of detail embedded and the high levels of detail as streaming data elsewhere.
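
To make that concrete, here is a hypothetical sketch of such a layout (the struct names and field widths are my assumptions, not a fixed specification):

#include <cstdint>

struct PackageHeader {
    char          magic[4];       // identifies the package file format
    std::uint32_t version;
    std::uint64_t tocOffset;      // file offset of the table of contents
    std::uint32_t tocEntryCount;  // number of resources in the package
};

struct TocEntry {
    std::uint64_t resourceId;      // identifies the stored resource
    std::uint64_t loadUnitOffset;  // relative address of the containing load unit
    std::uint64_t loadUnitLength;  // length of the containing load unit
};

// on disk: [PackageHeader][TocEntry x tocEntryCount][load units ...][streamable data]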




#5167125 obj file format

Posted by haegarr on 16 July 2014 - 05:14 AM

Using OBJ as a source for GPU related APIs has at least these well-known issues:

1.) it supports polygons with more than 3 vertices,

2.) it uses independent indices for positions, normals, and tex-coords,

3.) it is text based.

 

All of the above issues cause a substantial amount of work during import. Moreover, OBJ is AFAIK meaningfully usable for static meshes only. With this in mind, OBJ is not a well suited format for use in game engines. It has limitations even in its use as input to tool chains. So, even if the mesh is exported with triangles only, not all problems are solved. OBJ seems to be a relatively simple format, and during parsing it actually is, but it becomes a headache when one wants to use the resulting mesh with modern graphics APIs.
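
To illustrate issue 2.), here is a hedged sketch of the usual de-indexing step during import (the names and the elided vertex assembly are mine): every unique position/tex-coord/normal combination becomes one GPU vertex, addressed by a single index.

#include <cstdint>
#include <map>
#include <tuple>
#include <vector>

struct Vertex { float px, py, pz; float nx, ny, nz; float u, v; };

// an OBJ face corner references position, tex-coord and normal independently
using ObjCorner = std::tuple< int, int, int >;

// returns the single GPU index for a corner, creating a combined vertex on first use
std::uint32_t getOrCreateVertex( const ObjCorner& corner,
                                 std::map< ObjCorner, std::uint32_t >& seen,
                                 std::vector< Vertex >& vertices )
{
    auto it = seen.find( corner );
    if( it != seen.end() )
        return it->second;                  // this combination was already emitted
    std::uint32_t index = (std::uint32_t)vertices.size();
    vertices.push_back( Vertex{} );         // assemble from the OBJ arrays here
    seen.emplace( corner, index );
    return index;
}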




#5166374 Entity manager size

Posted by haegarr on 12 July 2014 - 04:17 AM


In short: do you see reasons to have both an entity manager and a render queue that are able to add new game objects (entities) / mesh instances (render queue)?

A render queue is usually a thing where renderables that are actually to be rendered right now are collected. It is filled with rendering jobs (to use the most abstract term here) resulting from iterating the scene and passing visibility culling. The queued jobs are sorted by some engine dependent criteria, and the queue is rebuilt for the next loop run (leaving frame-coherency mechanisms aside). As such, a render queue doesn't know what game objects are, and hence there is no meaning in comparing it with the entity manager. This is decoupling in action, so to say.

 

IMHO "new game objects" defines a game object that is added to the scene. For sure this has an impact on the game object management, but it should not have an impact on the render queue. If the object is in the scene, and it passes visibility culling, then it is (indirectly, but nevertheless) added on-the-fly to the render queue.

 

Do you have another definition of render queue? 




#5166356 (Super) Smart Pointer

Posted by haegarr on 12 July 2014 - 02:26 AM

IMHO it is a design flaw if the deletion of a scene object is used to "steer" the AI. If an object is dead or destroyed w.r.t. the game mechanics, it still exists in the scene. It may still have a role to play (ruin, corpse, loot, ...), some of them already mentioned in the posts above. The target should stay in the scene at least as long as any other game object refers to it, simply because it is "in use" while being referred to. Deletion is a low level mechanism; it should run only once the higher levels are done with the object.

 

Running the AI every now and then has to be done anyway. What if the marines are attacked after they have begun raiding the building? Do they react only after the building is destroyed? Hardly, I would say. So the regular check for "is the currently followed plan still valid" will detect the destruction early enough. Hence there is also no real need to notify the marines as soon as the building collapses. It would even look artificial if 20 marines stopped firing in the same millisecond.

 

AI need not be run on every frame for every unit and in full depth. Using a layered approach allows running the short tests more frequently and computing the higher layers only if a lower one has failed or finished. Also, ways exist to reduce the AI computations of a party by running the full AI on a leader only and letting it more or less directly control the other party members (this doesn't mean there must be an officer; the leadership can be as abstract as a concept of the party itself).
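
A hedged sketch of the layered idea (all types here are illustrative stubs, not a known engine API): the cheap validity test runs frequently, and the expensive planning layer runs only when the cheap layer reports failure or completion.

enum class PlanStatus { Running, Finished, Failed };

struct World {};
struct Plan  { PlanStatus checkStillValid( const World& ) const; };  // cheap test
struct Unit  { Plan plan; };

Plan makeNewPlan( const Unit&, const World& );                       // expensive layer

void updateUnitAi( Unit& unit, const World& world, unsigned frame ) {
    if( frame % 10 != 0 )                  // run the short test every 10th frame only
        return;
    PlanStatus status = unit.plan.checkStillValid( world );
    if( status == PlanStatus::Finished || status == PlanStatus::Failed )
        unit.plan = makeNewPlan( unit, world );
}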




#5165195 Animating With Separate Image Files

Posted by haegarr on 07 July 2014 - 12:51 AM


I changed the position variables to integer values, and the character still won't move.

What Lactose! wrote means that the cast to int makes the effective speed zero, and multiplying by zero gives zero, and adding zero does not change anything:

 

1.)  const float MOVEMENT_SPEED = 0.10f;

=> MOVEMENT_SPEED = 0.1

 

2.) gAidan.WalkingDstRect[gAidan.CurrentFrame].x -= (int)gAidan.X_Vel * (int)MOVEMENT_SPEED;

=> (int)MOVEMENT_SPEED => (int)0.1 => 0 (the cast truncates toward zero; it does not round)

=> no change, because gAidan.WalkingDstRect[gAidan.CurrentFrame].x -= 0 

 

What you need to do instead is:

With gAidan.X_Pos being a float, and gAidan.X_Vel being a float, and MOVEMENT_SPEED being a float:

 

1.) updating the position (notice that only floats are used, so the above problem is avoided)

gAidan.X_Pos -= gAidan.X_Vel * MOVEMENT_SPEED;

 

2.) later, for rendering

gAidan.WalkingDstRect[gAidan.CurrentFrame].x = (int)gAidan.X_Pos;

 

BTW: With small enough numbers the integer value of X_Pos may change only after a few frames, but that is totally correct.




#5164847 Old Keyboard State in Input Handler

Posted by haegarr on 05 July 2014 - 02:07 AM

Any call to SDL_PumpEvents causes an update of the internal key array that keyboardNew refers to, be it directly or indirectly via SDL_PollEvent or SDL_WaitEvent. Without knowing whether or not you call one of these elsewhere … if you do, then the array keyboardNew may be updated more frequently than you wish. That means that when you invoke memcpy, keyboardNew already holds a state you have never seen before and, especially, have not saved in keyboardOld. So your code is able to detect only those few changes between the most recent unintended SDL_PumpEvents and your intended one. As said, we don't have enough information to confirm that, but yellowsputnik's test seems to confirm at least that the problem occurs only in a wider context.

 

To avoid this kind of problem, you should have two own buffers, keyboardOld and keyboardNew, and fill both by memcpy: keyboardOld from keyboardNew, and keyboardNew from keyboardSDL (the SDL-internal array which was formerly named keyboardNew). Do it like in the following scheme:

Input::update() {
    // save the state of the previous update as the "old" state
    std::memcpy( keyboardOld, keyboardNew, length );
    // let SDL refresh its internal key array, i.e. the one keyboardSDL points
    // to (the pointer returned by SDL_GetKeyboardState)
    SDL_PumpEvents();
    // snapshot the refreshed state as the "new" state
    std::memcpy( keyboardNew, keyboardSDL, length );
}

Of course, to avoid the first memcpy, you can do pointer swapping instead; the copy from keyboardSDL remains necessary because SDL owns and overwrites that array.
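
The swap variant could look like this (a sketch assuming keyboardOld and keyboardNew are plain pointers to equally sized buffers):

#include <algorithm>  // std::swap

void Input::update() {
    std::swap( keyboardOld, keyboardNew );            // last frame's "new" becomes "old"
    SDL_PumpEvents();                                 // refresh SDL's internal array
    std::memcpy( keyboardNew, keyboardSDL, length );  // snapshot it again
}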

 

Buckeye's last post goes in a similar direction, but with the solution above you have more control over the state of the buffers.




#5164240 Designing a [Minecraft] RPG system

Posted by haegarr on 02 July 2014 - 12:47 AM

With the mages I meant that it is often a class (used very commonly as well) and it would be confusing to have them as a race. You probably think of a race of mages who are born with magic like in Harry Potter, but personally I would invent another word for it, just to make it clear that the people with magic are a race and not a class.

Seconding, especially since in the OP

There are three races the player can pick from: Elf, Mage and Dwarf

and

This system is nice just because it allows a player to want to be a "Tanky Dwarf" at the beginning, then switch to a "Fire Mage" afterwards.

means that the player would be able to switch races. Although flexibility is fine, that possibility would counteract the identification with the PC, IMHO.




#5163832 How does resolution work?

Posted by haegarr on 30 June 2014 - 08:12 AM

The keywords are:

* multi-resolution images,

* resolution independent co-ordinate system,

* letter/pillar boxes.

 

Sprites can be available at different resolutions, and the set of sprites that best matches the given platform is chosen at start-up. Also, don't think that a 1:1 pixel mapping between the sprite and the screen is set in stone.

 

Regarding sprite placement, collision detection and such: pixel co-ordinates are bad for these purposes. Instead, use a virtual, resolution independent co-ordinate system, and map it to pixels as late as possible, namely during rendering.

 

Use window relative co-ordinates for the placement of GUI elements. E.g. the screen height is fine for normalization, so that the screen relative co-ordinates range from 0 to the aspect ratio horizontally and from 0 to 1 vertically. Further, allow for anchoring, so that GUI elements can be related e.g. to the left / center / right window border horizontally, and to the top / center / bottom border vertically. I personally do this with 2 values per dimension: one that specifies the anchor position relative to the width or height, respectively, and another that specifies an offset in relative co-ordinates (this time relative to the height for both dimensions), so that the anchor can be placed anywhere in the window.
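
A hedged sketch of that anchoring scheme (the names are mine; co-ordinates are normalized by the window height as described, and pixels appear only at the end):

struct Anchored {
    float anchorX, anchorY;  // 0 = left/top, 0.5 = center, 1 = right/bottom
    float offsetX, offsetY;  // offsets in height-relative units
};

void toPixels( const Anchored& a, float winWidth, float winHeight,
               float& pixelX, float& pixelY )
{
    float aspect = winWidth / winHeight;
    float xRel = a.anchorX * aspect + a.offsetX;  // runs from 0 to the aspect ratio
    float yRel = a.anchorY + a.offsetY;           // runs from 0 to 1
    pixelX = xRel * winHeight;                    // map to pixels as late as possible
    pixelY = yRel * winHeight;
}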

 

The aspect ratio is the only real problem, namely if, for competitive reasons, the playground must not be more visible to one player than to another. In such a case you should work with pillar/letter boxes. Those don't need to be ugly black; they can be filled with some nice background.




#5163786 encounter weird problem when turn on the color blending.

Posted by haegarr on 30 June 2014 - 02:45 AM


Now it seems like not that easy to solve this issue. Some of you guys recommend me to resort all objects in a proper way, ...

Yep. Or else you can try depth peeling, as suggested above by LS.

 


... but even if only a single organ displayed in the scene, the problem still remains. ...

Maybe this is because of the already mentioned concavity of the meshes? The question is: if you simplify the scene down to a single organ, can you check whether the problem occurs only when you look through a concavity? If so, then we are on the right track in suspecting the drawing order, and a solution will be to use sub-meshes.

 
But if concavity is not the cause, then we need to investigate further.
 


… And the thing is, in my case, each object is mapping to other stuff, resorting means changing everything. It does take time to solve this.  ...

This problem isn't specific to your case. It is common in game engines and elsewhere, and hence there is a solution :)

 

It is possible to have more than a single order on objects. Notice that it is recommended to have several organizing structures, one for each task to do. It is absolutely fine to have a logical organization of the objects, a spatial organization (if collision or proximity is an issue), a render order, and perhaps more. Don't get stuck with the über scene graph approach, or you will be lost sooner or later!

 

For example, you iterate the scene description and detect all objects that need to be rendered. You insert each object into a list (which is emptied before the scene is iterated). After finishing, you sort the list by some criterion, in your case by the distance from the current camera. Object rendering is then done in the order given by the list. So rendering has no influence on the other aspects of object organization, and is nevertheless done in the required way.
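
A hedged sketch of that collect-and-sort step (the item type is an assumption; for blending, transparent objects are drawn farthest first):

#include <algorithm>
#include <vector>

struct RenderItem {
    const void* object;     // whatever your renderer needs per object
    float distToCamera;     // the sorting criterion
};

// the list is emptied before the scene is iterated, filled during iteration,
// then sorted back-to-front, and finally drawn in that order
void sortBackToFront( std::vector< RenderItem >& list ) {
    std::sort( list.begin(), list.end(),
               []( const RenderItem& a, const RenderItem& b ) {
                   return a.distToCamera > b.distToCamera;  // farthest first
               } );
}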

 


I think Ohforf sake is right: transparency is not as simple as enabling color blending; maybe this is the key point here!

Absolutely. 




#5163773 Asset file format

Posted by haegarr on 30 June 2014 - 01:18 AM

Usually there are 2, perhaps 3, situations to be distinguished:

(1) "Raw assets" as are loaded by an editor for the purposes of being viewed and perhaps integrated into the project,

(2) "runtime assets" that are loaded by the game engine,

(3) and perhaps "runtime assets" that are likewise loaded by the game engine but provided separately for hot-swapping, software updates, or modding.

 

Raw assets can be stored in the native file format of the DCC tool or in a common interchange file format (e.g. Collada for meshes), obviously to have a chance to re-load them into the DCC tool and make changes. These assets are usually given as individual files, although a single file may provide more than one asset.

 

Runtime assets are usually provided in a game engine specific file format. The purpose is to yield fast loading, which means a binary format, because it is more compact and also requires less pre-processing in the game engine. The tool-chain is responsible for converting raw assets into runtime assets.

 

With respect to the normal game play runtime, one doesn't want to open one file per asset. This is because opening a file costs time, and having many more or less small files increases the storage footprint. The solution is to use some kind of package file format. It further allows assets to be sequenced, which in turn allows for shorter load times.

 

Now, with respect to hot-swapping, updates, and/or modding, storing assets in a single big package is bad, because replacing an asset in a package is often painful. A solution is to use the same file format but allow for single, additional files whose content overrides the corresponding content of the regular packages. The runtime asset loader can handle this, for example, if the overriding asset files are stored in a dedicated directory apart from the main asset files.
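
A hedged sketch of such an override lookup (paths and function names are illustrative, not a known API):

#include <filesystem>
#include <string>

namespace fs = std::filesystem;

fs::path lookUpInPackages( const std::string& name );  // the regular package lookup

fs::path locateAsset( const std::string& name ) {
    fs::path overridePath = fs::path( "assets/overrides" ) / name;  // dedicated directory
    if( fs::exists( overridePath ) )
        return overridePath;             // a single-file override wins over the package
    return lookUpInPackages( name );
}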

 

With packages, the question of different file suffixes for runtime assets obviously vanishes. Further, it is easier for the runtime asset loader to handle a single file suffix; it has to investigate the file content anyway. So for me, using a single suffix for runtime assets is a good way.

 

Just my 2 cents.




#5163712 Entity component system, component collection?

Posted by haegarr on 29 June 2014 - 03:26 PM

As ever with ECS: There is no single way...

 

When you say "controlling component" you refer to an actual component, or a subsystem?? For what I get you are using a subsystem to refresh current sprite/animation, something like I said in the 3rd solution. If an entity has a group of animations, the groupOfAnimationSubsystem will change the reference to the current animation depending on player state (running, jumping, etc.); if an entity has a group of sprites, the groupOfSpritesSubsystem will change the reference to the current sprite.
But what if you want to render 2 sprites for the same entity? I can't see a clear way of doing that, I always end up with a bigger component, something like "currentGraphics" that can contain more than 1 sprite and/or animation (or other effects).
By the way, you say "the pool of possible sprites is part of the controller", so that controller will be a component? Something like SpriteListComponent?

It depends on how the ECS is implemented. Components may be real objects with behavior implemented, or they may be data containers stored in sub-systems, or they may be descriptions for data in sub-systems. However, that does not really change the concept of what I've written above.

 

For what it's worth, here is a more complete description: In my implementation, attaching a SpriteComponent to an entity causes a Sprite element to be reserved within the SpriteServices (a Services is my implementation of a sub-system). A Sprite element refers to a SpriteResource where the actual image data is available. A SpriteController is a Component that defines a behavior, namely a kind of dynamics on top of a Sprite element. It causes an installation in the SpriteServices. This thing is defined to target not a SpriteComponent but a Sprite element. Doing it this way means that instances of the same type of element inside a sub-system may come from different sources, perhaps but not necessarily all different components.

 

Yes, the controller is a component, because in an ECS you want to define that and how the sprite of an entity is dynamic. A different kind of dynamic control means a different type of controlling component. However, I usually do not use something like a SpriteListComponent, because this would be too generic a component. As mentioned above, the actual sprites are all available as SpriteResources. The sprites in the world are available as Sprite elements in the sub-system. What any sprite control requires is (a) a reference to the Sprite element to alter (i.e. the Sprite element that belongs to the original SpriteComponent of the entity), and (b) the SpriteResource to be set into the Sprite element in which situation. This is definitely more specific than a SpriteListComponent.

 
Coming now to the problem of 2 sprites for the same entity: I do not support this. If one wants to have 2 sprites coupled in some way, it has to be expressed explicitly as a relation between 2 entities. This requires an explicit ParentingComponent which, you guessed it, causes a Parenting to be installed in the SpatialServices sub-system. A Parenting uses 3 Placement instances: it targets the Placement which originates from the PlacementComponent of the current entity (the own global placement), the Placement which originates from the ParentingComponent (the own local placement), and the Placement linked by the ParentingComponent (the global placement of the parent). BTW, here you have another example of the concept where the same kind of sub-system element comes from different types of components.
 
Of course, you could define the SpriteComponent, and hence the Sprite element, with the ability to hold more than one sprite. However, this means 2 references to SpriteResource instances, 2 placements (because there is always a spatial relation, too), perhaps 2 animation controllers, … IMHO this is better solved with 2 entities as described above.
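
To summarize the sprite-related relationships in code (a hedged sketch: the type names follow the description above, while the members are my assumptions):

struct SpriteResource { /* the actual image data */ };

struct Sprite {                    // element living inside the SpriteServices
    const SpriteResource* current; // which resource is shown right now
};

struct SpriteComponent {           // attached to an entity; its attachment causes
    Sprite* element;               // a Sprite element to be reserved
};

struct SpriteController {          // behavior component: dynamics on top of a
    Sprite* target;                // Sprite element, not a SpriteComponent
    // plus the situation -> SpriteResource mapping that drives the control
};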



#5163609 sse-alignment troubles

Posted by haegarr on 29 June 2014 - 06:52 AM


for each triangle (let's call it abc - it has vertices a, b, c) I need to cross and normalize to get the normal,

Presumably (but I'm not an SSE expert, so someone may contradict me): The best performance for such a problem comes with a memory layout where each SSE register holds the same component of 4 vertices. I.e.

// count is not a compile-time constant, so allocate the (16-byte aligned)
// arrays dynamically, e.g. via _mm_malloc
size_t count = ( numVertices + 3 ) / 4;
__m128* verticesX = (__m128*)_mm_malloc( count * sizeof(__m128), 16 );
__m128* verticesY = (__m128*)_mm_malloc( count * sizeof(__m128), 16 );
__m128* verticesZ = (__m128*)_mm_malloc( count * sizeof(__m128), 16 );

Fill the arrays with the data of the vertices a, b, c of the first 4-tuple of triangles, then of the second 4-tuple of triangles, and so on. In memory you then have something like:

verticesX[0] : tri[0].vertex_a.x, tri[1].vertex_a.x, tri[2].vertex_a.x, tri[3].vertex_a.x 
verticesX[1] : tri[0].vertex_b.x, tri[1].vertex_b.x, tri[2].vertex_b.x, tri[3].vertex_b.x
verticesX[2] : tri[0].vertex_c.x, tri[1].vertex_c.x, tri[2].vertex_c.x, tri[3].vertex_c.x
verticesX[3] : tri[4].vertex_a.x, tri[5].vertex_a.x, tri[6].vertex_a.x, tri[7].vertex_a.x 
verticesX[4] : tri[4].vertex_b.x, tri[5].vertex_b.x, tri[6].vertex_b.x, tri[7].vertex_b.x
verticesX[5] : tri[4].vertex_c.x, tri[5].vertex_c.x, tri[6].vertex_c.x, tri[7].vertex_c.x 
...
verticesY: analogously, but with the .y component

verticesZ: analogously, but with the .z component

 
Then computations along the scheme
__m128 dx01 = verticesX[i+0] - verticesX[i+1];   // edge a-b, per component
__m128 dy01 = verticesY[i+0] - verticesY[i+1];
__m128 dz01 = verticesZ[i+0] - verticesZ[i+1];
__m128 dx02 = verticesX[i+0] - verticesX[i+2];   // edge a-c, per component
__m128 dy02 = verticesY[i+0] - verticesY[i+2];
__m128 dz02 = verticesZ[i+0] - verticesZ[i+2];

// cross product of the two edges
__m128 nx = dy01 * dz02 - dz01 * dy02;
__m128 ny = dz01 * dx02 - dx01 * dz02;
__m128 nz = dx01 * dy02 - dy01 * dx02;

// element-wise length; note that the overloaded operators on __m128 require
// GCC/Clang vector extensions - with plain intrinsics use _mm_sub_ps,
// _mm_mul_ps and _mm_div_ps instead
__m128 len = _mm_sqrt_ps( nx * nx + ny * ny + nz * nz );

nx /= len;
ny /= len;
nz /= len;

should result in the normals of 4 triangles per run.

 


then i need to multiply it by model_pos matrix

Doing the same trickery with the model matrix requires each of its components to be replicated 4 times, so that each register holds 4 copies of the same value. It is not clear to me what "model_pos" means, but if it is the transform that relates the model to the world, then all you need is the 3x3 sub-matrix that stores the rotational part, since the vectors you are about to transform are direction vectors.
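
A hedged sketch of that replication (assuming M is a row-major float[3][3] holding the rotational part, and nx/ny/nz are the packed normals from above):

#include <xmmintrin.h>

// broadcast each matrix element into all 4 lanes
__m128 m00 = _mm_set1_ps( M[0][0] ), m01 = _mm_set1_ps( M[0][1] ), m02 = _mm_set1_ps( M[0][2] );
__m128 m10 = _mm_set1_ps( M[1][0] ), m11 = _mm_set1_ps( M[1][1] ), m12 = _mm_set1_ps( M[1][2] );
__m128 m20 = _mm_set1_ps( M[2][0] ), m21 = _mm_set1_ps( M[2][1] ), m22 = _mm_set1_ps( M[2][2] );

// world normal = R * n, computed for 4 normals at once
__m128 wx = _mm_add_ps( _mm_add_ps( _mm_mul_ps( m00, nx ), _mm_mul_ps( m01, ny ) ), _mm_mul_ps( m02, nz ) );
__m128 wy = _mm_add_ps( _mm_add_ps( _mm_mul_ps( m10, nx ), _mm_mul_ps( m11, ny ) ), _mm_mul_ps( m12, nz ) );
__m128 wz = _mm_add_ps( _mm_add_ps( _mm_mul_ps( m20, nx ), _mm_mul_ps( m21, ny ) ), _mm_mul_ps( m22, nz ) );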





