

#5174447 How to react on KeyPress + Release?

Posted by haegarr on 18 August 2014 - 08:25 AM

After the first run through display(), both keyStates and keyPreviousStates point to the same memory area. You probably want to use memcpy to copy the memory area pointed to by keyStates into the memory area pointed to by keyPreviousStates, but what you actually do is copy the pointer keyStates into keyPreviousStates.


EDIT: To avoid such problems and to denote the situation clearly, you should use const to make the variables read-only, in this case:

bool* const keyStates = new bool[256]();
bool* const keyPreviousStates = new bool[256]();

Notice that this does not make the array elements read-only but just the pointers themselves.

#5173902 Help with Compilation Error: 0:2

Posted by haegarr on 15 August 2014 - 09:15 AM

That should not be complicated to find out using the debugger.



@OP: Here is a how-to (actually done without debugger, but using the debugger would have made investigations even easier; you should try it):


1.) With the hints given in above answers, you should be able to find the file within the C++ sources that is used to compile the shaders.


2.) If you investigate that CPP file, you'll see that shader scripts are assembled into string variables. You'll see further that most of the shader scripts originate from the "content/shaders/" subfolder. Therein are two include files, functions.si and header.si, a vertex shader pass.vs, and a fragment shader model.fs.


3.) Looking into the shader files, you'll see that there is a line "#version 130" which denotes the version of GLSL that the shader compiler should use.


4.) Looking at the error text, the keywords mentioned there are "precision" and "default qualifier", both of which point at something that is missing.


5.) With this knowledge you visit www.opengl.org (or alternatively www.khronos.org) and navigate to the OpenGL specifications section. There you locate the GLSL specification that matches "#version 130", which you'll find as GLSLangSpec.Full.1.30.10.pdf. Download it, open it...


6.) ... and search for the keyword "precision". You'll find it explained in section 4.5 and especially 4.5.3 "Default Precision Qualifiers". At the end of that section you can read:


The vertex language has the following predeclared globally scoped default precision statements:

           precision highp float;
           precision highp int;

The fragment language has the following predeclared globally scoped default precision statement:

           precision mediump int;

The fragment language has no default precision qualifier for floating point types. Hence for float, floating point vector and matrix variable declarations, either the declaration must include a precision qualifier or the default float precision must have been previously declared. 


This leads to the conclusion that your fragment shader uses a float type but does not declare any precision for it.


7.) So ... go back to model.fs and see "yep, that's actually true".


8.) Correct the irregularity and try again.
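For illustration, assuming the error really is the missing float precision in model.fs, the fix is a single default precision statement near the top of the fragment shader (highp is just one valid choice; on desktop GLSL the qualifier is accepted but has no effect):

```glsl
#version 130

// The fragment language has no default precision for float,
// so declare one before any float/vec/mat variable is used.
precision highp float;
```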



Perhaps this isn't the actual solution, but it should anyway demonstrate how to attack such a problem.

#5172278 Good render loop organization resources

Posted by haegarr on 08 August 2014 - 08:28 AM

The question you're asking has little to do with OpenGL itself. It is further a question with many possible answers. The following is one of them...


First off, there is not really a "render loop". Instead, (graphical) rendering is the very last step in the so-called game loop. Before rendering there are updates for input processing, AI, animation, physics, collision detection and correction, and perhaps others. This coarse layout of the game loop is usually understood as a sequence of so-called sub-systems. In this sense rendering is a sub-system.


When looking at the various sub-systems, the question is how they all work. Is it possible that all of them have the same requirements? Unlikely! Instead, different tasks during processing of the game loop call for different data structures that are suitable for those particular tasks. This means that a single data structure like the classic scene graph is probably not suitable.


I write this because the scene graph approach is often taught in books, and that seems to me to be the case here, too. A scene graph is a single structure and looks promising at first glance, but it tends to become a nightmare the more tasks you try to solve with it. You asked for "high performance", and such a scene graph does not belong in that set of tools. This does not mean that scene graphs are bad per se; if a scene graph is used for a single purpose, then it is as good as any other structure.


Now, with respect to rendering, the above thought has several implications. As can be seen from the sequence of sub-system processing, all the game objects must already be placed properly in the world, or else collision detection and correction could not have been done meaningfully. That means that "chaining of transformation matrices" is not a rendering concern at all. Instead, the process of rendering can be seen as follows:


1.) Iterate all objects in the scene and determine which are visible.


2.) For all objects that pass the visibility test above, put a rendering job into one of perhaps several lists. Several lists may be used to distinguish a priori between opaque and non-opaque objects, for example. Such a rendering job should hold enough information to later let the low-level rendering do what it has to do.


2a.) Skinned meshes may be computed only now, i.e. after it has been determined that they are visible.


3.) The lists will then be sorted by some criteria, e.g. considering the costs of resource switching (texture, mesh, shader, whatever) and, in the case of non-opaque objects, especially their order w.r.t. the camera view.


4.) The low-level rendering then iterates the sorted lists in the given order, uses the data from each rendering job to set up the rendering state (blending mode, binding textures, VBOs, shaders, ..., as much as needed but as little as possible) and invokes the appropriate OpenGL drawing routine.


You can see from the above again that OpenGL itself is not in the foreground, even though we are directly discussing rendering now.


The question of rendering passes is really the question of which kind of rendering you want to implement: forward shading, deferred shading, ..., which kind of shadowing algorithm you want to use, and whether you want to support non-opaque objects. Besides this, each rendering pass is more or less the same as described above, but obviously with a different set-up and rendering state.


Organization of game objects can be done in various ways. However, from the above it should be clear that different aspects of game objects should be handled differently. A generally good approach is to prefer composition of game objects (instead of god classes or a wide spread inheritance tree).



Well, all this is perhaps not what you wanted to read, and I know that it is mostly vague. However, you must understand that a full-fledged solution has many, many aspects, and discussing them in a single post (or even thread) is not possible. This is, by the way, a reason why books tend to suggest the usage of scene graphs. It can also be understood as a hint for beginners to stick with the scene graph approach for now. In the end it's up to you to decide which way you want to go. However, decoupling things makes refactoring easier. Decoupling is at least something you should consider.


Looking out for your answer ... ;)

#5171941 fullscreen triangle

Posted by haegarr on 06 August 2014 - 02:22 PM

with (p.x,p.y) being the x-y co-ordinates of gl_Position and (t.x,t.y) those of out_texture

from: gl_Position = vec4(out_texture * vec2(2.0f, -2.0f) + vec2(-1.0f, 1.0f), 0.0f, 1.0f)


you have currently: ( p.x , p.y ) = ( t.x , t.y ) * ( +2 , -2 ) + ( -1 , +1 )

but you want y being negated: ( p.x , -p.y ) = ( p.x , p.y ) * ( +1, -1 )

hence: ( ( t.x , t.y ) * ( +2 , -2 ) + ( -1 , +1 ) ) * ( +1, -1 ) = ( t.x , t.y ) * ( +2 , -2 ) * ( +1, -1 ) + ( -1 , +1 ) * ( +1, -1 ) = ( t.x , t.y ) * ( +2 , +2 ) + ( -1 , -1 )


gives you: gl_Position = vec4(out_texture * vec2(2.0f, 2.0f) + vec2(-1.0f, -1.0f), 0.0f, 1.0f)

#5171604 Rendering OSD text with good quality

Posted by haegarr on 05 August 2014 - 06:22 AM

As long as "with color" just means unicolored, try "distance field font" in a search engine.

#5171567 How do I handle particles (e.g. bullets)?

Posted by haegarr on 05 August 2014 - 01:38 AM

@Servant: Really concrete answer :)


While skimming through the code, I saw a small conceptual flaw here:

         else if(bullet.state == Bullet::Exploding && framesToUpdate > 0)
         {
              bullet.frame += framesToUpdate;
              if(bullet.frame >= Bullet::ExplosionFrames.size())
                   //Remove the bullet when its explosion animation has ended.
                   bullet.state = Bullet::Dead;
         }

IMHO it should look like this:

         else if(bullet.state == Bullet::Exploding)
         {
              bullet.frame += framesToUpdate;
              if(bullet.frame >= Bullet::ExplosionFrames.size())
                   //Remove the bullet when its explosion animation has ended.
                   bullet.state = Bullet::Dead;
         }

or else you suffer from a problem: if the new frame comes too fast compared to DelayBetweenFrames, then framesToUpdate is 0, the branch is skipped, and the branch performing the sprite movement is entered instead. The inner part of "Exploding" works fine even if framesToUpdate is 0 (I think we can neglect checking for negative values here, or insert an assertion if wanted), so no inner condition is needed.


#5171372 Where to store container of all objects/actors/collision models/etc

Posted by haegarr on 04 August 2014 - 03:09 AM

Do I just never actually talk to the Tank, Jeep, Chopper, etc classes directly? So I have a container of Actors, and container of MoveLogics, a container of CollisionObjects, etc. and I just iterate through those regardless of who actually owns them? In that case, again how do I synchronize data between the Actor, MoveLogic, CollisionObjects, and any other members that hold similar data?

There is no single right way. So the following is just one possibility...


Remember the mention of sub-systems above. Those are the places where some logic is implemented. This leads away from an object-centric view to a task-centric view, so to say.


For example: In an object-centric view of things, when a collision between objects A and B occurs, asking object A for colliders returns B, and asking object B for colliders returns A. That looks like 2 collisions, but is actually only one. In a task-centric view of things, a sub-system is asked for collisions, and it returns { (A,B) }.


This could be done with a free function. However, wrapping the task into a class allows associating management structures that are especially suitable for the task. In the case of collision detection it is known that only dynamic objects can cause collisions. So, if the sub-system keeps two separate lists, one for dynamic objects and one for static objects, and objects that do not participate in collision (e.g. birds of a flock) are not represented at all, then collision detection can be processed in a more efficient manner.


The fact that a game object is to be considered by a particular sub-system is encoded by components. Components define how and why a game object participates.


This is the fundamental idea. It needs to be fleshed out for implementation, of course.


Components like CollisionVolume or Placement are data components. They have a value but no logic. Several sub-systems can refer to the same data component; e.g. the Placement component is used by all sub-systems that need the position and/or orientation of the game object. Other components may define logic, for example as extensions to sub-systems, themselves altering data component values. This way, for example, Placement is needed exactly once for every game object that needs to be placed in the world. Synchronization is done by processing the sub-systems in order, i.e. by choosing a suitable order of update() calls inside the game loop.


It is possible to duplicate components in several sub-systems, too (although I do not recommend this in general). There is no problem in principle with copying component values as soon as the logically previous sub-system runs are completed (which is automatically guaranteed due to the order in the game loop).

#5171055 Camer Shake Effect when big scary monster comes charging

Posted by haegarr on 02 August 2014 - 02:32 AM

The entire routine is confusing me. When I read "shake effect" I think of an offset added temporarily to the current orientation. Instead, the routine uses a multiplication, and it does so in a permanent manner. This seems to be the wrong approach.


Some issues in detail:


1.) According to this page, if you're using the standard C++ rand() routine, then rndJerkiness = ((rand() % 2) - 4) gives you values from the set { -4, -3 }. If you want the set { -2, -1, 0, +1, +2 }, then you need to use ((rand() % 5) - 2) instead. Notice that rndJerkiness is still whole-numbered. That may be okay because you want "jerkiness", but it does not suit the goal very well (see the solution suggestion below).


2.) You're initializing shakeAmount = cameraYaw, and cameraYaw is (concluded from the name) an absolute angle. Then you compute the shakeAmount by adding cameraYaw again, and you add rndJerkiness. So the shakeAmount depends on the original view direction.


3.) The 2 conditions to restrict shakeAmount (at least I assume them to be intended to do so), will not work correctly due to issue 1.


4.) You multiply cameraYaw with shakeAmount. If cameraYaw is zero, i.e. the camera looks forward, then the multiply has no effect. If the cameraYaw is e.g. 90 degree then shakeAmount has a much greater impact compared to when the cameraYaw is e.g. only 45 degree. So, instead of just shaking the view, you rotate it dependent on the current view direction.



IMHO, a more suitable implementation would be:


* Store the "unshaked" cameraYaw in a camera local variable, e.g. cameraYawLook.

* Compute shakeAmount as a variation of an angle. Because you're probably dealing with radians, use a range like for example [ -15·π/180 , +15·π/180 ], i.e. ±15 degrees converted to radians.

* Add shakeAmount to the current cameraYawLook and set this as current cameraYaw (from where to compute the view transformation).

#5169741 Handling information delivery in editor mode

Posted by haegarr on 28 July 2014 - 07:24 AM

Certainly, I also attached a screenshot of what the whole things looks like. In the "event"-view window in the upper right corner of the screen, you can see what a script looks like. In the upper right of that window is a list of variables. The selected variable "Text" is marked public. As you can see in the lower left corner, the seleced entity has a component "Event" which references the shown Event, therefore all public variables, in this case only "Text" should be shown - with a value specific to that component. This does work in that case, but only because "Text" was already declared at load time. In case another public variable is added, it should also be shown in the view. Which wouldn't be all that complicated, except that this is the entity/component-view, which per se has nothing to do with the script/event. So the question really is not how to display the variables, but how to draw the connection between script - variables - component. Any good ideas for that?

Let's see ... ;)


The editor is instructed to make a script variable public. This action is not reflected by the said view, because the view observes the state of the component, and the fact that another variable was made public is not yet seen by the component. So we need a mechanism that tells the component to add another property to itself, matching the new public variable within the script. Right?


(1) Observation.


All script components register themselves with the notification center as interested in changes to their script (i.e. the script name can be used as "subject"). The editor tells the notification center of the event "interface changed" with the script's name as subject. All affected components react accordingly by adding a new property and sending an event "state changed" with themselves as subject. The view, itself registered with the notification center, of course, reacts accordingly.


While this is all good w.r.t. OOP, it introduces editor stuff into the components, which by themselves should be runtime-related objects. So this solution mixes responsibilities. I would not do so.


(2) Direct editing.


The editor knows of the possible side effects of making a variable public, and acts accordingly. It iterates the pool of components / the pool of script components / the pool of script components belonging to that particular script (however you manage the components), instructs all matching components one by one to add a new property, and causes the notification center to send a "state changed" event for each component.


In this solution there is no dependency of script components on editor stuff. They are just "under editing" together with their script, and the responsible unit is the editor. The solution is pure and direct.


One may argue that the editor now does both the editing of the script as well as the editing of the component. If you share that opinion, then (3) may be your favorite.


(3) Observation by mediators.


Instead of registering the components themselves as observers as in (1), mediator objects dedicated to maintaining script components could be used. Adding the property is then executed by the mediator on the script component whenever it detects that an incoming "interface changed" event (sent by the editor as in (1)) means a new public variable.


As a variant, only a single mediator object could be used (which has to iterate all affected script components, similar to what the editor itself does in (2)). This requires just a single additional object, but has the additional advantage that the mediator can be registered with the notification center at start-up and need never be touched again.


Here, too, the script components do not depend on editor stuff. The mediator is an object that encapsulates the knowledge of the side effects of editing the script. As such it is a kind of extension to the editor.


This solution is more flexible than (2), because in case editing scripts may have side effects on other objects as well (besides script components), you just need another kind of mediator (or perhaps a pure observer, if the other objects belong to editor stuff). However, it is also a bit more complex and requires a little more extra work, because parts of the editing are "outsourced", so to say.



I think that personally I prefer solution (3) with its variant of a single mediator. Whatever you do, your debug build should check inside the script components whether their set of variable properties matches the set of public variables.


Ok, that makes sense - a nullptr-check in case of a universal DLL for the game shouldn't be too hard though, especially since there should rarely ever be a change worth of notification on a variable in game mode - I'm still glad to see that this is a plausible technique for handling editor stuff, I also did that for storing load-information for textures, meshes etc... by having a seperate "LoadInfo"-struct that would only be set in editor mode. I always considered this a bit cheap and ugly, but seems its OK after all.

A notification center is there from the beginning or it never is. It doesn't disappear unexpectedly. So plain pointers are sufficient. And using null as an "absent" indicator is fine anyway.


Its basically a variable object that the script instance holds on to, and whose value can be manipulated from outside.

So the script holds the bunch of variables, which means it is somewhat central (as opposed to being distributed over a couple of script nodes). Then let the component call the script's presetFrom(map) method with the argument being the map of presets, so that the script can iterate its local map and copy values from the preset map into it.



BTW: From the screenshot it seems to me you're doing a good job :) Keep it up.

#5169709 Handling information delivery in editor mode

Posted by haegarr on 28 July 2014 - 04:57 AM

The problem with this is, that its not just about displaying the actual variables & values of the script itself, but of the script as it is being used on a component, and furthermore being able to serialize the variables of said component. Its actually that link between component & script that is causing most of my troubles, I already have view for the variables of a script in the script editor.
So the thing in my system is, all scripts are stored in a global cache. The component gets a name key to a script, and via lazy initialization requests the script instance when the components update code is run. This does however only happen in "game" mode, or in the actual game itself - obviously I don't want anything updating and running while working in the editor. So what that means is that during "editor" mode, the component has no idea about what actual script belongs to it. But it needs to somehow get informed about the variable changes of a certain script. Do you know any good way to handle this?

This confuses me. Could you please describe what should be shown by the view in which situation?


Well I am rolling the GUI myself, with rendering, event dispatching etc... all in my hand. I obviously implemented it in a retained fashion, but is there anything I could do here to make such things easier, as you hinted at?

The value shown by a widget is usually a copy in the private use area of the widget; it is copied back on some "okay" but dropped with the widget on some "cancel". A GUI like the one we're discussing is intended for tweaking, perhaps even for debugging. Instead of copying a value back and forth, it may get a pointer to the variable and read / write it directly. Whether or not this is easier depends a bit on how the environment works.


I haven't really dealt with more complex build system, so its not quite clear to me what you mean with "out-sourcing" here? Also, how would I go about only bulding the "complex stuff" in case of the editor? I have a specific editor exe, but the "runtime-libary" is so far the same for game & editor. So are you only talking about the stuff that gets built with the editor-exe, or are you suggestion having some defines for a specific editor-dll?

I mean that the notification stuff is implemented as much as possible outside of the objects that handle scripting in the runtime part of the engine. E.g. registering of observers / listeners and dispatching of events can be concentrated in a NotificationCenter object, so that script nodes / scripts / script processors / script components (whatever it will be in the end) need just a single pointer to the NotificationCenter and invoke it if it is not null.


When the editor starts, it instantiates a NotificationCenter and makes it known to the script management. When the player starts, it does not instantiate the center, and the pointer within the script management is left null. The script management, whenever it instantiates a new script, forwards its current pointer, be it valid or null.


Whether this is done by conditional code, or is done in an editor specific DLL, or is done by static linking only into the editor EXE is a question of taste.


EDIT: Also, one thing that I forgot to mention, for said reasons I also need a way to store variable values in the component outside of the actual script instance. You know, so I can load and save them independately, etc... how would you do that?

It depends on how public variables of a script are managed.


Is there an external Variable object where the belonging script nodes refer to? In that case a centralized management would do.


Or has each script node its own internal storage for that? Then iterating and calling a ScriptNode::presetFrom(std::map) may be used.


FWIW: My own scripting system is, like most parts of my engine, data driven. This means bytecode in this case, which is executed by a script processor. It uses a "blob" at runtime, essentially a bank of registers for timers, variables, and so on. Instantiation is done by cloning a prototype of the blob (not by copying the script bytecode itself). Post-initialization means overriding a couple of registers with registers stored elsewhere (the component in your case). This is probably totally different from your way.

#5169268 How to allow player to walk in a Mesh?

Posted by haegarr on 26 July 2014 - 03:49 AM

so the walking in a terrain feature should be delegated to the physics engine?

There are several sub-systems involved.

a.) AI (for the NPCs) for path finding: What way to go to reach the target location? As frob has mentioned, this can be solved by using a dedicated mesh.

b.) Animation blend controlling: Set by AI or player input, which kind of motion is used (walking, running, ...).

c.) Animation playback: How the character is posed at a given time.

d.) Collision correction: reacting on obstacles and ground.


Details depend on the kind of game and character implementation, of course. Assuming a 3D world with skeleton based characters...


Things are relatively easy if you are willing to accept some artifacts like foot sliding and averaged slopes and steps in the terrain. It is not easy to get rid of those issues. A locomotion sub-system for foot placement is one possible undertaking. Foot sliding alone can also be reduced to a minimum if the animation playback is controlled not by time but by the distance covered; e.g. the Wolfire engine follows such an approach. So, for completeness we need to add:


e.) Locomotion control: foot placement and pose controlling.


However, most games especially in the hobby area do not deal with such complex things like locomotion.

#5168873 Handling information delivery in editor mode

Posted by haegarr on 24 July 2014 - 08:48 AM

It is not necessary to give each possible change an identity. Due to the fact that the exposed variables of a script can occur in any combination of count, names, actual types, order, and whatnot, a GUI view onto them has to be implemented in a dynamic way. The set of variables is like the data model, and rendering the view means investigating the set with respect to all the aspects mentioned above, so the view perhaps needs to be adapted not only with respect to values but also to its structure.


With this mechanism just a single notification is needed, which is sent on any change. It can even be sent only at defined moments, e.g. whenever the script has completed, is paused, or has run for some time, so that the notification is sent not by the script but by the script processor.


You need not be so restrictive, though. You can decide to send several kinds of notifications (e.g. value(s) changed and structure changed), but still for the entire set of variables of a particular script. Or ...


Obviously, retained mode GUIs are not well suited for this kind of solution, simply because the proposed GUI mode reflects a data model state directly instead of a "canned" version. This is a flaw you have to live with in case you've built your editor GUI with standard OS elements (you probably have). In such a case I suggest creating a mediator that is notified and that manages the view's updates accordingly.


Additionally, you can implement a notification center where all notifications are sent to, and where notifications are dispatched to observers. This means just a single pointer and a conditional method call for the script, script node, or script processor (depending on how you execute scripts), and the "complex" stuff is outsourced and built only in the case of the editor anyway.

#5168591 Interactions between game objects in a RPG

Posted by haegarr on 23 July 2014 - 02:11 AM

It's me again ;)


I read a very interesting post about that. There was a event system explained: For example if I the player wants to move he fires an event BeforeMove(Vector2 desiredPos) and the tile map and obstacles react to that event in their own class or so and if they collide with the player they set isPlayerAllowedToMove = false. So the player can´t move that frame. Or for example if I want to implement traps, I could create a event inside the trap class which fires an event, lets say DamagePlayer(int damage) and the player has a way to react to that(by loosing life). Is this a good approach, or do I couple things even more with that(I am not quit sure). 

In the described implementation the player's game object does not know about terrain (i.e. "walkable" or "blocked") or traps. That is decoupling. The question now is: how is their response coded?


When "isPlayerAllowedToMove = false" is an action that accesses a field Player::isPlayerAllowedToMove, then the terrain and traps need to know that "Player" is a "moveable", in the case of traps even that it is a "vulnerable". And what happens if a monster gets trapped?


Well, the original event BeforeMove is obviously one that deals with movement. It seems okay to me to let it either have a boolean field "isBlocked" that is set by the terrain and interpreted by the original sender, or else have a pointer back to a Moveable that provides a Moveable::cancelMovement() routine called by the terrain. Because monsters can be handled this way as well, it is more generic than "isPlayerAllowedToMove = false". The latter of the two approaches, however, may lead to your game objects inheriting many interfaces. The former one is even more decoupling.


Now coming to traps. Here causing damage is something that is not meaningful within the movement event itself, so the idea of generating a new event is fine. However, sending it as DamagePlayer is IMHO again too specific. Instead, using the sender of the original message and generating a TakeDamage(origSender, damage) would be better.


BUT: You need to take into account what I've written in your previous thread: order! You should not use BeforeMove(sender, where) within traps to generate a Damage(origSender, damage) event, because the trap cannot foresee whether the terrain will eventually block the movement. Instead you should use Moved(sender, where) to trigger the trap, or else your world state will become wrong. Even if you collect and resolve events first to avoid the above ordering problem (which means more trouble than you may think) with respect to the player's game object, the trap itself would be left triggered although the player hasn't moved onto it. So you see, ordering is mandatory.

#5168581 Rotating camera with mouse

Posted by haegarr on 23 July 2014 - 01:07 AM

The problem is caused by mixing absolute and relative mouse co-ordinates. In the mainLoop you read the relative co-ordinates xrel and yrel, i.e. the "mouse has moved by" co-ordinates. In the Camera::Rotate routine you then subtract those from the half width and half height, respectively, which hints that you wanted to use absolute co-ordinates instead. Because your mouse motion is always smaller than half the width or height, you always get positive values for the angle calculations, and hence your camera always moves in the same direction.


So, try to use e.motion.x and e.motion.y in the mainLoop. This should give you absolute co-ordinates. The effect is, of course, that the position, not the motion, of the mouse pointer defines your camera rotation angles. If that is not your desire, then follow Waterlimon's advice: "remove the width/2 and height/2 parts and it might work like you intended it to".

#5168381 Advanced game structure

Posted by haegarr on 22 July 2014 - 09:07 AM

... I thought of a Command pattern for player input and contolling enemies with an AI controller. That would let me easily attach the player actions(attack, ...) to Keys or MouseButtons. ...

The Command pattern, if you mean exactly the one of the Gang of Four, is overkill in this case. It is fine to abstract input (in the case of the player character) and to "objectize" any character action, and the Command pattern goes in that direction. However, other features of the pattern are not so good here. You usually don't need a history of past commands (no undo, and usually no playback). You will not queue up commands waiting for the previous ones to complete (because players will complain that your game is not responsive). You will not perform the actual action in some virtual Command::perform().


On the other hand, if you understand Command as a lightweight instruction for interpretation by e.g. the animation sub-system or whatever, then it's okay.


... But one problem I actually have,is that I want an easy way to add interaction between the player and world objects like treasures, enemies, traps, NPC´s and so on. I thought of an event pipeline(for example the player fires an event before he moves and objects like the collision layer or doors would react to that and would prevent the player from moving). So what would be a good approach to solve that "communication" problem. I thought of using the Observer pattern here as well, but I am not sure about that.

This is a real problem per se. You have to notice that the observer pattern introduces some kind of local view onto the world state. What I mean is that a single event happening in the world is seen by the observer at the moment it is generated, but the observer cannot foresee all the other events that happen at the same simulated time but at a later runtime due to sequential processing. So the observer reacts on a world state that is incomplete!


Look at the structure of a game loop and notice that it is built with a well-defined order of sub-system processing. Introducing observers will often break this well-defined order (for example, the animation sub-system moves the character and then the collision sub-system moves it back).


This is one of the examples where the problem analysis needs to take the big picture into account. Observers are fine for desktop applications where the entire UI just reacts on events, but a game is (usually) a self-running simulation.


In short: I would not introduce observers wherever sub-system boundaries are crossed, and I would think twice about observers in general.



Just my 2 Cent, of course ;)