haegarr how do you handle interactions between different game parts/objects?
This cannot be answered in a sentence except with "it depends" ;)
BTW: Don't think that I reject the use of the Observer pattern on principle. All things have their use, even Singletons, but only in the right place.
I'm using a game loop in which sub-systems are updated in a defined order. This is, as said, not uncommon; I'd point to the chapter of Jason Gregory's book that is available as an excerpt at Gamasutra. With such an approach, each sub-system knows that, at the moment of its own update, the state of the world is complete with respect to all sub-systems earlier in the loop. For example, all animations have already run before collision detection is invoked. If collision detection were invoked while only part of the animations had run, it might detect false collisions simply because some Placements would still be at their obsolete positions. Later, when rendering takes place, the rendering sub-system takes all Placements as they are. It is not interested in how many "change events" have occurred; it is interested only in the current state. In this example, state is written and read, but no notification of state changes is made.
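To illustrate, here is a minimal sketch of such a loop (the service names and the `run_frame` helper are my assumptions, not the actual implementation): each sub-system can rely on everything earlier in the list being complete by the time it runs.

```python
from typing import Callable, List, Tuple

def run_frame(services: List[Tuple[str, Callable[[], None]]]) -> List[str]:
    """Update each service once, in a fixed order; return the order for inspection."""
    order = []
    for name, update in services:
        update()            # the service reads/writes world state directly...
        order.append(name)  # ...no change-event notifications are sent
    return order

frame = run_frame([
    ("Input",     lambda: None),  # interpret queued input
    ("Animation", lambda: None),  # move Placements to their final positions
    ("Collision", lambda: None),  # safe: sees only up-to-date Placements
    ("Rendering", lambda: None),  # reads current state, not "how it changed"
])
print(frame)  # ['Input', 'Animation', 'Collision', 'Rendering']
```

The point is that ordering alone guarantees consistency: collision detection never sees half-animated Placements, so no observer machinery is needed.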
Another example is input processing. I do not propagate input events into the various sub-systems. The Input sub-system knows its devices, gathers all available input state changes, timestamps and unifies them, and puts them into a queue (this happens outside the game loop, because the Input sub-system runs in a thread of its own). When a sub-system in the game loop is interested in input, it has to inspect said queue when it is called for update, detect input situations matching its own input configuration, and react accordingly. Here the input state as well as state changes are retained for some time (e.g. to allow detection of input combos), but no notifications are made.
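A hedged sketch of that idea, with all names being my assumptions: the Input thread pushes timestamped, unified events into a queue, and interested services poll it during their update, looking back within a retention window to detect combos.

```python
import threading
import time
from collections import deque

class InputQueue:
    def __init__(self, retention_seconds=1.0):
        self._lock = threading.Lock()
        self._events = deque()              # (timestamp, unified_event)
        self._retention = retention_seconds

    def push(self, event, now=None):
        """Called from the Input sub-system's own thread."""
        now = time.monotonic() if now is None else now
        with self._lock:
            self._events.append((now, event))

    def snapshot(self, now=None):
        """Called by interested services during their update; events older
        than the retention window are dropped, so recent history remains
        available for combo detection."""
        now = time.monotonic() if now is None else now
        with self._lock:
            while self._events and now - self._events[0][0] > self._retention:
                self._events.popleft()
            return list(self._events)

queue = InputQueue(retention_seconds=0.5)
queue.push("KEY_A_DOWN", now=0.0)
queue.push("KEY_B_DOWN", now=0.1)
print(queue.snapshot(now=0.2))   # both events still within the window
print(queue.snapshot(now=0.55))  # KEY_A_DOWN has expired, KEY_B_DOWN remains
```

Note that the queue never calls anyone: consumers come to it during their scheduled update, which keeps the "no notifications" property intact.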
Another example is the implementation of the sense system. This is the part of the AI sub-system where visual, aural, touch, and even olfactory stimuli are produced and may be detected by AI (usually of NPCs). Some visual and aural stimuli are generated on the fly when the AnimationService is updated. Other stimuli, visual but also olfactory ones, last longer. Every existing stimulus is linked into a spatial structure which the AI components can query for intersection with their respective senses. Here, too, the occurrence of a new stimulus triggers no notification.
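A rough sketch of that scheme (all names are assumptions, and a flat list stands in for a real spatial partitioning structure): stimuli live for some time, and AI components query for live stimuli inside their sense volumes instead of being notified.

```python
import math

class StimulusField:
    def __init__(self):
        # (kind, position, expires_at); a real engine would use a
        # spatial partitioning structure instead of a flat list
        self._stimuli = []

    def emit(self, kind, position, now, lifetime):
        self._stimuli.append((kind, position, now + lifetime))

    def query(self, kind, center, radius, now):
        """Return live stimuli of the given kind within a sense radius."""
        self._stimuli = [s for s in self._stimuli if s[2] > now]  # expire old
        return [s for s in self._stimuli
                if s[0] == kind and math.dist(s[1], center) <= radius]

field = StimulusField()
field.emit("aural", (0.0, 0.0), now=0.0, lifetime=0.2)       # e.g. a footstep
field.emit("olfactory", (1.0, 0.0), now=0.0, lifetime=10.0)  # lasts longer
print(len(field.query("aural", (0.5, 0.0), radius=2.0, now=0.1)))  # 1: heard
print(len(field.query("aural", (0.5, 0.0), radius=2.0, now=0.5)))  # 0: expired
```

An NPC that happens not to query in time simply misses the stimulus, which is exactly the behavior you want from a short-lived sound.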
That said, low-level services like the SpatialService manage low-level data like the Placement. (BTW: for game objects I'm using a component-based entity architecture with sub-systems, although they are called "services"; just in case you haven't assumed it already ;) ) Higher-level services query lower-level services for data; e.g. the SpatialService provides queries for collision, proximity, and line-of-sight. The lower-level services do not know how to deal with the results; that is the job of the higher-level services. Hence there is a strict top-down coupling.
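The strict top-down coupling can be sketched like this (class and method names are my assumptions): the low-level service answers pure queries and never calls back up; only the higher-level service interprets the results.

```python
import math

class SpatialService:                      # low level: owns the Placements
    def __init__(self):
        self._placements = {}              # entity id -> (x, y)

    def set_placement(self, entity, pos):
        self._placements[entity] = pos

    def query_proximity(self, entity, radius):
        """Pure query; no callbacks, no knowledge of the caller's purpose."""
        cx, cy = self._placements[entity]
        return [other for other, (x, y) in self._placements.items()
                if other != entity and math.hypot(x - cx, y - cy) <= radius]

class AIService:                           # higher level: interprets results
    def __init__(self, spatial):
        self._spatial = spatial            # knows the lower level...

    def nearby_threats(self, npc, radius):
        # ...but the lower level never calls back up: strict top-down coupling
        return self._spatial.query_proximity(npc, radius)

spatial = SpatialService()
spatial.set_placement("npc", (0.0, 0.0))
spatial.set_placement("player", (3.0, 4.0))   # distance 5 from the NPC
ai = AIService(spatial)
print(ai.nearby_threats("npc", radius=6.0))   # ['player']
print(ai.nearby_threats("npc", radius=4.0))   # []
```

Because dependencies point only downward, the SpatialService can be tested and reused without dragging the AI along, and there is no event wiring to keep consistent.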