
Basket

Member
  • Content count: 14
  • Community Reputation: 199 Neutral

About Basket

  • Rank
    Member

Personal Information

  • Interests
    DevOps
    Production
    Programming
    QA
  1. Thank you so much, haegarr, for your help; I think I understand things well enough now to have the beginnings of an architecture. In my fire-simulation example I was imagining something like a potential field or a cellular-automata implementation. As you say, it's a balance between defining and using more general versus more purpose-specific "sub-systems," and the right granularity depends on the domain and on experience. One really nice thing is that it neatly compartmentalizes things, which should make the system easier to understand and easier to revise.
  2. Sorry for writing so many posts; I just wanted to add a bit more organization so it wouldn't be one large wall of text. I think I understand your concept of an "entity management system" now. It's a bit involved, but the basic idea is that (A) game-object instantiation and deletion are deferred to the next iteration of the "game update loop" so that each "sub-system" can process the request, and (B) game-object instantiation and deletion must be made atomic (that is, reversible on error). That's basically it. Your concept of a "Model" is interesting: it basically amounts to a static configuration dataset used to "data-drive" the instantiation and configuration of purpose-specific objects within each "sub-system." So I take it there is no "shared and common" game-object dataset maintained anywhere. Game objects are essentially unique IDs: instantiating a new game object causes the "entity management sub-system" to allocate and maintain a unique ID, which acts as the global identifier for all state associated with that game object across the "sub-systems." To obtain specific data about a game object we must query, in other words read, the internal state of the responsible "sub-system" for all records carrying that game object's ID, something like "join" processing in databases. When you say that... could you please describe this a bit further? I just want some idea of which services might not be "read-only." Yes, your "entity component system" sounds very much like the one described by Adam at t-machine.org back in 2007; yeesh, how time flies. I didn't actually understand this architecture when I first read about it, but your description makes things a lot more understandable, so thank you. That is the key insight. I think I understand the basics well enough now to research further as required.
Thank you. So what this means is that the "game update loop" is just a linear "pipeline" of "sub-systems" that progresses from executing "game logic"-related tasks toward executing "drawing"-related tasks. There is no clear division, just a linear flow. For example, a "fire simulation" sub-system task could execute, and later a "draw fire" sub-system task could execute, inspect the new state of the "fire simulation" sub-system, and generate and emit one or more "draw jobs" to be enqueued for sorting and then drawing. So that's the "draw loop."
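The linear pipeline described above can be sketched as an ordered list of sub-system tasks executed each frame. Everything here (the sub-system names, the task interface, the numbers) is hypothetical, purely to illustrate the ordering from logic work toward drawing work:

```python
# Minimal sketch of a "game update loop" as a linear pipeline of
# sub-system tasks, ordered from game-logic work toward drawing work.
# All names and values are hypothetical.

class FireSimulation:
    """Game-logic end of the pipeline: advances the fire state."""
    def __init__(self):
        self.intensity = 0.0

    def update(self, frame):
        self.intensity = min(1.0, self.intensity + 0.25)

class DrawFire:
    """Drawing end: reads the simulation's state (read-only) and
    emits draw jobs into a shared queue."""
    def __init__(self, sim, draw_queue):
        self.sim = sim
        self.draw_queue = draw_queue

    def update(self, frame):
        self.draw_queue.append(("fire_quad", self.sim.intensity))

draw_queue = []
sim = FireSimulation()
tasks = [sim, DrawFire(sim, draw_queue)]   # the order is critical

for frame in range(2):                     # two iterations of the loop
    draw_queue.clear()
    for task in tasks:
        task.update(frame)

print(draw_queue)   # [('fire_quad', 0.5)] after the second frame
```

Because DrawFire runs after FireSimulation in the same iteration, it always sees the up-to-date intensity; swapping the task order would make it draw last frame's state.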
  3. Just a few more questions, haegarr, on basically three topics: (A) the internal state of "sub-systems," (B) the information flow or communication between "sub-systems," and (C) the execution flow of the "game update loop" plus "draw loop." From the sound of things, there is no big "scene manager" object of any kind. Instead, the "entity management sub-system" maintains the authoritative state of game objects, as containers of components, with the more specialized data stored and maintained in the individual "sub-systems." The sorts of things that belong in the "game objects," and thus in the internal state of the "entity management sub-system," seem to be configuration data, and pretty much only configuration data. The more interesting data belongs in the relevant "sub-systems": spatial-query acceleration structures, bounding volumes, drawing-related data, and so on. In your first reply you mentioned that there is often significant duplication of data across the "sub-systems." I take that to mean there might be, for example, two quad-trees, each maintained by a different "sub-system" in the "game update loop." But what about efficiency? Updating multiple duplicate or near-duplicate data structures sounds inefficient, though I suppose it makes for more flexibility. And if things do get slow, or memory consumption becomes too high, you can always combine common data structures into a single one maintained by a single "sub-system," and have the other "sub-systems" query that "sub-system" as appropriate. Which brings me to the information flow or communication between "sub-systems." The "read-only" nature of the services provided by "sub-systems" really is interesting. My interpretation is something like the following...
Please correct me if I am wrong. There are four basic things we might want to do: add new game objects, delete existing game objects, read properties of existing game objects, and modify properties of existing game objects. The first two actions are relatively straightforward: (A) we register an add or delete request, as appropriate, to be processed by the "entity management sub-system" in the next iteration of the "game update loop"; (B) at the very start of that next iteration, the "entity management sub-system," in its update(), adds a new game-object instance to its internal state or deletes an existing one (a "game object" being a container of components); (C) then, as each "sub-system" is updated in that same iteration, it first adds or deletes data in its own internal state to reflect the "delta" state of the "entity management sub-system" (i.e. the added or deleted game objects). So we have an asynchronous addition and removal process for game objects: a request registered in iteration N of the "game update loop" is processed in iteration N + 1. The last two actions are more interesting. From the sound of things, the "game update loop" follows a "linear information flow": within a single iteration, a later-ordered task of one "sub-system" can read the internal state of another "sub-system" that completed its execution earlier in that same iteration. So effectively the "public API," in other words the services or routines exposed by a "sub-system," is "read-only." This implies that the order of execution of the "sub-systems" comprising the "game update loop" is critical.
I suppose this is where dividing the work of a "sub-system" into multiple tasks, in other words specialized update()s, is useful. It allows us to order tasks more finely, so that, for example, "task A" of "sub-system #1" can execute after "task C" of "sub-system #9" but before "task B" of that same "sub-system #1." But what about modifying data across "sub-systems"? To guarantee consistency, write operations would need to be asynchronous as well, just like the addition and removal of game objects. That means that if, in the current iteration of the "game update loop," some task of some "sub-system" wanted to write to the internal state of the "entity management sub-system" or some other "sub-system," it would need to (A) register a request or message in an internal message queue of that target "sub-system"; (B) the target "sub-system" would then, during some task in the next iteration (N + 1) of the "game update loop," process this request and register a response in the message queue of the originating "sub-system"; (C) the originating "sub-system" would then, during some task in iteration N + 2, process this response. So we use message passing: a write request sent in iteration N of the "game update loop" is processed in iteration N + 1, and a response, if required, is received and processed in iteration N + 2. Well, anyway, those are my questions for now. Sorry for writing so much; I'm a newbie trying to understand these things.
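The N / N+1 / N+2 message-passing scheme described above might be sketched like this; the sub-system classes, the message shapes, and the inbox layout are all invented for illustration:

```python
# Sketch of asynchronous cross-sub-system writes: a request registered
# in iteration N is processed in N+1, and its response in N+2.
# All names and message formats are hypothetical.

class SubSystem:
    def __init__(self, name):
        self.name = name
        self.inbox = []     # messages waiting for a future iteration
        self.log = []       # (iteration, message) pairs, for inspection

    def process(self, frame, messages):
        for sender, message in messages:
            self.log.append((frame, message))
            if message[0] == "write_request":
                # apply the write, then answer the sender asynchronously
                sender.inbox.append((self, ("write_ack", message[1])))

entity_mgr = SubSystem("entity_mgr")
physics = SubSystem("physics")
subs = [entity_mgr, physics]

# Iteration N (frame 0): physics registers a write against entity_mgr.
entity_mgr.inbox.append((physics, ("write_request", "hp=10")))

for frame in (1, 2):                    # iterations N+1 and N+2
    # Take each inbox as it stood when the iteration began; anything
    # enqueued during this iteration waits for the next one.
    batches = [(sub, sub.inbox) for sub in subs]
    for sub, _ in batches:
        sub.inbox = []
    for sub, messages in batches:
        sub.process(frame, messages)

print(entity_mgr.log)   # request handled in iteration N+1
print(physics.log)      # ack handled in iteration N+2
```

The inbox snapshot at the top of each iteration is what enforces the one-iteration delay: the ack that entity_mgr enqueues during frame 1 is not visible to physics until frame 2.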
  4. OK, so I am back. If I understand correctly, there is a sort of "loose coupling" between the game-object representation and the various representations maintained by the various "sub-systems." We have "game objects" as containers of "components," and the whole set of "game objects" maintained in the "entity management sub-system." Each "sub-system" then maintains its own internal representation of a given "game object," typically some representation of one or more of that game object's "components." In your example, you might have a "game object" of type "Model" with a "component" of type "ShapeComponent." The "Model" contains the "ShapeComponent," and the entire assembly is maintained in the "entity management sub-system." Now if, at some point in the execution of a "sub-system" (either during a call to a service or during a periodic task, i.e. a kind of update()), that "sub-system" wants to instantiate a new "Model" game object, it registers an asynchronous request to be fulfilled in the next tick or iteration of the "game update loop." We can then amend a single iteration of the "game update loop" to look something like this: at the very start of the current iteration, the "entity management sub-system" first removes the game objects whose removal was requested in the previous iteration, and then adds the new game objects whose addition was requested in the previous iteration.
In our example this means the "entity management sub-system" would process our request to instantiate a new "Model" game object by instantiating a "game object" container, configuring it with a new "ShapeComponent" instance, and adding this composite object to its own internal data structures, in other words its own internal state. At some relevant point in the current iteration, the "sub-system" named "X" is asked to update(), whereupon it first queries the "entity management sub-system" for a listing of all game objects removed and a listing of all game objects added in this iteration (as requested to be removed or added, respectively, in the previous iteration), then processes these lists by removing and adding the relevant data in its own internal state, and then goes on to perform its actual meaningful update() work. In our running example, the "Mesh Drawing sub-system," for instance, would receive, from querying the "entity management sub-system," a listing of new game-object instances that includes our new "Model" instance. It would detect the presence of the "ShapeComponent" on this new instance and then instantiate, configure, and insert one or more static geometry meshes, for example, into its own internal state. Similar work would be performed when removing game objects. So in your case, the "game objects" as container objects are effectively configuration data that "data-drives" the instantiation and configuration of purpose-specific objects within the relevant "sub-systems." When you say this...
I take it you mean that "game objects" are effectively configuration data that may, and generally does, not reflect the actual underlying data and structure of the corresponding objects contained in the various "sub-systems." In our example, the "Model" game object with a "ShapeComponent" is more or less a set of simple primitive data, e.g. a file path to a polygonal mesh to load. This "ShapeComponent" is interpreted by the relevant "sub-system(s)" and used to instantiate and configure, for example, a VBO. The "ShapeComponent" itself is a very simple data object, but the "sub-systems" then translate or interpret this simple data into some potentially very complex data. The following sentence I don't really understand. Thinking it over, I believe you mean that a "sub-system" exposes a "public API" (excluding the tasks, i.e. the specialized update()s, that are called by the "game update loop" proper) that allows later-ordered "sub-systems" in the "game update loop" to query and obtain read-only data. The idea being that, as you stated in an earlier post, later-ordered "sub-systems" in the "game update loop" read up-to-date data from earlier-ordered ones. But what if a "sub-system" wants to write to arbitrary data of another "sub-system"? Is this done asynchronously, i.e. by enqueuing a request within the target "sub-system" to be processed in the relevant specialized update() of that target "sub-system" in the next tick or iteration of the "game update loop"? This is probably the biggest question I have about your explanation. OK, another long post; sorry, haegarr, for writing so much and asking so many things. I really appreciate you taking the time and effort to explain things to me.
  5. Thanks, haegarr, for taking the time and effort to explain things further. Your description of "sub-systems" pretty much answers most of the questions I had, so thank you for that. Now, the bit about game objects is kind of hard for me to understand. If I understand correctly: if, at some point in the execution of a game sub-system (either during a call to a service or during a periodic task, i.e. a kind of update()), that sub-system wants to instantiate a new game-object instance, it registers an asynchronous request to be fulfilled in the next tick or iteration of the "game update loop." The reason is that, for consistency, the current iteration of the "game update loop" assumes that no existing game objects are removed, in whole or in part, and no new game objects are added, in whole or in part. That much I think I understand; thank you for such clear and thorough explanations. Now the confusing part to me is this bit: game objects are maintained in an "entity management sub-system," which, being a type of "sub-system," has internal state, services it may provide, and tasks to be executed at one or more points in the "game update loop." One particular task of the "entity management sub-system" is to be called very early (I imagine at the very start) in the "game update loop." This task (A) first removes all existing game objects whose removal was requested in the preceding tick or iteration, and (B) then adds all new game objects whose addition was requested in the preceding tick or iteration. I think that sounds about right; I am just somewhat confused because you describe this as "removes all active jobs and activates all scheduled jobs."
The rest I don't really understand very well. Shouldn't it be the responsibility of the "entity management sub-system" to ensure that all new game objects are added and existing game objects removed as requested (in the preceding iteration of the "game update loop")? Basically, I imagined the "entity management sub-system" would (A) first remove all game objects whose removal was requested in the preceding tick, by removing the relevant data from its own internal state and then synchronously calling a relevant routine of every other "sub-system" to remove the relevant data from that sub-system's internal state, sub-system by sub-system in some known, regular order, and (B) then add all new game objects whose addition was requested in the preceding tick, by adding the relevant data to its own internal state and then synchronously calling a relevant routine of every other "sub-system" to add the relevant data into that sub-system's internal state, again in some known, regular order. But from your description it seems the work to remove existing game objects and add new ones is performed lazily: first the "entity management sub-system" removes existing game objects and adds new ones, then, as each other "sub-system" has its own task executed, it first synchronizes state with the "entity management sub-system" by querying it for the removed and added game objects, removing and adding the relevant data in its own internal state, before actually executing its meaningful work.
So basically a single iteration of the "game update loop" looks something like this: at the very start of the current iteration, the "entity management sub-system" removes the game objects whose removal was requested in the previous iteration, and then adds the new game objects whose addition was requested in the previous iteration. At some relevant point in the current iteration, the "sub-system" named "X" is asked to update(), whereupon it first queries the "entity management sub-system" for a listing of all game objects removed and a listing of all game objects added in this iteration (as requested to be removed or added, respectively, in the previous iteration), then processes these lists by removing and adding the relevant data in its own internal state, and then goes on to perform its meaningful update() work. I think that's how it goes. One question I have is: why have the "sub-systems" asynchronously perform the removal and addition of data in their internal state in their respective update()s, one by one over the course of the current iteration, rather than have the "entity management sub-system" synchronously force this to be done at the very start of the current iteration? Do you have links to where I can read more about "component entity systems" similar to yours? I have heard about "component entity systems"; it's a well-discussed topic.
But there seem to be many styles of "component entity system," and very many implementation details that are not well described, or at least glossed over. This post is pretty long, so I am going to make a new one below with my further questions. Thank you so much, haegarr, for taking the time and effort to write so very clearly about these complex topics. It really is appreciated.
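The lazy synchronization discussed in the last two posts (the entity manager flushes pending adds/removes at the start of the iteration; each sub-system later pulls the deltas before doing its real work) could be sketched like so; every class, method, and component name here is made up for illustration:

```python
# Sketch of deferred entity creation with per-sub-system lazy sync.
# All names (EntityManager, MeshDrawing, "ShapeComponent") are hypothetical.

class EntityManager:
    def __init__(self):
        self.entities = {}          # id -> component dict
        self.pending_add = []       # requests from the previous iteration
        self.pending_remove = []
        self.added, self.removed = [], []   # this iteration's deltas
        self.next_id = 0

    def request_add(self, components):
        self.pending_add.append(components)

    def request_remove(self, entity_id):
        self.pending_remove.append(entity_id)

    def update(self):
        """Runs at the very start of each iteration: flush the requests."""
        self.removed = [e for e in self.pending_remove
                        if self.entities.pop(e, None) is not None]
        self.added = []
        for components in self.pending_add:
            eid = self.next_id
            self.next_id += 1
            self.entities[eid] = components
            self.added.append(eid)
        self.pending_add.clear()
        self.pending_remove.clear()

class MeshDrawing:
    """Pulls this iteration's deltas before doing its real work."""
    def __init__(self, em):
        self.em = em
        self.meshes = {}            # its own internal representation

    def update(self):
        for eid in self.em.removed:
            self.meshes.pop(eid, None)
        for eid in self.em.added:
            shape = self.em.entities[eid].get("ShapeComponent")
            if shape is not None:
                self.meshes[eid] = f"mesh loaded from {shape}"
        # ... meaningful drawing work would follow here ...

em = EntityManager()
drawer = MeshDrawing(em)

em.request_add({"ShapeComponent": "ship.obj"})   # registered in iteration N
em.update(); drawer.update()                     # iteration N+1: now visible
print(drawer.meshes)   # {0: 'mesh loaded from ship.obj'}
```

One plausible answer to the "why lazy?" question above is visible in the shape of the code: the entity manager never needs to know which sub-systems exist or what each one stores, so adding a new sub-system touches nothing but the new sub-system itself.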
  6. Hello everyone; I'm just returning to gamedev after a long break. At the moment I am trying to understand, at a high level, how drawing is performed in games. To be honest it has been a struggle: I am finding it hard to grasp the big picture and the key, salient points, as there are very many pieces that integrate and interact in many different ways. I understand the low-level details to a certain extent: VBOs, render targets, shaders, uniforms, etc. The big uncertain area for me is how the "game" interfaces with the "drawing": how drawing is organized, the division of responsibilities, and the coupling between the "game logic" (and the logical update loop in general) and the "draw loop." I have searched the forums, and while there has been much discussion of these topics, I am personally having a hard time understanding the various approaches. From searching the forums I found a nice post from haegarr in a nice thread by Ansharus. Basically, haegarr describes the "draw loop" as more or less an extension of the "game update loop," consisting of the following steps:
     - Iterate over all meaningful objects in the scene of interest, applying a filter to select which of these objects to draw and in what manner (e.g. compute LOD, compute current transformation matrices, update particle positions, etc.).
     - For each object selected to be drawn, insert a "draw task or job" into a queue (a simplification), where a "draw task or job" is effectively a complete description of the salient information required to draw the object (e.g. VBO IDs, texture IDs, etc.).
     - Sort each list of "draw tasks or jobs" to minimize draw-state changes.
     - Process each newly sorted list of "draw tasks or jobs"...
Basically: (A) issue OpenGL commands to set all state changes explicitly indicated in the "draw task or job," (B) activate the VBO(s), (C) activate the shader program, and (D) issue the OpenGL draw command(s) to draw the geometry. The above is a simplification; certain things may be more complex. For example, there may be multiple lists of "draw tasks or jobs" (e.g. one for opaque objects, one for a certain draw pass, etc.). I actually understand this description of a "draw loop," but certain things are unclear to me. The biggest thing I find difficult to understand is how the "game" communicates with the "draw loop" described above. From searching the forums, the standard approach seems to be to maintain a "scene manager" as a sort of database, and to have this database act as a kind of central, shared "blackboard" data structure between the "game logic update loop" and the "draw loop." At the simplest level, the "game logic update loop" writes information to the "scene manager" and the "draw loop" reads that same information from it. Or at least that's what I think. In some descriptions the "scene manager" sounds very much like an unsorted list; in others it sounds like a complicated set of data structures: a quad-tree, an unsorted list of everything, sorted lists of specialized things, etc. I understand that a "scene manager" is very much defined by the specifics of the game, but I would be really grateful if someone could describe example "scene managers" just for illustration. For example, if you were to develop a city-roaming game like "GTA5," what kind of data structures would you use for a "scene manager," what types of objects might be maintained in it, what would add objects to and remove objects from it, and at what times in the update loop?
That sort of stuff. Basically, the execution flow from the start of a scene to its shutdown, from the perspective of the communication or interaction between the "game logic update loop" and the "draw loop" through the shared "scene manager." But that's just the scene; what about the objects themselves? One particularly confusing thing to me is how game objects control, in some way, how they themselves are drawn. Let's say we have a "spaceship" with very many meshes, particle effects, and other great things. How does this spaceship get instantiated in the scene as a complete drawable object? Where is its various drawing-related data stored? Which components are responsible for executing which drawing-related work? How do these components communicate with each other? From searching around the Internet, in most game engines, like Unity3D, the "game logic," as it executes, explicitly instantiates and configures drawing-related objects and then changes the internal member variables of these objects as desired over time. So in our spaceship example, the spaceship "game object" would instantiate and configure a "particle effect" object, insert it into the "scene manager," and then change its properties over time to change the drawable representation of the spaceship. But again, where is this drawing-related data stored? From searching the forums it appears that the "scene manager" contains only some of the data, mostly the shared data that allows communication between the "game logic update loop" and the "draw loop." The more drawing-specific data, such as VBOs, is maintained within the "drawing system." So effectively there is a division of data between what is maintained in the "game logic" (mostly game-logic-centric) and what is maintained in the "drawing system" (mostly drawing-centric).
That sounds pretty reasonable, but what about the division of work? The "game logic" should not be updating the positions of particles... or should it? From mulling things over a bit, I think the above description of the "draw loop" suggests a more gradual transition: the "game logic" performs higher-level work to set internal state for the following drawing-related work, and then the drawing-related work performs successive levels or layers of more mechanical drawing-related work, until we have "draw tasks or jobs" that can be sent to be drawn by a minimal OpenGL (or whatever) interface layer. So in our example, the spaceship "game object" would instantiate and configure a "particle effect" object, insert it into the "scene manager," and change its properties over time, but the "draw loop" would be responsible for updating particle positions, performing collision detection of particles, performing culling of particles, and doing the other lower-level, more mechanical particle-drawing work, in some regular order, one by one, until we have "draw tasks or jobs." Does this sound right? Well, that's it. Sorry for the basic questions and the length of this post; I am very much a newbie to game development. I would really appreciate it if someone could explain how games do their drawing, from the "game logic" down to the "insert draw tasks or jobs into the draw queue" step, in newbie-friendly terms. I sort of understand the parts after that step; it's the stuff that occurs before it that confuses me.
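The "collect, sort, submit" shape of the draw loop described in this post can be illustrated in a few lines. The sort-key fields below are an assumption for illustration; real engines typically pack pass, shader, texture, and depth bits into a single integer key:

```python
# Sketch of a draw-job queue sorted to minimize state changes.
# The field layout (pass, shader, texture) is hypothetical.

draw_jobs = [
    # (pass, shader_id, texture_id, payload)
    (0, 2, 7, "ship hull"),
    (1, 1, 3, "engine glow (transparent pass)"),
    (0, 2, 4, "asteroid"),
    (0, 2, 7, "ship cockpit"),
    (0, 1, 4, "skybox quad"),
]

# Sort by pass, then shader, then texture, so consecutive jobs share
# as much GPU state as possible.
draw_jobs.sort(key=lambda job: job[:3])

binds = 0
state = None
for gpass, shader, texture, payload in draw_jobs:
    if (shader, texture) != state:      # emulated state change
        state = (shader, texture)
        binds += 1
    # a real renderer would issue the draw call for `payload` here

print(binds, "state changes for", len(draw_jobs), "draws")   # 4 for 5
```

Submitted in their original order, these five jobs would need five state changes; after sorting, the two jobs sharing shader 2 and texture 7 become adjacent and one bind is saved. On real workloads with thousands of jobs, the savings dominate.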
  7. Server design problems

    Thank you, AcarX, for taking the time to describe your architecture :D That is a really self-explanatory diagram :D Hmm, I actually was writing a long post; I was more confused than anything because I had some odd misunderstandings :D But then I took the time to re-read your original post and the advice provided by hplus, and then it really became clear :D I think the advice provided by hplus is pretty sound; I believe PlaneShift uses a very similar publish-subscribe architecture. You can read more about it here: http://www.crystalspace3d.org/downloads/conference_2006/planeshift_conf.pdf (the interesting architectural material starts on page 24). There is also a presentation here: https://www.youtube.com/watch?v=tKSYJYV_RGs
  8. Server design problems

      Could you please describe your architecture in greater detail? :) If I understand correctly, your server defines an internal publish-subscribe style of messaging system that allows "modules" of game logic to interact by exchanging messages in a loosely coupled way. The advantage of such an architecture seems to be that gameplay code can be decomposed into separate, self-contained units with private, internalized functionality and data. Interactions between gameplay code can be represented by the synchronous or asynchronous exchange of messages. These messages should be abstract and self-describing, in which case they are basically abstract, atomic units of information concerning events, requests, or commands; the messages effectively describe the interface between the various units of gameplay code. With such an architecture you can more easily add, remove, or change gameplay code: just add, remove, or change the relevant "module" as desired, taking care that the flow of events, and the information contained in that flow, is not meaningfully changed (e.g. removing a "module" might cause certain events not to be emitted, or certain information not to be communicated, that other "modules" depend on). I have read a little about these kinds of publish-subscribe server architectures, mostly in the context of web services, I think. As for MMO games, I know that PlaneShift uses a similar architecture, but aside from them I have no clue :( So I sort of know the theory in a vague way, but I don't know how to architect these "modules" or the event flow between them :( Are there code bases I could study? Or maybe you could describe the execution flow between some of your "modules"?
(I am not looking at PlaneShift, as it's under the GPL <_< ) For example, the interface or API of, and the messages received and sent by, the "party manager" or the "trade manager" or both ( ^_^ ), and maybe the general execution flow of the server? Does your event system send events synchronously or asynchronously? That's another grey area for me :rolleyes: ; I would imagine it's all synchronous messaging, basically amounting to chained method calls. Any insights you might be able to share would be really appreciated :D
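A minimal sketch of the kind of publish-subscribe bus discussed here, with synchronous delivery (the "chained method calls" case mentioned above); the module names, message types, and payload fields are all invented:

```python
# Minimal synchronous publish-subscribe bus between game-logic modules.
# Module and message names are hypothetical.
from collections import defaultdict

class MessageBus:
    def __init__(self):
        self.subscribers = defaultdict(list)   # msg_type -> handlers

    def subscribe(self, msg_type, handler):
        self.subscribers[msg_type].append(handler)

    def publish(self, msg_type, **payload):
        # Synchronous delivery: every handler runs before publish returns.
        for handler in self.subscribers[msg_type]:
            handler(**payload)

log = []
bus = MessageBus()

# A "trade manager" module reacts to trade requests...
bus.subscribe("trade_request",
              lambda buyer, item: log.append(f"trade: {buyer} wants {item}"))
# ...and a "party manager" module reacts to invites; neither knows the other.
bus.subscribe("party_invite",
              lambda who: log.append(f"party: invited {who}"))

bus.publish("trade_request", buyer="Basket", item="sword")
bus.publish("party_invite", who="AcarX")
print(log)
```

An asynchronous variant would only differ in `publish`: instead of calling handlers immediately, it would append the message to a queue that is drained once per server tick.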
  9. Thanks, Norman, for taking the time to explain things more simply; it's much appreciated :D I think I am starting to understand transformations a bit better. I am going to play with transformation matrices a bit more and see if I can develop more of an intuition :)
  10. Thank you, Norman, for further explaining how to transform from scene space into local space; your explanation is pretty clear :D

    "This is the answer to 'how to change parent without changing final world position.' There will be an 'attachment point' on a parent; this is where the child appears with no translation or rotation with respect to the parent. It may be the origin of the parent, or some other point relative to the parent's origin, such as the end of a bone, or the bow of a ship for a turret. When the child is transformed into its final position in world space, you then subtract its location from the new parent's location to get the offset to the new parent's attachment point. You then subtract its orientation from the new parent's orientation to get the orientation relative to the new parent. You can then switch to the new parent, using the new offset and orientation to draw, and the object will be in the same place and orientation in world space as it was with the old parent. A football being handed off comes to mind: the football is the child, and when the handoff occurs and the football's parent changes from the first player's hand bone to the second's, the football does not move."

    This is a very cool analogy :D and it's pretty clear too :D One thing that is a bit unclear is what you mean by an "attachment point" on a parent where the child appears with no translation or rotation with respect to the parent. From my limited understanding, the origin of a coordinate space is implicit; you can specify a point to functionally consider and use as the "desired origin," an "attachment point," but that point would be offset from the implicit origin of the coordinate space.
So if we use an "attachment point" like the "bow of a ship" as our desired origin for a child object, then we basically have to consider this "attachment point" as a parent object, or parent coordinate space. Which means that we must consider the offset, or relative position and rotation, of this "bow of a ship" with respect to the local origin of the entire ship when we transform the position and rotation of our child object into scene space. Is this correct?

So basically an "attachment point" is a child coordinate space set within a parent coordinate space, or in other words a scene graph?
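To check my understanding, here is a translation-only Python sketch I put together (all names and numbers are invented for illustration, and the matrix helpers are hand-rolled rather than from a real math library) treating the "bow of a ship" attachment point as just one more matrix in the parent-child chain:

```python
def translation_matrix(tx, ty, tz):
    # Column-vector convention: points transform as M * p.
    return [[1, 0, 0, tx],
            [0, 1, 0, ty],
            [0, 0, 1, tz],
            [0, 0, 0, 1]]

def mat_mul(a, b):
    # 4x4 matrix product a * b.
    return [[sum(a[r][k] * b[k][c] for k in range(4)) for c in range(4)]
            for r in range(4)]

def transform_point(m, p):
    # Multiply m by (x, y, z, 1); w = 1 makes the translation apply.
    v = (p[0], p[1], p[2], 1.0)
    return tuple(sum(m[r][c] * v[c] for c in range(4)) for r in range(3))

ship_world   = translation_matrix(100, 0, 0)  # the whole ship in scene space
bow_offset   = translation_matrix(20, 0, 5)   # the bow, relative to the ship's origin
turret_local = translation_matrix(0, 1, 0)    # the turret, relative to the bow

# Chain: scene <- ship <- bow <- turret
turret_world = mat_mul(ship_world, mat_mul(bow_offset, turret_local))
print(transform_point(turret_world, (0, 0, 0)))  # (120.0, 1.0, 5.0)
```

So the attachment point really does act like an intermediate coordinate space in the chain, if I have this right.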
11. Then you could optimize for this rare case by not applying translation. But if these meshes are not even offset differently in parent space, then this must mean they overlap, which is messy.

Effectively they do "overlap", that's a great way of describing this special case :D and yes, that could get messy when the vertices are specified in the same region of coordinate space. But it's a special case for things like scene editors that support "prefabs" etc.
12. Thank-you Norman for helping me as well, your explanations helped to clear up some of the things that nfries88 described.

Perfectly clear.

I don't understand this: if I have a scene graph comprised of a child object and a parent object, then the child object is positioned and rotated relative to its parent object, and thus in the coordinate space of its parent object. Soooo why would we need to "subtract the parent's attachment point location [... do you mean local origin? ...] from the child's location" to get the child object's position and rotation relative to its parent object... since we implicitly have it?

Could you please walk me through a simple math example, let's say assuming only translations to simplify things?

Ok, I think what you're doing is computing the "transpose" of the rotation matrix, as it is equivalent to the inverse of the rotation matrix... or is this one of those other tricks that you mentioned?

I think that as long as I have a relatively "shallow" hierarchy, and keep points or positions within sensible range limits, then it should be alright.

When you say "some other common frame of reference", do you mean some parent coordinate space that is shared and common to the two or more child objects?

I take it that "D3DXVec3Transform" is basically a means to apply a transformation matrix (i.e. "model matrix" or "world matrix") to an arbitrary point or position? So what's the difference between manually computing a transformation matrix and multiplying it against a point or position?

I am not working on a flight sim, but I think I will have to learn a bit of those local orientation operations. Joy
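To make my translations-only question concrete, here is the case I mean as a small Python sketch (all numbers invented): with no rotations involved, "expressing the child relative to a parent" seems to be just a vector subtraction, child_world minus parent_world.

```python
def sub(a, b):
    # Component-wise vector subtraction.
    return tuple(x - y for x, y in zip(a, b))

def add(a, b):
    # Component-wise vector addition.
    return tuple(x + y for x, y in zip(a, b))

old_parent_pos = (4.0, 0.0, 0.0)   # parent A in scene space
child_local    = (2.0, 1.0, 0.0)   # child relative to A

child_world = add(old_parent_pos, child_local)
print(child_world)  # (6.0, 1.0, 0.0)

# Re-expressing the same scene-space position relative to a new parent:
new_parent_pos = (10.0, 0.0, 0.0)  # parent C in scene space
child_local_under_c = sub(child_world, new_parent_pos)
print(child_local_under_c)  # (-4.0, 1.0, 0.0)

# Sanity check: composing with the new parent reproduces the same world position.
assert add(new_parent_pos, child_local_under_c) == child_world
```

So if I understand it right, the subtraction only becomes necessary when the child's location is already in world space and we want it relative to some (possibly new) parent.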
13. Thank-you so much nfries88 for helping me.

Hmm, this is a bit confusing to me... I think what you're saying is that if I have an object with a position and rotation expressed in the same coordinate space as the position and rotation of the camera, then we have effectively expressed the position and rotation of the object in "scene space." Is this right?

In my case, I am trying to understand how to transform a point or position from local space into scene or global space, so that I can do things like "pre-transform" vertices in editor tools and such.

So in these special cases there is no actual "camera", but the idea is that we have a parent coordinate space... which would implicitly be the coordinate space in which the position and rotation of the camera is expressed.

Ahh, I understand what you're saying. What I was considering was how to transform a game object, with a position and rotation expressed in the coordinate space of a parent game object, into "scene space." If I understand correctly, for that you would need to accumulate the transformation matrices?

But as you said, if you want to express a position and rotation relative to a parent coordinate space, you just need to specify it within the parent coordinate space.

This is exactly what I wanted to know about that whole "how do I accumulate transformation matrices in a parent-child relationship" question. Thank-you. I think I may have poorly worded the question, though.

What I was considering were special cases where I have a child object of some parent object that is nicely positioned and rotated relative to its current parent.

When we transform the position and rotation of this child object, specified relative to its current parent object, we get a final position and rotation expressed in scene space that is pretty nice for some purpose. All is well.

But then let's say we re-parent this same child object to some other parent object.
Suddenly the position and rotation of this child object is expressed relative to the parent coordinate space of a different parent object. So suddenly the final position and rotation of this child object expressed in scene space is very different from the "previous" or "original" position and rotation of this child object expressed in scene space.

In many cases we actually want to change parent objects but retain the same scene-space position and rotation of the child object. So effectively we need to position and rotate the child object relative to an arbitrary parent object in such a way that it is at the same scene-space position and rotation even if we switch parent objects.

Very clear.

Very cool, that is a great trick.

I understand what you're saying. I was implicitly assuming that we have two objects without a position or rotation relative to a parent coordinate space.

For example, two meshes. Provided that both meshes have vertices expressed relative to the same local origin, then for purposes of combining these vertices into a single mesh, we can consider that both meshes are effectively expressed in the same coordinate space, and so we can simply "merge" the meshes.

Now this simplification does not work if we assign a transformation matrix to either mesh. But if we have, for example, a scene editor tool and describe geometry in two meshes that share the same common local origin, then effectively they are the same "composite" mesh, just described in two different sub-meshes.

Or at least that's what I think.
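To make the re-parenting goal concrete, here is a translation-only Python sketch of what I am after (invented numbers, hand-rolled helpers; a general transform with rotation would need a full matrix inverse instead of the simple negation used here): multiply the child's scene-space matrix by the inverse of the new parent's scene-space matrix to get the child's new local matrix, so the scene-space result stays unchanged.

```python
def translation_matrix(tx, ty, tz):
    # Column-vector convention: points transform as M * p.
    return [[1, 0, 0, tx],
            [0, 1, 0, ty],
            [0, 0, 1, tz],
            [0, 0, 0, 1]]

def mat_mul(a, b):
    # 4x4 matrix product a * b.
    return [[sum(a[r][k] * b[k][c] for k in range(4)) for c in range(4)]
            for r in range(4)]

def invert_translation(m):
    # Inverse of a pure translation just negates the offset.
    # (A rotation + translation would need a proper inverse.)
    return translation_matrix(-m[0][3], -m[1][3], -m[2][3])

world_a = translation_matrix(10, 0, 0)  # old parent A in scene space
local_b = translation_matrix(2, 0, 0)   # child B, relative to A
world_b = mat_mul(world_a, local_b)     # B in scene space: x = 12

world_c = translation_matrix(5, 0, 0)   # new parent C in scene space

# New local matrix for B under C, chosen so B's world matrix is unchanged:
new_local_b = mat_mul(invert_translation(world_c), world_b)  # x offset = 7

# B ends up at the exact same scene-space transform under the new parent:
assert mat_mul(world_c, new_local_b) == world_b
```

The one-liner I take away is: new_local = inverse(new_parent_world) * child_world.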
14. Hello everyone, I am a newbie game developer.

At the moment I am trying to understand matrix operations and how they can be applied to 3D drawing and 3D gameplay tasks. It's tough so far, as I am having trouble visualizing the kinds of things that are possible, with what sorts of matrix operations, and in what order of matrix operations.

Most of the tutorials I have found focus on drawing simple meshes to the screen, and so describe the "model-view-projection" accumulation of matrices but not much beyond that.

The biggest problem area for me is figuring out, in an intuitive and visual sense, how to transform between coordinate spaces. Some of the things I am trying to grasp and looking for answers on are:

How do we transform an element expressed in a "local coordinate space" into "scene coordinate space"? Let's say we have a position or point "pointA" expressed in the "local coordinate space" of game object "A", and we have assigned a position, rotation and scale expressed in "scene coordinate space" to this same game object "A".

Basically this scenario is equivalent to transforming an arbitrary vertex of some mesh, that has been assigned a position, rotation and scale expressed in "scene space", into "scene space."

If I understand correctly, we need to translate the position, rotation and scale expressed in "scene coordinate space" that we assigned to game object "A" into a transformation matrix, and then we need to perform the following:

transformedPointAInSceneSpace = ( transformationMatrixOfGameObject"A" ) * ( Vector4( pointA, 1 ) )

We don't need to concern ourselves with the "view matrix" or the "projection matrix"... just the "model matrix."

Am I correct?
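To sanity-check the model-matrix-only idea, here is a translation-only Python sketch (invented values, hand-rolled helpers standing in for a real math library like JOML):

```python
def translation_matrix(tx, ty, tz):
    # Column-vector convention: points transform as M * p.
    return [[1, 0, 0, tx],
            [0, 1, 0, ty],
            [0, 0, 1, tz],
            [0, 0, 0, 1]]

def transform_point(m, p):
    # Multiply m by (x, y, z, 1); w = 1 makes the translation apply.
    v = (p[0], p[1], p[2], 1.0)
    return tuple(sum(m[r][c] * v[c] for c in range(4)) for r in range(3))

model_a = translation_matrix(10, 0, 0)  # game object A placed at x = 10 in scene space
point_a_local = (1, 2, 3)               # a point in A's local space

point_a_scene = transform_point(model_a, point_a_local)
print(point_a_scene)  # (11.0, 2.0, 3.0)
```

The view and projection matrices only come into play when rendering, so for pre-transforming vertices in editor tools only the model matrix should be needed, as far as I can tell.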
How do we "combine" or accumulate the transformation matrices of objects in a parent-child relationship, so that the position and rotation of the child object is expressed relative to the position and rotation of the parent object? Basically, how do we accumulate or combine "coordinate spaces"?

Let's say we have a game object "A" with a transformation matrix, and a game object "B" with a transformation matrix, and that game object "A" is the parent of game object "B".

If I understand correctly, we need to perform the following:

transformationMatrixOf"B"ExpressedRelativeTo"A" = ( transformationMatrixOfParent"A" ) * ( transformationMatrixOfChild"B" )

Is this correct?

How do we maintain the current position and rotation of a game object expressed in "scene coordinate space" when assigning it as a child of another game object? Let's say that we have a game object "B" that is in an existing parent-child relationship with game object "A", such that game object "B" is a child of game object "A".

Now let's say that we "re-parent" game object "B" to game object "C".

But to complicate matters, let's say that we want to maintain or retain the current position, rotation and scale expressed in "scene coordinate space" that game object "B" had under the previous parent game object "A", after we have "re-parented" game object "B" to its new parent, game object "C".

How would we go about this?
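To make the parent-times-child accumulation concrete, a translation-only Python sketch (invented numbers; with column vectors the parent's matrix goes on the left):

```python
def translation_matrix(tx, ty, tz):
    # Column-vector convention: points transform as M * p.
    return [[1, 0, 0, tx],
            [0, 1, 0, ty],
            [0, 0, 1, tz],
            [0, 0, 0, 1]]

def mat_mul(a, b):
    # 4x4 matrix product a * b.
    return [[sum(a[r][k] * b[k][c] for k in range(4)) for c in range(4)]
            for r in range(4)]

def transform_point(m, p):
    # Multiply m by (x, y, z, 1); w = 1 makes the translation apply.
    v = (p[0], p[1], p[2], 1.0)
    return tuple(sum(m[r][c] * v[c] for c in range(4)) for r in range(3))

world_a = translation_matrix(10, 0, 0)  # parent A in scene space
local_b = translation_matrix(0, 5, 0)   # child B, relative to A

world_b = mat_mul(world_a, local_b)     # parent on the left for column vectors

# B's local origin ends up at the parent's position plus B's offset:
print(transform_point(world_b, (0, 0, 0)))  # (10.0, 5.0, 0.0)
```

With a deeper hierarchy the same pattern repeats: world = grandparent * parent * child.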
First, I think that we need to accumulate the transformation matrix of child game object "B" with the transformation matrix of parent game object "A", so as to compute a position and rotation expressed in "scene coordinate space" for game object "B":

transformationMatrixOf"B"ExpressedRelativeTo"A" = ( transformationMatrixOfParent"A" ) * ( transformationMatrixOfChild"B" )

Then we would need to transform this transformation matrix, representing a position and rotation expressed in "scene coordinate space", into the "local coordinate space" of game object "C".

Which leads to my next question:

How do we transform a position or point expressed in "scene coordinate space" into a corresponding point or position expressed in the "local coordinate space" of a game object, and vice versa? Let's say we have a position or point "pointA" expressed in "scene coordinate space", and that we have a game object "B" to which we have assigned a position, rotation and scale expressed in "scene coordinate space".

If I understand correctly, in order to transform a given position or point "pointA" expressed in "scene coordinate space" into a corresponding position or point expressed in the "local coordinate space" of game object "B", we need to first translate the position, rotation and scale expressed in "scene coordinate space" that we assigned to game object "B" into a transformation matrix, then we need to "invert" this same transformation matrix, and then multiply it with the position or point "pointA":

transformedPointAInLocalSpaceOfGameObject"B" = ( Math.Invert( transformationMatrixOfGameObject"B" ) ) * ( Vector4( pointA, 1 ) )

Now how do we do the opposite?

Let's say we have our newly transformed point or position "pointA" expressed in the "local coordinate space" of game object "B".

Now how do we transform this point or position "pointA" into "scene coordinate space"?
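A translation-only Python sketch of the scene-to-local direction (invented values; for a pure translation the inverse is just the negated offset, whereas a transform with rotation or scale would need a full matrix inverse):

```python
def translation_matrix(tx, ty, tz):
    # Column-vector convention: points transform as M * p.
    return [[1, 0, 0, tx],
            [0, 1, 0, ty],
            [0, 0, 1, tz],
            [0, 0, 0, 1]]

def transform_point(m, p):
    # Multiply m by (x, y, z, 1); w = 1 makes the translation apply.
    v = (p[0], p[1], p[2], 1.0)
    return tuple(sum(m[r][c] * v[c] for c in range(4)) for r in range(3))

def invert_translation(m):
    # Inverse of a pure translation just negates the offset.
    return translation_matrix(-m[0][3], -m[1][3], -m[2][3])

model_b = translation_matrix(10, 0, 0)   # game object B at x = 10 in scene space
point_scene = (11.0, 2.0, 3.0)           # a point given in scene space

point_local = transform_point(invert_translation(model_b), point_scene)
print(point_local)  # (1.0, 2.0, 3.0)

# And the opposite direction: the model matrix takes it back to scene space.
print(transform_point(model_b, point_local))  # (11.0, 2.0, 3.0)
```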
I think we can simply do the standard:

transformedPointAInSceneSpace = ( transformationMatrixOfGameObject"B" ) * ( Vector4( pointA, 1 ) )

What about "inverting" the "inverse" of the transformation matrix of game object "B" and multiplying this with the point or position "pointA"?

transformedPointAInSceneSpace = ( Math.Invert( Math.Invert( transformationMatrixOfGameObject"B" ) ) ) * ( Vector4( pointA, 1 ) )

Is this possible too? Which is the better approach?

How do we transform a position or point expressed in one "local coordinate space" into another "local coordinate space"? This is sort of a weird question, but it follows from talking about "local coordinate spaces" and "scene coordinate spaces" and translating between them.

I think that since we have two "local coordinate spaces", we effectively have a single "local coordinate space", and so any points or positions, and any operations performed on them, are implicitly in the same single "local coordinate space". So no "transformation" step is required.

Looking at the above questions, they are pretty basic... but I am really new at this 3D transformation stuff. But I think they pretty much cover the basic matrix math that I am having difficulty with.

For more complex stuff, could anyone please point me to a really good matrix math tutorial that moves beyond the simple math operations and describes actual and "common" gameplay and drawing usage scenarios... and does so in a basic and newbie-friendly way, for people that have difficulties with math?

Or instead, could someone please walk through, in simple language and simple maths, some common examples of how I can do those "common" gameplay and drawing related things, like those described above, with a "standard" math library like JOML (for Java) or GLM.

Thank-you
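If I reason the double-inverse and local-to-local questions out in a translation-only Python sketch (invented numbers, hand-rolled helpers), inverting the inverse just returns the original matrix, so multiplying by the model matrix directly seems to be the better approach; and a point appears to move from one local space to another in one step via inverse(world_B) * world_A:

```python
def translation_matrix(tx, ty, tz):
    # Column-vector convention: points transform as M * p.
    return [[1, 0, 0, tx],
            [0, 1, 0, ty],
            [0, 0, 1, tz],
            [0, 0, 0, 1]]

def mat_mul(a, b):
    # 4x4 matrix product a * b.
    return [[sum(a[r][k] * b[k][c] for k in range(4)) for c in range(4)]
            for r in range(4)]

def transform_point(m, p):
    # Multiply m by (x, y, z, 1); w = 1 makes the translation apply.
    v = (p[0], p[1], p[2], 1.0)
    return tuple(sum(m[r][c] * v[c] for c in range(4)) for r in range(3))

def invert_translation(m):
    # Inverse of a pure translation just negates the offset.
    return translation_matrix(-m[0][3], -m[1][3], -m[2][3])

model_b = translation_matrix(10, 0, 0)

# Double inverse gives back the original matrix, so the extra work buys nothing:
assert invert_translation(invert_translation(model_b)) == model_b

# Local space of A -> local space of B, in one composed matrix:
world_a = translation_matrix(10, 0, 0)
world_b = translation_matrix(4, 0, 0)
a_to_b = mat_mul(invert_translation(world_b), world_a)

p_in_a = (1.0, 0.0, 0.0)               # x = 11 in scene space
print(transform_point(a_to_b, p_in_a))  # (7.0, 0.0, 0.0)
```

So the "two local spaces" case only skips the transformation step when the two spaces actually coincide; otherwise A-to-B is just "up to scene space, then back down into B."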