haegarr

Member
  • Content count

    4038
  • Joined

  • Last visited

Community Reputation

7402 Excellent

1 Follower

About haegarr

  • Rank
    Contributor

Personal Information

  • Interests
    Programming
  1. Converting 3D Points into 2D

    There are some things that make understanding your post problematic: A "center" is a point in space. How should "0,0 to -1,1" be understood in this context? You probably mean "a point in the range [-1,+1]x[-1,+1]". Otherwise, you can normalize a point (with the meaning of making its homogeneous coordinate 1), but that is - again probably - not what you mean, is it? A position can be given in an infinite number of spaces. Because you're asking for a transformation of a position from a specific space into normalized space, it is important to know what the original space is. Your code snippet shows "out.pointlist" without giving any hint in which space the points in that point list are given. Are they given in model local space, or world space, or what? In the end I would expect that the model's world matrix and essentially the composition of the camera's view and projection matrices are all that is needed to do the job. You already fetch viewProjection by invoking getCameraViewProjectionMatrix() (BTW a function I did not find in Ogre's documentation). What is wrong with that matrix? What's the reason you are not using it?
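To illustrate the matrix route (a minimal sketch in plain C++, not Ogre code; the matrix layout and all names here are my own assumptions): a world-space point multiplied by the combined view-projection matrix lands in clip space, and the perspective divide then yields normalized device coordinates in [-1,+1].

```cpp
#include <array>

// Sketch only: minimal row-major 4x4 matrix and homogeneous vector types.
using Vec4 = std::array<float, 4>;
using Mat4 = std::array<std::array<float, 4>, 4>;

Mat4 identity() {
    Mat4 m{};
    for (int i = 0; i < 4; ++i) m[i][i] = 1.0f;
    return m;
}

Vec4 mul(const Mat4& m, const Vec4& v) {
    Vec4 r{};
    for (int i = 0; i < 4; ++i)
        for (int j = 0; j < 4; ++j)
            r[i] += m[i][j] * v[j];
    return r;
}

// Transform a world-space point into normalized device coordinates.
// Inside the frustum, x and y end up in [-1,+1].
Vec4 worldToNdc(const Mat4& viewProjection, const Vec4& worldPoint) {
    Vec4 clip = mul(viewProjection, worldPoint);
    // Perspective divide: make the homogeneous coordinate 1.
    return { clip[0] / clip[3], clip[1] / clip[3], clip[2] / clip[3], 1.0f };
}
```

With an actual engine one would feed in `world * viewProjection` composed in the engine's own convention; the point of the sketch is only the divide by the homogeneous coordinate at the end.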
  2. Well, history shows that the forum frowns upon both walls of text and multiple subsequent posts by the same poster. If in doubt, one can split off a thread so that an aspect can be discussed in greater detail in a companion thread. It seems a bit complicated, but w.r.t. ECS one needs to synchronize things over sub-system borders anyway. Sending messages in an unorganized way would not work, and notifying listeners in order is just the push data-flow variant where my way is the pull data-flow variant.

Regarding reversibility: Yep, I'm trying to detect error conditions early just to avoid hazardous situations. In the given case the aspect of asynchronous resource loading additionally comes into play. That resource loading means that resources may become available only in the future, and of course you have to handle this in a way so that the sub-system can still work somehow.

That is a well formulated description. If it isn't copyrighted, I'll tend to use it in future posts ... Already requesting creation/deletion jobs is a kind of modification. The main occurrences of non-read-only access are when parts of internal components are enabled or disabled. For example, the SpatialServices manages the Placement for the game objects. Those are the global positions and orientations. As such, a Placement may be constrained (by mechanisms like Parenting, Targeting, Aiming, ...). While the execution of the constraint is driven by a task, the enabling is driven by service invocation.

This ... ... is it. The coarse flow is from input disposal to entity management to simulation stuff to graphical rendering. Things like input catching, resource loading, and sound rendering run concurrently to the game loop.

However, this ... ... perhaps needs some more discussion. Depending on how fire is simulated, it may or may not be implemented in a sub-system of its own. Sub-systems should provide generally usable services.

If a fireplace is simulated by texture clip swapping, then it is nothing special and will be handled by the generic animation sub-system. If the fire is simulated by e.g. two particle systems, then a generic sub-system is also used. Only if there is a special - say - physically based simulation, or an extending forest fire, is a specific sub-system useful. Think of duck typing: a game object made from a Model with a placement (of course), a mesh (looking like pieces of firewood), an animated billboard (fire), a particle emitter (with sparkle-like particles), a second particle emitter (with smoke-cloud-like particles) ... gives you a fireplace. Nothing but the look of mesh and textures is specific to fire.
  3. Absolutely, although I would not say that services are read-only per se, but they are mostly read-only. Please notice that the S in ECS stands for "system" (or sub-system in this manner). This makes it distinct from component-based entity implementations without (sub-)systems. The purpose of such systems is to deal with a specific, more-or-less small aspect of an entity, which is given by one or at most a small number of what are called the "components". The (sub-)systems do this in a bulk operation, i.e. they work on the respective aspect for all managed entities in sequence. If this is done, then we have an increment of the total state change done for all entities, and this is the basis for the next subsequent sub-system to stack up its own increment. You're right: this of course works if and only if the sub-systems are run in a defined order. That is the reason for the described structure of the game loop. Well, having a defined order is not bad. A counter-example of what happens without one: when the placement of an entity is updated and a collision detection is run immediately, the result is not necessarily okay, because that collision detection may use some other entities with already updated placements and some with not yet updated placements. The result would be somewhat incomplete. You may want to read this Book Excerpt: Game Engine Architecture that I tend to cite at moments like this. However, this high-level architectural decision does not avoid e.g. message passing at some lower level. When message passing is beneficial at some point, let it be the tool of choice.
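A minimal sketch of that idea (the types are illustrative, not from any particular engine): the game loop is just an ordered list of tasks, so a later task can rely on every earlier one having completed.

```cpp
#include <functional>
#include <string>
#include <vector>

// Sketch: the game loop as an ordered list of sub-system tasks.
// Each task runs to completion before the next one starts, so a later
// task can safely read the results of every earlier task via services.
struct GameLoop {
    std::vector<std::function<void()>> tasks; // in dependency order

    void runOnce() {
        for (auto& task : tasks) // defined order: no blackboard polling needed
            task();
    }
};

// Demo helper: run three named tasks and report the execution order.
std::string demoOrder() {
    std::string order;
    GameLoop loop;
    loop.tasks = {
        [&] { order += "input;"; },
        [&] { order += "simulation;"; },
        [&] { order += "render;"; },
    };
    loop.runOnce();
    return order;
}
```

The defined order is the whole point: collision detection placed after the placement-update task always sees a consistent, fully updated set of placements.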
  4. Mostly true, but there are some misunderstandings. The Model is a container class with a list of components. The sum of all the specific types of the components, together with the parametrization stored therein, constitutes an entity (or game object; I'm using both terms mostly equivalently). The Model instance is used only during the entity creation process. It is a static resource, so it will not be altered at any time. Hence I wrote that its role is being a recipe, because it just allows the set of belonging sub-systems to determine what they have to do when creating or deleting an entity that matches the Model.

The entity management sub-system does not know how to deal with particular components. It just knows that other sub-systems need to investigate the Model's components during the creation and deletion process, and that those sub-systems will generate identifiers for the respective inner structures that result from the components. Each sub-system that itself deals with a component of a Model has some kind of internal structure that is initialized according to the parameters of the component. This inner structure will further be the part that is altered during each run through the game loop. Hence this inner structure is a part of the active state of an entity.

Notice that this is a bit different from some other ECS implementations. Here we have a Model and its components, and we have an entity with its - well, so to say - components. There is some semantic coupling between both kinds of components, but that's already all the coupling that exists. So the entity manager just knows how many entities are in the world, how many entities will be created or deleted soon, and which identifiers are attached to them. Even if the Model is remembered, the entity manager has no understanding of what any of its components means.

EDIT: Well, having so many posts in sequence makes answering complicated. I have the feeling that some answers I've given here were already formulated by yourself in one of the other posts...
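A sketch of the Model-as-recipe idea described above (all class names are illustrative assumptions of mine): the Model is a read-only component container, and each sub-system picks out only the component type it understands.

```cpp
#include <memory>
#include <string>
#include <vector>

// Sketch: a Model is a static, read-only container of components.
// Sub-systems inspect it during entity creation and build their own
// internal structures from the parameters; the Model itself never changes.
struct Component {
    virtual ~Component() = default;
};
struct PlacementComponent : Component { float x = 0, y = 0, z = 0; };
struct MeshComponent : Component { std::string meshName; };

struct Model {
    std::vector<std::unique_ptr<Component>> components; // never mutated after load
};

// A sub-system looks for the one component type it is responsible for.
template <class C>
const C* findComponent(const Model& model) {
    for (const auto& c : model.components)
        if (auto* p = dynamic_cast<const C*>(c.get()))
            return p;
    return nullptr; // nothing for this sub-system to do with this Model
}

// Demo: a minimal "fireplace" recipe and what a mesh sub-system would see.
std::string demoMeshLookup() {
    Model fireplace;
    fireplace.components.push_back(std::make_unique<PlacementComponent>());
    auto mesh = std::make_unique<MeshComponent>();
    mesh->meshName = "firewood";
    fireplace.components.push_back(std::move(mesh));
    const MeshComponent* found = findComponent<MeshComponent>(fireplace);
    return found ? found->meshName : "";
}
```

Note how the entity manager never needs this lookup itself: only the mesh sub-system knows MeshComponent exists, which matches the loose semantic coupling described above.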
  5. The purpose of the entity manager (let's name the sub-system so) is to allocate and deallocate entity identifiers and to manage entity creation and deletion jobs. For this it has 4 job queues:
* a queue where currently active creation jobs are linked
* a queue where currently active deletion jobs are linked
* a queue where currently scheduled creation jobs are linked
* a queue where currently scheduled deletion jobs are linked
Whenever a sub-system requests an entity creation or deletion, the entity manager instantiates a corresponding job and enqueues it into the corresponding scheduled jobs queue (a.k.a. it "schedules the job"). Whenever a sub-system asks for the current creation or deletion jobs, it gets access to the corresponding active jobs queue.

The entity manager has a task that is to be integrated into the game loop in front of the tasks of all other sub-systems that deal with entity components. When this task's update() is executed, it destroys all jobs enqueued in both active job queues. This is because the previous run through the game loop has given all sub-systems the possibility to perform their part of the creation/deletion of entities, so the currently active jobs are now deprecated. Then the currently scheduled job queues are made the new currently active job queues, so that at that moment there are no longer any scheduled jobs.

Notice that the entity manager itself does not really create or delete entities. It just organizes creation/deletion in a way that the other sub-systems can do their respective part of creation/deletion without ever becoming out-of-order. This is because wherever in the run through the game loop a sub-system decides that an entity should be created or deleted, the actual creation/deletion process is deferred until the beginning of the next run, and the sub-systems get involved in the order of their dependencies.

You may have noticed that the above mechanism works if and only if a sub-system's task actually does not cancel its part due to an inability. A typical reason for cancelation would be a missing resource. To overcome this problem, the entity manager actually deals with a fifth list:
* a list where currently pending creation jobs are linked
I said earlier that an incoming creation request creates a job in the scheduled creation jobs queue. That is not exactly the case. Instead, the Model instance for which a creation is requested has a kind of BOM attached, i.e. a "bill of resources". Whenever the entity manager is requested for a creation, it first invokes the resource manager (which is the front-end of the resource sub-system) with the said BOM. On return, the entity manager is notified whether
a) all resources that are listed in the BOM are available; or else
b) all resources that are listed and tagged as mandatory are available, but others are not yet; or else
c) at least one resource tagged as mandatory is still in the load process; or else
d) at least one resource tagged as mandatory is finally not available.
Then, only if a) or b) is the case, the entity manager enqueues the job into the scheduled creation jobs queue; if c) is true, the job is linked to the pending creation jobs list; and finally, if d) is true, the job is discarded and an error is logged. Jobs in the pending jobs list will eventually become scheduled in one of the following runs through the loop, whenever the resource manager notifies that all mandatory resources are finally available.

So the process of an entity creation is like this:
* in run N a sub-system requests the creation of an entity
* the entity manager immediately invokes the resource manager with the BOM
* in this simple example, the resource manager returns an "all resources are available"
* the entity manager schedules the creation job
* all subsequent sub-systems will not see the job, because it is not active yet
* in run N+1 the entity manager's task's update() makes the previously scheduled job become an active job
* subsequently in run N+1, the tasks of other sub-systems cause their part of the creation to happen
* at the end of run N+1, the entity is completely built and rendered for the first time
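The scheduled-to-active promotion could be sketched like this (a toy illustration with invented names, not the actual implementation): a job requested during run N stays invisible to the other sub-systems until the entity manager's update() at the start of run N+1.

```cpp
#include <string>
#include <vector>

// Sketch: only the two creation queues, to show the promotion mechanism.
// Requests arriving mid-loop are merely "scheduled"; the entity manager's
// task, placed before all component sub-systems, promotes them to "active".
struct EntityManager {
    std::vector<std::string> activeCreations;    // what sub-systems see this run
    std::vector<std::string> scheduledCreations; // becomes active next run

    void scheduleCreation(const std::string& modelName) {
        scheduledCreations.push_back(modelName);
    }

    // The entity manager's update(), first in the game loop.
    void update() {
        activeCreations.clear();                  // last run's jobs are done
        activeCreations.swap(scheduledCreations); // promote scheduled -> active
    }
};

// Demo: a job scheduled in run N is hidden until run N+1.
bool demoDeferredActivation() {
    EntityManager em;
    em.scheduleCreation("fireplace");              // requested during run N
    bool hiddenInRunN = em.activeCreations.empty();
    em.update();                                   // start of run N+1
    bool activeInRunN1 = (em.activeCreations.size() == 1);
    return hiddenInRunN && activeInRunN1;
}
```

The swap is what keeps every sub-system in sync: either all of them see a creation job in the same run, or none of them does.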
  6. Make tris out of 2d set of points

    Yes, the first iteration does produce only triangles that use an edge (and hence 2 vertices) of the super triangle. Also the second and (I think) third iterations use such vertices. The third iteration is the first one where a triangle occurs that is uncoupled from the super-triangle vertices. That's fine, because you need to have at least 3 vertices in your original set to get at least 1 triangle as output. Look at the picture sequence in Wikipedia's entry on Bowyer-Watson. Notice the red triangles: they are generated as needed but are rubbish in the end. The algorithm removes them in the last step. The blue triangles are all that remain as the outcome.
  7. Make tris out of 2d set of points

    Notice that this step is done after all the points have been added. So it does not remove the triangle just added, because any addition has been done much earlier in the previous loop. (Look at the indentation level.) The step is just there to get rid of all those wrong triangles that were built with the super triangle's corners. This is because the corners of the super triangle are constructed artificially; in other words, those points are not part of the original set of points. Hence none of the edges and none of the triangles that use those points belong to the result.
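That final clean-up step might look like this (a sketch; the index-based data layout is an assumption of mine): after all points are inserted, discard every triangle that still references one of the artificial super-triangle corners.

```cpp
#include <algorithm>
#include <array>
#include <cstddef>
#include <vector>

// Sketch of only the last step of Bowyer-Watson discussed above.
using Triangle = std::array<int, 3>; // indices into the point list

// Convention assumed here: points with index >= firstSuperIndex are the
// three artificial super-triangle corners appended after the real points.
void removeSuperTriangles(std::vector<Triangle>& tris, int firstSuperIndex) {
    tris.erase(std::remove_if(tris.begin(), tris.end(),
                   [firstSuperIndex](const Triangle& t) {
                       return t[0] >= firstSuperIndex ||
                              t[1] >= firstSuperIndex ||
                              t[2] >= firstSuperIndex;
                   }),
               tris.end());
}

// Demo: real points 0..2, super-triangle corners 3..5.
std::size_t demoRemainingCount() {
    std::vector<Triangle> tris = { {0, 1, 2}, {1, 2, 5}, {3, 4, 0} };
    removeSuperTriangles(tris, 3);
    return tris.size(); // only the all-real triangle {0,1,2} survives
}
```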
  8. Your OP targets the architectural view, and there is of course not a single valid answer. And yes, diving into details will need further discussion in probably more threads. That said, here is an overview of how I manage the stuff; several other solutions exist as well.

You've already cited a post where the game loop is described as an ordered sequence of updates on sub-systems, with the (graphical) rendering being the last step in the loop. That is still the case. However, the term "sub-system" is a bit like the term "manager": it's often too broad. So, let's say that any sub-system can have tasks and services. A task is what needs to be invoked from the game loop, i.e. essentially one kind of update(...); I say "one kind" because a sub-system may have more than one update to be called, e.g. the animation sub-system comes to mind. The services, on the other hand, are routines that are provided for use by other sub-systems. Now with tasks being executed one after another in the order of the game loop, we can safely say that any task earlier in the loop is already completed when a later task is executed. Hence the later task can access the results of all earlier tasks by using the appropriate services. This makes a blackboard concept needless on the given level, because we clearly define the execution order instead of letting sub-systems look up by themselves whether they can do something.

Regarding data ownership, the tasks define which sub-system owns which data and hence provides which service to allow access. However, chances are that data needs to be duplicated. This may happen e.g. if a third-party API (like physics) is used by a sub-system, and the API has its own idea of data organization. When I build the game loop, all services of participating sub-systems are registered, and the registry is passed to the sub-system factories when a new instance is requested. Therefore sub-systems can look for the services they have a dependency on, and store the resulting pointers.

Now coming to the process of instantiation of game objects. First off, because there are several sub-systems involved for every single game object, instantiation is not an ad-hoc process. Remember that a task relies on earlier tasks having done their jobs; an instantiation has to begin with the next run through the loop, or else some tasks may become out of sync. Therefore, if a sub-system wants a game object to be instantiated, it calls the (let's say) EntityService::scheduleCreation(Model const*) with the model resource as argument. (We need to briefly discuss what "model" means here in a later section.) As the service's routine name suggests, the service just generates a job and enqueues it in a job list. The same proceeding is done when a game object should be deleted, by invoking EntityService::scheduleDeletion(EntityId).

Well, the entity management sub-system also has an update task, and hence is integrated into the game loop. Because it manages entities, it must be placed very early in the game loop. The update then removes all active jobs and activates all scheduled jobs. That's all. Later on down the game loop, when a task of another sub-system that manages a component of game objects is updated, it first uses the entity service to get access to all of the active deletion jobs, and handles them according to the purpose of the sub-system. It then does the same with the creation jobs. Because all tasks it depends on are already up-to-date, the task can safely access data of the other sub-systems at that moment. (Of course, in reality things are a bit more complex when considering features like resource prefetching and asynchronous loading.)

Game objects are defined as entities with components, and the belonging data is managed by the sub-systems. So this is a CES - Component Entity System - way of handling game objects. A difference exists when this CES is compared to many of the existing ones: I use a model (which is just a container object) as a recipe of how a game object (or "entity") has to be instantiated. That also means that the components are not used directly as they are attached to the Model, but are interpreted by the belonging sub-systems when those instantiate the game object. Hence the internal structure of components may (and often does) differ from the structure seen from the outside. The services then grant read-only access to the internal structure or an image of it. With this description in mind, a Model instance may provide a ShapeComponent with a mesh inside, a ParticleEmitterComponent, of course a PlacementComponent with position and orientation in the world, perhaps an AnimationComponent, a GraphicTechniqueComponent, ... and so forth. The different components are "consumed" by the belonging sub-systems. Phew, let me take a break here...
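The service registry mentioned above could be sketched like so (types and names are illustrative, not actual engine code): sub-systems register their services when the loop is assembled, and factories of later sub-systems look up their dependencies once and keep the pointer.

```cpp
#include <string>
#include <unordered_map>

// Sketch: a registry of services, filled while the game loop is built and
// handed to sub-system factories so they can resolve their dependencies.
struct Service {
    virtual ~Service() = default;
};
struct EntityService : Service {
    // would offer scheduleCreation(...), scheduleDeletion(...), ...
};

struct ServiceRegistry {
    std::unordered_map<std::string, Service*> services;

    void add(const std::string& name, Service* s) { services[name] = s; }

    Service* find(const std::string& name) const {
        auto it = services.find(name);
        return it == services.end() ? nullptr : it->second;
    }
};

// Demo: a registered service is found; an unregistered one is not.
bool demoLookup() {
    static EntityService entityService;
    ServiceRegistry registry;
    registry.add("entity", &entityService);
    return registry.find("entity") == &entityService &&
           registry.find("physics") == nullptr;
}
```

Resolving the pointer once at construction time (instead of on every access) is what makes the per-frame service calls cheap.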
  9. Camera on Vehicle

    No offense, but most of your opening post is very confusing (at least to me). E.g. there is no such thing as a target position within a view matrix. Moreover, "farther viewing" should be done by zooming. And multiplying a position is not meaningful from a mathematical point of view. So it seems to me that a deeper insight may be helpful ...

Let's mostly ignore that the camera is somewhat special due to its view-defining purpose. Instead, let's think of the camera as an object in the world like any other one. The placement (position and orientation) of the object w.r.t. the world is given as a matrix C. Then the relation of a point (or direction) called v in the local space of the camera and its counterpart v' in world space is just
v' = C * v
Bringing C onto the other side of the equation like so (where inv(...) means the inverse matrix)
inv(C) * v' = inv(C) * C * v = v
defines the other way: expressing a world co-ordinate v' in the local space of C. Especially w.r.t. the camera, C may be called the "camera matrix", and inv(C) may be substituted by V, which is usually called the "view matrix" in the context of cameras, because the view matrix is just the inverse of the camera matrix. (The math itself is universal to spaces, regardless of the fact that we apply it to the camera in this case.)

Attaching the camera to the vehicle means making a geometrical constraint, so that moving the vehicle "automatically" moves the camera, too. This kind of thing is usually called "parenting" or - more technically - "forward kinematics". This means that we define a "local placement" L for the camera, i.e. a spatial relation that expresses the position and orientation of the camera not in the world but w.r.t. the vehicle's world placement W. In other words, L defines how much translation and rotation must be applied to a vector in the camera's local space so that it is given in the vehicle's local space.

The formula needed to transform from a local space to its parent space is already shown above, where we've used the world space as the parent space. However, any number of parent spaces can be chained. What we want here is to go from the camera's local space into the vehicle's local space, and from there to the world space. So we have
v' = L * v
v'' = W * v'
or together
v'' = W * ( L * v ) = ( W * L ) * v
from which we see that "parenting" just means concatenating the particular transformation matrices. But be aware that matrix multiplication is not commutative, so the order of the matrices is important. In the given example we have a composited matrix W * L for parenting.

Now the question pops up of how L is built. As a placement it has both a positional and an orientational part. Both can be set to fixed values, meaning that the camera is installed with a static device in the vehicle's cockpit. Less strict parenting can be done, too. E.g. the position can be fixed while the orientation is driven by targeting a "look-at" point. Let's investigate this example a bit further. We define that the placement matrix L is composed of a translational and a rotational part, T and R resp., in the usual order (as you've hopefully noticed, this post uses column vectors):
L := T * R
To calculate the look-at vector, i.e. the unit direction vector from the position of the camera to the target point, both positions must be given in the same space, and the resulting look-at vector will be in that space, too. Because L is given in vehicle space, R (of which the look-at vector is a part) is so as well, so we are interested in a vehicle-local look-at vector. T is already vehicle-local, but the target point p is probably given in world space. So we need to transform it first
inv(W) * p
and can then compute the difference vector
d = inv(W) * p - T * 0
where 0 denotes the origin point vector in homogeneous co-ordinates, i.e. [ 0 0 0 1 ].

From here, normalization and matrix building are done as usual, so I neglect that stuff here. HtH
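The composition v'' = ( W * L ) * v can be checked numerically with a tiny sketch (plain C++ with hand-rolled row-major matrices and column vectors, as in the post; translations only, to keep the numbers obvious):

```cpp
#include <array>

// Sketch only: minimal 4x4 matrix math for the parenting formula.
using Vec4 = std::array<double, 4>;
using Mat4 = std::array<std::array<double, 4>, 4>; // row-major storage

Mat4 translation(double x, double y, double z) {
    Mat4 m{};
    for (int i = 0; i < 4; ++i) m[i][i] = 1.0;
    m[0][3] = x; m[1][3] = y; m[2][3] = z; // column-vector convention
    return m;
}

Mat4 mul(const Mat4& a, const Mat4& b) {
    Mat4 r{};
    for (int i = 0; i < 4; ++i)
        for (int j = 0; j < 4; ++j)
            for (int k = 0; k < 4; ++k)
                r[i][j] += a[i][k] * b[k][j];
    return r;
}

Vec4 apply(const Mat4& m, const Vec4& v) {
    Vec4 r{};
    for (int i = 0; i < 4; ++i)
        for (int j = 0; j < 4; ++j)
            r[i] += m[i][j] * v[j];
    return r;
}

// The camera's world position: transform the local origin [0 0 0 1]
// by the composed matrix W * L (vehicle placement times local placement).
Vec4 cameraWorldPosition(const Mat4& W, const Mat4& L) {
    return apply(mul(W, L), Vec4{0, 0, 0, 1});
}
```

With the vehicle at (10, 0, 0) and the camera mounted at (0, 2, -5) in vehicle space, the composed matrix places the camera at (10, 2, -5) in the world, as the parenting formula predicts.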
  10. Since the title mentions DirectX (as I've noticed just now): AFAIK it uses z = 0/+1 as the normalized depth range.
  11. The vertices you're interested in are the corners of the view frustum. They are given as the corners where the 6 view clipping planes meet. In normalized device co-ordinates these planes are the 6 sides of the unit cube, i.e. the cube with its sides at x = +1/-1, y = +1/-1, and z = +1/-1 or z = 0/+1 (all possible combinations in a vector [x,y,z] give you the 8 corners; notice that some APIs use z = +1/-1 while others use z = 0/+1). On the other hand, in camera space the frustum is limited by the near and far clipping planes and 4 other planes which depend on the projection mode (orthogonal or perspective) and its parametrization (view size and perhaps view angle).
  12. That depends on the co-ordinate space in which the said vertices are given. In general you just have to apply the transformation matrices that change the space relation of the vertices from the given space up to world space. When chaining transformation matrices, the correct order has to be obeyed, of course. Assuming that the vertices are given in camera local space, you have to apply the so-called camera matrix (the inverse of the view matrix), because that matrix describes the relation of the camera local space to world space. If otherwise the vertices are given in normalized device co-ordinate space, you have to apply the inverse projection first before using the camera matrix.
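As a small sketch of the starting point (OpenGL-style z range assumed; use 0/+1 instead for Direct3D conventions): enumerate the 8 NDC corners, which would then be pushed through the inverse projection and the camera matrix as described.

```cpp
#include <array>
#include <vector>

// Sketch: the 8 frustum corners in normalized device coordinates.
// All sign combinations of x, y, z on the unit cube give the corners;
// transforming each by inverse(projection) and then by the camera matrix
// (inverse view matrix) yields the frustum corners in world space.
using Vec3 = std::array<double, 3>;

std::vector<Vec3> ndcFrustumCorners() {
    std::vector<Vec3> corners;
    for (double x : {-1.0, 1.0})
        for (double y : {-1.0, 1.0})
            for (double z : {-1.0, 1.0}) // use {0.0, 1.0} for a D3D-style API
                corners.push_back({x, y, z});
    return corners;
}
```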
  13. The while-loop slices totalDeltaTime (the total elapsed time since the last iteration through the game loop) into equally sized (i.e. MAX_DELTA_TIME) fixed timesteps, but only as long as that much duration is still available; otherwise only the remaining time is used. That std::min ensures exactly this: make deltaTime be MAX_DELTA_TIME as long as that fits, but the remaining time if it doesn't. To actually work, a line like "totalDeltaTime -= deltaTime;" needs to be within the while-loop. Also the condition "totalDeltaTime > 0.0" makes no sense without such a decrement.
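A corrected sketch of that loop, with the decrement inside the while-loop as noted (the function collects the individual steps so the slicing is visible):

```cpp
#include <algorithm>
#include <vector>

// Sketch: slice the total elapsed time into fixed steps of at most
// MAX_DELTA_TIME; the final slice is whatever remains. For example,
// 0.25 s with MAX_DELTA_TIME = 0.1 s becomes the steps 0.1, 0.1, 0.05.
std::vector<double> sliceTime(double totalDeltaTime, double MAX_DELTA_TIME) {
    std::vector<double> steps;
    while (totalDeltaTime > 0.0) {
        // Full MAX_DELTA_TIME slice while it fits, otherwise the rest.
        double deltaTime = std::min(totalDeltaTime, MAX_DELTA_TIME);
        steps.push_back(deltaTime);
        totalDeltaTime -= deltaTime; // without this line the loop never ends
    }
    return steps;
}
```

In a real game loop each slice would drive one fixed-timestep update (physics, simulation, ...) before rendering once with whatever state results.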
  14. Unresolved external symbol?

    There is an argument-less constructor in the class gameInventoryMaster within which the method gameInventory::setSingleSlot is invoked. So look out for that. It is more likely an indirect include, i.e. an include in an included file or an include in an implementation file.
  15. Could anyone explain this math?

    While your thought is true in the common mathematical understanding, it is just a question of definition for computer languages. In GLSL, for example, adding a scalar to a vector is possible; it just builds a vector where all components are set to the scalar and then performs a vector addition. The GLSL specification, chapter "Operators and Expressions", section "Expressions", says: