haegarr


#5267639 Replacing ECS with component based architecture, or traditional OOP, or somet...

Posted by haegarr on 23 December 2015 - 08:06 AM

Well, your thread title does not make much sense. ECS is a buzzword in game development with no concrete definition. It just says that there are entities (also called game objects) which are composed of components. As such, an ECS is a component based architecture. Furthermore, composition is the second programming idiom besides inheritance, and both are idioms of traditional OOP. It's just a fact that inheritance was overemphasized in many books and internet sites, so that composition was not recognized to the degree it should have been.

 

When you ask whether you should step away from composition and go with inheritance, then think again. There is a reason why your trials in the ECS world appear to be so smooth. Both composition and inheritance are tools, and you should use the right tool for the right problem.

 

Another thing to consider is the difference between code and data. It is much more elegant to make differences in game objects by parametrization (i.e. giving them different data) than by inheriting another class. For example, the look of a spell is a different set of animation data, and its damage is just a different value. So there is no need to make a Spell base class and derive FireBolt and IceBolt classes from it; the base class is sufficient (in this example; of course, situations may arise where having 1 or 2 inheritance levels would be okay).
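
For illustration, a minimal sketch of this parametrization in C++ (class and member names are mine, not from any concrete engine):

    // One Spell class covers all spell kinds; the differences are pure data.
    #include <string>

    struct SpellData {            // data, e.g. loaded from files or tools
        std::string animationSet; // the look: which animation data to play
        float       damage;       // the effect: just a different value
    };

    class Spell {
    public:
        explicit Spell(const SpellData& data) : m_data(data) {}
        float damage() const { return m_data.damage; }
    private:
        SpellData m_data;
    };

    // Usage: no FireBolt / IceBolt subclasses needed.
    // Spell fireBolt(SpellData{"firebolt_anims", 12.0f});
    // Spell iceBolt (SpellData{"icebolt_anims",   9.0f});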

 

The existence of "systems" to handle the components is a step that makes the management of all of the components easier. This is because an entity's components can be registered with the belonging systems when the entity is instantiated and added to the game world. Notice that components, although named commonly so, are ver different in their meaning (e.g. a Placement component and a SpellSpawner component). Due to their inherent differences, it is not logical to manage them the same way. If there are systems to manage them, then each system can be implemented in its own optimal way.

 

So, for me, going that way has no real alternative if you plan to do something more complex, especially if you don't want to re-program for each single game design decision. Of course, a small and isolated example like your scenario can be implemented in any other way, too.




#5267624 dealing with a multiplicity of AI states

Posted by haegarr on 23 December 2015 - 05:02 AM

From my non-professional point of view: there is the layer of reasoning, where IMHO a utility-based selector fits very well, and there is the layer of executing behaviors, where a BT fits well in case the behavior can be pre-defined, or a planner in case the behavior is assembled at runtime. (Although the borderlines are blurry; of course, there are "work-arounds" to solve problems of ill-fitting kinds in each of the AI solutions.)

 

Let's use a warrior that suddenly faces a wild horde of enemies. A utility-based reasoner investigates some options, one of them being "flee_in_panic". This option turns out to have the highest utility, and hence is selected. The option's action part is in fact a BT sequence node. The encoded behavior is a "drop_weapon" action, followed by a "turn_around" action, followed by a "flee_from" action. While the former 2 actions are more or less direct instructions to the motion / animation layer, the 3rd action is a motion planner node. This node requests a path from the path finder sub-system and starts executing it. For each path segment the appropriate sub-behavior is selected based on hints given with the path segment.

 

When looking at such a tree, the reasoning is done on the entirety of options at once, which is where utility-based selection shines. The designer wants the agent to drop the weapon, raise its hands, turn suddenly around, … hence an a-priori known behavior. Here a BT is useful (but that does not mean that a BT is restricted to fixed behaviors; it is just that a BT is fully pre-defined). Running along a path, however, depends on the path characteristics, which may only be known at runtime. Hence using a more flexible device like a planner seems appropriate.
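
As a toy sketch (invented Option type and scoring callbacks), the utility-based selection step could look like this:

    // Toy sketch of utility-based selection; the scoring callbacks would
    // inspect the game state (health, enemy count, ...) in practice.
    #include <functional>
    #include <limits>
    #include <vector>

    struct Option {
        const char* name;                // e.g. "flee_in_panic"
        std::function<float()> utility;  // how attractive is this option now?
        std::function<void()>  execute;  // e.g. starts the BT sequence node
    };

    const Option* selectBest(const std::vector<Option>& options) {
        const Option* best = nullptr;
        float bestScore = std::numeric_limits<float>::lowest();
        for (const Option& o : options) {    // reasoning looks at ALL options at once
            const float score = o.utility();
            if (score > bestScore) { bestScore = score; best = &o; }
        }
        return best;
    }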




#5266061 ECS: Systems operating on different component types.

Posted by haegarr on 12 December 2015 - 02:13 PM

Many of such "problems" disappear immediately if one thinks of a standard solution (here e.g. simple lists if only few objects are involved, space partitioning, influence maps, …) encapsulated within systems with appropriate interfaces, and understand components as data suppliers. There is no enforcement that component structures used internally are the same as those used externally of sub-systems.

 

For example, when an entity with an "interactive usage" component is instantiated, the component causes the "interaction sub-system" to prepare its internal representation so that it considers the component, or at least its meaning. Because the sub-system provides an interface that allows querying, other sub-systems can ask for, say, any interaction possibility within a given range. When the player controller detects the input situation "interactive use" while running its update, it queries the interaction sub-system for suitable devices.
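
A sketch of such a query interface, with invented names; note that the internal representation (here a plain list) stays hidden behind it:

    #include <vector>

    struct Vec3 { float x, y, z; };
    struct Interactable { Vec3 position; /* what "use" means, etc. */ };

    class InteractionSystem {
    public:
        void add(const Interactable* i) { m_items.push_back(i); }

        // Other sub-systems (e.g. the player controller) ask questions like this:
        std::vector<const Interactable*> queryInRange(const Vec3& p, float radius) const {
            std::vector<const Interactable*> result;
            const float r2 = radius * radius;
            for (const Interactable* i : m_items) {
                const float dx = i->position.x - p.x;
                const float dy = i->position.y - p.y;
                const float dz = i->position.z - p.z;
                if (dx*dx + dy*dy + dz*dz <= r2) result.push_back(i);
            }
            return result;  // a simple list scan; could be space partitioning instead
        }
    private:
        std::vector<const Interactable*> m_items;
    };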




#5254230 Smoothing me some normals.

Posted by haegarr on 27 September 2015 - 03:14 AM

// set all normals to zero
for each vertex normal (n)
  n = 0,0,0

// add in each face normal to each vertex normal
for each face
  fn = calculate face normal 
  for each vertex normal in face (vn)
     vn += fn

// normalize normals
for each vertex normal (n)
  normalize(n)

This algorithm calculates a vertex normal by averaging the normals of the surrounding faces. While there is nothing inherently wrong with it, one usually wants to apply weights so that the face normals have differently rated influence. Typical weights are the face areas; another choice is the angle of the face at the vertex of interest.
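
As a C++ sketch of the area-weighted variant (types and names are mine): the unnormalized cross product of two triangle edges has a length proportional to the face area, so summing the unnormalized face normals weights each face by its area for free.

    #include <cmath>
    #include <vector>

    struct Vec3 {
        float x = 0, y = 0, z = 0;
        Vec3& operator+=(const Vec3& o) { x += o.x; y += o.y; z += o.z; return *this; }
    };
    static Vec3 sub(const Vec3& a, const Vec3& b)   { return {a.x-b.x, a.y-b.y, a.z-b.z}; }
    static Vec3 cross(const Vec3& a, const Vec3& b) {
        return {a.y*b.z - a.z*b.y, a.z*b.x - a.x*b.z, a.x*b.y - a.y*b.x};
    }

    void smoothNormals(const std::vector<Vec3>& pos,
                       const std::vector<unsigned>& tris,   // 3 indices per triangle
                       std::vector<Vec3>& normals) {
        normals.assign(pos.size(), Vec3{});                 // set all normals to zero
        for (size_t t = 0; t + 2 < tris.size(); t += 3) {
            const unsigned i0 = tris[t], i1 = tris[t+1], i2 = tris[t+2];
            // unnormalized face normal; its length is ~ 2 * face area
            const Vec3 fn = cross(sub(pos[i1], pos[i0]), sub(pos[i2], pos[i0]));
            normals[i0] += fn; normals[i1] += fn; normals[i2] += fn;
        }
        for (Vec3& n : normals) {                           // normalize normals
            const float len = std::sqrt(n.x*n.x + n.y*n.y + n.z*n.z);
            if (len > 0) { n.x /= len; n.y /= len; n.z /= len; }
        }
    }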




#5254121 Intersect ray with heightmap

Posted by haegarr on 26 September 2015 - 06:00 AM

One alternative to a brute force method would be to reduce the set of triangles to possible candidates. Another possibility is to use cheap approximations first and run the more costly computations only if the approximation says so. For example:

 

A height map has a regular grid of z samples in the x/y plane. Seen from above, it looks like a regular arrangement of square cells. Each cell has 8 neighbors (or fewer if placed at an edge of the map). The ray, also seen from above, passes through these cells. So you can handle this as a 2D problem first: start at the cell which contains the camera, calculate through which of the 4 edges the ray leaves the cell, determine the neighboring cell at that edge, and continue from there. In this way you iterate over exactly those cells that are touched by the ray. Now, before hopping to the next cell, determine whether the ray passes through the ground of the current cell. If not, then go to the next cell; otherwise the cell of interest is found.

 

The ground test can be optimized, too. If you have the minimum and maximum height value of the current cell, and you have the entry and exit heights of the ray, then a first test would be a simple interval overlapping test (like a 1D bounding volume hit). Notice that the entry height of a cell is the same as the exit height of the previously visited cell, so passing this value along is beneficial. Notice also that the entry / exit heights are again computed by ray / plane intersections, but the planes are axis aligned and hence the intersections are cheap to calculate.
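
A sketch of the described cell walk (an Amanatides-Woo style 2D grid traversal) combined with the interval pre-test; the cell size is assumed to be 1, and the height accessors and the exact hit test are placeholders, not from the original post:

    #include <cmath>
    #include <limits>

    float cellMinZ(int cx, int cy);      // assumed heightmap queries
    float cellMaxZ(int cx, int cy);
    bool  exactCellHit(int cx, int cy);  // the costly ray/triangle test

    bool traceHeightmap(float ox, float oy, float oz,   // ray origin
                        float dx, float dy, float dz,   // ray direction
                        int w, int h) {                 // map size in cells
        int cx = (int)std::floor(ox), cy = (int)std::floor(oy);
        const int stepX = dx > 0 ? 1 : -1, stepY = dy > 0 ? 1 : -1;
        const float inf = std::numeric_limits<float>::infinity();
        // ray parameter t at which the next x / y cell edge is crossed
        // (a purely vertical ray, dx == dy == 0, needs special handling)
        float tMaxX   = dx != 0 ? ((cx + (dx > 0)) - ox) / dx : inf;
        float tMaxY   = dy != 0 ? ((cy + (dy > 0)) - oy) / dy : inf;
        const float tDeltaX = dx != 0 ? std::fabs(1.0f / dx) : inf;
        const float tDeltaY = dy != 0 ? std::fabs(1.0f / dy) : inf;
        float tEntry = 0.0f;
        while (cx >= 0 && cx < w && cy >= 0 && cy < h) {
            const float tExit = tMaxX < tMaxY ? tMaxX : tMaxY;
            // 1D interval overlap pre-test: ray heights vs. cell min/max
            const float z0 = oz + tEntry * dz, z1 = oz + tExit * dz;
            const float zLo = z0 < z1 ? z0 : z1, zHi = z0 < z1 ? z1 : z0;
            if (zHi >= cellMinZ(cx, cy) && zLo <= cellMaxZ(cx, cy))
                if (exactCellHit(cx, cy)) return true;   // cell of interest found
            tEntry = tExit;                              // exit becomes next entry
            if (tMaxX < tMaxY) { cx += stepX; tMaxX += tDeltaX; }
            else               { cy += stepY; tMaxY += tDeltaY; }
        }
        return false;
    }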




#5253083 Skeletal animation is hella slow

Posted by haegarr on 19 September 2015 - 01:00 PM


I'm sorry but I still don't get this. When you say "animation sub-system" do you mean my AnimationController class?
I implemented it in a way that every AnimationData and SceneNode* pair has a matching Timer object for its animation.
Can you explain further?

With "animation sub-system" I do not name a specific one but a group of collaborating objects, and one of them is suitable to fulfill the discussed task. A class named AnimationController usually does not what I mean, but maybe yours do so.

 

For further explanation, let me begin with the game loop. The game loop is commonly organized as the repeated execution of a defined sequence of tasks. Game time advancing, input collecting, player character control, animation playback, physics simulation, … and finally rendering are typical tasks. From the point of view of the game loop, these are high level tasks that, for the sake of separation, are usually associated with various sub-systems like input, player control, animation, physics, rendering, and so on. So the game loop has a list of such tasks and calls each one via an update(time, delta) or similar. This is also true for the animation sub-system; the call may look like

    animation->update(time, delta);

meaning "update all animations in the scene". So the routine iterates the running animations and updates each one (and here I would expect an AnimationController, one per animation). Now, this iteration should not go through the scene tree and look out for animated nodes. Instead it should go through an internal list. This list holds nothing but each and every animated node. Iterating it means that every found node is known to be animated. No need to determine this property, no need to skip "inanimate" nodes. Further, the animation sub-system has the opportunity to order the nodes as is most suitable for, so not being dependent on the order in the scene tree.




#5252868 Smoothing me some normals.

Posted by haegarr on 18 September 2015 - 06:54 AM


Best way to smooth normals. I know of one method, not sure if it's very memory efficient.

The information you gave us is very sparse.

 

1. Which criterion do you use to qualify what good, better, and best is in this context?

2. What does "smooth normals" mean? An average of normals as it occurs when computing a vertex normal from the normals of surrounding faces? Or an interpolation of normals as it occur to fill gaps between samples? Or realigning existing normals to appear smoothly arranged? Or something else?

3. Which method is the one you know?

4. How is normal "smoothing" meant to be memory efficient / inefficient?




#5252855 Skeletal animation is hella slow

Posted by haegarr on 18 September 2015 - 05:07 AM

Is this really my problem?

Maybe, maybe not. How much time is consumed here and there can only be determined by a runtime analysis. However, the points shown all contribute to your runtime problem, some more than others.

 

Why? Do you mean I shouldn't generally use them, or just in this case?

Whether you should use them depends on the invocation frequency and the available time. You want to write a realtime application and have to expect several hundred(+) invocations. Although the std containers are not necessarily bad, they are made for general use and, more or less, for desktop applications.

 

You mean iterate through the animation data map, find the corresponding SceneNode for every element, and set its local (animation) matrix?

More or less, as long as "find" does not mean a search. As I've written in a post above, let the animation sub-system manage its own data. If you need access back to the scene node, then let the sub-system keep a list of pointers to all animated scene nodes. Iterating the list and accessing the associated scene node is then a fast operation.

 

This part has to be done recursively right?

Well, not really. If you iterate the tree top-down, then a parent's world transform is already calculated when you visit its children, so they can rely on its up-to-date state. No recursive calculation necessary. If you follow the mentioned DoD approach for the parent/child relations as well, then order the matrices so that parent world transforms are processed before those of the associated children.
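
A sketch of that flat, ordered layout (names are mine): parents are stored at smaller indices than their children, so one linear loop replaces the recursion.

    #include <vector>

    struct Mat4 { /* 4x4 matrix */ };
    Mat4 multiply(const Mat4& a, const Mat4& b);   // assumed available

    struct Transforms {
        std::vector<int>  parent;  // parent[i] < i for all i; -1 for roots
        std::vector<Mat4> local;   // written by the animation pass
        std::vector<Mat4> world;   // computed here

        void updateWorld() {
            world.resize(local.size());
            for (size_t i = 0; i < local.size(); ++i)
                world[i] = (parent[i] < 0)
                         ? local[i]                              // root node
                         : multiply(world[parent[i]], local[i]); // parent already done
        }
    };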




#5252841 Skeletal animation is hella slow

Posted by haegarr on 18 September 2015 - 03:35 AM

Seconding all that Hodgman listed, except that I would rephrase his "perhaps you need to iterate through your animation data as the outer loop first ..." as "you should iterate through your animation data as the outer loop first ...". Any sub-system should manage its data itself. A skeleton animation sub-system, for example, should hold a list of all skeletons that need to be updated. A skeleton gets registered / unregistered with the sub-system when it is instantiated into / removed from the scene. This unburdens the sub-system from iterating the scene tree and touching all those unneeded nodes.




#5252723 GPU brush strokes and "undo"

Posted by haegarr on 17 September 2015 - 12:57 PM

There are (at least) 3 undo principles:

 

a) Inverting the effect of the last action;

b) restoring the memorized state that was valid before the last action (as you do ATM);

c) replaying the history of actions, excluding the last one.

 

None of them is per se suitable for pixel painting programs:

 

a) is not possible because information may be lost due to the former application of the action;

b) costs masses of memory (and bandwidth in your case);

c) costs much performance if the painting has progressed too far.

 

A way that is suggested now and then is to combine the above possibilities with the goal of lowering the average costs. For example, a memento is made only after every N actions, and for the then at most N-2 remaining actions a replay is done. The drawing area can be tiled for the purpose of storing a memento, so that only the tiles affected during the last N-1 actions need to be memorized. Older mementos can be externalized by a background job.
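
A minimal sketch of such a combination (types are illustrative; the exact worst-case replay count depends on when exactly the memento is taken):

    #include <vector>

    struct Canvas { /* pixel data */ };
    struct Action { void apply(Canvas&) const { /* re-run brush stroke */ } };

    class History {
    public:
        explicit History(int n) : m_n(n) {}

        void perform(Canvas& c, const Action& a) {
            if (m_pending.empty() || (int)m_pending.size() >= m_n) {
                m_snapshot = c;           // memento before the next batch of actions
                m_pending.clear();
            }
            a.apply(c);
            m_pending.push_back(a);       // record for a possible replay
        }

        void undo(Canvas& c) {
            if (m_pending.empty()) return;
            m_pending.pop_back();         // drop the action to be undone
            c = m_snapshot;               // restore the memento ...
            for (const Action& a : m_pending)
                a.apply(c);               // ... and replay the remaining actions
        }
    private:
        int m_n;
        Canvas m_snapshot;
        std::vector<Action> m_pending;    // actions since the last memento
    };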




#5251484 orbit camera math

Posted by haegarr on 10 September 2015 - 02:33 AM


this works fine, but it has a max of 0.6 for x and 0.4 for y; I would like the max to be 2pi for x and pi for y.

Please look into the other thread.

 


my question is generic: how can I convert a value, e.g. from -10..15, to 0..360?

Assuming you want to map this linearly, you need to do

1. subtract the lower limit, here -10, so that the new lower limit is at 0

     ( -10 .. 15 ) - (-10) => 0 .. 25 

2. normalize the range by dividing by the difference of the upper and lower limits, here 15-(-10)=25, so that

     ( 0 .. 25 ) / 25 => 0 .. 1

3. multiply by the desired range, here 360-0=360, so that

     ( 0 .. 1 ) * 360 => 0 .. 360

4. add the desired lower limit, here 0, so that

     ( 0 .. 360 ) + 0 => 0 .. 360
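
The four steps collapse into a single linear remapping function, for example:

    // maps x from [inLo, inHi] linearly to [outLo, outHi]
    float remap(float x, float inLo, float inHi, float outLo, float outHi) {
        return (x - inLo) / (inHi - inLo) * (outHi - outLo) + outLo;
    }
    // remap(-10, -10, 15, 0, 360) == 0;  remap(15, -10, 15, 0, 360) == 360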




#5251478 orbit camera

Posted by haegarr on 10 September 2015 - 01:31 AM


1) the pitch has a limited range: when I move the mouse up and down, the mesh is rotated by only 10/20 degrees around the x axis

Well, I made a mistake in post #10. The value s must be half of what I've written, hence

    float s = glm::min<float>(m_width, m_height) * 0.5f;

Sorry for that.

 


2) moving the mouse from left to right I get a pitch variation and I don't understand why; it should change only when moving from top to bottom or bottom to top. phi and theta are related to the x and y of the mouse; I don't understand where I am wrong

This kind of solution does not work as you expect. Because it uses the atan2(y,x) function, phi is an angle measured from the positive horizontal axis x in CCW direction (or CW, depending on your co-ordinate system) around the screen center. If you (were able to) move the mouse in a perfect circle around the center, you would get a smoothly varying phi and a constant theta. On the other hand, if you move the mouse in a straight line from the center to the outside, you get a constant phi and a smoothly varying theta. Well, at least you should get that, due to the chosen model of camera rotation.

 


the phi at the corners is always 55 and the theta:
...

I asked for (xp,yp) and not phi for a specific reason: if the co-ordinates are already wrong, then calculations based on those co-ordinates give nonsense in a way that is probably not retraceable.

 

On a 800 x 600 screen / window, and considering the correction I mentioned above, the variable s should be determined to be min(800, 600) * 0.5 = 300. At the left edge mouse x would be 0, and hence

    xp = (0 - 800 / 2) / 300 = -400 / 300 = -1.333

and at the right edge

    xp = (799 - 800 / 2) / 300 = +1.33

Similarly at the top and bottom edges
    yp = (0 - 600 / 2) / 300 = -1
    yp = (599 - 600 / 2) / 300 = +0.997
 

Can you confirm this? Because here ...


float xp = ((m_deltax - m_width / 2) / s);
float yp = ((m_deltay - m_height / 2) / s);

you seem to deal with delta values of mouse motion. That would not be correct. You need to use absolute mouse position values for this kind of solution.

 


but what I don't understand is: is the hemisphere not a radius-1 hemisphere?
and why do the corners go from 0.68 (xp) and 0.43 (yp) [...]

Yep, the normalization by s should have made it a unit hemisphere. But because of the mistake, a hemisphere with radius 0.5 was computed so far.

 

BTW: a yp of 0.43 is wrong even when considering the wrong s. If you run that stuff in a window with borders, you need to use the inner window size instead of the screen size. Do you do so?

 


[...] and not from 0 to 1?

The value range should be [-1,+1) in vertical and [-a,+a) in horizontal direction, where a is the aspect ratio.



#5250668 Does anyone know which OpenGL state did I screw up?

Posted by haegarr on 05 September 2015 - 02:07 AM

As a rule of thumb: IMO a rendering sub-system should not (with one exception, see below) rely on state. Each rendering job should send a full set-up description, including all related parameters that it can change at all (i.e., in the case of models: VB/IB set-up, material-related things, blending, primitive mode, shading, and so on). Then the lowest layer, just above OpenGL, can compare the requested set-up against an internal image of OpenGL's set-up; differences then yield OpenGL calls and, of course, changes to the internal image. This method is cheap enough, avoids confusion like that in the OP, and is useful for decoupling purposes.
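
A minimal sketch of such a lowest layer, caching a single piece of state; a real one would mirror every piece of set-up a rendering job can touch:

    #include <GL/gl.h>   // header path varies by platform / loader

    class GLStateCache {
    public:
        void setBlendEnabled(bool enabled) {
            if (enabled == m_blendEnabled) return;  // matches the internal image: no GL call
            if (enabled) glEnable(GL_BLEND);
            else         glDisable(GL_BLEND);
            m_blendEnabled = enabled;               // keep the image in sync
        }
    private:
        bool m_blendEnabled = false;                // internal image of OpenGL's state
    };

    // Every render job passes its FULL set-up through such setters; only
    // actual differences turn into OpenGL calls.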




#5250260 orbit camera

Posted by haegarr on 02 September 2015 - 08:16 AM

I had some trouble deciphering your post (no offense intended), so bear with me if I misunderstood what you meant ...

 

1) the quaternion needs only one angle to create an angular displacement, is that correct? why now two angles? for the two quaternions that must be interpolated?

A quaternion, in fact a unit-quaternion, is a kind of representation for rotations. As such it encodes an axis of rotation and an angle of rotation (and it has the constraint that its 2-norm is 1, else it would not be a unit-quaternion and shearing would appear).

 

Interpolation means to calculate an in-between, having 2 supporting points (or key values) at the limits. Whether these 2 supporting points are spatially or temporally or otherwise related plays no role for the interpolation. Which "2 quaternions" do you want to interpolate? The control schemes described above do not by themselves require quaternions. If you speak of a smooth transition from the current orientation to the next, then one support point is the most recently used quaternion and the other is the newly determined one (from the mouse position / movement).

 

2) I see the squad and there are two quaternions and a variable t for time? Then must I get the time for each step? And how can I convert t to [0..1]?

The 2 quaternions are the said support points, and the free variable (you used t, I will use k below) denotes where the in-between is located between the support points. You can compute an in-between only when you provide a value for k, yes. (But, as said, t need not be a time value.) How to determine a suitable k depends on what you want to achieve. For example, if you want N interpolation steps that are equally distributed within the allowed range [0,1], then you would use

    k_n := n / N   with   n = 0, 1, 2, …, N

where k_n is the value for k at step n. Notice that n increments by 1 from 0 up to N, inclusive; this would be implemented as a counting loop, of course. So you get

    k_0 = 0 / N = 0

    k_N = N / N = 1

as is required for the interpolation factor by definition.

 

If, on the other hand, you want the interpolation to run over a duration T, started at a moment in time t0 (measured by a continuously running clock), and now at a measured moment t, then

    k( t ) := ( t - t0 ) / T   with   t0 <= t <= t0+T

so that, as required by the interpolation factor definition,

    k( t0 ) = ( t0 - t0 ) / T = 0

    k( t0 + T ) = ( t0 + T - t0 ) / T = 1

 

As you can see in both examples above, the allowed range [0,1] is achieved by normalizing (division by N or T) and, in the case of the timed interpolation, by first shifting the real interval (subtraction of t0) so that it originates at 0; the latter part was not necessary in the first example because it already originates at 0.
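
As a small usage sketch with glm (which this thread already uses), the timed variant combined with quaternion interpolation might look like:

    #include <glm/glm.hpp>
    #include <glm/gtc/quaternion.hpp>

    // q0, q1 are the two support quaternions; t0 is the start moment, T the duration
    glm::quat interpolate(const glm::quat& q0, const glm::quat& q1,
                          float t, float t0, float T) {
        const float k = glm::clamp((t - t0) / T, 0.0f, 1.0f);  // k(t) as derived above
        return glm::slerp(q0, q1, k);                           // the in-between orientation
    }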

 

and how can I transform the position of the mouse to the hypersphere? must I project? how? [...]

Well, a hemisphere (half of a full sphere) is luckily not a hypersphere (a sphere in more than 3 dimensions)!

 

Let's say the mouse position is the tuple (mx,my) and the screen size is given by (w,h) in the same co-ordinate system as (mx,my). Then the relative mouse position is

   s := min( w, h ) * 0.5    << EDIT: must be halved to yield a proper [-1,+1] normalization, hence the 0.5

   x' := ( mx - w / 2 ) / s

   y' := ( my - h / 2 ) / s

 
The position is within the circle described in a previous post only if
   x'^2 + y'^2 <= 1
otherwise the mouse is out of the range of our gizmo! If inside, then the tuple (x',y') denotes a normalized position within the projected circle.
 
A point (x,y,z) on a hemisphere is described in spherical co-ordinates by
   x := r * sin( theta ) * cos( phi )

   y := r * sin( theta ) * sin( phi )

   z := r * cos( theta )

Due to normalization we can ignore the radius because it is 1.

 

If we divide y by x we obtain

   y / x = sin( phi ) / cos( phi ) = tan( phi )

and hence we can compute phi' for our relative mouse position (x',y') using the famous atan2 function as

   phi' = atan2( y', x' )

 

For theta or z', respectively, we have 2 ways. One of them is derived from the fact that each point on the unit sphere is 1 length unit away from its center. That means for us

   x'^2 + y'^2 + z'^2 == 1

so that for our z', considering that we use the "upper" hemisphere, we have

   z' = +sqrt( 1 - x'^2 - y'^2 )

This is valid due to our above formulated condition that the mouse position is within the circle.

 

Hence we can calculate

   theta' = acos( z' )

 

Now we have the 2 angles phi' and theta'. What is left is how to map them onto yaw and pitch, a question you need to answer.
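
Put together, the whole mapping might look like this sketch in plain C++ (function and parameter names are mine):

    #include <algorithm>
    #include <cmath>

    // Maps an absolute mouse position (mx,my) on a w x h screen to the
    // hemisphere angles; returns false if outside the projected circle.
    bool mouseToHemisphere(float mx, float my, float w, float h,
                           float& phi, float& theta) {
        const float s  = std::min(w, h) * 0.5f;     // the corrected normalization
        const float xp = (mx - w * 0.5f) / s;
        const float yp = (my - h * 0.5f) / s;
        const float d2 = xp * xp + yp * yp;
        if (d2 > 1.0f) return false;                // out of the gizmo's range
        const float zp = std::sqrt(1.0f - d2);      // the "upper" hemisphere
        phi   = std::atan2(yp, xp);
        theta = std::acos(zp);
        return true;
    }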




#5249867 entity component system object creation

Posted by haegarr on 31 August 2015 - 06:42 AM

You have a factory method in your runtime that delivers a new instance of the requested kind. The factory method knows a recipe for every kind that can be requested. The recipe may be

 

a) a hardcoded routine; this has the drawback of not being as flexible as a data driven approach, and hence causes maintenance problems in the long run; it is, however, quickly implemented;

 

b) a prototype, i.e. a completely assembled instance, that is deeply copied and perhaps partly re-parametrized by the factory; this variant is what Juliean suggests if I understood it correctly;

 

c) a prescription of how to instantiate and assemble a new entity; the prescription is processed (e.g. interpreted) when needed;

 

You can use combinations of them. For example, a) or c) can be used to generate the prototype for b). Moreover, both the prototype and the prescription can be read from mass storage.

 

d) In the former case of the prototype we speak of de-serialization. It requires that the instance is built and serialized once, and can then be deserialized as often as needed (once per application start in our use case). As such, the representation on mass storage is close to the representation in memory, so loading it is relatively fast and re-interpretation of what is read is reduced to a minimum.

 

e) In the case of a prescription, loading is a breeze, because you load just data that is, however, later interpreted by the factory nonetheless. You can use a binary format or a text format for the file representation. The text format, together with a human-readable format specification, may have the advantage that you can use any text editor to define the prescription at will. XML and JSON (and similar formats) are often used to do so. However, XML is somewhat bloated, though it provides additional features like attributes.
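
As an illustrative sketch of b) fed by c)/e): a factory that clones registered prototypes (names and the clone() contract are assumptions):

    #include <map>
    #include <memory>
    #include <string>

    struct Entity {
        virtual std::unique_ptr<Entity> clone() const = 0;  // deep copy
        virtual ~Entity() = default;
    };

    class EntityFactory {
    public:
        // prototypes are assembled once, e.g. by interpreting a prescription file
        void registerPrototype(const std::string& kind, std::unique_ptr<Entity> proto) {
            m_prototypes[kind] = std::move(proto);
        }
        // each request delivers a fresh deep copy of the prototype
        std::unique_ptr<Entity> create(const std::string& kind) const {
            const auto it = m_prototypes.find(kind);
            return it != m_prototypes.end() ? it->second->clone() : nullptr;
        }
    private:
        std::map<std::string, std::unique_ptr<Entity>> m_prototypes;
    };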





