# haegarr

1. ## Help me understanding the diamond square algorithm

For terrain, a possible outline to start with:

1. compute the height field
2. perhaps clamp heights below sea level to the sea level
3. use a color ramp

For clouds, a possible outline to start with:

1. compute the height field
2. clamp heights below the "blue sky level" to that level
3. colorize the sky level blue, and blend the other heights with a dark gray, using a blend factor proportional to the relative height

Nevertheless, I don't understand why colors should be inherited from the corners somehow. I think that because the entire area is the first square, using inherited colors would always give a much too smooth colorization. There may be a sophisticated way I'm currently not aware of, so if you have a reference on why colors are inherited, please tell us...
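The terrain outline above can be sketched as a small color-ramp lookup. This is a minimal sketch; the ramp entries, the sea level value, and the function name are all illustrative assumptions, not part of the algorithm itself:

```cpp
#include <algorithm>
#include <array>
#include <cstddef>
#include <cstdint>

struct Color { std::uint8_t r, g, b; };

// Hypothetical 4-entry terrain ramp: water, grass, rock, snow.
static const std::array<Color, 4> kRamp = {{
    {0, 0, 180}, {30, 160, 30}, {130, 130, 130}, {255, 255, 255}
}};

// Steps 2 and 3 of the outline: clamp a normalized height in [0,1]
// below the sea level to the sea level, then address a ramp color.
Color terrainColor(float h, float seaLevel = 0.3f) {
    h = std::max(h, seaLevel);                              // step 2: clamp
    std::size_t i = static_cast<std::size_t>(h * (kRamp.size() - 1));
    return kRamp[std::min(i, kRamp.size() - 1)];            // step 3: ramp lookup
}
```

A finer ramp (or linear interpolation between ramp entries) gives smoother transitions.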
2. ## Help me understanding the diamond square algorithm

The diamond-square algorithm is not about coloring but about height: it generates a height-map. I.e., it adds the 3rd (height) component in a semi-random fashion to a given 2D grid. You can, of course, do colorization by mapping height values to colors, but currently I don't see a meaning in "inheriting" color from the corners.

So, what exactly is the goal of your attempt? What are the colors you mentioned good for? E.g., if you want to colorize the height-map like a terrain (white for snow on the mountains and such): make a color ramp, take the resulting height of a grid point, normalize the height, and use the normalized height to address a color in the ramp.
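Since the thread is about diamond-square itself, here is a minimal height-map generator as a sketch, assuming the usual (2^n + 1) x (2^n + 1) grid; the roughness handling and seeding are one common choice among several:

```cpp
#include <cstddef>
#include <random>
#include <vector>

// Diamond-square sketch on a (2^n + 1) x (2^n + 1) grid.
// 'roughness' scales the random offset and is halved each pass.
std::vector<float> diamondSquare(std::size_t n, float roughness, unsigned seed = 42u) {
    const std::size_t size = (std::size_t(1) << n) + 1;
    std::vector<float> h(size * size, 0.0f);
    std::mt19937 rng(seed);
    auto at = [&](std::size_t x, std::size_t y) -> float& { return h[y * size + x]; };

    // Seed the four corners with random heights.
    std::uniform_real_distribution<float> corner(0.0f, 1.0f);
    at(0, 0) = corner(rng); at(size - 1, 0) = corner(rng);
    at(0, size - 1) = corner(rng); at(size - 1, size - 1) = corner(rng);

    for (std::size_t step = size - 1; step > 1; step /= 2, roughness *= 0.5f) {
        std::uniform_real_distribution<float> jitter(-roughness, roughness);
        const std::size_t half = step / 2;

        // Diamond step: centre of each square = mean of its 4 corners + jitter.
        for (std::size_t y = half; y < size; y += step)
            for (std::size_t x = half; x < size; x += step)
                at(x, y) = 0.25f * (at(x - half, y - half) + at(x + half, y - half) +
                                    at(x - half, y + half) + at(x + half, y + half))
                           + jitter(rng);

        // Square step: each edge midpoint = mean of its in-grid neighbours + jitter.
        for (std::size_t y = 0; y < size; y += half)
            for (std::size_t x = (y / half % 2 == 0) ? half : 0; x < size; x += step) {
                float sum = 0.0f; int cnt = 0;
                if (x >= half)       { sum += at(x - half, y); ++cnt; }
                if (x + half < size) { sum += at(x + half, y); ++cnt; }
                if (y >= half)       { sum += at(x, y - half); ++cnt; }
                if (y + half < size) { sum += at(x, y + half); ++cnt; }
                at(x, y) = sum / cnt + jitter(rng);
            }
    }
    return h;
}
```

The resulting heights can then be normalized and fed into a color ramp as described above.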

AFAIS: The value of vertex_weights/@count is identical to the count of vertices in the skin mesh. It is also the count of numbers in the vertex_weights/vcount array. For each vertex in the mesh, in the order of the vertices, the value at index n in the vertex_weights/vcount array denotes how many influences are working on the vertex with index n. So in your example vertex #0 has 5 influences, vertex #1 has 5 influences, and so on.

The numbers in the vertex_weights/v array are to be interpreted as pairs. In your example the sequence of pairs is (0,1), (1,2), (2,3), (3,4), (4,5), (0,6), and so on. The first value of each pair denotes the bone (with the exception that a value of -1 denotes the bind shape). The second value of each pair denotes the index into the weights array.

Beginning with vertex #0, read the first number from vertex_weights/vcount as n, then read the first n pairs from vertex_weights/v and interpret them as all the influences on vertex #0. Then, for the next vertex, read the next number from vertex_weights/vcount as n, read the next n pairs from vertex_weights/v, and interpret them as all the influences on that vertex. Continue this for all remaining vertices.
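The reading scheme above can be sketched as a small expansion routine. The struct and function names are my own; the input arrays stand for the already-parsed vcount, v, and weight source arrays:

```cpp
#include <cstddef>
#include <vector>

struct Influence {
    int   boneIndex;   // -1 denotes the bind shape
    float weight;
};

// Expand <vertex_weights> arrays into a per-vertex influence list.
// 'vcount' holds the influence count per vertex; 'v' holds
// (bone index, weight index) pairs; 'weights' is the weight source array.
std::vector<std::vector<Influence>>
expandVertexWeights(const std::vector<int>& vcount,
                    const std::vector<int>& v,
                    const std::vector<float>& weights) {
    std::vector<std::vector<Influence>> result(vcount.size());
    std::size_t cursor = 0;                       // read position into v
    for (std::size_t vert = 0; vert < vcount.size(); ++vert) {
        for (int i = 0; i < vcount[vert]; ++i) {
            int bone        = v[cursor++];        // first value of the pair
            int weightIndex = v[cursor++];        // second value of the pair
            result[vert].push_back({bone, weights[weightIndex]});
        }
    }
    return result;
}
```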
4. ## Algorithm Logic in ECS?

But collision may change the position and/or velocity of your player entity, wouldn't it? Hence it is settled only after collision.

6. ## Algorithm Turn based - Line of Effect for AI

Then obviously the full scan needs to be replaced by a sparse scan. How many "combinations" of opponents may occur? E.g. an algorithm like this:

- for the current AI agent, calculate the (squared) distance to all opponents on the map
- sort the list by distance
- for each opponent in the list:
    - if the opponent is known to see the current agent: break
    - if the opponent is known to not see the current agent: continue
    - determine and store visibility(agent, opponent) by casting a ray
    - if visible: break
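The steps above can be sketched as follows; the `Agent` struct and the `hasLineOfSight` callback are placeholders for the actual map types and the (expensive) ray cast:

```cpp
#include <algorithm>
#include <cstddef>
#include <functional>
#include <vector>

struct Agent { float x, y; };

// Sparse scan: sort opponents by squared distance, then ray cast only
// until the nearest visible one is found. Returns the opponent index,
// or -1 if nobody is visible.
int nearestVisibleOpponent(const Agent& self,
                           const std::vector<Agent>& opponents,
                           const std::function<bool(const Agent&, const Agent&)>& hasLineOfSight) {
    std::vector<int> order(opponents.size());
    for (std::size_t i = 0; i < order.size(); ++i) order[i] = static_cast<int>(i);

    auto sqDist = [&](int i) {
        float dx = opponents[i].x - self.x, dy = opponents[i].y - self.y;
        return dx * dx + dy * dy;                 // no sqrt needed for ordering
    };
    std::sort(order.begin(), order.end(),
              [&](int a, int b) { return sqDist(a) < sqDist(b); });

    for (int i : order)
        if (hasLineOfSight(self, opponents[i]))   // stop at the first hit
            return i;
    return -1;
}
```

Cached "known to (not) see" results would slot in as an early check before the ray cast.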
7. ## C++ comparing 2 floats

At least: you need to use the absolute value of the difference, something like `if (fabs(a - f) < epsilon)`. Otherwise you only handle the case where a is greater than f and close to it; whenever a is less than f, the test passes regardless of how far apart the two values are.
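A minimal sketch of the comparison, plus a relative-epsilon variant; note that a fixed absolute epsilon only works well when the magnitudes of the operands are roughly known:

```cpp
#include <algorithm>
#include <cmath>

// Absolute-epsilon comparison as described above.
bool nearlyEqual(float a, float f, float epsilon = 1e-5f) {
    return std::fabs(a - f) < epsilon;
}

// Relative variant: the tolerance scales with the larger magnitude,
// so it also behaves sensibly for large values.
bool nearlyEqualRel(float a, float f, float relEps = 1e-5f) {
    float scale = std::max(std::fabs(a), std::fabs(f));
    return std::fabs(a - f) <= relEps * std::max(scale, 1.0f);
}
```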
8. ## Algorithm Turn based - Line of Effect for AI

The opening post allows for much freedom, because it specifies no constraints or optimizations that may already be in place. So I throw in some thoughts and hope they help...

- Static geometry defines a priori known combinations of tiles between which LOS will never work. Hence baking a list of possible target tiles into each tile gives a subset where further calculations are meaningful at all. This may not work well in case the "sight" is very far.
- A variation of the above may be to bake a list of angle ranges describing directions that are blocked (or unblocked). This may work better in case the "sight" is very far, or different characters have different demands on "sight".
- Pairs of tiles need to be investigated only if one tile is occupied by the AI agent itself and the other is occupied by a (living) member of a hostile party.
- The effect range may restrict the distance of tiles that can be reached. Looking further than that would be meaningless.
- An order like scanning an inner ring of tiles (around the AI agent of interest) before scanning the next outer ring allows the algorithm to stop at the first hit; you're looking for the "nearest" tile, right?
- The computations are bi-directional. If an uninterrupted LOS is found from agent A to agent B, then it is also valid from agent B to agent A (but may perhaps be qualified with another view distance and/or effect range). The other way around the same: if agent A cannot see agent B, then agent B cannot see agent A.
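The bi-directional point in particular suggests a symmetric cache, so one ray cast serves both directions. A sketch, assuming tiles are addressed by 32-bit indices and `castRay` stands in for the actual grid ray cast:

```cpp
#include <cstdint>
#include <unordered_map>
#include <utility>

// Symmetric LOS cache: visibility between two tiles is bi-directional,
// so the pair (a, b) is stored under a canonical key with the smaller
// index first.
class LosCache {
public:
    template <typename RayCastFn>
    bool visible(std::uint32_t tileA, std::uint32_t tileB, RayCastFn castRay) {
        if (tileA > tileB) std::swap(tileA, tileB);   // canonical order
        const std::uint64_t key =
            (static_cast<std::uint64_t>(tileA) << 32) | tileB;
        auto it = cache_.find(key);
        if (it != cache_.end()) return it->second;    // reuse earlier result
        bool v = castRay(tileA, tileB);               // the expensive part
        cache_.emplace(key, v);
        return v;
    }
private:
    std::unordered_map<std::uint64_t, bool> cache_;
};
```

The cache would be cleared whenever occupancy or geometry changes, e.g. once per turn.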
9. ## Converting 3D Points into 2D

There are some things that make understanding your post problematic:

- A "center" is a point in space. How should "0,0 to -1,1" be understood in this context? You probably mean "a point in the range [-1,+1]x[-1,+1]". Otherwise, you can normalize a point (with the meaning of making its homogeneous coordinate be 1), but that is - again probably - not what you mean, is it?
- A position can be given in an infinite number of spaces. Because you're asking for a transformation of a position from a specific space into normalized space, it is important to know what the original space is. Your code snippet shows "out.pointlist" without giving any hint in which space the points in that point list are given. Are they given in model-local space, or world space, or what?

In the end I would expect that the model's world matrix and essentially the composition of the camera's view and projection matrices are all that is needed to do the job. You already fetch viewProjection by invoking getCameraViewProjectionMatrix() (BTW a function I did not find in Ogre's documentation). What is wrong with that matrix? What's the reason you are not using it?
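The core of the job is just a matrix-times-point transform followed by the perspective divide. A minimal sketch, with a hand-rolled column-major matrix standing in for the world and view-projection matrices a real engine like Ogre would provide:

```cpp
#include <array>

struct Vec3 { float x, y, z; };
using Mat4 = std::array<float, 16>;   // column-major 4x4 matrix

// Transform a point by the combined world * view * projection matrix,
// then divide by w to get normalized device coordinates in [-1,+1].
Vec3 projectToNdc(const Mat4& worldViewProj, const Vec3& p) {
    const Mat4& m = worldViewProj;
    float x = m[0]*p.x + m[4]*p.y + m[8]*p.z  + m[12];
    float y = m[1]*p.x + m[5]*p.y + m[9]*p.z  + m[13];
    float z = m[2]*p.x + m[6]*p.y + m[10]*p.z + m[14];
    float w = m[3]*p.x + m[7]*p.y + m[11]*p.z + m[15];
    return {x / w, y / w, z / w};     // perspective divide
}
```

Mapping NDC to screen pixels is then just a scale and offset by the viewport size.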

11. ## Questions on organization of draw loop

Absolutely, although I would not say that services are read-only per se, but they are mostly read-only.

Notice please that the S in ECS stands for "system" (or sub-system in this sense). This makes it distinct from component-based entity implementations without (sub-)systems. The purpose of such systems is to deal with a specific, more-or-less small aspect of an entity, which is given by one or at most a small number of what are called the "components". The (sub-)systems do this as a bulk operation, i.e. they work on the respective aspect of all managed entities in sequence. When this is done, we have an increment of the total state change applied to all entities, and this is the basis for the next, subsequent sub-system to stack up its own increment.

You're right: this of course works if and only if the sub-systems are run in a defined order. That is the reason for the described structure of the game loop. Well, having a defined order is not bad. A counter-example: when the placement of an entity is updated, running a collision detection immediately is not necessarily okay, because that collision detection may use some other entities with already updated placements and some with not yet updated placements. The result would be somewhat incomplete. You may want to read this Book Excerpt: Game Engine Architecture, which I tend to cite at moments like this.

However, this high-level architectural decision does not rule out e.g. message passing at some lower level. Where message passing is beneficial, let it be the tool of choice.
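The ordered game loop described above can be sketched as follows; the system names and the `SubSystem` interface are illustrative, not from a real engine:

```cpp
#include <memory>
#include <vector>

// Each sub-system applies its increment of the total state change to
// all entities it manages; the fixed order in the loop guarantees that
// later systems see completed earlier updates.
struct SubSystem {
    virtual ~SubSystem() = default;
    virtual void update(float dt) = 0;   // bulk update over managed entities
};

struct GameLoop {
    std::vector<std::unique_ptr<SubSystem>> systems;  // in execution order

    void tick(float dt) {
        for (auto& s : systems)   // e.g. input, AI, physics, collision, render
            s->update(dt);
    }
};

// Illustrative concrete systems that record their execution order.
struct PhysicsSystem : SubSystem {
    std::vector<int>& log;
    explicit PhysicsSystem(std::vector<int>& l) : log(l) {}
    void update(float) override { log.push_back(1); }
};
struct CollisionSystem : SubSystem {
    std::vector<int>& log;
    explicit CollisionSystem(std::vector<int>& l) : log(l) {}
    void update(float) override { log.push_back(2); }
};
```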
12. ## Questions on organization of draw loop

Mostly true, but there are some misunderstandings.

The Model is a container class with a list of components. The sum of all the specific types of the components, together with the parametrization stored therein, constitutes an entity (or game object; I'm using both terms mostly equivalently). The Model instance is used only during the entity creation process. It is a static resource, so it will not be altered at any time. Hence I wrote that its role is that of a recipe, because it just allows the set of participating sub-systems to determine what they have to do when creating or deleting an entity that matches the Model.

The entity management sub-system does not know how to deal with particular components. It just knows that other sub-systems need to investigate the Model's components during the creation and deletion process, and that those sub-systems will generate identifiers for the respective inner structures that result from the components. Each sub-system that itself deals with a component of a Model has some kind of internal structure that is initialized according to the parameters of the component. This inner structure is then the part that is altered during each run through the game loop. Hence this inner structure is a part of the active state of an entity.

Notice that this is a bit different from some other ECS implementations. Here we have a Model and its components, and we have an entity with its - well, so to say - components. There is some semantic coupling between both kinds of components, but that is already all the coupling that exists. So the entity manager just knows how many entities are in the world, how many entities will be created or deleted soon, and which identifiers are attached to them. Even if the Model is remembered, the entity manager has no understanding of what any of its components means.

EDIT: Well, having so many posts in sequence makes answering complicated. I have the feeling that some answers I've given here have already been formulated by yourself in one of the other posts...
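The "Model as recipe" idea above can be sketched roughly like this; all type and member names are illustrative assumptions:

```cpp
#include <memory>
#include <vector>

// A Model is a static container of component descriptions. Each
// sub-system picks out the description types it understands during
// entity creation and builds its own internal, mutable structure.
struct ComponentDesc {
    virtual ~ComponentDesc() = default;
};

struct PlacementDesc : ComponentDesc { float x = 0, y = 0, z = 0; };

struct Model {                       // static resource, never altered
    std::vector<std::shared_ptr<const ComponentDesc>> components;
};

using EntityId = unsigned;

struct PlacementSystem {
    struct Placement { float x, y, z; };           // mutable runtime state
    std::vector<Placement> placements;             // indexed by EntityId

    // Called during entity creation: consume only the descriptions
    // this sub-system understands; ignore everything else.
    void onCreate(EntityId id, const Model& model) {
        for (auto& c : model.components)
            if (auto p = std::dynamic_pointer_cast<const PlacementDesc>(c)) {
                if (placements.size() <= id) placements.resize(id + 1);
                placements[id] = {p->x, p->y, p->z};
            }
    }
};
```

The entity manager itself would only hand out `EntityId`s and forward the Model to every sub-system, without interpreting any component.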