haegarr

1. make 2 perspective camera not to overlap

Perspective mode cameras have a view volume, too. The volume is a frustum of a pyramid (as opposed to a cuboid for orthographic mode cameras). It is "just" a matter of aligning the volumes.

In Unity a perspective mode camera has a "field of view" parameter, which is the view angle in degrees around the local y axis. Using the aspect ratio of the view, the corresponding angle around the local x axis can be calculated. Since the horizontal view angle spans from the left side of the view to the right side, with the line of sight exactly halfway between them, the angle between the lines of sight of two aligned cameras is just the horizontal view angle.

So create a new local co-ordinate system, put both cameras inside it (positioned at the local origin), rotate the "left" one by half the horizontal view angle to the left around the local y axis, rotate the "right" one by the same amount to the right, and let the camera controls alter the entire co-ordinate system.

Besides the simple approach above, more sophisticated things can be done by adapting both projection matrices.
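The angle conversion above can be sketched in plain Python (the helper name is my own, not a Unity API; the underlying relation is tan(hFov/2) = aspect * tan(vFov/2)):

```python
import math

def horizontal_fov_deg(vertical_fov_deg, aspect):
    """Horizontal view angle from the vertical one, via the aspect ratio:
    tan(hFov/2) = aspect * tan(vFov/2)."""
    v = math.radians(vertical_fov_deg)
    return math.degrees(2.0 * math.atan(aspect * math.tan(v / 2.0)))

# Example: a 60 degree vertical FOV at a 16:9 aspect ratio.
h_fov = horizontal_fov_deg(60.0, 16.0 / 9.0)

# Each camera is then yawed by half that angle away from the shared
# forward direction of the parent co-ordinate system.
yaw_left = -h_fov / 2.0
yaw_right = +h_fov / 2.0
```

With a vertical FOV of 60 degrees at 16:9, the horizontal angle comes out near 91.5 degrees, so each camera would be yawed by roughly 45.7 degrees.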
2. 3D card flip effect

At least two approaches are possible:

a) Fake solution: Take the 4 corner positions of the mesh the image is mapped onto. Translate them to be centered around the camera space origin, if needed. Ensure that they have a depth co-ordinate. Transform them with a perspective projection. Use the result in orthographic space.

The above solution is a fake because it distributes the image pixels evenly, i.e. it does not apply depth foreshortening to the pixels. However, this will probably not be noticed, especially in an animation of decent speed.

b) Real solution: Do a render-to-texture with a full perspective projection, and render the result with an orthographic projection onto the standard target.

This method does apply depth foreshortening, but it is more complex.
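A minimal sketch of the fake solution in plain Python (the function name and the simple pinhole projection are my own assumptions, not an engine API):

```python
import math

def project_corner(x, y, z, fov_y_deg, aspect, view_distance):
    """Perspective-project a card-local point: divide by its distance to the
    eye. The returned 2D position is then drawn with an orthographic camera."""
    f = 1.0 / math.tan(math.radians(fov_y_deg) / 2.0)  # focal length factor
    depth = view_distance + z  # push the card in front of the eye
    return (f / aspect * x / depth, f * y / depth)

# A unit card, rotated by 30 degrees around its vertical axis:
angle = math.radians(30.0)
corners = [(-0.5, -0.5), (0.5, -0.5), (0.5, 0.5), (-0.5, 0.5)]
rotated = [(x * math.cos(angle), y, x * math.sin(angle)) for (x, y) in corners]
projected = [project_corner(x, y, z, 60.0, 1.0, 3.0) for (x, y, z) in rotated]
```

The edge of the card that swings towards the eye ends up taller than the far edge, which is exactly the trapezoid shape the flip effect needs.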
3. Question about composite nodes in behavior trees

Such an attempt to solve the problem is discussed here: Synchronized Behavior Trees

(BTW: That article also states my own opinion that BTs are fine for executing decisions made elsewhere :))
4. Unity Need help programming car game

BTW: I'm *not* running Unity3D at the moment, so we need to solve the problem on a more theoretical basis.

to A: What I asked for is the ratio of the bounding box size to the area of damage. If the radius of the sphere that localizes the damage is, say, 1.0, while the bounding box is 2 by 0.4 by 0.7, then every applied damage influences at least half of the vertices. Hence my question: Is the ratio low enough?

to B: Think of the radius being 0.3 when damage is inflicted to the front right corner of the car. The bounding box's size in the x direction is 2, and it is covered by 2 faces from front to back. Only the vertices at the front are located within the sphere of damage. What we would expect is that the region from 0.7 to 1.0 along the x direction is damaged. However, the mesh's face ranges from 0.0 to 1.0 in that direction, and hence the deformation appears over half of the length of the car despite the relatively small damage sphere.

to C: "Normalized distance" means that it ranges from 0.0 to 1.0. Since the unnormalized value ranges from 0.0 to the radius, dividing by the radius is the calculation to be done for normalization.

to D: Well, when done "correctly", this kind of calculation will be removed entirely.

It seems worthwhile to me to first clarify how exactly an inflicted damage should look...
5. Unity Need help programming car game

Several things:

a) What is the ratio of the size of the bounding box of the mesh (enough "of"s for now ;)) to the size of the damage infliction?

b) From the video it seems to me that the mesh has too low a resolution for deformation. Either use a higher resolution in advance, or refine the resolution locally just before applying the damage.

c) Damage infliction is done by applying a translation in the direction of, and proportional to, the position of the vertex. That means that the intentionally identical damage factor has a higher impact on a vertex farther away from the origin, and that the bumps are not oriented correctly. What you probably want is to support a direction of damage (i.e. the damage is input as a vector whose length denotes the damage amplitude), scale that with one minus the normalized distance from the impact point, and add the result to the vertex position.

d) Using a translation in the range [-1,+1] gives both inward and outward bumps; perhaps not what you want.
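Point c) as a sketch in plain Python (the function name is my own; `damage_vec` is the direction-with-amplitude vector, and the falloff is one minus the normalized distance from the impact point):

```python
import math

def apply_damage(vertices, impact_point, damage_vec, radius):
    """Deform all vertices within `radius` of `impact_point`: translate each
    by the damage vector scaled with (1 - normalized distance), so the impact
    point receives the full amplitude and the influence fades to zero at
    the rim of the damage sphere."""
    result = []
    for v in vertices:
        dist = math.dist(v, impact_point)
        if dist < radius:
            w = 1.0 - dist / radius  # 1 at the impact point, 0 at the rim
            v = tuple(c + w * d for c, d in zip(v, damage_vec))
        result.append(v)
    return result

# A dent of depth 1 along -z, inflicted at the origin with radius 1:
dented = apply_damage([(0, 0, 0), (0.5, 0, 0), (2, 0, 0)],
                      (0, 0, 0), (0.0, 0.0, -1.0), 1.0)
```

The vertex at the impact point moves by the full vector, the one at half the radius by half of it, and the one outside the sphere not at all.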
6. Find the minimum translation to fit one box inside another

For each dimension, with box A ranging from Amin to Amax, and B from Bmin to Bmax, it should be sufficient to calculate (assuming A is not larger than B in that dimension):

d = 0
if Amax > Bmax then d = Bmax - Amax // shift A by a negative value to align with B's max edge
if Amin < Bmin then d = Bmin - Amin // shift A by a positive value to align with B's min edge
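The same rule as a small, testable function (plain Python, my own naming; as above it assumes A is not larger than B, otherwise A cannot fit at all):

```python
def min_fit_translation(a_min, a_max, b_min, b_max):
    """Minimal 1D translation d such that [a_min + d, a_max + d] lies inside
    [b_min, b_max]. Apply it once per dimension for axis-aligned boxes."""
    d = 0.0
    if a_max > b_max:        # A sticks out on the max side: shift negatively
        d = b_max - a_max
    if a_min < b_min:        # A sticks out on the min side: shift positively
        d = b_min - a_min
    return d
```

For example, moving A = [0, 1] into B = [2, 5] yields d = 2, and moving A = [4, 6] into B = [0, 5] yields d = -1.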
7. Question about composite nodes in behavior trees

What I wanted to say: Problems like the one mentioned in the OP may occur because the one tool at hand is not suitable for the job. Splitting the problem into sub-problems and using the correct (whatever that means in the given situation) tool for each one should be preferred.

"But then none of the sequences on the way to the current node will be evaluated, because they are all 'running', and the BT won't be able to react to changes...? And same with Selectors."

Nodes work according to their type. A selector node is allowed to work in another way than a sequence node does.
8. Question about composite nodes in behavior trees

When nodes are entered they are handled considering whether or not they are already running. A sequence node, for example, would not restart if it is already running. I think that would do the trick.

However, the issues you found are not new: When I first read about BTs they sounded like a wonderful tool. Later on, while studying existing implementations, I saw 2 major drawbacks (where the 2nd one is not inherent to BTs themselves):

a) BTs need crutches to make them work (halfway?) as expected, especially w.r.t. reaction. For example, I have seen "interruptible" nodes.

b) The same BT is used to do everything from decision making down to directly controlling the animation and placement of models. That is a mixing of responsibilities, like once happened with the nowadays overpowered scene graphs.

The conclusion I drew from this is once again: There is no one tool to rule them all; instead, use the tool that is appropriate for solving the given single task. In detail:

a) The strict left-to-right execution is not a good fit for decision making. In other words, decision making should be done with another tool (currently I favor utility-based AI for this).

b) BTs are good for executing (more or less) canned behaviors.

These are just my 2 cents.
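A minimal sketch of a sequence node that does not restart while running (plain Python; the status values and the leaf test double are my own stand-ins, not from any particular BT library):

```python
SUCCESS, FAILURE, RUNNING = "success", "failure", "running"

class Sequence:
    """Ticks children left to right, but remembers the index of a RUNNING
    child, so the next tick resumes there instead of restarting at the left."""
    def __init__(self, *children):
        self.children = children
        self.current = 0

    def tick(self):
        while self.current < len(self.children):
            status = self.children[self.current].tick()
            if status == RUNNING:
                return RUNNING      # keep self.current for the next tick
            if status == FAILURE:
                self.current = 0    # reset, so a later tick restarts
                return FAILURE
            self.current += 1       # child succeeded, go on to the next
        self.current = 0
        return SUCCESS

class Leaf:
    """Test double that replays a fixed list of statuses, one per tick."""
    def __init__(self, *statuses):
        self.statuses = list(statuses)
    def tick(self):
        return self.statuses.pop(0)

walk_to_door = Leaf(SUCCESS)      # would raise if ticked a second time
open_door = Leaf(RUNNING, SUCCESS)
seq = Sequence(walk_to_door, open_door)
first = seq.tick()   # open_door reports RUNNING
second = seq.tick()  # resumes at open_door; walk_to_door is NOT re-ticked
```

The second tick succeeding without touching the first leaf is exactly the "would not restart if it is already running" behavior described above.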
9. Animation State Machine Transitions Implementation

In my approaches transitions are generally modeled as states, and changing states happens instantaneously.

The example in the OP may then be modeled with: a) a stable idle state; b) a transitional state that replays a canned animation to bring the skeleton into a defined motion phase at a defined speed; c) a transitional state that interpolates over time from the current speed to the goal speed, setting appropriate animation clips for blending; d) a stable state that continues the walk cycle at the reached speed.

As an alternative to c) and d) there could be: e) a stable state that regulates the speed depending on the difference between the current speed and the goal speed, hence running a continuous blending.

However, such a state machine does not only control animation, since it also controls locomotion. Its granularity w.r.t. states is derived from animation needs, and hence does not reflect the difference between e.g. "still" and "moving" very well. When understood as a component of a locomotion system, states like b) can be seen as synchronization points where control is granted for a short time to the animation system. When control is returned, the state machine immediately enters state e).
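A minimal sketch of the scheme, with class names of my own invention: a state returns its successor when it is done, and the machine switches to that successor instantaneously within the same update.

```python
class StateMachine:
    """Runs one state at a time; a state's update returns its successor
    (or None to stay), and switching happens within the same update call."""
    def __init__(self, initial):
        self.state = initial

    def update(self, dt):
        next_state = self.state.update(dt)
        while next_state is not None:  # instantaneous state change
            self.state = next_state
            next_state = self.state.update(0.0)

class SpeedBlend:
    """Transitional state c): interpolates from the start speed to the goal
    speed over `duration` seconds, then hands over to a stable state d)."""
    def __init__(self, start, goal, duration, next_state):
        self.start, self.goal = start, goal
        self.t, self.duration = 0.0, duration
        self.speed = start
        self.next_state = next_state

    def update(self, dt):
        self.t = min(self.t + dt, self.duration)
        u = self.t / self.duration
        self.speed = (1.0 - u) * self.start + u * self.goal
        return self.next_state if self.t >= self.duration else None

class Walk:
    """Stable state d): continues the walk cycle at the reached speed."""
    def __init__(self, speed):
        self.speed = speed
    def update(self, dt):
        return None

# Blend from standing to a walk speed of 2 over one second:
sm = StateMachine(SpeedBlend(0.0, 2.0, 1.0, Walk(2.0)))
```

After half a second the blend state reports a speed of 1.0; once the full second has elapsed, control passes immediately to the Walk state.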
10. arcball orientation

The keywords to mention are "rotation matrix decomposition". Putting these into an internet search engine provides you with explanations, formulas, and also code.

However, as has also already been indicated, the solution depends on how the axes and the order of rotations are defined (together giving the composition of the total rotation matrix). Even the terms yaw / pitch / roll are not always used consistently. So copying any given code may give results that are wrong for your situation.

That stuff is a conversion from spherical co-ordinates to cartesian co-ordinates (although using the Greek letter rho for the radius / distance is somewhat misleading, since Greek letters are by convention often used for angles). There is of course a reverse way, as described e.g. here.

However, this is not the same as Euler angles, because spherical co-ordinates describe a position while Euler angles describe an orientation! Notice the mismatch between the radius / distance on the one side and the roll on the other side.

The view matrix is probably not even what you want to decompose. Traditionally the view matrix is the transformation from world space into camera space.

No offense, but Alberth's advice is really what you should consider. Without an understanding of what the used code does, your project is going to fail sooner rather than later.

So ... what exactly is the "post-arcball mathematics" you want to do?
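For reference, the two conversions under one fixed convention (y as the up axis, theta as the polar angle from +y, phi as the azimuth around y; plain Python, and keep in mind that other sources use other conventions):

```python
import math

def spherical_to_cartesian(r, theta, phi):
    """y-up convention: theta is the polar angle measured from +y,
    phi the azimuth around y, measured from +x towards +z."""
    return (r * math.sin(theta) * math.cos(phi),
            r * math.cos(theta),
            r * math.sin(theta) * math.sin(phi))

def cartesian_to_spherical(x, y, z):
    """The reverse way; valid for r > 0 and 0 < theta < pi."""
    r = math.sqrt(x * x + y * y + z * z)
    return (r, math.acos(y / r), math.atan2(z, x))
```

A round trip through both functions returns the original triple, which is a handy sanity check for whichever convention you settle on.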
11. 3D vector rotations around an axis.

I think the following is meant. Perhaps not a totally precise explanation, though...

Applying a non-identity rotation maps a set of points onto themselves. These points form a line, which is usually called the axis of the rotation. Further, a rotation maps the point zero onto zero due to its multiplicative nature. In other words, the axis of rotation always passes through zero. If this should not be the case, you need to temporarily shift the space so that the axis does pass through zero, apply the rotation, and undo the shift.

I.e. with p being a point the axis should pass through, T( p ) * R * T( -p ) does the trick, where T denotes a translation matrix and R the rotation matrix.
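A sketch of the T( p ) * R * T( -p ) trick with hand-rolled 4x4 matrices (plain Python, my own helper names; the example rotates by 90 degrees around an axis parallel to z passing through p = (2, 0, 0)):

```python
import math

def translation(px, py, pz):
    return [[1, 0, 0, px], [0, 1, 0, py], [0, 0, 1, pz], [0, 0, 0, 1]]

def rotation_z(angle):
    c, s = math.cos(angle), math.sin(angle)
    return [[c, -s, 0, 0], [s, c, 0, 0], [0, 0, 1, 0], [0, 0, 0, 1]]

def mat_mul(a, b):
    return [[sum(a[i][k] * b[k][j] for k in range(4)) for j in range(4)]
            for i in range(4)]

def transform(m, point):
    v = (point[0], point[1], point[2], 1.0)  # homogeneous coordinates
    return tuple(sum(m[i][k] * v[k] for k in range(4)) for i in range(3))

p = (2.0, 0.0, 0.0)
# Shift the axis to zero, rotate, shift back: T(p) * R * T(-p).
m = mat_mul(translation(*p),
            mat_mul(rotation_z(math.pi / 2.0),
                    translation(-p[0], -p[1], -p[2])))
```

The pivot p itself stays fixed under m, while a point one unit to its right ends up one unit above it, as expected for a 90 degree turn around that shifted axis.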