# Using XMMatrixLookAtLH - trying to face a point, but doesn't work/crashes?

## Recommended Posts

I'm using this code for my Entity class:


```cpp
void Entity::Face(FXMVECTOR target)
{
    XMVECTOR up = XMVectorSet(0.0f, 1.0f, 0.0f, 0.0f);

    up = XMVector3Transform(up, Rotation);

    // Rotation is an XMMATRIX, Translation is an XMVECTOR
    Rotation = XMMatrixLookAtLH(Translation, target, up);
}
```

I'm trying to get it to face another entity. Basically I'm doing this:

```cpp
Entity a = //....
Entity b = //.......
a.Face(b.Translation);
```

And one of three things happens:
1. The function crashes because of an assertion.
2. Entity 'a' ends up misplaced in a weird position.
3. Entity 'a' disappears.

This damn function XMMatrixLookAtLH has been a source of problems all over my project; everywhere I used it I had to spend hours getting it to work properly.

If I create the rotation matrix with XMMatrixRotationRollPitchYawFromVector, it works perfectly. However, I NEED it to face directly at the point it's been given, and I don't know how to get that to work. Please, someone give me some advice.

##### Share on other sites
#1: I don’t know what an FXMVECTOR is, but the inputs to XMMatrixLookAtLH() have to be aligned on 16-byte addresses.

#2, #3: “up” should be [0,1,0]. Remove the XMVector3Transform() line. The rest is because you are, for whatever reason, writing to a member called “Rotation”. Unless that is your object’s world matrix (in which case it should be named so), you are doing it wrong.

I am expecting you have a Rotation matrix, and Translation and Scale vectors, and these combine to form a world matrix.
XMMatrixLookAtLH() returns the world matrix, so either rename Rotation to World or put the correct matrix there.
But mainly, remove “up = XMVector3Transform(up, Rotation);”. It does nothing but introduce bugs.

L. Spiro

##### Share on other sites

OK, so basically this is my method so far:

```cpp
// Inside the entity there are:
// XMVECTOR Translation
// XMVECTOR Scale
// XMMATRIX Rotation

void Entity3D::Face(const Entity3D& target)
{
    XMVECTOR up = XMVectorSet(0.0f, 1.0f, 0.0f, 0.0f);

    Transform = XMMatrixScalingFromVector(Scale) * XMMatrixLookAtLH(Translation, target.Translation, up);

    if(HasParent())
    {
        Transform *= _parent->Transform;
    }

    Sphere.Transform(Sphere, Transform);
}
```

I'm calling this so I can get one entity to face another, but I'm not sure where to put the Scale and Translation. What I was originally doing was using polar-to-Cartesian coordinates to get cubes to rotate around a central cube, and that worked well, but I need them to both rotate and face the central cube. When I call the above method, half of them get clumped on the left of the central cube and half on the right (if it doesn't crash, that is).

edit: the reason I used Rotation in the first post was that my idea was basically to change the entity's rotation matrix so that it faces the other entity. I can change it properly with XMMatrixRotationRollPitchYaw, but I can't possibly know the roll, pitch, and yaw required for each cube to face the central cube.
Edited by mrheisenberg

##### Share on other sites

1.) Both the looking and the looked-at entities' co-ordinate frames must be given w.r.t. the same reference, or else the resulting transform will be meaningless. Although it is not a proof, the branch using HasParent() leads me to assume that "this" and "target" probably violate this condition.

The solution is to first compute the source and target positions in (typically) global space and apply the "look at" to them. If needed, transform the result back into some local space afterwards.

2.) It seems that you are using the local origin for both the source and the target position. Rotating and/or scaling (0,0,0) has no effect on the point (if applied before any translation); both are identity mappings there, so neither needs to be considered.

However, when computing the frame w.r.t. the world as described in 1.), scaling and rotation are already incorporated.

Edited by haegarr

##### Share on other sites

> 1.) Both the looking and the looked-at entities' co-ordinate frames must be given w.r.t. the same reference, or else the resulting transform will be meaningless. Although it is not a proof, the branch using HasParent() leads me to assume that "this" and "target" probably violate this condition.
>
> The solution is to first compute the source and target positions in (typically) global space and apply the "look at" to them. If needed, transform the result back into some local space afterwards.
>
> 2.) It seems that you are using the local origin for both the source and the target position. Rotating and/or scaling (0,0,0) has no effect on the point (if applied before any translation); both are identity mappings there, so neither needs to be considered.
>
> However, when computing the frame w.r.t. the world as described in 1.), scaling and rotation are already incorporated.

Isn't Translation in global/world space? The HasParent() branch simply updates the entity's transform to be relative to the parent's, so if the parent moves, the entity moves with it.

##### Share on other sites

> Isn't Translation in global/world space?

Translation is relative to a reference. Whether this is world space or any other space is not explicitly stated by the transformation but by the context it is used in. The same is true for any other transformation, including any composed one. There is no tag on a "translation" that tells whether it is in model space, world space, view space, an object's local space, tangent space, or what-not space.

An example: an entity has a position, orientation, and size (which are all better names than translation, rotation, and scaling for what they describe). You, as the programmer, define that they are always to be interpreted w.r.t. world space. That is fine, because then the entities' transforms all have a defined common space. Now, if an entity should follow forward kinematics by being attached to a parent entity, the child entity gets both a pointer to the parent and another tuple of position, orientation, and perhaps scaling, but this time as localPosition, localOrientation, localScaling, defined to be relative to the parent entity.

If you now want to compute a geometrical relation between that child entity and some other entity, you can do so only if all transforms involved are given w.r.t. the same reference. That means first ensuring that the world position (and so on) of the child entity is up-to-date, which may mean requesting the world position (and so on) from the parent entity and re-computing the own world position (and so on) from it, e.g. when using row vectors:

this->transform = this->localTransform * parent->transform()

where each transform is expected to be computed as product

matrix(size) * matrix(orientation) * matrix(position)

If done so, then a relation can be computed

facing_matrix = facingFromTo(source.transform, target.transform);

where it must still be remembered that facing_matrix is given w.r.t. the world space.
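The composition above can be sketched with a minimal row-major matrix type (plain C++ rather than the DirectXMath types, so the order of operations is explicit; Mat4, Mul, and ChildWorld are illustrative names, not part of any library):

```cpp
#include <cassert>
#include <cmath>

// Minimal 4x4 row-major matrix used with row vectors (v' = v * M), so
// transforms compose left to right: world = size * orientation * position.
struct Mat4 { float m[4][4]; };

Mat4 Identity()
{
    Mat4 r = {};
    for (int i = 0; i < 4; ++i) r.m[i][i] = 1.0f;
    return r;
}

Mat4 Mul(const Mat4& a, const Mat4& b)
{
    Mat4 r = {};
    for (int i = 0; i < 4; ++i)
        for (int j = 0; j < 4; ++j)
            for (int k = 0; k < 4; ++k)
                r.m[i][j] += a.m[i][k] * b.m[k][j];
    return r;
}

Mat4 TranslationM(float x, float y, float z)
{
    Mat4 r = Identity();
    r.m[3][0] = x; r.m[3][1] = y; r.m[3][2] = z; // row 3 carries the position
    return r;
}

// A child's world transform is its local transform times the parent's world
// transform, as in this->transform = this->localTransform * parent->transform().
Mat4 ChildWorld(const Mat4& localTransform, const Mat4& parentWorld)
{
    return Mul(localTransform, parentWorld);
}
```

With a parent at (10, 0, 0) and a child local offset of (1, 2, 3), the child's world position (row 3 of the product) comes out at (11, 2, 3), which is the "first bring everything into the common space" step described above.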

> The HasParent() branch simply updates the entity's transform to be relative to the one of the parent, so if the parent moves, the entity will be moved with it.

Yes, but it does so at the wrong moment. See above.

##### Share on other sites

> If you now want to compute a geometrical relation between that child entity and some other entity, you can do so only if all transforms involved are given w.r.t. the same reference. That means first ensuring that the world position (and so on) of the child entity is up-to-date, which may mean requesting the world position (and so on) from the parent entity and re-computing the own world position (and so on) from it, e.g. when using row vectors:
>
> this->transform = this->localTransform * parent->transform()
>
> where each transform is expected to be computed as the product
>
> matrix(size) * matrix(orientation) * matrix(position)
>
> If done so, then a relation can be computed:
>
> facing_matrix = facingFromTo(source.transform, target.transform);
>
> where it must still be remembered that facing_matrix is given w.r.t. the world space.

I changed the code to this:

```cpp
void Entity3D::Face(const Entity3D& target)
{
    XMMATRIX matrixScale = XMMatrixScalingFromVector(Scale);
    XMMATRIX matrixTranslation = XMMatrixTranslationFromVector(Translation);

    Transform = matrixScale * Rotation * matrixTranslation; // Creates the transform

    if(HasParent())
    {
        Transform *= _parent->Transform; // Updates in relation to parent
    }

    XMVECTOR up = XMVectorSet(0.0f, 1.0f, 0.0f, 0.0f);
    up = XMVector3Transform(up, Rotation);

    Transform = XMMatrixLookAtLH(Transform.r[3], target.Transform.r[3], up); // Creates the final matrix
}
```

It still doesn't work properly. Did I mess up the order? It seems right to me. Maybe there's another way to set two entities to face each other? I'm not entirely familiar with the math behind XMMatrixLookAtLH, so I'm not sure what's going on.

Here are three pictures of the result:

The first two happen when I call:
```cpp
void Entity3D::Update()
{
    XMMATRIX matrixScale = XMMatrixScalingFromVector(Scale);
    XMMATRIX matrixTranslation = XMMatrixTranslationFromVector(Translation);

    Transform = matrixScale * Rotation * matrixTranslation;

    if(HasParent())
    {
        Transform *= _parent->Transform;
    }
}
```

Right after Face(). So it might be logical to comment out Update() and only use Face()? Here is what happens when I only use Face():

As you can see, no matter at what angle I arrange the cubes (and the even smaller ones) around the mother cube, when I call Face() they don't face the mother; they just clump up. What I want is for them to face it in such a way that the new ones don't intersect the mother cube. This is what I want to achieve:
Edited by mrheisenberg

##### Share on other sites

That is … an interesting arrangement ;)

1.) XMMatrixLookAtLH, according to the documentation, creates a view matrix. This is usually a transformation from global space into the camera's local (a.k.a. view) space. I'm not totally sure about this because I work less with D3D. If it does, then it is of course not suitable for placing the object into the world (if you use Transform for that purpose). You can check it by looking at the position component vector (r[3], I believe): is it negated compared to the position vector stored within Transform just before XMMatrixLookAtLH is applied?
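That negation check can be sketched numerically (plain C++ standing in for the DirectXMath types; ViewTranslation is an illustrative helper, not a real API):

```cpp
#include <cassert>
#include <cmath>

struct Vec3 { float x, y, z; };

float Dot(Vec3 a, Vec3 b) { return a.x * b.x + a.y * b.y + a.z * b.z; }

// Translation row of a left-handed look-at *view* matrix: the eye position
// expressed in the camera basis, then negated. A *world* matrix would carry
// the eye position itself, which is how the two can be told apart.
Vec3 ViewTranslation(Vec3 eye, Vec3 right, Vec3 up, Vec3 forward)
{
    return { -Dot(eye, right), -Dot(eye, up), -Dot(eye, forward) };
}
```

For an eye at (2, 3, 4) looking along the world axes, this gives (-2, -3, -4): the sign flips. So if Transform.r[3] flips sign after the XMMatrixLookAtLH call, the function did indeed return a view matrix, and the world placement would be its inverse (e.g. via XMMatrixInverse).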

2.) As L. Spiro mentioned above, transforming the "up" vector should be dropped. XMMatrixLookAtLH uses the "up" vector as a coarse orientation hint only; the correct up vector is computed internally.

However, it seems to me that the problem should be approached in another way entirely. But I'm not quite sure what you want to achieve. If you used polar co-ordinates before, I would conclude that you actually want an oriented orbiting, because polar co-ordinates deal with a radius independent of the angle of orientation.
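One such other way, sketched in plain C++ (FacingRotation is a made-up name for illustration): skip XMMatrixLookAtLH and build the world-space rotation directly. Its rows are the object's right, up, and forward axes, which is the transpose of the rotation part a look-at view matrix would contain.

```cpp
#include <cassert>
#include <cmath>

struct Vec3 { float x, y, z; };

Vec3 operator-(Vec3 a, Vec3 b) { return { a.x - b.x, a.y - b.y, a.z - b.z }; }
float Dot(Vec3 a, Vec3 b) { return a.x * b.x + a.y * b.y + a.z * b.z; }
Vec3 Cross(Vec3 a, Vec3 b)
{
    return { a.y * b.z - a.z * b.y, a.z * b.x - a.x * b.z, a.x * b.y - a.y * b.x };
}
Vec3 Normalize(Vec3 v)
{
    float len = std::sqrt(Dot(v, v));
    return { v.x / len, v.y / len, v.z / len };
}

// World-space rotation whose rows are the object's right/up/forward axes
// (left-handed), so that "forward" points from pos toward target.
struct Rot3 { Vec3 right, up, forward; };

Rot3 FacingRotation(Vec3 pos, Vec3 target)
{
    Vec3 worldUp = { 0.0f, 1.0f, 0.0f };
    Rot3 r;
    r.forward = Normalize(target - pos);
    r.right   = Normalize(Cross(worldUp, r.forward));
    r.up      = Cross(r.forward, r.right); // re-derived, orthogonal up
    return r;
}
```

A cube at the origin told to face (0, 0, 5) gets forward (0, 0, 1); used as the Rotation in the scale * rotation * translation product, each orbiting cube then points at the mother cube. (This degenerates when the target lies straight above or below the object, where forward is parallel to the up hint.)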

##### Share on other sites

> However, it seems to me that the problem should be approached in another way entirely. But I'm not quite sure what you want to achieve. If you used polar co-ordinates before, I would conclude that you actually want an oriented orbiting, because polar co-ordinates deal with a radius independent of the angle of orientation.

I'm actually using cubes so I can more easily see their rotation; for the final scene I'm trying to render a sphereflake fractal like this one:

As you can see, each sphere is rotated according to the center of its mother sphere, so the spheres never intersect or go inside each other. However, I couldn't find any guide on generating sphereflake positions online, mostly just raytracing articles.

Edited by mrheisenberg

##### Share on other sites

From what I've seen on the internet, the process is to recursively attach 9 spheres, each with 1/3 of the radius of the sphere on the previous level. The 9 sub-spheres are positioned in one ring of 3 spheres and another ring of 6 spheres.

1st ring, described in spherical co-ordinates

* the polar angle seems to be +50° from the equator (one source stated so but wasn't sure)

* the azimuthal angles are 30°, 150°, and 270°

2nd ring, described in spherical co-ordinates

* the polar angle seems to be -10° from the equator (one source stated so but wasn't sure)

* the azimuthal angles are 0°, 60°, 120°, 180°, 240°, 300°

Hence, in the local co-ordinate system of a sphere with given radius, the 9 sub-spheres are located at:

```cpp
// Sub-spheres have radius / 3 and sit at distance radius * 4 / 3 from the
// centre (loop bodies reconstructed from the ring description above):
idx = 0;
for(azimuth = 30; azimuth < 360; azimuth += 120) {
    position[idx++] = sphericalToCartesian(radius * 4.0f / 3.0f, +50.0f, azimuth); // 1st ring
}
for(azimuth = 0; azimuth < 360; azimuth += 60) {
    position[idx++] = sphericalToCartesian(radius * 4.0f / 3.0f, -10.0f, azimuth); // 2nd ring
}
```


How to write sphericalToCartesian can be seen on Wikipedia.
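A possible sphericalToCartesian for the convention used above, i.e. the polar angle given as an elevation from the equator and the azimuth sweeping around the y ("up") axis (the exact axis mapping is an assumption):

```cpp
#include <cassert>
#include <cmath>

struct Vec3 { float x, y, z; };

// Spherical to Cartesian with the polar angle as an elevation from the
// equator (+90 degrees is the north pole) and the azimuth rotating about +y.
Vec3 sphericalToCartesian(float radius, float elevationDeg, float azimuthDeg)
{
    const float degToRad = 3.14159265358979f / 180.0f;
    float el = elevationDeg * degToRad;
    float az = azimuthDeg * degToRad;
    return { radius * std::cos(el) * std::cos(az),
             radius * std::sin(el),
             radius * std::cos(el) * std::sin(az) };
}
```

For example, an elevation of +90 degrees lands on top of the sphere regardless of azimuth, and an elevation of 0 stays on the equator.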

##### Share on other sites

> I'm actually using cubes so I can more easily see their rotation, for the final scene I'm trying to render a sphere flake...

Systems like those are usually generated following a prescription formed by a set of rules. For the example with the cubes (the red, yellow, and green ones in one of the pictures), the rules might be:

1.) Start with a cube at level 0 and edge length L0.

2.) For a given cube at level N and with edge length L, there are 4 sub-ordinated (i.e. level N+1) cubes with edge length L/3, positioned

2a.) on the principal local axes +ex, -ex, +ez, -ez if N is 0

2b.) on the principal local axes +ey, -ey, +ez, -ez if N is odd

2c.) on the principal local axes +ex, -ex, +ey, -ey if N is even and greater than 0

each one at a distance of L * 4/3 from the super-ordinated cube's origin.

3.) For each sub-ordinated cube continue with 2.) until level M is reached.

The above set of rules doesn't use any explicit rotation but varies the placement instead (because the cubes would only ever be rotated in 90° increments). Another set of rules could be developed where the placement always uses the same local axes but computes a local rotation.
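The rules above can be sketched as a small recursion (plain C++; the level-to-axis mapping is my reading of 2a–2c, and Cube/Grow are illustrative names):

```cpp
#include <cassert>
#include <vector>

struct Vec3 { float x, y, z; };
struct Cube { Vec3 center; float edge; };

// Axis sets per rules 2a/2b/2c: four unit directions each.
static const Vec3 kAxesLevel0[4] = { {1,0,0}, {-1,0,0}, {0,0,1}, {0,0,-1} };
static const Vec3 kAxesOdd[4]    = { {0,1,0}, {0,-1,0}, {0,0,1}, {0,0,-1} };
static const Vec3 kAxesEven[4]   = { {1,0,0}, {-1,0,0}, {0,1,0}, {0,-1,0} };

// Rule 2: attach four cubes of edge L/3 at distance L * 4/3 along the axes
// selected by the current level; rule 3: recurse until maxLevel is reached.
void Grow(const Cube& c, int level, int maxLevel, std::vector<Cube>& out)
{
    out.push_back(c);
    if (level >= maxLevel) return;

    const Vec3* axes = (level == 0) ? kAxesLevel0
                     : (level % 2 == 1) ? kAxesOdd
                     : kAxesEven;
    float dist = c.edge * 4.0f / 3.0f;
    for (int i = 0; i < 4; ++i)
    {
        Cube sub = { { c.center.x + axes[i].x * dist,
                       c.center.y + axes[i].y * dist,
                       c.center.z + axes[i].z * dist },
                     c.edge / 3.0f };
        Grow(sub, level + 1, maxLevel, out);
    }
}
```

Growing two levels from a single unit cube yields 1 + 4 + 16 = 21 cubes, each child a third of its parent's size, without any explicit rotation.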
