Same point in another coordinate system

Started by
4 comments, last by JohnnyCode 9 years, 1 month ago

Hi,

I have a light source with a direction D = (d1, d2, d3) and a point P = (p1, p2, p3) in the standard Cartesian coordinate system. I want to construct another coordinate system whose z-axis direction is D. The origin should lie at the standard coordinate system's origin, and it should be a Cartesian coordinate system. The direction of the x-axis doesn't matter.

I am a little confused because both coordinate systems are left-handed (because of D3D11).

How can I construct the coordinate system? And how can I compute the position of P in the new coordinate system? Could I simply compute a bounding box which contains some points (the corners of a viewing frustum), compute the position of the light source in the new coordinate system, and transform it back to the standard coordinate system?

I already have some code, but it doesn't work as intended: the focus position (in standard coords) lies above the camera, although it should lie below it because the light's y-direction is negative. The viewing frustum (in standard coords) is correct.


void DirectionalLight::Render(ID3D11DeviceContext* DeviceContext, Camera_t* Camera, DOUBLE3 Direction, RenderFunction_t RenderFunction){
	DeviceContext->RSSetViewports(1, &Viewport);
	float CameraNear = Camera->GetNear(), CameraFar = Camera->GetFar(), FoV_rad=Camera->GetFoVRadians(), ViewRatio=Camera->GetViewRatio();
	double LengthPerUnit = 1.0 / (pow(2, NumTextures) - 1) * (CameraFar-CameraNear);
	uint64_t CurrentDistance = 0; //0 to (2^NumTextures)-1
	DOUBLE3 CameraUp = Camera->GetUpVector(), CameraRight = Camera->GetRightVector();
	DOUBLE3 CameraPosition = Camera->GetPosition();
	DOUBLE3 CameraDirection = Camera->GetDirection();

	double LightAngleY, LightAngleZ;
	double LightLength = Direction.Length();
	LightAngleY = acos(Direction.z / Direction.Length());
	LightAngleZ = 
		(Direction.y > 0) ? 
			atan(Direction.x / Direction.y) + 3*M_PI_2
		:
		(
			(Direction.y < 0) ? 
				atan(Direction.x / Direction.y) + M_PI_2 
			:
				(Direction.x > 0 ? 0 : M_PI)
		);

	XMMATRIX RotationMatrix = XMMatrixMultiply(XMMatrixRotationY(LightAngleY), XMMatrixRotationZ(LightAngleZ));
	XMMATRIX RotationInverse = XMMatrixMultiply(XMMatrixRotationZ(-LightAngleZ), XMMatrixRotationY(-LightAngleY));

	for (unsigned int i = 0; i < NumTextures; ++i){
		uint64_t CurrentSize = (uint64_t)pow(2, i);

		DeviceContext->ClearDepthStencilView(DepthStencilViews[i], D3D11_CLEAR_DEPTH, 1.0f, 0);
		DeviceContext->OMSetRenderTargets(0, 0, DepthStencilViews[i]);
		Frustum LightViewFrustum;
		
		double CurrentNear = CurrentDistance*LengthPerUnit + CameraNear;
		double CurrentFar = (CurrentDistance + CurrentSize)*LengthPerUnit + CameraNear;

		DOUBLE3 NearCenter = CameraPosition + CurrentNear * CameraDirection;
		DOUBLE3 FarCenter = CameraPosition + CurrentFar * CameraDirection;
		
		double NearHeight2 = tan(FoV_rad / 2) * CurrentNear;
		double FarHeight2 = tan(FoV_rad / 2) * CurrentFar;
		double NearWidth2 = NearHeight2 * ViewRatio;
		double FarWidth2 = FarHeight2 * ViewRatio;

		XMFLOAT3 Edges[8];

		Edges[0] = FarCenter + (FarHeight2) * CameraUp - (FarWidth2) * CameraRight;
		Edges[1] = FarCenter + (FarHeight2) * CameraUp + (FarWidth2) * CameraRight;
		Edges[2] = FarCenter - (FarHeight2) * CameraUp - (FarWidth2) * CameraRight;
		Edges[3] = FarCenter - (FarHeight2) * CameraUp + (FarWidth2) * CameraRight;

		Edges[4] = NearCenter + (NearHeight2)* CameraUp - (NearWidth2)* CameraRight;
		Edges[5] = NearCenter + (NearHeight2)* CameraUp + (NearWidth2)* CameraRight;
		Edges[6] = NearCenter - (NearHeight2)* CameraUp - (NearWidth2)* CameraRight;
		Edges[7] = NearCenter - (NearHeight2)* CameraUp + (NearWidth2)* CameraRight;

		XMFLOAT3 Test;
		const double Inf = std::numeric_limits<double>::infinity();
		DOUBLE3 Min(Inf, Inf, Inf), Max(-Inf, -Inf, -Inf);
		for (int j = 0; j < 8; ++j){
			// Transform each frustum corner into light space, then grow the AABB.
			XMStoreFloat3(&Test, XMVector3TransformCoord(XMLoadFloat3(&Edges[j]), RotationInverse));
			if (Test.x < Min.x) Min.x = Test.x;
			if (Test.x > Max.x) Max.x = Test.x;
			if (Test.y < Min.y) Min.y = Test.y;
			if (Test.y > Max.y) Max.y = Test.y;
			if (Test.z < Min.z) Min.z = Test.z;
			if (Test.z > Max.z) Max.z = Test.z;
		}

		XMFLOAT3 FocusPosition((Max.x + Min.x) / 2, (Max.y + Min.y) / 2, Max.z + 0.5f);
		XMStoreFloat3(&FocusPosition, XMVector3TransformCoord(XMLoadFloat3(&FocusPosition), RotationMatrix));

		XMFLOAT3 LightPosition = DOUBLE3(FocusPosition) - Distance*Direction;
		XMFLOAT3 Up(0.0f,1.0f,0.0f);
		XMStoreFloat3(&Up, XMVector3TransformCoord(XMLoadFloat3(&Up), RotationMatrix));


		XMMATRIX ViewMatrix = XMMatrixLookAtLH(XMLoadFloat3(&LightPosition), XMLoadFloat3(&FocusPosition), XMLoadFloat3(&Up)),
			ProjectionMatrix = XMMatrixOrthographicLH((Max.x - Min.x)*1.05, (Max.y - Min.y)*1.05, 0, Distance);

		
		ViewProjectionMatricesT[i] = XMMatrixTranspose(XMMatrixMultiply(ViewMatrix, ProjectionMatrix));
		LightViewFrustum.Update(Distance, ViewMatrix, ProjectionMatrix);
		
		RenderFunction(DeviceContext, &LightViewFrustum, ViewProjectionMatricesT[i]);

		CurrentDistance += CurrentSize;
	}
	
}

I think your terms might be a little mixed up; what you're describing are different "spaces" inside the same coordinate system. The point P is likely in "World Space", and you want to generate a matrix that transforms from "World Space" into "Light Space".

Understanding the differences between these spaces and how to transform between them is a huge task, but the general idea is to create a matrix whose basis axes correspond to what you're trying to do: using the light's direction as the forward axis, you create a "Look-At" matrix aimed at point P.

I would recommend you read this as a start: http://www.codinglabs.net/article_world_view_projection_matrix.aspx
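The look-at construction described above can be sketched in plain C++. This mirrors the basis construction that DirectXMath's XMMatrixLookAtLH performs internally; the Vec3 type and the helper functions here are illustrative, not part of any library:

```cpp
#include <cassert>
#include <cmath>

struct Vec3 { double x, y, z; };

static Vec3 sub(Vec3 a, Vec3 b)   { return { a.x - b.x, a.y - b.y, a.z - b.z }; }
static double dot(Vec3 a, Vec3 b) { return a.x * b.x + a.y * b.y + a.z * b.z; }
static Vec3 cross(Vec3 a, Vec3 b) {
	return { a.y * b.z - a.z * b.y, a.z * b.x - a.x * b.z, a.x * b.y - a.y * b.x };
}
static Vec3 normalize(Vec3 v) {
	double len = std::sqrt(dot(v, v));
	return { v.x / len, v.y / len, v.z / len };
}

// The three axes of a left-handed "light space" looking from eye toward
// target -- the same construction XMMatrixLookAtLH uses for its rotation part.
struct Basis { Vec3 right, up, forward; };

Basis LookAtBasisLH(Vec3 eye, Vec3 target, Vec3 worldUp) {
	Vec3 forward = normalize(sub(target, eye));        // +Z axis (left-handed)
	Vec3 right   = normalize(cross(worldUp, forward)); // +X axis
	Vec3 up      = cross(forward, right);              // +Y axis, unit by construction
	return { right, up, forward };
}

// Express a world-space point in this basis (rotation only; a full view
// matrix would also subtract the eye position first).
Vec3 ToLightSpace(const Basis& b, Vec3 p) {
	return { dot(p, b.right), dot(p, b.up), dot(p, b.forward) };
}
```

With eye at the origin and the light direction as the target, `ToLightSpace` gives you P's coordinates in the light's system directly.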

Perception is when one imagination clashes with another


what you're describing are different "spaces" inside the same coordinate system

How do you differentiate spaces from coordinate systems? I feel they are not two distinct entities; rather, one is based on the other.

When I see people talking about spaces, I always think of those spaces being backed by some coordinate systems and this is essentially how we construct the matrices to transform something from one space to another.

When I see people talking about spaces, I always think of those spaces being backed by some coordinate systems and this is essentially how we construct the matrices to transform something from one space to another.

Pretty much. This is fundamentally a linear algebra problem. When mathematicians talk about a "space" in that context, they're effectively referring to a "coordinate system" of some kind. A 3D Cartesian coordinate system, for instance, is a vector space defined by the basis vectors { (1,0,0), (0,1,0), (0,0,1) }.*

OP, I second the idea of looking at look-at matrices and how to construct them.**

* A "basis" is a set of linearly independent vectors such that every vector (or "point") in your coordinate system can be represented with a linear combination of those vectors.

** Pun intended.
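The footnoted definition of a basis can be checked concretely: three vectors in 3D form a basis exactly when the determinant of the matrix built from them is nonzero. A tiny helper (mine, not from the thread) illustrates this:

```cpp
#include <cassert>
#include <cmath>

// Determinant of the 3x3 matrix whose rows are a, b, c.
// It is nonzero exactly when {a, b, c} are linearly independent,
// i.e. when they form a basis of 3D space.
double Det3(const double a[3], const double b[3], const double c[3]) {
	return a[0] * (b[1] * c[2] - b[2] * c[1])
	     - a[1] * (b[0] * c[2] - b[2] * c[0])
	     + a[2] * (b[0] * c[1] - b[1] * c[0]);
}
```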

Just to set the record straight, mathematicians use language very close to the OP's. We don't use the term "space" the way it's used in computer graphics.

There are several things mathematicians call a "space". The ones that would be useful for computer graphics are: affine space, Euclidean space and projective space. But these are objects that don't come with a coordinate system attached.


I have a light source with a direction D = (d1, d2, d3) and a point P = (p1, p2, p3) in the standard Cartesian coordinate system. I want to construct another coordinate system whose z-axis direction is D. The origin should lie at the standard coordinate system's origin, and it should be a Cartesian coordinate system. The direction of the x-axis doesn't matter.

As you noticed, there is an infinite number of matrices that comply with this. You are looking for a matrix that transforms the vector (0,0,1) to (d1,d2,d3); that would be a 3x3 matrix whose last column is (d1,d2,d3). But as you noticed, you have an infinite number of alternatives for the first and second columns, which are actually the orientations of the X and Y basis vectors in the plane that (d1,d2,d3) is normal to.

If you pick a unit vector in this plane, then the third remaining basis vector can be obtained as the cross product of the two. So if you consider D a unit vector, pick a unit vector in the plane that D is normal to, and take the third vector as their cross product, you then have a 3x3 orthonormal rotation matrix. If you express point P as transformed from Cartesian space by this matrix, then P will be expressed in this new rotated space (whose origin is the (0,0,0) point, as you suggested); and if you consider the transformed point P as expressed in its original Cartesian space, it will be rotated (transformed) within its original Cartesian space.
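A minimal sketch of that construction in plain C++ (the helper names are mine; the perpendicular vector is obtained by crossing D with whichever world axis is least parallel to it, which is one common way to pick the arbitrary in-plane vector):

```cpp
#include <cassert>
#include <cmath>

struct V3 { double x, y, z; };

static double dot(V3 a, V3 b) { return a.x * b.x + a.y * b.y + a.z * b.z; }
static V3 cross(V3 a, V3 b) {
	return { a.y * b.z - a.z * b.y, a.z * b.x - a.x * b.z, a.x * b.y - a.y * b.x };
}
static V3 normalize(V3 v) {
	double len = std::sqrt(dot(v, v));
	return { v.x / len, v.y / len, v.z / len };
}

// Orthonormal basis whose Z axis is the (normalized) light direction D.
struct LightBasis { V3 X, Y, Z; };

LightBasis MakeBasis(V3 D) {
	V3 Z = normalize(D);
	// Pick a world axis not parallel to Z, then cross it with Z to get a
	// unit vector lying in the plane that Z is normal to.
	V3 helper = (std::fabs(Z.x) < 0.9) ? V3{1, 0, 0} : V3{0, 1, 0};
	V3 X = normalize(cross(helper, Z));
	V3 Y = cross(Z, X); // unit by construction
	return { X, Y, Z };
}

// P expressed in the rotated system: dot against each basis axis
// (i.e. multiply by the transpose of the matrix whose columns are X, Y, Z).
V3 ToLight(const LightBasis& b, V3 p) {
	return { dot(p, b.X), dot(p, b.Y), dot(p, b.Z) };
}

// Back to the standard system: a linear combination of the basis axes.
V3 FromLight(const LightBasis& b, V3 q) {
	return { q.x * b.X.x + q.y * b.Y.x + q.z * b.Z.x,
	         q.x * b.X.y + q.y * b.Y.y + q.z * b.Z.y,
	         q.x * b.X.z + q.y * b.Y.z + q.z * b.Z.z };
}
```

Because the matrix is orthonormal, its inverse is its transpose, so `FromLight(b, ToLight(b, p))` returns p exactly; this round trip is what the OP's bounding-box idea needs.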

Yes, there is no single common term "space" in math; there is, for instance, a vector space, which is a set of vectors over a scalar field that conforms to the axioms of a vector space.

This topic is closed to new replies.
