# DX11: When transforming a directional light, why use XMVector3TransformNormal()?

## Recommended Posts

Hi,

While reading Frank Luna's book Introduction to 3D Game Programming with DX11, chapter 10 (stenciling), I saw that the author builds a demo that achieves a mirror reflection effect. He builds the transformation matrix and transforms the directional lights as follows:

```cpp
// Build reflection matrix to reflect the skull.
XMVECTOR mirrorPlane = XMVectorSet(0.0f, 0.0f, 1.0f, 0.0f); // xy plane
XMMATRIX R = XMMatrixReflect(mirrorPlane);
XMMATRIX world = XMLoadFloat4x4(&mSkullWorld) * R;

// Reflect the light source as well.
// Cache the old light directions, and reflect the light directions.
XMFLOAT3 oldLightDirections[3];
for(int i = 0; i < 3; ++i)
{
    oldLightDirections[i] = mDirLights[i].Direction;
    XMVECTOR lightDir = XMLoadFloat3(&mDirLights[i].Direction);
    XMVECTOR reflectedLightDir = XMVector3TransformNormal(lightDir, R);
    XMStoreFloat3(&mDirLights[i].Direction, reflectedLightDir);
}
```

But I think the direction of a light source is just a direction vector, not a surface normal. Why did the author use XMVector3TransformNormal() to reflect the lights? Thanks!

##### Share on other sites

XMVector3TransformNormal() leaves out any translation, which is appropriate for a direction vector.

##### Share on other sites
21 minutes ago, vinterberg said:

> XMVector3TransformNormal() leaves out any translation, which is appropriate for a direction vector.

So calling XMVector3TransformNormal() doesn't mean transforming a "surface normal" by multiplying by the inverse transpose of M; the "normal" in the name just means the translation is ignored, is that right?

##### Share on other sites

The docs say so, yes:

> XMVector3TransformNormal performs transformations using the input matrix rows 0, 1, and 2 for rotation and scaling, and ignores row 3.

(row 3 is where translations are stored)

You can normally achieve the same thing by setting your vector's w component to 0 or 1 (to ignore or include row 3), but I guess this function is faster, since it's hard-coded to always ignore row 3.
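To make that concrete, here is a minimal, self-contained sketch (plain C++ rather than DirectXMath, so the names here are illustrative) of the row-vector times row-major 4x4 matrix multiply that DirectXMath uses. Because row 3 is weighted by the w component, w = 0 drops the translation, as XMVector3TransformNormal() does, and w = 1 applies it, as XMVector3TransformCoord() does (ignoring the latter's final divide by w):

```cpp
#include <cassert>

// Row-vector * row-major 4x4 matrix (the DirectXMath convention): out = v * M.
// Row 3 of M holds the translation, so it is weighted by v[3] (the w component):
// w == 0 behaves like XMVector3TransformNormal(), w == 1 like XMVector3TransformCoord().
inline void transformRowVector(const float v[4], const float M[4][4], float out[4])
{
    for (int c = 0; c < 4; ++c)
        out[c] = v[0]*M[0][c] + v[1]*M[1][c] + v[2]*M[2][c] + v[3]*M[3][c];
}
```

For example, with an identity rotation/scale and a translation of (5, 0, 0) in row 3, the direction (1, 2, 3, 0) comes out unchanged as (1, 2, 3), while the point (1, 2, 3, 1) comes out translated as (6, 2, 3).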

##### Share on other sites
57 minutes ago, AireSpringfield said:

> So calling XMVector3TransformNormal() doesn't mean transforming a "surface normal" by multiplying by the inverse transpose of M; the "normal" in the name just means the translation is ignored, is that right?

No. DirectXMath uses a completely wrong and misleading name. It has nothing to do with normals which typically require a special transform since they originate from cross products. Furthermore, normals are by definition normalized vectors, which would not be preserved by this method if you have an arbitrary uniform or non-uniform scale component.
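To illustrate why real surface normals need the inverse transpose, here is a small plain-C++ sketch (not DirectXMath; the helper names are mine). Under a non-uniform scale, multiplying the normal by the same matrix as the geometry breaks perpendicularity, while the inverse transpose preserves it:

```cpp
// Dot product of two 3D vectors.
inline float dot3(const float a[3], const float b[3])
{
    return a[0]*b[0] + a[1]*b[1] + a[2]*b[2];
}

// Applies the non-uniform scale diag(sx, sy, sz) to a 3D vector.
inline void scale3(float sx, float sy, float sz, const float v[3], float out[3])
{
    out[0] = sx * v[0];
    out[1] = sy * v[1];
    out[2] = sz * v[2];
}
```

Take a surface tangent (1, 1, 0) and normal (1, -1, 0), which are perpendicular. Scaling the geometry by diag(2, 1, 1) turns the tangent into (2, 1, 0). Scaling the normal by the same matrix gives (2, -1, 0), whose dot product with the new tangent is 3, so it is no longer a normal. Scaling it by the inverse transpose, diag(0.5, 1, 1), gives (0.5, -1, 0), whose dot product is 0, as required.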

XMVector3TransformNormal() transforms 3D directions (w-coordinate is implicitly set to 0), XMVector3TransformCoord() transforms 3D points (w-coordinate is implicitly set to 1).

XMVector3TransformPosition() and XMVector3TransformDirection() would be way better names. (You can create a wrapper function in the DirectX namespace.)

Edited by matt77hias
