DX11 Orthographic camera


Hi Guys,

I am currently moving from DX9 (fixed function) to DX11. All is going well so far, but now I am creating the camera system, and my shader knowledge is next to zero, so it is a bit different.

Primarily, I'll be making 2D applications (at this point in time) so I'll need an orthographic setup.

1 - Cheat and use dynamic vertex buffers and move everything manually.

2 - Setup a camera system.

Is #1 a valid approach, or is it purely a hack?

With #2 do I have to do this all with shaders or is there a way to do this with function calls? Could anyone point me in the right direction on how to go about this?


#1: You could do it that way, but if you plan to do anything even moderately interesting with your camera, then you should consider option 2.

#2: It needs to be done in the shaders, as there is no fixed function pipeline anymore.  You can take a look at the old D3DX functions for inspiration in making the orthographic and view matrices though: D3DXMatrixOrthoLH and D3DXMatrixLookAtLH.


I was fearing that might be the case.

I'll have to hit it head on and take up the challenge then.

What Jason Z said.

Option 1 is how we did things before DirectX 7 introduced hardware TnL: you transform the vertices on the CPU every frame and send them to the GPU, which becomes unviable past a certain vertex count.

Option 2 (use shaders) is basically the same as Option 1 but the code runs on the GPU, hence no need to send the data every frame. It's already there.

Vertex shaders are quite easy. Just think of one as a little program that gets executed for each vertex: one vertex in, one transformed vertex out (and each program execution can't see the neighbouring vertices).

I've done both options (option 1 a long, long time ago) and writing a vertex shader was just easier and quicker. Don't be scared of it just because you don't know it ;)


Thanks Matias,

Just trying to sift through all of the information I can google right now.

Do you know of any good links for this subject (preferably just ortho, if possible)?

Thanks again.


After a lot of reading and googling I now have this in my render loop.

Totally changed from what I posted before.

I think I am close now.

This is my render loop...

// Start Frame

float clearColor[4]={0.5f,0.5f,1.0f,1.0f};
d3dContext->ClearRenderTargetView(d3dBackBufferTarget,clearColor);

// bind the sampler state used by the pixel shader
d3dContext->PSSetSamplers(0,1,&colorMapSampler);

// Set up the view
XMMATRIX viewMatrix=XMMatrixIdentity();
XMMATRIX projMatrix=XMMatrixOrthographicOffCenterLH(0.0f,(float)width,0.0f,(float)height,0.0f,100.0f);	// 800 x 600
viewMatrix=XMMatrixTranspose(viewMatrix);		// What is this for?
projMatrix=XMMatrixTranspose(projMatrix);		// What is this for?

// position the object
XMMATRIX scaleMatrix=XMMatrixScaling(1.0f*256.0f,1.0f*256.0f,1.0f );	// use variables later
XMMATRIX translationMatrix=XMMatrixTranslation(0.0f,0.0f,0.0f);		// position at 0,0,0
XMMATRIX worldMat=scaleMatrix*translationMatrix;

d3dContext->VSSetConstantBuffers(0,1,&worldCB);
d3dContext->VSSetConstantBuffers(1,1,&worldCB);
d3dContext->VSSetConstantBuffers(2,1,&worldCB);

// Render Geometry
UINT stride = sizeof(VERTEX);
UINT offset = 0;
d3dContext->IASetInputLayout(pLayout);
d3dContext->IASetVertexBuffers(0,1,&pVBuffer,&stride,&offset);
d3dContext->IASetPrimitiveTopology(D3D11_PRIMITIVE_TOPOLOGY_TRIANGLESTRIP);

d3dContext->Draw(4,0);

Texture2D colorMap_ : register( t0 );
SamplerState colorSampler_ : register( s0 );

cbuffer cbChangesEveryFrame : register(b0)
{
matrix worldMatrix;
};

cbuffer cbNeverChanges : register(b1)
{
matrix viewMatrix;
};

cbuffer cbChangeOnResize : register(b2)
{
matrix projMatrix;
};

struct VS_Input
{
float4 pos  : POSITION;
float2 tex0 : TEXCOORD0;
};

struct PS_Input
{
float4 pos  : SV_POSITION;
float2 tex0 : TEXCOORD0;
};

PS_Input VShader( VS_Input vertex )	// entry point; name assumed to pair with PShader below
{
PS_Input vsOut = ( PS_Input )0;

vsOut.pos=mul(vertex.pos,worldMatrix);
vsOut.pos=mul(vsOut.pos,viewMatrix);
vsOut.pos=mul(vsOut.pos,projMatrix);

// vsOut.pos = vertex.pos;
vsOut.tex0 = vertex.tex0;

return vsOut;
}

float4 PShader( PS_Input frag ) : SV_TARGET
{
return colorMap_.Sample( colorSampler_, frag.tex0 );
}

The problem that I have now is that when I run the code I get an exception, and the debug window reports this:

ID3D11DeviceContext::UpdateSubresource: First parameter is corrupt or NULL [ MISCELLANEOUS CORRUPTION #13: CORRUPTED_PARAMETER1]

Which relates to this code...

d3dContext->UpdateSubresource(worldCB,0,0,&worldMat,0,0);
d3dContext->UpdateSubresource(projCB,0,0,&projMatrix,0,0);

Am I on the right track with how I am going about this? Any help as to what this error is would be awesome.

Edited by DarkRonin

A quick update.

I didn't realise I had to create the buffers so I added this before the render loop.

ID3D11Buffer* viewCB=0;
ID3D11Buffer* projCB=0;
ID3D11Buffer* worldCB=0;

D3D11_BUFFER_DESC constDesc;
ZeroMemory(&constDesc,sizeof(constDesc));
constDesc.BindFlags=D3D11_BIND_CONSTANT_BUFFER;
constDesc.ByteWidth=sizeof(XMMATRIX);
constDesc.Usage=D3D11_USAGE_DEFAULT;

if(FAILED(d3dDevice->CreateBuffer(&constDesc,0,&viewCB)))
return 1;

if(FAILED(d3dDevice->CreateBuffer(&constDesc,0,&projCB)))
return 2;

if(FAILED(d3dDevice->CreateBuffer(&constDesc,0,&worldCB)))
return 3;

Things are better now, but the render results are not as expected. Hard to describe at the moment.

So, I'll play with the code a bit more to get my head around what is going on.

This is what I get if I attempt to translate the sprite to 1,0 (supposedly in pixel co-ordinates). I also had to change the scale back to 1 (instead of 256 - the sprite width & height).

XMMATRIX scaleMatrix=XMMatrixScaling(1.0f,1.0f,1.0f );  // a scale of 256 was giving a black screen - something wrong there
XMMATRIX translationMatrix=XMMatrixTranslation(1.0f,0.0f,0.0f); // position at 1,0,0 
I am a lot closer than I was this morning. So, I am pretty happy that something is now happening. But, I am unsure whether the problem is in the shader or the code (both in a previous post).

XMMATRIX projMatrix=XMMatrixOrthographicOffCenterLH(0.0f,(float)width,0.0f,(float)height,0.0f,100.0f);


...doesn't seem to have any effect at all.

This is what I am seeing when translation is 1,0,0


Looks like you have your matrix multiplication reversed.

A * B * C != C * B * A

Reverse the order.


In the shader do you mean?

I just tried reversing it and it gave the same result

One thing I did notice is that in my code I had...

d3dContext->VSSetConstantBuffers(0,1,&worldCB);
d3dContext->VSSetConstantBuffers(1,1,&worldCB);
d3dContext->VSSetConstantBuffers(2,1,&worldCB);

So, I have changed that to...

d3dContext->VSSetConstantBuffers(0,1,&worldCB);
d3dContext->VSSetConstantBuffers(1,1,&viewCB);
d3dContext->VSSetConstantBuffers(2,1,&projCB);


But now I get a blank screen.

Is the way I am setting the constant buffers correct?

[edit 2]
Actually, the scaling had to be re-adjusted as a result (so I added the 256 multiplication back in where I took it out before).

So, we are even closer.

This is the current result...

...which is closer to what I'd expect to see. But, it looks like 0 on the y-axis is at the bottom of the screen instead of the top (probably something with my ortho setup) and I am getting an unwanted skew.

But, still on the right track I think.

Thanks for the help so far too guys

Edited by DarkRonin


I would recommend that you work with one matrix at a time until you get each of them proven out.  Start with only your projection matrix, and make the other two identity matrices.  You should be able to draw your sprite within the bounds of your orthographic camera volume, and you should see it appear even if the view and world matrices are identity.

If you can get that working, I think the rest will be manageable...

Also, you have a comment up above asking why you would transpose a matrix. The reason is that the default matrix layout is not the same on the two sides: the DirectXMath types on the CPU are row-major, while the HLSL compiler packs matrices column-major by default. So you either transpose the matrices before uploading them, or compile your shaders with a flag (or the row_major keyword) that flips the packing order.

EDIT: Do you also have the debug device enabled?  When you create your D3D11 device, use the flag D3D11_CREATE_DEVICE_DEBUG.  You will get warnings if you try to use the API incorrectly, which can be really helpful too.

Edited by Jason Z


Thanks again,

Yeah, I have the debug device enabled and, surprisingly, no errors / warnings at all (I have seen a few of those today - LOL).

I just made the view and world matrices identity and I get a 1 pixel dot at the bottom-left of the screen (which is roughly what I'd expect due to the removal of scaling, except for the bottom being zero and the top being 640 - that seems to be upside down).

Not sure where to go from here though.

Here is my current code...

// Set up the view
XMMATRIX viewMatrix=XMMatrixIdentity();
XMMATRIX projMatrix=XMMatrixOrthographicOffCenterLH(0.0f,(float)width,0.0f,(float)height,0.0f,100.0f); // 800 x 600
viewMatrix=XMMatrixTranspose(viewMatrix);
projMatrix=XMMatrixTranspose(projMatrix);

// position the object
//XMMATRIX scaleMatrix=XMMatrixScaling(1.0f*256.0f,1.0f*256.0f,0.0f);
//XMMATRIX translationMatrix=XMMatrixTranslation(0.0f,0.0f,0.0f);
//XMMATRIX worldMat=scaleMatrix*translationMatrix

XMMATRIX worldMat=XMMatrixIdentity();

d3dContext->VSSetConstantBuffers(0,1,&worldCB);
d3dContext->VSSetConstantBuffers(1,1,&viewCB);
d3dContext->VSSetConstantBuffers(2,1,&projCB);

Edited by DarkRonin

Ok, I made some decent progress. The code now works perfectly, except that the y-axis starts at the bottom of the screen and positive is upwards.

// Set up the view
XMMATRIX viewMatrix=XMMatrixIdentity();
XMMATRIX projMatrix=XMMatrixOrthographicOffCenterLH(0.0f,(float)width,0.0f,(float)height,0.0f,100.0f);		// 800 x 450
viewMatrix=XMMatrixTranspose(viewMatrix);
projMatrix=XMMatrixTranspose(projMatrix);

// position the object
XMMATRIX scaleMatrix=XMMatrixScaling(1.0f*128.0f,1.0f*128.0f,0.0f);  // This is correct for 256px sprite as verts are 1 to -1 (fix later)
XMMATRIX rotationMatrix=XMMatrixRotationZ(0.0f);
XMMATRIX translationMatrix=XMMatrixTranslation(0.0f,0.0f,0.0f);
XMMATRIX worldMat=scaleMatrix*rotationMatrix*translationMatrix;
worldMat=XMMatrixTranspose(worldMat);

d3dContext->VSSetConstantBuffers(0,1,&worldCB);
d3dContext->VSSetConstantBuffers(1,1,&viewCB);
d3dContext->VSSetConstantBuffers(2,1,&projCB);

So, overall I am pretty happy as I had no clue on shaders and DX11 just 24 hours ago. If I can nail this last issue (the y axis upside down) I'll be extremely happy.


If you want to invert your y axis, just scale by a negative number along that axis.  Just remember that you will be flipping everything along there, so you will need to also apply a translation accordingly to put the object where you want it to be.

Thanks Jason.

I also got around it using this method as well

XMMATRIX translationMatrix=XMMatrixTranslation(128.0f,(float)height-128.0f,0.0f);

(float)height being the height of the window. It just seemed a little hacky, though, as all of the documentation I can find says that 0,0 should be the top-left of the screen.

At least I have a working solution though.

But, on the bright side, I came here yesterday with a blank window and now I have a working camera system, so I am glad I went with the shader method rather than persisting with dynamic vertex buffers. In the end, I know it was a far better way to go.

Thanks for all of the help and guidance along the way too.

No doubt, I'll be posting in a week's time asking how to do pixel-perfect collisions (I am shuddering already - LOL).

Edited by DarkRonin
