lonewolff

DX11 Orthographic camera


Hi Guys,

 

I am currently moving from DX9 (fixed function) to DX11. All is going well so far, but now I am creating the camera system. My shader knowledge is next to zero, so it is a bit different.

 

Primarily, I'll be making 2D applications (at this point in time) so I'll need an orthographic setup.

 

I figure I can go about this in two ways.

 

1 - Cheat and use dynamic vertex buffers and move everything manually.

2 - Set up a camera system.

 

Is #1 a valid approach, or is it purely a hack?

 

With #2, do I have to do this all with shaders, or is there a way to do it with function calls? Could anyone point me in the right direction on how to go about this?

 

Thanks in advance.


#1: You could do it that way, but if you plan to do anything even moderately interesting with your camera, then you should consider option 2.

#2: It needs to be done in the shaders, as there is no fixed function pipeline anymore.  You can take a look at the old D3DX functions for inspiration in making the orthographic and view matrices though: D3DXMatrixOrthoLH and D3DXMatrixLookAtLH.
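
For reference, a minimal sketch of the DirectXMath equivalents of those two functions (the window size and camera placement below are only illustrative assumptions, not values from this thread):

#include <DirectXMath.h>
using namespace DirectX;

// Assumed window size, for illustration only.
const float width  = 800.0f;
const float height = 600.0f;

// Orthographic projection: the DirectXMath counterpart of D3DXMatrixOrthoLH.
XMMATRIX proj = XMMatrixOrthographicLH(width, height, 0.1f, 100.0f);

// View matrix: the DirectXMath counterpart of D3DXMatrixLookAtLH.
XMVECTOR eye    = XMVectorSet(0.0f, 0.0f, -1.0f, 1.0f);
XMVECTOR target = XMVectorSet(0.0f, 0.0f,  0.0f, 1.0f);
XMVECTOR up     = XMVectorSet(0.0f, 1.0f,  0.0f, 0.0f);
XMMATRIX view   = XMMatrixLookAtLH(eye, target, up);

The resulting matrices then go into constant buffers and are applied in the vertex shader.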

What Jason Z said.

Option 1 is how we did things before DirectX 7 introduced hardware TnL (transform the vertices on the CPU every frame and send them to the GPU; unviable past a certain vertex count).

Option 2 (use shaders) is basically the same as Option 1 but the code runs on the GPU, hence no need to send the data every frame. It's already there.

Vertex shaders are quite easy. Just think of one as a little program that gets executed for each vertex: one vertex in, one transformed vertex out (and each execution can't see the contents of the neighbouring vertices).

I've done both options (option 1 a long, long time ago) and writing a vertex shader was just easier and quicker. Don't be scared of it just because you don't know it yet. ;)
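
To make that concrete, here is a minimal sketch of such a vertex shader (the buffer name and the single pre-combined matrix are placeholders, not the exact shader used later in this thread):

// Minimal vertex shader: one position in, one transformed position out.
cbuffer PerObject : register(b0)
{
    matrix worldViewProj;   // placeholder: world * view * projection combined on the CPU
};

float4 VSMain(float4 pos : POSITION) : SV_POSITION
{
    return mul(pos, worldViewProj);
}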


 

Thanks Matias,

 

Just trying to sift through all of the information I can google right now.

 

Do you know of any good links for this subject (preferably just ortho, if possible)?

 

Thanks again.


After a lot of reading and googling I now have this in my render loop.
 
[edit] Totally changed from what I posted before
 
I think I am close now.
 
This is my render loop...
 

// Start Frame

float clearColor[4]={0.5f,0.5f,1.0f,1.0f};
d3dContext->ClearRenderTargetView(d3dBackBufferTarget,clearColor);

// tell DX11 to use this shader for the next renderable object
d3dContext->VSSetShader(pVS,0,0);
d3dContext->PSSetShader(pPS,0,0);
d3dContext->PSSetShaderResources(0,1,&colorMap);
d3dContext->PSSetSamplers(0,1,&colorMapSampler);

// Set up the view
XMMATRIX viewMatrix=XMMatrixIdentity();
XMMATRIX projMatrix=XMMatrixOrthographicOffCenterLH(0.0f,(float)width,0.0f,(float)height,0.0f,100.0f);	// 800 x 600
viewMatrix=XMMatrixTranspose(viewMatrix);		// What is this for?
projMatrix=XMMatrixTranspose(projMatrix);		// What is this for?
		
// position the object
XMMATRIX scaleMatrix=XMMatrixScaling(1.0f*256.0f,1.0f*256.0f,1.0f );	// use variables later
XMMATRIX translationMatrix=XMMatrixTranslation(0.0f,0.0f,0.0f);		// position at 0,0,0
XMMATRIX worldMat=scaleMatrix*translationMatrix;
		
d3dContext->UpdateSubresource(worldCB,0,0,&worldMat,0,0);
d3dContext->UpdateSubresource(viewCB,0,0,&viewMatrix,0,0);
d3dContext->UpdateSubresource(projCB,0,0,&projMatrix,0,0);

d3dContext->VSSetConstantBuffers(0,1,&worldCB);
d3dContext->VSSetConstantBuffers(1,1,&worldCB);
d3dContext->VSSetConstantBuffers(2,1,&worldCB);

// Render Geometry
UINT stride = sizeof(VERTEX);
UINT offset = 0;
d3dContext->IASetInputLayout(pLayout);
d3dContext->IASetVertexBuffers(0,1,&pVBuffer,&stride,&offset);
d3dContext->IASetPrimitiveTopology(D3D11_PRIMITIVE_TOPOLOGY_TRIANGLESTRIP);
		
d3dContext->Draw(4,0);

And this is my shader...
 

Texture2D colorMap_ : register( t0 );
SamplerState colorSampler_ : register( s0 );

cbuffer cbChangesEveryFrame : register(b0)
{
	matrix worldMatrix;
};

cbuffer cbNeverChanges : register(b1)
{
	matrix viewMatrix;
};

cbuffer cbChangeOnResize : register(b2)
{
	matrix projMatrix;
}

struct VS_Input
{
	float4 pos  : POSITION;
	float2 tex0 : TEXCOORD0;
};

struct PS_Input
{
	float4 pos  : SV_POSITION;
	float2 tex0 : TEXCOORD0;
};

PS_Input VShader( VS_Input vertex )
{
	PS_Input vsOut = ( PS_Input )0;

	vsOut.pos=mul(vertex.pos,worldMatrix);
	vsOut.pos=mul(vsOut.pos,viewMatrix);
	vsOut.pos=mul(vsOut.pos,projMatrix);

	// vsOut.pos = vertex.pos;
	vsOut.tex0 = vertex.tex0;

	return vsOut;
}

float4 PShader( PS_Input frag ) : SV_TARGET
{
    return colorMap_.Sample( colorSampler_, frag.tex0 );
}

The problem that I have now is that when I run the code I get an exception, and the debug window reports this:
 

ID3D11DeviceContext::UpdateSubresource: First parameter is corrupt or NULL [ MISCELLANEOUS CORRUPTION #13: CORRUPTED_PARAMETER1]


Which relates to this code...
 

d3dContext->UpdateSubresource(worldCB,0,0,&worldMat,0,0);
d3dContext->UpdateSubresource(viewCB,0,0,&viewMatrix,0,0);
d3dContext->UpdateSubresource(projCB,0,0,&projMatrix,0,0);

Am I on the right track with how I am going about this? Any help as to what this error means would be awesome.

Edited by DarkRonin

A quick update.
 
I didn't realise I had to create the buffers, so I added this before the render loop.
 
	ID3D11Buffer* viewCB=0;
	ID3D11Buffer* projCB=0;
	ID3D11Buffer* worldCB=0;

	D3D11_BUFFER_DESC constDesc;
	ZeroMemory(&constDesc,sizeof(constDesc));
	constDesc.BindFlags=D3D11_BIND_CONSTANT_BUFFER;
	constDesc.ByteWidth=sizeof(XMMATRIX);
	constDesc.Usage=D3D11_USAGE_DEFAULT;

	if(FAILED(d3dDevice->CreateBuffer(&constDesc,0,&viewCB)))
		return 1;

	if(FAILED(d3dDevice->CreateBuffer(&constDesc,0,&projCB)))
		return 2;

	if(FAILED(d3dDevice->CreateBuffer(&constDesc,0,&worldCB)))
		return 3;
Things are better now, but the render results are not as expected. Hard to describe at the moment.

So, I'll play with the code a bit more to get my head around what is going on.

This is what I get if I attempt to translate the sprite to 1,0 (supposedly in pixel co-ordinates). I also had to change the scale back to 1 (instead of 256 - the sprite width & height).
 
XMMATRIX scaleMatrix=XMMatrixScaling(1.0f,1.0f,1.0f );  // scale 256 was giving a black screen as something wrong here
XMMATRIX translationMatrix=XMMatrixTranslation(1.0f,0.0f,0.0f); // position at 1,0,0 
I am a lot closer than I was this morning. So, I am pretty happy that something is now happening. But, I am unsure whether the problem is in the shader or the code (both in a previous post).
 
Any adjustment to this line...
 
XMMATRIX projMatrix=XMMatrixOrthographicOffCenterLH(0.0f,(float)width,0.0f,(float)height,0.0f,100.0f);
 
...doesn't seem to have any effect at all.

This is what I am seeing when translation is 1,0,0
 
[screenshot: problem_dx11.png]


looks like you have your matrix multiplication reversed.
 
A * B * C != C * B * A
 
reverse the order.


In the shader do you mean?

I just tried reversing it and it gave the same result.
 
[edit]
One thing I did notice in my code I had...
 

d3dContext->VSSetConstantBuffers(0,1,&worldCB);
d3dContext->VSSetConstantBuffers(1,1,&worldCB);
d3dContext->VSSetConstantBuffers(2,1,&worldCB);

So, I have changed that to...
 

d3dContext->VSSetConstantBuffers(0,1,&worldCB);
d3dContext->VSSetConstantBuffers(1,1,&viewCB);
d3dContext->VSSetConstantBuffers(2,1,&projCB);

But now I get a blank screen.

Is the way I am setting the constant buffers correct?

[edit 2]
Actually, the scaling had to be re-adjusted as a result (so I added the 256 multiplication back in where I took it out before).

So, we are even closer.

 

This is the current result...

 

[screenshot: problem_dx11_2.png]

 

...which is closer to what I'd expect to see. But, it looks like the 0 on the y-axis is on the bottom of the screen instead of the top (probably something with my ortho setup) and I am getting an unwanted skew.

 

But, still on the right track I think.

 

Thanks for the help so far too, guys.
 

Edited by DarkRonin


I would recommend that you work with one matrix at a time until you get each of them proven out.  Start with only your projection matrix, and make the other two identity matrices.  You should be able to draw your sprite within the bounds of your orthographic camera volume, and you should see it appear even if the view and world matrices are identity.

 

If you can get that working, I think the rest will be manageable...

 

Also, you have a comment up above about why you would transpose a matrix. The reason is that the default matrix packing (row-major vs. column-major order) is not the same for the CPU-side math library and for HLSL on the GPU, so you either have to transpose the matrices before uploading them, or you can compile your shaders with an additional flag that flips the packing order.
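
For what it's worth, a minimal sketch of that compile-flag route, assuming the shader is built with D3DCompileFromFile (the file name is a placeholder; VShader/vs_5_0 match the shader earlier in the thread):

#include <d3dcompiler.h>
#pragma comment(lib, "d3dcompiler.lib")

// D3DCOMPILE_PACK_MATRIX_ROW_MAJOR makes HLSL pack matrices row-major, matching
// DirectXMath on the CPU side, so the XMMatrixTranspose calls become unnecessary.
ID3DBlob* vsBlob = nullptr;
ID3DBlob* errors = nullptr;
HRESULT hr = D3DCompileFromFile(L"shader.fx", nullptr, nullptr,
                                "VShader", "vs_5_0",
                                D3DCOMPILE_PACK_MATRIX_ROW_MAJOR, 0,
                                &vsBlob, &errors);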

 

EDIT: Do you also have the debug device enabled?  When you create your D3D11 device, use the flag D3D11_CREATE_DEVICE_DEBUG.  You will get warnings if you try to use the API incorrectly, which can be really helpful too.
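
A minimal sketch of what that looks like at device-creation time (assuming a plain D3D11CreateDevice call; swap chain setup omitted):

#include <d3d11.h>

UINT flags = 0;
#ifdef _DEBUG
flags |= D3D11_CREATE_DEVICE_DEBUG;   // enable the debug layer in debug builds
#endif

ID3D11Device* device = nullptr;
ID3D11DeviceContext* context = nullptr;
D3D_FEATURE_LEVEL featureLevel;

HRESULT hr = D3D11CreateDevice(nullptr, D3D_DRIVER_TYPE_HARDWARE, nullptr,
                               flags, nullptr, 0, D3D11_SDK_VERSION,
                               &device, &featureLevel, &context);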

Edited by Jason Z


Thanks again,

 

Yeah, I have the debug device enabled and, surprisingly, no errors/warnings at all (I have seen a few of those today - LOL).

 

I just made the view and world matrices identity and I get a one-pixel dot at the bottom left of the screen (which is roughly what I'd expect due to the removal of scaling, except for bottom being zero and top being 640 - that seems to be upside down).

 

Not sure where to go from here though.

 

Here is my current code...

 

// Set up the view
XMMATRIX viewMatrix=XMMatrixIdentity();
XMMATRIX projMatrix=XMMatrixOrthographicOffCenterLH(0.0f,(float)width,0.0f,(float)height,0.0f,100.0f); // 800 x 600
viewMatrix=XMMatrixTranspose(viewMatrix);
projMatrix=XMMatrixTranspose(projMatrix);
 
// position the object
//XMMATRIX scaleMatrix=XMMatrixScaling(1.0f*256.0f,1.0f*256.0f,0.0f);
//XMMATRIX translationMatrix=XMMatrixTranslation(0.0f,0.0f,0.0f);
//XMMATRIX worldMat=scaleMatrix*translationMatrix
 
XMMATRIX worldMat=XMMatrixIdentity();
 
d3dContext->UpdateSubresource(worldCB,0,0,&worldMat,0,0);
d3dContext->UpdateSubresource(viewCB,0,0,&viewMatrix,0,0);
d3dContext->UpdateSubresource(projCB,0,0,&projMatrix,0,0);
 
d3dContext->VSSetConstantBuffers(0,1,&worldCB);
d3dContext->VSSetConstantBuffers(1,1,&viewCB);
d3dContext->VSSetConstantBuffers(2,1,&projCB);
Edited by DarkRonin

OK, I made some decent progress. The code now works perfectly, except that the y-axis starts from the bottom of the screen and positive is upwards.
 
// Set up the view
XMMATRIX viewMatrix=XMMatrixIdentity();
XMMATRIX projMatrix=XMMatrixOrthographicOffCenterLH(0.0f,(float)width,0.0f,(float)height,0.0f,100.0f);		// 800 x 450
viewMatrix=XMMatrixTranspose(viewMatrix);
projMatrix=XMMatrixTranspose(projMatrix);

// position the object
XMMATRIX scaleMatrix=XMMatrixScaling(1.0f*128.0f,1.0f*128.0f,0.0f);  // This is correct for 256px sprite as verts are 1 to -1 (fix later)
XMMATRIX rotationMatrix=XMMatrixRotationZ(0.0f);
XMMATRIX translationMatrix=XMMatrixTranslation(0.0f,0.0f,0.0f);
XMMATRIX worldMat=scaleMatrix*rotationMatrix*translationMatrix;
worldMat=XMMatrixTranspose(worldMat);

d3dContext->UpdateSubresource(worldCB,0,0,&worldMat,0,0);
d3dContext->UpdateSubresource(viewCB,0,0,&viewMatrix,0,0);
d3dContext->UpdateSubresource(projCB,0,0,&projMatrix,0,0);

d3dContext->VSSetConstantBuffers(0,1,&worldCB);
d3dContext->VSSetConstantBuffers(1,1,&viewCB);
d3dContext->VSSetConstantBuffers(2,1,&projCB);
So, overall I am pretty happy, as I had no clue about shaders and DX11 just 24 hours ago. If I can nail this last issue (the y-axis being upside down) I'll be extremely happy.


If you want to invert your y axis, just scale by a negative number along that axis.  Just remember that you will be flipping everything along there, so you will need to also apply a translation accordingly to put the object where you want it to be.
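
For what it's worth, here are two sketches of ways to do that using the same matrices as above (treat the exact parameter choices as assumptions to verify, not tested code):

// Option A: flip y with a negative scale, then translate by the window height
// so that y = 0 ends up at the top of the screen. Applied before the ortho matrix.
XMMATRIX flipY = XMMatrixScaling(1.0f, -1.0f, 1.0f) *
                 XMMatrixTranslation(0.0f, (float)height, 0.0f);
XMMATRIX projMatrix = flipY *
    XMMatrixOrthographicOffCenterLH(0.0f, (float)width, 0.0f, (float)height, 0.0f, 100.0f);

// Option B: equivalently, build the off-center ortho with y pointing down by
// swapping the bottom/top arguments (left, right, bottom, top, near, far).
XMMATRIX projMatrix2 = XMMatrixOrthographicOffCenterLH(
    0.0f, (float)width,     // ViewLeft, ViewRight
    (float)height, 0.0f,    // ViewBottom, ViewTop (swapped, so y = 0 is the top)
    0.0f, 100.0f);          // NearZ, FarZ

Either way, flipping y reverses the triangle winding, so if the quad disappears the vertex order or the rasterizer cull mode may need adjusting.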

Thanks Jason.

I also got around it using this method:
 
XMMATRIX translationMatrix=XMMatrixTranslation(128.0f,(float)height-128.0f,0.0f);
(float)height being the height of the window. It just seemed a little hacky though, as all of the documentation I can find says that 0,0 should be the top-left of the screen.

At least I have a working solution though.

But on the bright side, I came here yesterday with a blank window and now I have a working camera system, so I am glad I persisted with the shader method rather than falling back on dynamic vertex buffers. In the end, I know it was the far better way to go.

Thanks for all of the help and guidance along the way too.

No doubt I'll be posting in a week's time asking how to do pixel-perfect collisions (I am shuddering already - LOL).

Edited by DarkRonin
