Member Since 28 Jul 1999
Offline Last Active Jul 28 2014 06:43 PM

Topics I've Started

Frustum Planes, have to transpose view matrix first?

15 June 2014 - 09:44 AM

I've been implementing frustum culling.  I'm going with the method of extracting the 6 frustum planes in world space.  I am using row-major matrices.


I'm doing the following to extract planes:

M = ViewMatrix * ProjectionMatrix;

// Left Plane
A = M[3]  + M[0]; 
B = M[7]  + M[4];
C = M[11] + M[8];
D = M[15] + M[12];


I then check a world-space point against each plane by testing whether the dot product between the two is less than 0.  This all works, but only if I transpose the ViewMatrix first.  I don't understand why that's necessary, as no one else seems to do it! (I only discovered that it made things work through trial and error.)
My ViewMatrix comes from my camera class, which is a pretty straightforward quaternion-based setup.  It's an FPS-style camera: I track a Yaw and Pitch angle and a Translation vector.  I construct my ViewMatrix like this:
qYaw.FromAxisAngle(0, 1, 0, YawAngle);
qPitch.FromAxisAngle(1, 0, 0, PitchAngle);
q = qPitch * qYaw; 

ViewMatrix = (ViewMatrix * q.ToMatrix()).Inverse();
I've been reading over quaternion/matrix and frustum-plane descriptions for a few days and I can't figure out why I need to transpose that ViewMatrix before extracting the planes.  Can anyone offer me any insight?
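For reference, here is a minimal sketch of the extraction in plain C++ (no engine types), assuming the row-vector convention (v' = v * M) that the flat indices in the post imply, and D3D-style clip depth in [0, w].  Under those conventions the planes come from the matrix's *columns*, which is exactly what indexing M[3], M[7], M[11], M[15] does:

```cpp
#include <cstdio>

// Plane = (A, B, C, D); a point p is on the inside when
// A*p.x + B*p.y + C*p.z + D >= 0.
struct Plane { float a, b, c, d; };

// plane = (addW ? col3 : 0) + sign * column(c) of the row-major 4x4 matrix m.
static Plane column(const float m[16], int c, float sign, bool addW) {
    Plane p{ sign * m[c], sign * m[4 + c], sign * m[8 + c], sign * m[12 + c] };
    if (addW) { p.a += m[3]; p.b += m[7]; p.c += m[11]; p.d += m[15]; }
    return p;
}

// Extract the six frustum planes from m = View * Projection.
static void extractPlanes(const float m[16], Plane out[6]) {
    out[0] = column(m, 0, +1.0f, true);   // left:   w' + x' >= 0
    out[1] = column(m, 0, -1.0f, true);   // right:  w' - x' >= 0
    out[2] = column(m, 1, +1.0f, true);   // bottom: w' + y' >= 0
    out[3] = column(m, 1, -1.0f, true);   // top:    w' - y' >= 0
    out[4] = column(m, 2, +1.0f, false);  // near:   z' >= 0 (D3D depth range)
    out[5] = column(m, 2, -1.0f, true);   // far:    w' - z' >= 0
}

// Signed side test; >= 0 means inside that plane's half-space.
static float side(const Plane& p, float x, float y, float z) {
    return p.a * x + p.b * y + p.c * z + p.d;
}
```

Plane normalization (dividing by the length of the xyz part) only matters if true distances are needed; the sign test works without it.  If a different convention is in play (column vectors v' = M * v, or GL's [-w, w] depth), the planes come from rows instead of columns, and extracting "columns" of a transposed matrix is the same as extracting rows — which is one common source of an unexpected transpose.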


Loading a texture array a layer at a time

29 March 2014 - 04:30 PM

I have a piece of code that loads several textures into the layers of a texture array (which was created with a usage flag of D3D11_USAGE_STAGING).  Right now, I call:


D3D11DeviceContext->Map(Texture2DResource, 0, D3D11_MAP_WRITE, 0, &mappedResource)


Then I do a memcpy to the mappedResource.pData, with an offset based on the layer of the texture I want to load.  Then I Unmap.  (Every layer has the same dimensions and depth of course).


The problem is, I only see the last texture I loaded.  I had thought that since I wasn't using D3D11_MAP_WRITE_DISCARD, it would leave the untouched contents of the texture resource alone.


Is there a way to do what I want, or do I have to rework my architecture?  I understand that it would be faster and more efficient to have all my data ready at once, Map once, and copy it all... but I'm still wondering if it's possible to load the layers individually.
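One thing worth checking while debugging this: D3D11 treats each (mip level, array slice) pair as a separate subresource, and Map's second parameter is a subresource index, not a byte offset.  The index follows the same formula as the D3D11CalcSubresource helper; a standalone sketch:

```cpp
// Subresource index of a given mip of a given array slice, as computed by
// the D3D11CalcSubresource helper (this is a local stand-in, not the SDK call).
unsigned calcSubresource(unsigned mipSlice, unsigned arraySlice,
                         unsigned mipLevels) {
    return mipSlice + arraySlice * mipLevels;
}
```

So with a single mip level, layer i of the array is subresource i — i.e. mapping with `calcSubresource(0, layer, 1)` as the second argument to Map, rather than always subresource 0.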

Texture Streaming synchronization

01 February 2014 - 09:35 PM

I have a separate thread that I use for loading texture data.  It fills out a "PixelData" buffer with pixel data.  With my OpenGL code, I can use "fences" to synchronize the upload to the card.  I call glFenceSync to get a synchronization object, then tell the driver to upload my PixelData to the card via glTexSubImage2D.  This call doesn't block; I can check the state of the synchronization object, and when it's completed I know that the texture is available to me.


How does one do this with DirectX?  Right now, my DX texture updating code just calls ID3D11DeviceContext::Map, then a memcpy, and then an Unmap.  DirectX is thread-safe, though, right?  So I could actually make these three calls from within the thread, and just check a mutex before using the texture... right?

Identical clipping planes to OpenGL?

18 January 2014 - 02:31 PM

So, I can render with OpenGL or DirectX.  Everything looks as expected with OpenGL, but when I render with DirectX it seems like my near clipping plane isn't as close as it is with OpenGL.  This is especially problematic when I try to render the cascades in my shadow mapping implementation; when I tightly bound the frustums to the scene, I am clipping too much of the scene with the near plane.


I figure this has to do with the difference in GL/DX clip space (-1 to 1 for GL and 0 to 1 for DX), or with the difference in half-pixel offsets between the APIs.  Does anyone have experience with how the two APIs differ in their clipping of the view frustum?  I'm unsure how to create the appropriate projection and model matrices to get exactly the same clipping in both OpenGL and DirectX.

Render to Texture Array, seeing nothing

12 January 2014 - 02:43 PM

As per the title, I am having trouble rendering to a texture array.  I have successfully rendered to one using MRT: I called CreateTexture2D with a D3D11_TEXTURE2D_DESC specifying an ArraySize of 4, called CreateShaderResourceView with a D3D11_SHADER_RESOURCE_VIEW_DESC that had 4 for its Texture2DArray.ArraySize, and made one CreateRenderTargetView call per render target (I created an array of them) with a D3D11_RENDER_TARGET_VIEW_DESC specifying 1 for its Texture2DArray.ArraySize and an index offset for its Texture2DArray.FirstArraySlice.  In the pixel shader I then wrote to SV_TARGET0, SV_TARGET1, etc.  When I call OMSetRenderTargets, I bind all 4 render targets from my array.


That works fine.


Now I want to render to a texture array, but use the geometry shader to specify which slice to render to via SV_RenderTargetArrayIndex.  So I set up the same as above: 4 layers specified to CreateTexture2D, 4 layers specified to CreateShaderResourceView.  But this time I set up the D3D11_RENDER_TARGET_VIEW_DESC to have 4 layers in its Texture2DArray.ArraySize, and called CreateRenderTargetView only once.  I draw 4 instanced copies, my geometry shader sets SV_RenderTargetArrayIndex, and the associated pixel shader writes to SV_TARGET.  I call OMSetRenderTargets with that 1 render target, and I see nothing.


If I change the D3D11_RENDER_TARGET_VIEW_DESC to have 1 layer, I see all my instanced renderings output to the first slice of the texture.  But setting it to 4 means I see nothing.  The device clears each layer of the array properly, but I can't seem to render to them.


What could I be missing?