

Member Since 05 Oct 2004
Offline Last Active Yesterday, 09:05 AM

Topics I've Started

[Fixed... kind of] MonoGame effects with VS2012 & Windows 8

12 November 2013 - 02:04 PM

Hi there,
I'm having a bit of trouble setting up a content project (for my desktop application) using Windows 8 and Visual Studio 2012, and was wondering if anyone here could point me in the right direction.
MonoGame 3.0.1 is installed, along with the Windows Phone 8 SDK. I've added a content project and what is essentially a content builder to my MonoGame solution, but the only viable option that I could find for the project type is a Windows Phone Game.
This works without any issues when building .xnb textures, but I can't build any of my effect files as Windows Phone projects don't support custom effects.
Is there any way I can get around this, or am I going to have to roll back to Windows 7?
Thanks for your help!

SSAO issues [Fixed]

08 August 2013 - 03:23 AM

Hey folks,


I'm working through an excellent article on SSAO (http://www.gamedev.net/page/resources/_/technical/graphics-programming-and-theory/a-simple-and-practical-approach-to-ssao-r2753), but I'm having trouble generating sensible data. Although the scene looks reasonably well lit using the SSAO technique, the map isn't exactly what I'm expecting.


I have a deferred renderer, so I have to reconstruct both the normals and positions in view space, which I do like this :

// Given the supplied texture coordinates, this function will use the depth map and inverse projection
// matrix to construct a view-space position
float4 CalculateViewSpacePosition( float2 someTextureCoords )
{
	// Get the depth value
	float currentDepth = DepthMap.Sample( DepthSampler, someTextureCoords ).r;

	// Calculate the screen-space position
	float4 currentPosition;
	currentPosition.x = someTextureCoords.x * 2.0f - 1.0f;
	currentPosition.y = -(someTextureCoords.y * 2.0f - 1.0f);
	currentPosition.z = currentDepth;
	currentPosition.w = 1.0f;

	// Transform the screen-space position in to view space
	currentPosition = mul( currentPosition, ourInverseProjection );

	return currentPosition;
}

// Reads in normal data using the supplied texture coordinates and converts it in to view space
float3 GetCurrentNormal( float2 someTextureCoords )
{
	// Get the normal data for the current point
	float3 normalData = NormalMap.Sample( NormalSampler, someTextureCoords ).xyz;

	// Transform the normal in to view space
	normalData = normalData * 2.0f - 1.0f;
	normalData = mul( normalData, ourInverseProjection );

	return normalize( normalData );
}
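For reference, the same unprojection can be sketched on the CPU; the `Vec4` and `Mul` helpers below are hypothetical stand-ins rather than engine code. One thing worth noting: after multiplying by the inverse projection the result is still homogeneous, so it normally needs a divide by w (the shader above skips that step):

```cpp
#include <cassert>
#include <cmath>

struct Vec4 { float x, y, z, w; };

// Row-vector * row-major 4x4 matrix, matching HLSL's mul(vector, matrix).
Vec4 Mul(const Vec4& v, const float m[4][4]) {
    return {
        v.x * m[0][0] + v.y * m[1][0] + v.z * m[2][0] + v.w * m[3][0],
        v.x * m[0][1] + v.y * m[1][1] + v.z * m[2][1] + v.w * m[3][1],
        v.x * m[0][2] + v.y * m[1][2] + v.z * m[2][2] + v.w * m[3][2],
        v.x * m[0][3] + v.y * m[1][3] + v.z * m[2][3] + v.w * m[3][3],
    };
}

// Rebuild a view-space position from texture coordinates and a depth sample.
Vec4 CalculateViewSpacePosition(float u, float v, float depth,
                                const float inverseProjection[4][4]) {
    // Texture coords [0,1] -> clip space [-1,1], with y flipped.
    Vec4 clip { u * 2.0f - 1.0f, -(v * 2.0f - 1.0f), depth, 1.0f };
    Vec4 view = Mul(clip, inverseProjection);
    // Perspective divide: without this the position is still homogeneous.
    view.x /= view.w; view.y /= view.w; view.z /= view.w; view.w = 1.0f;
    return view;
}
```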

So the scene that I'm rendering (without any lights) looks like this. The individual cube faces seem to have different colour values, whereas I would expect the difference to be where the cubes have a common edge:

(screenshot attachment)
But the SSAO buffer isn't exactly what I would expect (I would think there would be black lines running down the crevices):

(screenshot attachment)
My guess is that the way I'm calculating my view-space normals is messed up, but any suggestions from the audience would be greatly appreciated. :)
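In case it helps anyone hitting the same thing: normals are directions, so they'd normally be rotated by the upper-left 3x3 of the *view* matrix rather than run through the inverse projection. A rough CPU sketch of that idea (`Vec3` and `Mul3x3` are stand-ins, not engine code):

```cpp
#include <cassert>
#include <cmath>

struct Vec3 { float x, y, z; };

// Rotate a direction by the upper-left 3x3 of a row-major matrix,
// matching HLSL's mul(float3, (float3x3)viewMatrix).
Vec3 Mul3x3(const Vec3& v, const float m[3][3]) {
    return {
        v.x * m[0][0] + v.y * m[1][0] + v.z * m[2][0],
        v.x * m[0][1] + v.y * m[1][1] + v.z * m[2][1],
        v.x * m[0][2] + v.y * m[1][2] + v.z * m[2][2],
    };
}

Vec3 Normalize(const Vec3& v) {
    float len = std::sqrt(v.x * v.x + v.y * v.y + v.z * v.z);
    return { v.x / len, v.y / len, v.z / len };
}

// Unpack a stored [0,1] normal and rotate it in to view space.
Vec3 GetViewSpaceNormal(const Vec3& stored, const float view3x3[3][3]) {
    Vec3 n { stored.x * 2.0f - 1.0f, stored.y * 2.0f - 1.0f, stored.z * 2.0f - 1.0f };
    return Normalize(Mul3x3(n, view3x3));
}
```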


This is what I'm outputting from the SSAO shader :

// ambient occlusion calculations

ambientFactor /= 16.0f; // iterations * 4 (for each ambient factor calculated)
return float4( 1.0f - ambientFactor, 1.0f - ambientFactor, 1.0f - ambientFactor, 1.0f );
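The normalisation step boils down to this (a trivial sketch; `ResolveOcclusion` is a hypothetical helper name, with the sample counts from the shader above):

```cpp
#include <cassert>
#include <cmath>

// Average the accumulated occlusion over every sample taken
// (iterations * 4 ring samples per iteration), then invert it so
// that 1.0 means fully lit and 0.0 means fully occluded.
float ResolveOcclusion(float accumulated, int iterations, int samplesPerIteration = 4) {
    float ambientFactor = accumulated / float(iterations * samplesPerIteration);
    return 1.0f - ambientFactor;
}
```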


Deferred Shadow Maps [Fixed]

23 July 2013 - 03:19 PM

Hi folks,


I'm running in to trouble trying to get my deferred light maps working for directional lights. 


In my light map pass I render all of the geometry in the scene using a world/view/projection matrix based on the position and direction of the light source. The result I'm getting looks like what I'd expect it to be.


The trouble comes in (I'm assuming) the pixel shader, when I try to convert the current world position in to light screen space. I'd really appreciate it if you could run your eyes over the logic...


So this is the setup for the light matrix that I pass in to the pixel shader's constant buffer :

// Calculate the world/view/projection matrix for the light
XMMATRIX world	= XMLoadFloat4x4( &aCamera->GetWorld() );
XMMATRIX view	= XMMatrixLookAtLH( XMLoadFloat3(&aLight->GetPosition()), XMLoadFloat3(&lightDirection), XMLoadFloat3(&XMFLOAT3(0.0f, 1.0f, 0.0f)) );
XMMATRIX projection = XMMatrixPerspectiveFovLH( XMConvertToRadians(30.0f), 1.0f, 1.0f, 1024.0f );
XMMATRIX lightWVP   = XMMatrixMultiply( XMMatrixMultiply(world, view), projection );

// Bind the matrix to the lighting buffer	
lightingBuffer->myLightViewProjection = XMMatrixTranspose( lightWVP );
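One thing worth flagging here: XMMatrixLookAtLH takes a focus *point* as its second argument, not a direction, so passing lightDirection only behaves correctly when the light sits at the origin. A "look-to" construction (eye position plus direction, i.e. what XMMatrixLookToLH does) may be what's wanted. Here's a standalone sketch of that matrix without the DirectXMath types, assuming row vectors as in the shaders above:

```cpp
#include <cassert>
#include <cmath>

struct Vec3 { float x, y, z; };

Vec3 Cross(Vec3 a, Vec3 b) {
    return { a.y * b.z - a.z * b.y, a.z * b.x - a.x * b.z, a.x * b.y - a.y * b.x };
}
float Dot(Vec3 a, Vec3 b) { return a.x * b.x + a.y * b.y + a.z * b.z; }
Vec3 Normalize(Vec3 v) {
    float len = std::sqrt(Dot(v, v));
    return { v.x / len, v.y / len, v.z / len };
}

// Left-handed "look-to" view matrix (eye position + view direction),
// equivalent to XMMatrixLookAtLH(eye, eye + direction, up).
void LookToLH(Vec3 eye, Vec3 dir, Vec3 up, float out[4][4]) {
    Vec3 zAxis = Normalize(dir);                 // forward
    Vec3 xAxis = Normalize(Cross(up, zAxis));    // right
    Vec3 yAxis = Cross(zAxis, xAxis);            // true up
    float m[4][4] = {
        { xAxis.x, yAxis.x, zAxis.x, 0.0f },
        { xAxis.y, yAxis.y, zAxis.y, 0.0f },
        { xAxis.z, yAxis.z, zAxis.z, 0.0f },
        { -Dot(xAxis, eye), -Dot(yAxis, eye), -Dot(zAxis, eye), 1.0f },
    };
    for (int r = 0; r < 4; ++r)
        for (int c = 0; c < 4; ++c)
            out[r][c] = m[r][c];
}
```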

Then in the shader I calculate the world space position (currentDepth is sampled from the depth g-buffer):

// Calculate the screen-space position
float4 currentPosition;
currentPosition.x = anInput.myTexCoords.x * 2.0f - 1.0f;
currentPosition.y = -(anInput.myTexCoords.y * 2.0f - 1.0f);
currentPosition.z = currentDepth;
currentPosition.w = 1.0f;

// Transform the screen space position in to world-space
currentPosition = mul( currentPosition, myInverseViewProjection );
currentPosition /= currentPosition.w;

And then I convert the current position in to light-space :

// multiply current position by light view projection to get light-screen space
float4 lightScreenPosition = mul( currentPosition, myLightViewProjection );

// convert position in to texture coordinates
float2 lightTextureCoords = 0.5f * (float2(lightScreenPosition.x, -lightScreenPosition.y) + 1);

// sample from light depth map
float lightDepth = ShadowMap.Sample( ShadowSampler, lightTextureCoords ).r;

And I guess if the lightDepth > currentDepth then I should be returning my ambient light for the current pixel?

if( lightDepth > currentDepth )
 return float4( 0.1f, 0.1f, 0.1f, 0.1f );

// otherwise continue on with the light map calculations

I'm obviously messing something up here, any ideas what?
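For comparison, the usual shadow-map test compares depths in the *light's* clip space, after the perspective divide, with a small bias against self-shadowing; the g-buffer's camera-space depth isn't directly comparable to the shadow-map sample. A minimal sketch (`IsInShadow` is a hypothetical helper):

```cpp
#include <cassert>

// Classic shadow-map test: compare the receiver's light-space depth
// against the occluder depth stored in the shadow map. The receiver
// depth comes from the light-space clip position (z / w), and a small
// bias avoids self-shadowing ("shadow acne").
bool IsInShadow(float shadowMapDepth, float lightClipZ, float lightClipW,
                float bias = 0.001f) {
    float receiverDepth = lightClipZ / lightClipW; // perspective divide
    return receiverDepth - bias > shadowMapDepth;  // occluder is closer to light
}
```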


Thanks for your help!

Constant buffer madness! [fixed]

15 July 2013 - 04:53 AM

Hey people,


I'm getting some strange errors setting up a constant buffer to send in to my pixel shader (used for rendering directional lights in a deferred renderer).


So this is the buffer, C++ followed by HLSL :

struct LightingConstants
{
	DirectX::XMFLOAT3	myLightDirection;
	DirectX::XMFLOAT3	myLightColour;
	DirectX::XMFLOAT3	myCameraPosition;
	DirectX::XMFLOAT2	myHalfPixel;
	DirectX::XMMATRIX	myInverseViewProjection;
	float			myPadding;
};


cbuffer LightingBuffer
{
	float3		myLightDirection;
	float3		myLightColour;
	float3		myCameraPosition;
	float2		myHalfPixel;
	float4x4	myInverseViewProjection;
	float		myPadding;
};

Which should be 112 bytes.
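Counting bytes by hand hides HLSL's packing rules: in a cbuffer no member may straddle a 16-byte register boundary, so each float3 above occupies a full register of its own and the matrix starts on a fresh one. Mirroring that layout with explicit padding (a sketch; the field names follow the struct above) gives 144 bytes, not 112, and differs again from the C++ sizeof of 128:

```cpp
#include <cassert>
#include <cstddef>

// Mirror of the HLSL cbuffer layout with the implicit padding made
// explicit. Each float3 starts its own 16-byte register because a
// member may not cross a register boundary.
struct LightingConstantsPacked {
    float lightDirection[3];         float pad0;     // register c0
    float lightColour[3];            float pad1;     // register c1
    float cameraPosition[3];         float pad2;     // register c2
    float halfPixel[2];              float pad3[2];  // register c3
    float inverseViewProjection[16];                 // registers c4..c7
    float padding;                   float pad4[3];  // register c8
};

// The GPU-side cbuffer spans 9 registers = 144 bytes: neither the
// tightly packed 112 nor the C++ sizeof of 128 (XMMATRIX alignment).
static_assert(sizeof(LightingConstantsPacked) == 144, "cbuffer is 144 bytes");
static_assert(offsetof(LightingConstantsPacked, lightColour) == 16, "c1");
static_assert(offsetof(LightingConstantsPacked, inverseViewProjection) == 64, "c4");
```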


The trouble is that when it gets to the pixel shader, some of the values I'm getting are messed up. Setting a light colour of (1.0f, 1.0f, 1.0f) will result in (1.0f, 1.0f, 0.0f) when trying to read from the buffer, and setting the padding to 0.0f results in a reading of 10.0f (looking at the PIX capture).


The really odd thing is that when I calculate the size of the buffer code-side (sizeof(LightingConstants)) I get a value of 128, not 112.


Stranger still, when I enter the buffer layout in to PIX and take a look at the light colour value - it looks correct, but reading it still gives me (1.0f, 1.0f, 0.0f).


Any ideas what I'm doing wrong here? This is how I'm setting up the constant buffer for the pixel shader (I do use a constant buffer for the vertex shader as well, but didn't think that the two would interfere with each other) :


Building the buffer :

HRESULT             result;
D3D11_BUFFER_DESC   bufferDescription;

VEDirectXInterface* renderInterface = VoxelEngine::GetInstance()->GetRenderInterface();
assert( renderInterface != NULL );

// Set up a dynamic, CPU-writable constant buffer sized to the lighting constants
bufferDescription.BindFlags             = D3D11_BIND_CONSTANT_BUFFER;
bufferDescription.ByteWidth             = sizeof(LightingConstants);
bufferDescription.CPUAccessFlags        = D3D11_CPU_ACCESS_WRITE;
bufferDescription.MiscFlags             = 0;
bufferDescription.StructureByteStride   = 0;
bufferDescription.Usage                 = D3D11_USAGE_DYNAMIC;

// Create the buffer
result = renderInterface->GetDevice()->CreateBuffer( &bufferDescription, NULL, &myPixelConstantBuffer );

if( FAILED(result) )
  return false;

return true;
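One extra guard that's easy to add: D3D11 requires a constant buffer's ByteWidth to be a multiple of 16, so rounding the size up defensively avoids CreateBuffer failures when the C++ and HLSL sizes disagree (`AlignTo16` is a hypothetical helper):

```cpp
#include <cassert>

// D3D11 constant buffers must have a ByteWidth that is a multiple
// of 16; round the requested size up to the next such multiple.
constexpr unsigned AlignTo16(unsigned byteWidth) {
    return (byteWidth + 15u) & ~15u;
}
```

Used as `bufferDescription.ByteWidth = AlignTo16( sizeof(LightingConstants) );` — though the real fix is making the two layouts agree in the first place.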

Populating the buffer :

// Grab the lighting constant buffer
result = deviceContext->Map( myPixelConstantBuffer, 0, D3D11_MAP_WRITE_DISCARD, 0, &mappedResource );
if( FAILED(result) )
  return false;

// Fill in the lighting buffer
lightingBuffer				= (LightingConstants*)mappedResource.pData;
lightingBuffer->myCameraPosition	= aCamera->GetPosition();
lightingBuffer->myHalfPixel		= renderManager->GetHalfPixel();
lightingBuffer->myLightColour		= directionalLight->GetColour();
lightingBuffer->myLightDirection	= directionalLight->GetDirection();
lightingBuffer->myPadding		= 0.0f;

// Calculate the inverse view-projection matrix
XMMATRIX viewProj = XMMatrixMultiply( XMLoadFloat4x4(&aCamera->GetView()), XMLoadFloat4x4(&aCamera->GetProjection()) );
XMMATRIX inverseViewProj = XMMatrixInverse( NULL, viewProj );
lightingBuffer->myInverseViewProjection = XMMatrixTranspose(inverseViewProj);

// Release the buffer
deviceContext->Unmap( myPixelConstantBuffer, 0 );

// Set the constant buffer for the pixel shader
deviceContext->PSSetConstantBuffers( 0, 1, &myPixelConstantBuffer );

Thanks for the help!

Shader sampling from wrong texture? [fixed]

12 July 2013 - 08:58 AM

Hi there!


I'm working on a deferred rendering system for a game I'm building, and am having trouble with the final lighting shader. I have three render targets that I'm passing in to the shader, but it only seems to be sampling from the first.


So my lighting shader is set up like this :

Texture2D ColourMap;
SamplerState ColourSampler
{
	Texture 	= (ColourMap);
	AddressU 	= CLAMP;
	AddressV 	= CLAMP;
	MagFilter 	= LINEAR;
	MinFilter 	= LINEAR;
	MipFilter 	= LINEAR;
};

Texture2D NormalMap;
SamplerState NormalSampler
{
	Texture 	= (NormalMap);
	AddressU	= CLAMP;
	AddressV	= CLAMP;
	MagFilter	= POINT;
	MinFilter	= POINT;
	MipFilter	= POINT;
};

Texture2D DepthMap;
SamplerState DepthSampler
{
	Texture 	= (DepthMap);
	AddressU	= CLAMP;
	AddressV	= CLAMP;
	MagFilter	= POINT;
	MinFilter	= POINT;
	MipFilter	= POINT;
};


float4 PS( PixelShaderInput anInput ) : SV_TARGET
{
	// lighting bits and pieces
}

And in my pixel shader, returning a sample from the colour target :

return ColourMap.Sample( ColourSampler, anInput.myTexCoords );

Produces the same output as :

return NormalMap.Sample( NormalSampler, anInput.myTexCoords );

I've been saving the colour and normal targets to screenshots (.dds format) and can confirm that they're different... in fact now I'm just filling my normal render target up with (1.0f, 0.0f, 0.0f) to be safe.


This is how I'm setting my sampler states :

ID3D11ShaderResourceView* shaderResources[] = { myColourTarget->myShaderResource, myNormalTarget->myShaderResource, myDepthTarget->myShaderResource };

shader->SetShaderResources( shaderResources, 3 );


void SetShaderResources( ID3D11ShaderResourceView** someShaderResources, int aShaderResourceCount )
{
	myDeviceContext->PSSetShaderResources( 0, aShaderResourceCount, someShaderResources );
}

Anything obvious that I'm not spotting here?


Thanks for your help!