Chetanhl

[Solved] Stretching bug in Parallax Occlusion Mapping



Hello, I am getting this weird stretching in parallax occlusion mapping (on the cube's top and right faces):

 

[attachment=24297:Obj_2_EventID_547_Report20141021-0301(PID 7624).png]

 

[attachment=24298:Obj_2_EventID_1100_Report20141021-0301(PID 6884).png]

 

 

This bug seems to appear only on the cube; the shader works fine on a plane object (the scene is exported from 3ds Max as FBX with Y as the up axis). I developed the shader a few months back (based on this) and only used planes to test it, so I never noticed the problem even once. For the past two days I have been integrating it and some other things into my deferred rendering path, which is when I noticed this bug.

 

I have checked almost everything I can think of (world matrices, TBN vectors, view ray, etc.) and it all seems fine. I thought the problem was in my shader logic, so I reimplemented the shader from scratch two or three times (based on the POM examples from the DirectX SDK, NVIDIA SDK, etc.), but no luck.

 

Here's the shader I am using to debug the issue right now - 

//pom.hlsl
#ifdef CH_PIXEL_SHADER
cbuffer PerObjPOM	: register (b4)
#else
cbuffer PerObjPOM
#endif

{
	int 	g_bDisablePOM;
	float   g_fHeightMapScale;
	int		g_nMinSamples;
	int		g_nMaxSamples;
	float2 	tileFactor;
	float 	glodDist;
	
	float	PerObjPOM_pad0;
};


// Marches the height field along the view direction (transformed into tangent
// space) and returns the parallax-adjusted texture coordinates.
float2 ParallaxOffset( float3 vecViewW, float3 normalW, float2 texCoords, float3x3 matWToTangent )
{
	 //if( g_bDisablePOM < 1 )
	//	 return texCoords;

	float3 viewTS = mul(vecViewW, matWToTangent) ;
	//float3 normalTS = normalize( mul( normalW, matWToTangent ) );
	
	//float fMaxOffset = (length( viewTS.xy ) / viewTS.z);
	//fMaxOffset *= g_fHeightMapScale;
	
	float2 pOffset = -viewTS.xy * g_fHeightMapScale / viewTS.z ;//normalize( -viewTS.xy );
	//pOffset *= fMaxOffset;
	
	int numSamples = (int)lerp( g_nMaxSamples, g_nMinSamples, dot( -vecViewW, normalW ) );
	
	float  stepZ = 1.0f / (float)numSamples ;
	float  iZ = 1.0f - stepZ;
	float  lastZ = 1.0f;
	float2 texStep = stepZ * pOffset;
	float2 currTexOffset = (float2)0;
	float2 prevTexOffset = (float2)0;			
	float currHt = 0.0f;
	float prevHt = 0.0f;
	uint iSamples = 0;
	
	// Screen-space derivatives of the full texcoord for SampleGrad inside the loop
	// (ddx/ddy of only the .x/.y components would broadcast a scalar to float2).
	float2 dx = ddx(texCoords);
	float2 dy = ddy(texCoords);
	
	float2 pt1 = float2( 0,0 );
	float2 pt2 = float2( 0,0 );
	
	float2 resOffset = (float2)0;
	
	int endLoop = numSamples +1;
	
	// Linear search: step the texture offset along the view direction until the
	// sampled height rises above the ray height, then interpolate the hit point.
	while( iSamples < endLoop )
	{
		currHt = gTexNormalMap.SampleGrad( gSamNormalMap,texCoords + currTexOffset, dx, dy).a;
		
		if( currHt > iZ )
		{			
			float tHeight = ( prevHt - lastZ ) / ( prevHt - currHt + iZ -lastZ );
			
			resOffset = prevTexOffset + tHeight * texStep;
			
			iSamples = numSamples + 1;
		}
		else
		{
			iSamples ++;
			
			prevTexOffset = currTexOffset;
			currTexOffset += texStep;
			
			lastZ = iZ;
			iZ -= stepZ;
			
			prevHt = currHt;
		}
	}
	
	float2 res = texCoords + resOffset;
	
	//clip( res.x );
	//clip( res.y );
	//clip( res.x > tileFactor.x ? -1:1 );
	//clip( res.y > tileFactor.y ? -1:1 );
	
	return res;
}

float3 GetWorldNormalFromSample( float3 normalSample,float3x3 matTToWorld )
{
	float3 resNormal = 2.0f*normalSample - 1.0f;
	resNormal = mul(resNormal,matTToWorld);
	resNormal = normalize(resNormal);
	return resNormal;
}

PixelData GetPixelData(VS_OUTPUTNormMapping input)
{
	PixelData res;
	
	res.posW = input.posW;

	float3 toEye = normalize( pEye - input.posW ) ;
	
	// Build an orthonormal TBN basis: Gram-Schmidt the interpolated tangent
	// against the normal and derive the bitangent from their cross product.
	float3 N = normalize(input.normalW);
	float3 T = input.tangentW - dot(N,input.tangentW)*N;
	//float3 B = input.binormalW - dot(N,input.binormalW)*N - dot(T,input.binormalW)*T;
	
	T = normalize(T);
	float3 B = normalize(cross(N,T));
	
	float3x3 tbn = float3x3(T,B,N);
	float3x3 tbnInv = transpose(tbn);
	
	float2 parrallaxTexCoord = ParallaxOffset(-toEye,N,
												input.tex0,tbnInv) ;
		
	float3 vecNormal = gTexNormalMap.Sample( gSamNormalMap, parrallaxTexCoord).rgb;
	vecNormal.y = 1.0f - vecNormal.y;
	res.normalW = GetWorldNormalFromSample( vecNormal, tbn );
	
	res.texCoord0 = parrallaxTexCoord;
						   
	return res;
}



-------------------------------------------------------------------------
//pixel shader.hlsl
#define PS_INPUT_GENERIC VS_OUTPUTNormMapping
float4 mainPS(PS_INPUT_GENERIC input) : SV_Target 
{
	PixelData pData = GetPixelData(input);
	........
}

----------------------------------------------------------------------------
//vertexshader.hlsl
VS_OUTPUTNormMapping mainVS(VS_INPUTNormMapping input)
{
	VS_OUTPUTNormMapping output = (VS_OUTPUTNormMapping)0;

	matrix mat = mul(worldMat,viewProj);

	output.posH = mul(float4(input.posL,1.0f),mat);
	output.tex0 = input.tex0;
	output.posW = mul(float4(input.posL,1.0f),worldMat).xyz;
	output.normalW = mul(float4(input.normalL,0.0f),worldMat).xyz;
	output.tangentW = mul(float4(input.tangentL,0.0f),worldMat).xyz;
	output.binormalW = mul(float4(input.binormalL,0.0f),worldMat).xyz;

	return output;
}
 
 

This version is an almost exact replica of Frank Luna's example. I was hoping it would work so I could compare it with my original implementation and find the issue, but the bug showed up in this version too, so I guess it's something else?

 

I am compiling shaders with the D3DCOMPILE_PACK_MATRIX_ROW_MAJOR option in D3D11. For maths I am using the SimpleMath library from DirectXTK. Everything is done in a left-handed coordinate system.
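(To spell out what that combination implies: SimpleMath/DirectXMath stores matrices row-major with a row-vector convention, so with D3DCOMPILE_PACK_MATRIX_ROW_MAJOR the matrices go into the constant buffers untransposed and vectors multiply on the left, which is what the shaders above do. A minimal sketch of the convention, with a made-up helper name:)

//convention_sketch.hlsl (illustration only)
float4 TransformToClip(float3 posL, matrix worldMat, matrix viewProj)
{
	// Row vector on the left, matrices uploaded untransposed because of
	// D3DCOMPILE_PACK_MATRIX_ROW_MAJOR and SimpleMath's row-major storage.
	return mul(float4(posL, 1.0f), mul(worldMat, viewProj));
}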

 

Here's a set of values from a debugging session, in case it helps:

toEye = x = 0.263482400, y = 0.126422400, z = -0.956344400
vecViewW = x = -0.263482400, y = -0.126422400, z = 0.956344400
viewTS = x = -0.263482400, y = -0.956344400, z = -0.126422200
pOffset = x = -0.208414600, y = -0.756468500
texStep = x = -0.002368347, y = -0.008596233
resOffset = x = -0.017735520, y = -0.064373430
texCoords = x = 0.398192400, y = 0.742013800
parrallaxTexCoord = x = 0.380456900, y = 0.677640300

One last thing, in case it helps: in the Pixel History window of the Visual Studio Graphics Debugger, the vertex shader isn't highlighted in blue like the pixel shader, and it doesn't show proper vertex data either. The vertex shader seemed fine while debugging with RenderDoc, though.

 

[attachment=24299:Untitled.png]

 

 


It is hard to say what the issue is, but I would guess that either your input data is incorrect (i.e. your normal space is not correctly defined at each vertex) or perhaps your matrices are not being passed correctly.  I recall seeing this type of error quite a few times during my own work with POM, so it isn't unusual.  My bet is that the matrices are incorrect, but of course that is just a guess...
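If it helps, one quick way to sanity-check the per-vertex basis is to temporarily output the interpolated tangent (or normal/binormal) as a color and look at it on the cube. A minimal sketch, assuming the same VS_OUTPUTNormMapping struct as above (the function name is just illustrative):

float4 DebugTangentPS(VS_OUTPUTNormMapping input) : SV_Target
{
	// Remap from [-1,1] to [0,1] so the basis vector is viewable as a color;
	// each cube face should show a stable, sensible color with no sudden flips.
	float3 t = normalize(input.tangentW) * 0.5f + 0.5f;
	return float4(t, 1.0f);
}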


It is hard to say what the issue is, but I would guess that either your input data is incorrect (i.e. your normal space is not correctly defined at each vertex) or perhaps your matrices are not being passed correctly.  I recall seeing this type of error quite a few times during my own work with POM, so it isn't unusual.  My bet is that the matrices are incorrect, but of course that is just a guess...

 

I checked everything again, this time using a cube generated in code to rule out the possibility of anything funky happening while importing through the Assimp library.

And I found the root cause: it's due to different scaling along U and V, either in object space or in texture space.

 

So this bug happens in two cases:

 

1) If we have a quad and our transformation matrix has a scaling of x = 2 and y = 3, this stretching shows up.

2) If the distances between the vertices differ, i.e. (b - a) is not equal to (c - a), the stretching shows up again while rendering triangle abc.

 

And it sort of makes sense: when we transform the view ray into tangent space we don't take the scaling into account, because we build the TBN from normalized vectors (that's how all the samples did it too).

 

The solution for case 1 is easy: either apply the inverse scaling to the view ray, or multiply it by Vector3(1/scaleX, 1/scaleY, 1/scaleZ).
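Something like this is what I mean for case 1 (just a rough sketch: g_vObjScale is a made-up per-object constant holding the object's scale, it isn't in the shader above, and this assumes the scale can simply be undone componentwise, i.e. the object isn't also rotated in a way that would require doing it in object space):

//case1_sketch.hlsl (hypothetical)
// g_vObjScale would hold (scaleX, scaleY, scaleZ), uploaded per object.
float3 CompensateObjectScale(float3 vecViewW, float3 vObjScale)
{
	// Same as multiplying by float3(1/scaleX, 1/scaleY, 1/scaleZ); renormalize
	// so the dot() used for the sample-count lerp still behaves.
	return normalize(vecViewW / vObjScale);
}

// Usage in GetPixelData, before the ParallaxOffset call:
//   float3 viewForPOM = CompensateObjectScale(-toEye, g_vObjScale);
//   float2 parrallaxTexCoord = ParallaxOffset(viewForPOM, N, input.tex0, tbnInv);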

 

The problem is the second case: I can't figure out a reliable way to detect this in the shader.

If we have 3 vertices:

        (pos)        (texcoord)
A = (0, 0,  0)    (0, 0)
B = (2, 0,  0)    (1, 0)
C = (0, 0, -1)    (0, 1)

 

The solution in this case is to multiply the view ray by a factor of float3(0.5f, 1.0f, 1.0f). But the problem is: how do I detect that factor for each case in the pixel shader?

It feels like a limitation of the ray marching, or maybe I am missing something?
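The only direction I can think of (rough, untested sketch; none of these names exist in the shader above) is to estimate the stretch from screen-space derivatives, i.e. reconstruct dP/du and dP/dv from ddx/ddy of the world position and the texcoords and compare their lengths:

//uv_stretch_sketch.hlsl (hypothetical)
float2 EstimateUVStretch(float3 posW, float2 texCoords)
{
	float3 dPdx  = ddx(posW);
	float3 dPdy  = ddy(posW);
	float2 dUVdx = ddx(texCoords);
	float2 dUVdy = ddy(texCoords);

	// Invert the 2x2 system dPdx = dPdu*dUVdx.x + dPdv*dUVdx.y (and likewise for
	// dPdy) - the same solve used when building a tangent frame from derivatives.
	float  det  = dUVdx.x * dUVdy.y - dUVdy.x * dUVdx.y;
	float3 dPdu = ( dPdx * dUVdy.y - dPdy * dUVdx.y) / det;
	float3 dPdv = (-dPdx * dUVdy.x + dPdy * dUVdx.x) / det;

	// World-space distance covered by one unit of U and one unit of V.
	return float2(length(dPdu), length(dPdv));
}

// For the A/B/C triangle above this gives roughly (2, 1), so dividing the
// tangent-space view ray's xy by it would produce that 0.5 factor on U.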

 

@JasonZ

Have you encountered this case in your implementation?


For case #2, I think that is an authoring problem.  If you assign texture coordinates in such a way that the texture itself is stretched more in one direction, then that is something you need to correct when you create your geometry - it isn't a problem to be solved in your shader!

 

I think the non-uniform scaling is also a very special-case type of problem.  Do you really need non-uniform scaling on a cube?  I would say you should simply apply the texture coordinates to the geometry with the appropriate scaling that you want, then just use uniform scaling and forget about the issue!


 

For case #2, I think that is an authoring problem.  If you assign texture coordinates in such a way that the texture itself is stretched more in one direction, then that is something you need to correct when you create your geometry - it isn't a problem to be solved in your shader!

 

I think the non-uniform scaling is also a very special-case type of problem.  Do you really need non-uniform scaling on a cube?  I would say you should simply apply the texture coordinates to the geometry with the appropriate scaling that you want, then just use uniform scaling and forget about the issue!

 

Many of my test scenes had this kind of non-uniform scaling (it's so easy to create a non-uniformly scaled mesh even when making a simple cube in 3ds Max, etc.). I wasn't really sure whether I should worry about the scaling stuff or not, but after reading your post I think the models have to be fixed in these cases. I also just realized I haven't really seen any stretched textures in games, because it doesn't make sense to stretch a texture like that and tiling solves the issue automatically.
 

Thanks 
