
Koder4Fun

Member Since 21 Jan 2009
Offline Last Active Aug 31 2012 08:18 AM

Topics I've Started

How to find minimum distance between two polygons in 3D space

19 August 2012 - 03:32 AM

After some hours of searching and a few implementations that don't work, I'm hoping for help from people who know more about this topic.

I need this for an iterative radiosity renderer using the gathering method, written in C# and GPU-accelerated (with XNA), to speed up the rendering.

For each polygon I have a lightmap and a list of lightmaps visible from it. That works OK.
The idea is that when all lightmaps in the visible list have an average energy below a threshold, I can skip the light gathering for the entire polygon's lightmap.

At first I considered only the average energy, but I found that by accounting for distance (or better, the square of the distance, which matches the physical falloff of light) I can rescale the average energy and check it against the threshold for better filtering:
public void GatherEnergy(float accuracy, ref Hemicube h, ref SceneRenderer scene)
{
    // Skip gathering entirely if no visible lightmap still carries enough
    // energy once its average is attenuated by the squared distance.
    bool skip = true;
    for (int i = 0; i < VisibleLightMaps.Length; ++i)
    {
        if ((VisibleLightMaps[i].lightmap.AverageEnergy / VisibleLightMaps[i].distSquared) > energyThreshold)
        {
            skip = false;
            break;
        }
    }

    if (skip)
        return;

    if (ForDetailMesh)
        GatherEnergyForDetailMesh(accuracy, ref h, ref scene);
    else
        GatherEnergyForPoly(accuracy, ref h, ref scene);

    CalculateMaxAndAverageEnergy();
}

NOTE: the distance I need is the real minimum distance (or distance squared) between the polygons. The endpoints of the closest segment can lie on a vertex, on an edge, or in the interior of a polygon.
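For reference, the edge-edge building block I'm working from is the standard closest-point-between-two-segments routine; a sketch using XNA's Vector3 (the method name and epsilon are mine):

```csharp
// Squared minimum distance between segments p1-q1 and p2-q2.
// Standard derivation: solve for the closest points on the infinite
// lines, then clamp the parameters s and t to [0,1].
static float SegmentDistSquared(Vector3 p1, Vector3 q1, Vector3 p2, Vector3 q2)
{
    const float EPS = 1e-6f;
    Vector3 d1 = q1 - p1, d2 = q2 - p2, r = p1 - p2;
    float a = Vector3.Dot(d1, d1);   // squared length of segment 1
    float e = Vector3.Dot(d2, d2);   // squared length of segment 2
    float f = Vector3.Dot(d2, r);
    float s, t;

    if (a <= EPS && e <= EPS)        // both segments degenerate to points
    {
        s = t = 0.0f;
    }
    else if (a <= EPS)               // first segment is a point
    {
        s = 0.0f;
        t = MathHelper.Clamp(f / e, 0.0f, 1.0f);
    }
    else
    {
        float c = Vector3.Dot(d1, r);
        if (e <= EPS)                // second segment is a point
        {
            t = 0.0f;
            s = MathHelper.Clamp(-c / a, 0.0f, 1.0f);
        }
        else
        {
            float b = Vector3.Dot(d1, d2);
            float denom = a * e - b * b;   // always >= 0; 0 if parallel
            s = denom > EPS ? MathHelper.Clamp((b * f - c * e) / denom, 0.0f, 1.0f) : 0.0f;
            t = (b * s + f) / e;
            // If t left [0,1], clamp it and recompute the best s for that t.
            if (t < 0.0f)      { t = 0.0f; s = MathHelper.Clamp(-c / a, 0.0f, 1.0f); }
            else if (t > 1.0f) { t = 1.0f; s = MathHelper.Clamp((b - c) / a, 0.0f, 1.0f); }
        }
    }

    Vector3 c1 = p1 + d1 * s;
    Vector3 c2 = p2 + d2 * t;
    return (c1 - c2).LengthSquared();
}
```

Edge-edge distances alone don't cover every case: the closest pair can also involve a vertex or edge point projected into the other polygon's interior, so point-to-polygon tests are needed as well.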

I hope this is clear.
Thanks in advance for any help.

3D Scale an object in spaces other than local-space

27 March 2012 - 10:14 AM

I've written code for manipulating and transforming objects with on-scene manipulators like 3ds Max, Maya, etc.
Translation and rotation work well in local and world space, but I can't figure out scaling from a world-space manipulator.

The transform of an object is split into three fields:
		private Vector3 m_translation;
		private Quaternion m_rotation;
		private Vector3 m_scale;

The final transform matrix is built this way:
		public Matrix Transform
		{
			get
			{
				return Matrix.CreateScale(m_scale) *
					  Matrix.CreateFromQuaternion(m_rotation) *
					  Matrix.CreateTranslation(m_translation);
			}
		}

Now, when I scale an object with a local-space manipulator the result is correct;
when I scale an object with a world-space manipulator the result is wrong.

E.g.:
- I rotate an object, for example by 45° around the vertical axis (Y axis)
- I set the manipulator mode to world-space, so the manipulator axes are now parallel to the world-space basis vectors

Scaling along the X axis, I expect the scaling vector to show values on both the X and Z components (due to the previous rotation) and the object to "slide" along the world X axis, but the object scales in some strange way.

Could this depend on the scale-rotation-translation sequence? Must I change the sequence?
Is it even possible to scale an object with values specified in a space other than local space?
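To make the setup concrete, this is roughly what I'm attempting (a sketch; XNA composes with row vectors, so a world-space operation is post-multiplied, and the helper name is mine):

```csharp
// Sketch of the world-space scaling attempt described above.
// Assumes XNA's row-vector convention (v * M); names are illustrative.
static Matrix ApplyWorldSpaceScale(Vector3 scale, Quaternion rotation,
                                   Vector3 translation, Vector3 worldScale)
{
    Matrix local = Matrix.CreateScale(scale) *
                   Matrix.CreateFromQuaternion(rotation) *
                   Matrix.CreateTranslation(translation);

    // World-space operations go on the right, so the product is
    // S * R * T * Sworld. In general this contains shear, so it can no
    // longer be decomposed back into the Vector3/Quaternion/Vector3 fields.
    return local * Matrix.CreateScale(worldScale);
}
```

Written this way it's visible why the S/R/T fields can't absorb the result: a world-axis scale of a rotated object is a shear in local space unless the world axis happens to line up with a local axis.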

Thanks in advance for the help... and sorry for my English.

A last note: I'm using XNA 4.0, NET 4.0 and C#

Problem getting correct total gathered light of a hemicube with the GPU

23 December 2009 - 10:09 PM

I have a problem getting the total light gathered from a hemicube using the GPU (the CPU algorithm works well, so I know the problem is the correct sampling of the source texture into the render target).

I have already read some documents, like Microsoft's "map pixels to texels" article and others on post-processing, and it works for textures of the same size as the render target (for example deferred shading, blurring, HDR, etc.).

To gather the light, my idea is to do a 16-tap sampling forming a 4x4 square of pixels that are summed inside a pixel shader and rendered to a target.

Using hemicube edge sizes that are a power of 4, I can recursively downsample to the 3x1 target (all views are packed inside one texture), but the value is not the same as the CPU version and the lightmaps are incorrect.
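The downsampling driver looks roughly like this (a sketch in XNA 4 style; the render-target chain, effect, and parameter names are illustrative):

```csharp
// Sketch: repeatedly render with the 4x4 box-sum shader, feeding each
// pass's render target as the next pass's source texture.
static void DownsampleToTotals(GraphicsDevice device, Texture2D source,
                               RenderTarget2D[] chain, Effect sumEffect)
{
    Texture2D current = source;
    foreach (RenderTarget2D target in chain)   // e.g. 48x16 -> 12x4 -> 3x1
    {
        device.SetRenderTarget(target);
        sumEffect.Parameters["diffuseTex"].SetValue(current);
        sumEffect.Parameters["sourceSize"].SetValue(
            new Vector2(current.Width, current.Height));
        // ... draw a full-screen quad with sumEffect applied ...
        device.SetRenderTarget(null);
        current = target;   // RenderTarget2D is usable as a Texture2D in XNA 4
    }
}
```

Each render target in the chain is a quarter of the previous one in both dimensions, so after the passes the 3x1 target holds one total per packed view group.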

This is the packed hemicube texture; it contains the left, front, right, top and bottom views:

This is the code of the vertex and pixel shaders used:
struct VS_OUTPUT
{
   float4 pos       : POSITION0;
   float2 texCoord  : TEXCOORD0;
};

sampler2D diffuseTex;   // packed hemicube views (left/front/right/top/bottom)

const float2 sourceSize = float2(48, 16);
const float2 viewportSize = float2(48, 16) * 0.25;

VS_OUTPUT vs_main(float4 inPos: POSITION )
{
   VS_OUTPUT o = (VS_OUTPUT) 0;

   inPos.xy = sign(inPos.xy);
   o.pos = float4(inPos.xy, 0.0f, 1.0f);
   //o.pos.xy += float2(-1, 1) / viewportSize;
   //if (o.pos.x == 1.0) o.pos.x -= 0.5 / viewportSize;
   //if (o.pos.y == 1.0) o.pos.x -= 0.5 / viewportSize;

   // map clip-space xy into the [0,1] texture range
   o.texCoord = float2(0.5, -0.5) * inPos.xy + 0.5;
   //o.texCoord += 0.5 / viewportSize;
   o.texCoord *= sourceSize;   // to texel coordinates (divided back by sourceSize in the pixel shader)

   return o;
}

const float2 sampleOffsets[16] =
{
	{0.0, 0.0}, {1.0, 0.0}, {2.0, 0.0}, {3.0, 0.0},
	{0.0, 1.0}, {1.0, 1.0}, {2.0, 1.0}, {3.0, 1.0},
	{0.0, 2.0}, {1.0, 2.0}, {2.0, 2.0}, {3.0, 2.0},
	{0.0, 3.0}, {1.0, 3.0}, {2.0, 3.0}, {3.0, 3.0},
};

float4 ps_main( float2 texCoord  : TEXCOORD0 ) : COLOR
{
	float4 color = 0.0;
	//float2 s = sourceSize / (sourceSize - 1);
	for (int i = 0; i < 16; i++)
	{
		float2 uv = (texCoord + sampleOffsets[i]) / sourceSize;
		color += tex2D(diffuseTex, uv);
	}
	
	return color;
}



As you can see, I've made some tests on texel and vertex coordinate adjustment.

This implementation on a test texture shows the incorrect sampling below, because the 4x4 dark boxes should stay black:
Note: the image is a 12x4 target (zoomed) sampled from a 48x16 texture.
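For comparison, the CPU reference for one 4x4 box-sum pass, which the first GPU pass should match texel-for-texel (a sketch; the flat single-channel array layout is illustrative):

```csharp
// CPU reference: sum each 4x4 block of the source into one output texel.
static float[] BoxSum4x4(float[] src, int srcWidth, int srcHeight)
{
    int dstWidth = srcWidth / 4, dstHeight = srcHeight / 4;
    float[] dst = new float[dstWidth * dstHeight];
    for (int y = 0; y < dstHeight; ++y)
        for (int x = 0; x < dstWidth; ++x)
        {
            float sum = 0.0f;
            for (int sy = 0; sy < 4; ++sy)
                for (int sx = 0; sx < 4; ++sx)
                    sum += src[(y * 4 + sy) * srcWidth + (x * 4 + sx)];
            dst[y * dstWidth + x] = sum;
        }
    return dst;
}
```

Diffing the first GPU pass against this on a test texture shows exactly which taps land in the wrong texels.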

Can anyone help me, please?
All replies are welcome.

Testing segment for full occlusion with a bsp-tree [Solved]

21 January 2009 - 03:11 AM

I have a working portalized solid-BSP compiler (like Quake/Unreal, etc.) and I have implemented a standard PVS. Now that my maps are more detailed, I have decided to implement a more effective solution for cell-portal decomposition based on this document. The problem is the filtering of valid/non-valid separators. I need an algorithm to test whether all edges of a separator are fully in solid space. This is my current approach:
		public bool IsSegmentInSolid(Vector3 p1, Vector3 p2, float epsilon)
		{
			if (m_type == BSPNodeType.SolidLeaf)
				return true;

			if (m_type == BSPNodeType.EmptyLeaf)
				return false;

			float dist1 = m_partition.DotCoordinate(p1);
			float dist2 = m_partition.DotCoordinate(p2);

			if (dist1 >= epsilon && dist2 >= epsilon)
				return m_front.IsSegmentInSolid(p1, p2, epsilon);
			else if (dist1 < -epsilon && dist2 < -epsilon)
				return m_back.IsSegmentInSolid(p1, p2, epsilon);

			if (Math.Abs(dist1) < epsilon && Math.Abs(dist2) < epsilon && Math.Sign(dist1) == Math.Sign(dist2))
				return m_front.IsSegmentInSolid(p1, p2, epsilon) | m_back.IsSegmentInSolid(p1, p2, epsilon);

			Vector3 ip;
			float dot = dist1 / (dist1 - dist2);
			
			if (dot < 0.0f) dot = 0.0f;
			if (dot > 1.0f) dot = 1.0f;

			ip = p1 + dot * (p2 - p1);
			if (dist1 > 0.0f)
				return m_front.IsSegmentInSolid(p1, ip, epsilon) & m_back.IsSegmentInSolid(ip, p2, epsilon);
			else
				return m_back.IsSegmentInSolid(p1, ip, epsilon) & m_front.IsSegmentInSolid(ip, p2, epsilon);
		}



but this code fails: some non-valid separators are not removed. Can anyone help me, or show me what is wrong in my implementation? Thanks... [Edited by - Koder4Fun on January 21, 2009 11:05:06 AM]
