
sebjf

Member Since 22 Nov 2010
Offline Last Active May 13 2012 01:31 PM

Topics I've Started

How would I retrieve the 'inner surface' of an arbitrary mesh?

23 April 2012 - 05:25 AM

In my project I am working on a 'subset' of cloth simulation in which I attempt to fit one mesh over another. My solution involves deforming a 'physical mesh' based on depth fields and using it to control the deformation of the complex, detailed mesh.

I have seen impressive mesh optimization methods, but I don't want to optimize the mesh so much as extract part of it. What I want is a way to approximate the 'inside surface' of a mesh, since in the 'real world' this is the surface that would physically interact with the mesh it is being fitted over.

Take the images below; the second mesh contains no overlapping polygons - the lapels, shoulder straps and buttons are gone - it is a single surface consisting of the points closest to the character.

[Attached image: jck.jpg]

(Checking for and removing overlapping polygons would be one way, I suppose, but how would I decide which are the 'outer' and which the 'inner' polygons, bearing in mind that the normals of the semantically inner polygons won't necessarily emanate from the geometric centre of the mesh?)
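
To make the idea concrete, here is a rough sketch of the test I have in mind. Mesh, ClosestPointOnMesh and Raycast are hypothetical stand-ins for whatever mesh representation and spatial queries are actually available:

//Rough sketch of the test I have in mind. Mesh, ClosestPointOnMesh and Raycast
//are hypothetical stand-ins for whatever mesh representation and spatial
//queries are actually available.
public static List<int> FindInnerTriangles(Mesh garment, Mesh character)
{
    List<int> inner = new List<int>();
    for (int t = 0; t < garment.TriangleCount; t++)
    {
        Vector3 centroid = garment.TriangleCentroid(t);

        //Start on the character surface and look out towards the triangle
        Vector3 origin = character.ClosestPointOnMesh(centroid);
        Vector3 direction = (centroid - origin).Normalised();

        //The first garment triangle hit from the character side is 'inner';
        //lapels, straps and buttons further out along the ray are discarded
        int firstHit = garment.Raycast(origin, direction);
        if (firstHit == t)
            inner.Add(t);
    }
    return inner;
}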

Does anyone know of an existing implementation that does something like this?

Could someone explain how this equation to calculate weights for control points works?

20 April 2012 - 10:05 AM

In my project, I want to deform a complex mesh based on a much simpler proxy mesh. For this, I need to skin my complex mesh so that each vertex is affected by one or more control points on the proxy mesh and will transform linearly with them.

This paper - Feature Point Based Deformation for MPEG-4 Facial Animation ( http://ivizlab.sfu.c...PEG-4 Faces.pdf ) - describes on pages 4 and 5 how to do what I want, I believe.

If I am understanding it right, the algorithm finds the closest control point to a vertex, then the two control points that flank it. The weight for each control point ('Feature Point' in the paper) is based on the vertex's distance to each of these points, relative to the others.
The weights should therefore sum to 1, and the vertex will move with the plane defined by the control points.
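
To check my understanding, this is roughly how I imagine the weights being computed (my reading, not the paper's exact equations; Vector3 is just my math library's type):

//My reading of the weighting scheme, not the paper's exact equations: each of
//the three control points is weighted by its inverse distance to the vertex,
//and the weights are normalised so that they sum to 1.
public static float[] ComputeWeights(Vector3 vertex, Vector3[] controlPoints)
{
    float[] weights = new float[controlPoints.Length];
    float total = 0;
    for (int i = 0; i < controlPoints.Length; i++)
    {
        float d = (vertex - controlPoints[i]).Length();
        weights[i] = 1.0f / Math.Max(d, 1e-6f); //closer points get more influence
        total += weights[i];
    }
    for (int i = 0; i < weights.Length; i++)
        weights[i] /= total; //normalise so the weights sum to 1
    return weights;
}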

There are a couple of things I do not understand though:

1. In equation (2), what are d12 and d13? They are not defined in figure (1). Are they equivalent to d2 and d3, or to d1 - d2 and d1 - d3?

2. Once you have the inverse proportional distance, what is the purpose of taking the sine of it (equation (4))?

Finally, in equation (5) on page 6, why is the deformation of the vertex calculated in that way? Why is the displacement not simply:

displacement = SUM( controlpoint_i_displacement * controlpoint_i_weight ) for i = 0..n


Could anyone who knows what's going on explain? Thanks!
SJ

Producing a depth field (or volume) for cloth simulation?

11 April 2012 - 06:08 AM

In my project, I am working on a system which will deform a mesh so that it fits over an arbitrary convex mesh - specifically, fitting clothing items over characters.

To start with, I used the depth/stencil contents to filter the pixels where an intersection took place (since the scope is narrowed down to clothing this is simplified, because the 'item' mesh will completely occlude the 'hull' mesh), then iterated over the positions in the 'item' mesh and deformed each vertex so that it was positioned between the camera and the world position retrieved from the depth buffer.
When it worked this was very effective, and even with the deformations on the CPU it was almost real-time, but it did not allow the mesh to be deformed in a natural way that preserved its features.
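
For reference, the per-vertex step was essentially the following (a simplified sketch; ProjectToScreen, SampleDepth and UnprojectToWorld stand in for my actual readback and unprojection code):

//Simplified sketch of the per-vertex step described above. ProjectToScreen,
//SampleDepth and UnprojectToWorld are stand-ins for my actual readback and
//unprojection code.
public static void ClampVertexToHull(ref Vector3 vertex, Vector3 cameraPosition)
{
    //Project the vertex into screen space and read the hull depth under it
    Vector2 screen = ProjectToScreen(vertex);
    float hullDepth = SampleDepth(screen);

    //Reconstruct the world position of the hull surface at this pixel
    Vector3 hullPoint = UnprojectToWorld(screen, hullDepth);

    //If the vertex lies behind the hull surface (an intersection), pull it
    //forward along the view ray so it sits between the camera and the hull
    float vertexDistance = (vertex - cameraPosition).Length();
    float hullDistance = (hullPoint - cameraPosition).Length();
    if (vertexDistance > hullDistance)
    {
        Vector3 viewRay = (vertex - cameraPosition) / vertexDistance;
        vertex = cameraPosition + (viewRay * hullDistance);
    }
}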

My preferred idea was to filter the depth field to create a set of 'magnetic deformers' which could then be applied to the mesh (per vertex, with weights based on Euclidean distance*); deforming the mesh on the GPU (OpenCL) would, I think, allow me to have a reasonable number of deformers.
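
Roughly what I mean by a 'magnetic deformer' (Deformer is a hypothetical type, and the linear falloff is just a placeholder for whatever weighting works best):

//Rough sketch of a 'magnetic deformer'. Deformer is a hypothetical type and
//the linear falloff is a placeholder for whatever weighting works best.
public struct Deformer
{
    public Vector3 Position;
    public Vector3 Displacement;
    public float Radius;
}

public static Vector3 ApplyDeformers(Vector3 vertex, Deformer[] deformers)
{
    Vector3 result = vertex;
    foreach (Deformer d in deformers)
    {
        float distance = (vertex - d.Position).Length();
        float weight = Math.Max(0.0f, 1.0f - (distance / d.Radius)); //0 beyond Radius
        result += d.Displacement * weight;
    }
    return result;
}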

The reason I liked the depth buffer for this is that the hull was allowed arbitrary complexity with (practically) no impact on performance, and it also allowed my system to work with objects whose shaders did 'anything'**. However, I have spent days trying to cajole it into doing what I want, and am realising that I will probably spend more time (programmer time, plus processing in the final system) trying to create a field in a suitable space than it would take to create one specifically for this application.


Cloth simulation systems seem a good resource to base the collision detection on, since at this point their purpose is identical and they need to be fast, but everything I read seems to focus on realistic real-time simulation of the cloth; I am only interested in the collision detection part.

Does anyone know of a good (fast) cloth simulation system that doesn't use 'geometric primitives' for its collision detection? I have read that some cloth simulation systems use depth fields and this seems like it would produce the best results when deforming against a mesh such as a character.

What about volumes, such as voxel volumes? I think this would be ideal if it could be done quickly, but I have not read much about creating volumes from polygonal meshes, and nothing about the performance of testing points against these volumes.
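
The kind of test I imagine would be trivially cheap once the volume exists; here is a sketch against a hypothetical pre-voxelised occupancy grid (building it from the polygonal mesh is the part I know nothing about):

//Sketch of a point-vs-volume test against a hypothetical pre-voxelised
//occupancy grid. Building 'Occupancy' from the polygonal mesh is the
//expensive, unexplored part.
public class VoxelVolume
{
    public bool[,,] Occupancy; //true where the voxel is inside the hull
    public Vector3 Origin;     //world position of voxel (0,0,0)
    public float VoxelSize;    //edge length of each voxel

    public bool Contains(Vector3 point)
    {
        //Transform the world-space point into voxel coordinates
        Vector3 local = (point - Origin) / VoxelSize;
        int x = (int)Math.Floor(local.x);
        int y = (int)Math.Floor(local.y);
        int z = (int)Math.Floor(local.z);

        //Points outside the grid are outside the hull
        if (x < 0 || y < 0 || z < 0 ||
            x >= Occupancy.GetLength(0) ||
            y >= Occupancy.GetLength(1) ||
            z >= Occupancy.GetLength(2))
            return false;

        return Occupancy[x, y, z]; //O(1) per query, regardless of mesh complexity
    }
}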


*The best implementation from my POV would allow this to be done in real-time; since this is about fitting rather than cloth simulation, I think I could get satisfactory performance by operating without a fully constrained cloth sim - it's more important that the features of the mesh are preserved.

**This is done with user-generated content in mind, so as if that wasn't hard enough, no significant assumptions about the mesh can be made (if they could, this wouldn't be needed!)

What could be the cause of this specular highlight artifact?

26 March 2012 - 08:47 AM

Hello,

I have a problem with specular highlights in my shader. It manifests as a 'rogue highlight' emanating from the point where the light vector 'intersects' the surface, on the opposite side from the one that should show the highlight.

Here is an image of it, showing the underside of a box with a directional light pointing down onto the box (diffuse set to 0, so only the specular component is showing).

[Attached image: Untitled.png]

The excerpts from the shader which are relevant to the specular component are included below; I just cannot see what is going wrong.

This may be a duplicate of [http://www.gamedev.net/topic/459040-hlsl-cleaning-out-specular-artifacts-in-normal-maps/], but as I cannot see the pictures I can't tell!

Has anyone seen this before!?

Thanks

(If the code looks a little 'stilted', it's because it's being pseudo-procedurally generated.)


-- Vertex Excerpts --
float4x4 globalTransform; //worldviewprojection
float4x4 worldTransform;  //world

struct INPUT
{
    float4 positionIn : POSITION0;
    float4 normalIn : NORMAL0;
};

struct OUTPUT
{
    float4 positionOut0 : POSITION0;
    float3 normalOut : WORLD_NORMAL;
    float4 positionOut1 : WORLD_POSITION;
};

void dfp_v_DoWVPmTransforms(in INPUT input, inout OUTPUT output)
{
    float4 position = float4( input.positionIn.xyz, 1 );
    output.positionOut0 = mul( position, globalTransform );
}

void dfp_v_DoWorldTransforms(in INPUT input, inout OUTPUT output)
{
    output.normalOut = normalize( mul( input.normalIn.xyz, (float3x3)worldTransform ) );
    output.positionOut1 = mul( float4(input.positionIn.xyz, 1), worldTransform );
    output.positionOut1 = output.positionOut1 / output.positionOut1.w;
}

-- Pixel Excerpts --

float4 Ce;
float4 MAc;
float4 diffuseColour;
float4 specularColour;
float specularPower;
float4 eyePosition;

Texture2D lightingInfo;

struct INPUT
{
    float4 positionOut0 : POSITION0;
    float3 normalOut : WORLD_NORMAL;
    float4 positionOut1 : WORLD_POSITION;
};

struct OUTPUT
{
    float4 colourOut : COLOR0;
};

struct INTERNAL
{
    float4 diffuseSum : DiffuseSum;
    float4 specularSum : SpecularSum;
};

float4 CalculateSpecularComponent(float3 surface_normal, float3 lightColour, float3 surface_light_dir, float3 eyeDir)
{
    float3 H = normalize(-surface_light_dir + eyeDir);
    float intensity = saturate(dot(surface_normal, H));
    float specular = pow(intensity, specularPower);
    return saturate(float4( specularColour * lightColour * specular, 0 ));
}

void dfp_p_ComputeDirectionalLights(in INPUT input, inout OUTPUT output, inout INTERNAL internal)
{
    float3 worldNormal = input.normalOut;
    float3 eyeDir = normalize(eyePosition - input.positionOut1).xyz;

    float4 lightInfo = lightingInfo.Load(int3(1,0,0));
    for(int i = 0; i < lightInfo.y; i++)
    {
        float lightoffset = lightInfo.x + (i * 4);

        float4 lightcolourpower = lightingInfo.Load(int3(lightoffset,0,0));
        float4 lightpositionambienteffect = lightingInfo.Load(int3(lightoffset+1,0,0));
        float4 lightdirectcone = lightingInfo.Load(int3(lightoffset+2,0,0)); //the direction of the light will be in xyz

        internal.diffuseSum += CalculateDiffuseComponent( worldNormal, lightcolourpower.xyz, lightdirectcone.xyz );
        internal.specularSum += CalculateSpecularComponent( worldNormal, lightcolourpower.xyz, lightdirectcone.xyz, eyeDir );
    }
}

Why does my Unproject() method insist on Unprojecting towards 0,0,0?

29 January 2012 - 06:32 AM

Hello,

I am trying to implement an Unproject() method:


//Cursor is relative to the window; Window is the window dimensions
public static Ray Unproject(Vector2 Cursor, Vector2 Window, Matrix4 View, Matrix4 Projection)
{
    float clipspacex = ((Cursor.x / Window.x) * 2) - 1;
    float clipspacey = ((Cursor.y / Window.y) * 2) - 1;

    Vector2 ClipSpaceCursor = new Vector2(clipspacex, -clipspacey);

    Vector3 v1 = Unproject(ClipSpaceCursor, 0, View, Projection);
    Vector3 v2 = Unproject(ClipSpaceCursor, 1, View, Projection);

    return new Ray(v1, v2 - v1);
}

public static Vector3 Unproject(Vector2 ClipSpaceCursor, float z, Matrix4 View, Matrix4 Projection)
{
    return (View.Inverse() * (Projection.Inverse() * new Vector4(ClipSpaceCursor, z, 1))).xyz; //Math library uses column vectors
}

However, it always appears to project towards (0,0,0) rather than into the screen.

[Attached image: Untitled.png]

The point v1 always sits behind the cursor regardless of the position and orientation of the camera, so that at least is being resolved correctly; but I cannot see why v2 always resolves closer to (0,0,0). It's as if (0,0,0) is the vanishing point and the ray is trying to project out of it.

Can anyone see what is wrong with it?
